Contributor(s): THL Staff.
Once you have created recordings, unfortunately you are still only in the early phases of creating language instructional units. The next step involves technically processing the recordings and then linguistically processing them so that they are rendered into usable resources for study. Technical processing of the audio-video is needed in order to edit the segments one desires to use in the title, and then capture and compress those segments for use on a computer and over the Web. These digital files must then be transcribed and translated using THL's QuillDriver software, which allows for transcription, translation, annotation and time-coding of audio-video files.
The major thing to understand is that high-quality renderings of the video will be all but impossible to use on the Web, and possibly difficult to use even on a computer, because of the storage and processing power required to cope with the huge amount of data in a raw video file. One thus has to determine how one will disseminate the recordings (via the Web, DVD, etc.), since that determines what type of compression to use. It is essential that one maintain the time codes from the logging of the file, so that if one later wants a different type of compression, one can easily generate exactly the same segment from the original tape with the new compression.
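As an illustration of why preserving the logged time codes matters, the sketch below re-cuts segments from an original capture at whatever compression a given delivery format requires. The file names, segment log, codec choices and the use of ffmpeg are all assumptions made for the example, not THL's actual tools or settings.

```python
import subprocess

# Hypothetical log of segments kept from the original logging pass:
# (label, start, end), with times as HH:MM:SS strings.
SEGMENT_LOG = [
    ("greeting", "00:01:05", "00:01:32"),
    ("market_dialogue", "00:04:10", "00:06:47"),
]

def extract_segment(source, start, end, output, video_bitrate="800k"):
    """Re-cut one logged segment from the original capture with a chosen
    compression. Because the cut is driven by the stored time codes, the
    same segment can be regenerated later at any bitrate or codec."""
    cmd = [
        "ffmpeg",
        "-i", source,          # the original (master) capture
        "-ss", start,          # segment start time from the log
        "-to", end,            # segment end time from the log
        "-c:v", "libx264",     # example codec; swap for a Web- or DVD-oriented one
        "-b:v", video_bitrate, # target video bitrate for this delivery format
        "-c:a", "aac",
        output,
    ]
    subprocess.run(cmd, check=True)

for label, start, end in SEGMENT_LOG:
    extract_segment("master_capture.mov", start, end, f"{label}_web.mp4")
```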
Once an edited and compressed title has been created, it must be transcribed and translated. Transcription is done using THL software called "QuillDriver". QuillDriver provides a user-friendly interface for transcribing audio-video in multiple scripts, as well as translating it. A crucial part of transcription is "time-coding", a key function of QuillDriver. Time-coding involves inserting "start" and "stop" times at regular intervals, such as for every clause, sentence, speaker turn or other unit. These time codes can then be used to synchronize the resultant transcription with the audio-video playback. Thus as one listens to the audio, a highlight scrolls up and down the transcript to show the user where the audio currently is; conversely, one can click on a line in the transcript and it will play back precisely that part of the audio-video.
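The following is a minimal sketch of the idea behind time-coding, not QuillDriver's actual file format or interface: each transcript line carries its own start and stop times, which makes both directions of synchronization (highlighting during playback, and click-to-play from the transcript) simple lookups. All names and values are illustrative.

```python
from bisect import bisect_right
from dataclasses import dataclass

@dataclass
class Segment:
    start: float       # seconds into the recording
    end: float
    transcript: str    # transcription of this clause or sentence
    translation: str   # translation of the same span

# Illustrative time-coded transcript; real files would be produced in QuillDriver.
SEGMENTS = [
    Segment(0.0, 3.2, "transcribed clause 1", "translation 1"),
    Segment(3.2, 7.8, "transcribed clause 2", "translation 2"),
    Segment(7.8, 12.5, "transcribed clause 3", "translation 3"),
]

def segment_at(playback_time):
    """Find the segment covering the current playback time, so the matching
    transcript line can be highlighted as the audio-video plays."""
    starts = [s.start for s in SEGMENTS]
    i = bisect_right(starts, playback_time) - 1
    if i >= 0 and playback_time < SEGMENTS[i].end:
        return SEGMENTS[i]
    return None

def playback_range(line_index):
    """Inverse lookup: clicking a transcript line yields the start and stop
    times needed to seek the player to exactly that stretch of audio-video."""
    seg = SEGMENTS[line_index]
    return seg.start, seg.end
```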
Transcription itself, from a linguistic point of view, involves two chief challenges. The first is that untrained transcribers will often skip over many words, such as "um", "ah", repetitions and so forth. Thus it is important to stress to transcribers that they must transcribe every single sound - what they hear, not what they expect to hear, what they think the person should have said, or what they think is important.
The second challenge is spelling. Spoken Tibetan, for example, has many different dialects that diverge dramatically from each other, and in particular from standard literary Tibetan. Often the pronunciation of words is quite far from the standard classical spelling, and in many cases it is difficult or impossible to determine a standard literary word corresponding to a spoken word. In the latter case, one must create a new spelling for the word. Thus transcribers must keep track of words that require revised or new spellings, follow consistent principles in establishing those spellings, and then consistently use those agreed-upon spellings in their transcription work.
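One simple way to keep such spellings consistent across a transcription team is a shared registry that every transcriber consults. The sketch below is only an illustration of that bookkeeping, with placeholder entries rather than real Tibetan decisions.

```python
# Hypothetical registry of agreed-upon spellings for spoken forms that diverge
# from (or lack) a standard literary equivalent. The entries are placeholders.
SPELLING_REGISTRY = {
    "spoken_form_1": "agreed_spelling_1",   # revised spelling of a literary word
    "spoken_form_2": "agreed_spelling_2",   # newly created spelling, no literary match
}

def normalize(spoken_form, registry=SPELLING_REGISTRY):
    """Return the agreed spelling for a spoken form, or flag it for the team
    to decide on, so that every transcriber applies the same convention."""
    if spoken_form in registry:
        return registry[spoken_form]
    raise KeyError(f"No agreed spelling yet for {spoken_form!r}; "
                   "add it to the registry before transcribing.")
```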
Given the great divergence of dialects in Tibetan, there is no alternative but to create different standardized spellings for each dialect, but it is possible to follow the same principles across them. When pronunciation diverges markedly from the standard classical spelling of a term, one can apply explicit principles to determine the degree of divergence. We advocate a moderate practice which maintains classical spellings unless the divergence is greater than a specified degree, at which point we suggest using new spellings. When creating new spellings in this case, or for spoken words that apparently have no literary equivalents, the main principle is to avoid spellings that would suggest false etymologies.
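The passage above leaves the actual measure of divergence and the cutoff to the project's own conventions. Purely as an illustration of the "maintain the classical spelling unless divergence exceeds a specified degree" rule, the sketch below uses edit distance between a romanized pronunciation and the classical spelling as a stand-in measure, with an arbitrary threshold; both are assumptions, not THL's actual criteria.

```python
def edit_distance(a, b):
    """Plain Levenshtein distance, used here only as an illustrative stand-in
    for a real measure of phonetic divergence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1,
                            prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def choose_spelling(pronunciation, classical_spelling, threshold=3):
    """Keep the classical spelling unless the pronunciation diverges from it
    by more than the agreed threshold, in which case a new dialect spelling
    (decided by the team, avoiding false etymologies) must be used instead."""
    if edit_distance(pronunciation, classical_spelling) <= threshold:
        return classical_spelling
    return None   # signal that a new spelling must be created and registered
```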