Index of a/v recordings


This topic contains 17 replies, has 10 voices, and was last updated by  Alex M 2 weeks, 1 day ago.

Viewing 3 posts - 16 through 18 (of 18 total)

    Alex M

    William W You are absolutely right about the general direction. The details of how to do it depend on the type of wiki, the choice of how the data is structured, what external resources are used, and how to link to them with timestamps. I think it would be useful to convert your annotations to wiki format as-is, without timestamps, and add a link to a page with the audio. Then timestamp new recordings, and later go back and add timestamps where they are missing.

    Ted What’s the current status of the wiki? Can I be of any help?


    Blake Barton

    Hello All,

    I think we have some good ideas here, and I have been pondering the pros and cons of each. I also have an IT background, although it has been a few years since I worked in the field. I wanted to cover the proposed ideas at a higher level without necessarily worrying about implementation details.

    1. Alex M.’s idea of dividing each recording into parts and tagging them. Recordings become searchable; clicking on a search result starts the audio/video player preset to play that fragment.

    Pros:
    • Most convenient, user-friendly solution
    • Most elegant solution

    Cons:
    • It will take a great deal of work to manually divide and tag the audios
    • Will probably require a large number of volunteers, or it becomes a for-profit website
    • Technically more complex than some of the other solutions
    • Alex is not sure if he will be able to complete the project
    • Many technical unknowns
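For what it’s worth, the data model behind idea 1 is simple. Here is a minimal Python sketch (the filenames, tags, and field names are all made up for illustration) of a fragment index that a search result could use to preset a player:

```python
from dataclasses import dataclass, field

@dataclass
class Fragment:
    recording: str          # e.g. a filename or URL (illustrative)
    start: float            # fragment start, seconds from the beginning
    end: float              # fragment end, seconds
    tags: set = field(default_factory=set)
    summary: str = ""

def search(fragments, tag):
    """Return fragments carrying the given tag; each result holds the
    recording plus the offset needed to preset a player."""
    return [f for f in fragments if tag in f.tags]

# Hypothetical index entries:
index = [
    Fragment("retreat-2014-day1.mp3", 0.0, 312.5, {"intro", "posture"}, "Opening talk"),
    Fragment("retreat-2014-day1.mp3", 312.5, 940.0, {"dullness"}, "Q&A on dullness"),
]
hits = search(index, "dullness")  # one hit, starting at 312.5 s
```

The point of the sketch is that the index itself is small structured data; the real work is producing the fragments, not storing or searching them.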

    2. Take the work that William has started and apply the keywords and descriptions to all existing audios. Add some sort of search engine to the pages.

    Pros:
    • May be the most likely to get implemented
    • Fairly simple to implement

    Cons:
    • The user might need to go through a large search result set, depending on the topic
    • The user would have to find the topic within the video unless we add timestamps to topics, which would be quite a bit of effort
    • Would take some effort to apply tags/keywords to existing audios
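Idea 2’s search engine could start out as little more than substring matching over the keyword lines. A hedged sketch in Python, with hypothetical filenames and keywords:

```python
# Hypothetical per-recording keyword lines, in the style of William's annotations:
annotations = {
    "2015-03-01-dharma-talk.mp3": "mindfulness, attention, introspective awareness",
    "2015-03-08-qa.mp3": "dullness, drowsiness, countermeasures",
}

def find_recordings(query, annotations):
    """Case-insensitive substring match over each recording's keyword line."""
    q = query.lower()
    return sorted(name for name, text in annotations.items() if q in text.lower())

matches = find_recordings("dullness", annotations)  # -> ["2015-03-08-qa.mp3"]
```

Anything fancier (ranking, stemming) could be layered on later without changing the annotation format.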

    3. Transcribe the audios to text and add full-text search. This could be done with automation such as Dragon Speaks, or manually. There is also a hybrid approach in which a person listens to an audio on headphones and repeats the talk into her own microphone, which then produces a text file. This sidesteps the problems of poor recording quality and hard-to-hear questions on the audios. Proponents claim there would be less manual editing.

    Pros:
    • Would make these teachings very accessible to students
    • Once the audios are transcribed, the rest of the project would be fairly straightforward to implement

    Cons:
    • A manual transcription would take a lot of effort, and volunteers tend not to stick with this kind of project very long
    • It is not clear how well speech-to-text recognition would work on the existing recordings, even with training on Culadasa’s voice, or how much manual editing would be required
    • A hybrid transcription might not take much longer than listening to the audio, but would still be a lot of work
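Once transcripts exist, however they were produced, full-text search over them is the easy part. A minimal Python sketch of an inverted index, with made-up transcript snippets:

```python
import re
from collections import defaultdict

def build_index(transcripts):
    """Map every word to the set of recordings whose transcript contains it."""
    index = defaultdict(set)
    for name, text in transcripts.items():
        for word in re.findall(r"[a-z']+", text.lower()):
            index[word].add(name)
    return index

# Made-up transcript snippets:
transcripts = {
    "talk-01.mp3": "The breath is the meditation object.",
    "talk-02.mp3": "Questions about the breath and posture.",
}
index = build_index(transcripts)
# index["breath"] lists both recordings; index["posture"] only talk-02
```

A real site would likely use an off-the-shelf search engine, but the principle is the same: the transcription effort is the bottleneck, not the search.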

    Any feedback or additional ideas are welcome.


    Alex M

    Blake, I agree, and would like to add a few things.

    For each recording there can be several types of additional data. They can coexist, though some of them make others somewhat obsolete:

    • William’s annotations for the whole recording
    • Timestamps with a short summary and tags. I think finding questions in a recording would be easy if the waveform is displayed visually: the sound level of questions is usually quite low, and there are pauses around them. Though Culadasa often says more than the question alone would suggest.
    • Machine-generated transcription without any manual correction. This is especially useful if it includes timestamps at sentence- or word-level granularity. It can power full-text search even with a 20% error rate, though it would not be pleasant to read; the user would still have to listen to the audio.
    • Manual transcription, either fully manual or machine transcription with manual correction. Probably the best option, especially if it has timestamps for those who would rather listen or watch. Live speech is structured differently from written material, though, and a 1:1 transcription may read a bit strangely; I don’t know if that matters. So it is worth providing audio along with the transcriptions.
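The waveform idea in the second bullet can be automated to a first approximation: scan the amplitude for stretches whose level stays below some threshold. A rough Python sketch, assuming the audio has already been decoded to a list of sample amplitudes (the window size and threshold here are arbitrary illustrative values):

```python
def find_quiet_spans(samples, window, threshold):
    """Return (start, end) sample indices of runs of consecutive windows whose
    mean absolute amplitude stays below `threshold`: candidate questions/pauses."""
    spans, start = [], None
    for i in range(0, len(samples) - window + 1, window):
        level = sum(abs(s) for s in samples[i:i + window]) / window
        if level < threshold:
            if start is None:
                start = i                 # a quiet region begins
        elif start is not None:
            spans.append((start, i))      # the quiet region ends
            start = None
    if start is not None:
        spans.append((start, len(samples)))
    return spans

# Toy signal: loud talk, then a quiet question, then a loud answer
samples = [0.8] * 100 + [0.05] * 100 + [0.9] * 100
spans = find_quiet_spans(samples, window=50, threshold=0.2)  # -> [(100, 200)]
```

This would only suggest candidate question boundaries; a human would still confirm them and write the summaries and tags.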

    So the first stage may be converting William’s annotations to a suitable format and adding hyperlinks. Then it can be enhanced with machine transcription, manual transcription, or timestamping with annotations. After the first stage the project would already be very helpful; later additions would improve usability.

    Another, unrelated thing that would become possible if all the media is gathered in one place is automatic renaming and retagging of the mp3s using a common scheme, and offering .zip downloads of multiple recordings at once: say, all “teaching retreats” in one zip and “uposatha days” in another, though traffic and storage costs should be considered. (I had to write a script to download and rename most of the recordings from dharmathreasure; there are 500+ of them, and doing it by hand is slow and error-prone.)
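As an illustration of the common-scheme idea, here is one possible normalization rule in Python (the scheme itself is only an example, not a proposal). The loop at the bottom is a dry run that prints the planned renames without touching any files:

```python
import re
from pathlib import Path

def normalized_name(original):
    """One possible scheme: lowercase the stem and collapse every run of
    non-alphanumeric characters into a single hyphen."""
    stem = Path(original).stem.lower()
    slug = re.sub(r"[^a-z0-9]+", "-", stem).strip("-")
    return slug + ".mp3"

# Dry run: show old -> new without renaming anything
for path in sorted(Path(".").glob("*.mp3")):
    print(path.name, "->", normalized_name(path.name))
```

For example, "Teaching Retreat 2014, Day 1.MP3" would become "teaching-retreat-2014-day-1.mp3". Whatever scheme is chosen, a dry run like this makes it easy to review before committing to 500+ renames.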

    I’ll go ahead and try to write a custom site and see how it goes. Currently the only risk is my time, and that’s fine. There is no need to hold off or pause other efforts, since I may fail to produce a satisfactory result for several reasons. If the result turns out to be OK, then the existing annotations/transcriptions/timestamps can be imported.

    • This reply was modified 2 weeks, 1 day ago by  Alex M.
