From: Nelo <ne...@sa...> - 2004-01-22 12:42:14
Ian, AFAIK there is no way to explore and analyse a piece of a sound file in Flash and automatically synchronize it with graphics via scripting. But it is very easy to synchronize graphics and sound directly: you see the sound's time-domain waveform in Flash's timeline, and when you scrub through the timeline (drag the red playhead; the sound's sync must be set to "Stream") you hear the sound at the current frame, so you hear exactly where each word begins and ends. Synchronizing a word with graphics this way takes a few seconds and is just a matter of moving keyframes. I really doubt you'll find a simpler way of automatic synchronization in any other software.

Nelo

----- Original Message -----
From: "Ian Vincent" <vi...@ig...>
To: <ope...@li...>
Sent: Wednesday, January 21, 2004 10:13 PM
Subject: Re: [Openeeg-list] sound file timing

> Joe, interesting, I will look at that app. However, in the longer term I
> don't want to have to do this task manually, whatever the app. I believe
> that it must be possible to auto-generate a set of time keys and
> potentially auto-sync to those. This has to be doable. Currently it takes
> me on average around 15 minutes to set up one single word, by the time I
> do all of the things that I need to do to each keyword. So this is an
> attempt to reduce that by having the time markers auto-generated, then
> having Flash script do the loading and positioning based on that data.
>
> I don't think I can implement this for the Winterbrain presentation (no
> time), but for future documentation tasks that I have planned, it has to
> be the way to go. I am sure that there is someone on this list who can do
> this and is looking for a way to contribute.
>
> Ian
>
> At 01:37 PM 1/21/2004 -0500, you wrote:
> >Hi Ian;
> >
> >On SourceForge look for an app called Audacity. I use this for
> >manipulating large .wav files and mixing to MP3s with binaural tracks.
> >It is cool because it somehow works off the disk instead of stuffing
> >everything in memory.
> >Anyway, you can see the time-domain signal display, and the gaps are
> >visually obvious. There is a time scale along the top, and you can zoom
> >in or out for accuracy and just make notes as you go through the file.
> >
> >Joe
> >
> >Ian Vincent wrote:
> >
> >>Hi
> >>
> >>I am working on converting voice-over scripts about the various aspects
> >>of the OpenEEG project into Flash movie clips, but am finding the work
> >>involved in obtaining the timing information to be laborious and
> >>time-consuming, and wonder if that task could be partially automated.
> >>(I know it can, with the right tool.)
> >>
> >>If you take a speech file and look at it closely, you will see that
> >>there are gaps between the various words. Not always, but often there
> >>are.
> >>
> >>What I am trying to do, and need a hand with, is to synchronise
> >>graphics in my Flash presentation with the words from the sound file.
> >>Currently I have to take the sound file and use Nero Wave Editor to
> >>identify the timing of the various words, and it is one very laborious
> >>process. It is not so bad when there is only a short passage of text,
> >>but if the file gets bigger, and the level of detail greater, then it
> >>becomes a real problem.
> >>
> >>So I got to thinking that it must be possible to have a software tool
> >>that runs through the sound file and identifies any gaps, using some
> >>predefined gap definition, and builds a data file that contains the
> >>start and stop times of those gaps. Of course it would not only be
> >>identifying the gaps but also the soundbite that is sandwiched between
> >>the gaps.
> >>
> >>From this information it should be possible to synchronise a Flash
> >>movie clip. So what I need is a tool that runs through the file looking
> >>for gaps and, when it finds one, exports the times to a simple file.
> >>How to identify a gap is another issue, but say anything where the
> >>average signal level stays below ??db for ??msec.
> >>The file entries are effectively event markers that signal a
> >>transition from soundbite to gap and vice versa.
> >>
> >>Then the question arises as to how to go about doing this. It is
> >>obviously beyond my software ability, but I wonder if something like
> >>LabVIEW might be useful for this.
> >>
> >>I hope someone with a little time and the right knowledge might be
> >>able to cobble something together.
> >>
> >>Thanks
> >>Ian
> >
> >-------------------------------------------------------
> >The SF.Net email is sponsored by EclipseCon 2004
> >Premiere Conference on Open Tools Development and Integration
> >See the breadth of Eclipse activity. February 3-5 in Anaheim, CA.
> >http://www.eclipsecon.org/osdn
> >_______________________________________________
> >Openeeg-list mailing list
> >Ope...@li...
> >https://lists.sourceforge.net/lists/listinfo/openeeg-list
> >Go to the above address to change your
> >subscription options, e.g unsubscribe.
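[Archive note: the gap-detector Ian describes — flag any stretch where the signal stays below some dB threshold for some minimum duration, and export the start/stop times as event markers — can be sketched in a few lines of Python. This is only an illustrative sketch, not from the thread: the -40 dB threshold, 200 ms minimum gap, and 10 ms analysis window are placeholder values standing in for the "??db" and "??msec" figures Ian deliberately left open, and the function assumes a mono 16-bit PCM WAV file.]

```python
import math
import struct
import wave

def find_gaps(path, threshold_db=-40.0, min_gap_ms=200, window_ms=10):
    """Scan a mono 16-bit PCM WAV file and return (start_sec, end_sec)
    pairs for every gap: runs of analysis windows whose RMS level stays
    below threshold_db for at least min_gap_ms.  All three parameter
    defaults are placeholder assumptions, not values from the thread."""
    with wave.open(path, "rb") as w:
        if w.getnchannels() != 1 or w.getsampwidth() != 2:
            raise ValueError("expected a mono 16-bit PCM WAV file")
        rate = w.getframerate()
        frames = w.readframes(w.getnframes())
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)

    win = max(1, rate * window_ms // 1000)   # samples per analysis window
    full_scale = 32768.0                     # 16-bit full-scale amplitude
    gaps, gap_start = [], None

    for i in range(0, len(samples), win):
        chunk = samples[i:i + win]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        # Level in dB relative to full scale; silence maps to a floor value.
        db = 20 * math.log10(rms / full_scale) if rms > 0 else -120.0
        t = i / rate
        if db < threshold_db:
            if gap_start is None:
                gap_start = t                # a quiet run begins here
        elif gap_start is not None:
            if (t - gap_start) * 1000 >= min_gap_ms:
                gaps.append((gap_start, t))  # long enough to count as a gap
            gap_start = None

    # Close out a gap that runs to the end of the file.
    end = len(samples) / rate
    if gap_start is not None and (end - gap_start) * 1000 >= min_gap_ms:
        gaps.append((gap_start, end))
    return gaps
```

The returned (start, end) pairs are exactly the "event markers" of the post; writing them out one per line would give a simple data file for a Flash script to load and position against.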