From: nitin a. <nit...@gm...> - 2010-06-17 02:13:04
Hello,

Your description is exactly what I should have said. :-) Yes, in the event that the file needs to be re-digitized, I would want some reasonable level of confidence that I have a File #2 that syncs with the first file, File #1.

Let's say File #1 is hypothetically destroyed and/or corrupt, and the backups are suffering from the same misfortune. If the descriptive metadata (essentially the cataloging information XML file) is still intact, that cataloging info would refer to durations and start/stop times of regions within File #1. Without peak position, etc., that cataloging info is almost surely invalid for File #2. Hence the desire to collect more info and try to "work backwards" to come up with more or less the same file from start to finish.

Throw in the possibility that the audio is an oral history interview and transcripts have been synced with the audio, and it becomes even more critical to have the new File #2 sync to the old File #1; otherwise even more work has to be redone, i.e. the transcript has to be synced again to File #2. Leveling File #2 the same as File #1 would be nice, too, but I think the sync issue is far more important.

Most digital libraries record technical info about the audio file, trying to document enough data so that the container (WAV) can hopefully be reopened if it's no longer a sustained format one day, but there seems to be little effort put into collecting technical data about the audio within the container itself. That is to say, the technical data typically collected would be identical (aside from file name and file date differences) for two files of exactly 10 minutes each that are both, say, 24/96 stereo WAVs. Having fuller audio statistics would help make that technical data more unique per file, in addition to helping with the sync issue above.

ps: sorry about top posting. I never got the first messages to this thread in my email and so couldn't reply. Hope I got it right this time.

peace,
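The kind of per-file audio statistics described above (duration, peak level, RMS level) could be gathered with very little code. A minimal Python sketch, using only the standard-library `wave` and `audioop` modules; the generated test tone is a hypothetical stand-in for a real digitized file, not part of any actual archive workflow:

```python
import audioop
import io
import math
import struct
import wave

def audio_stats(wav_bytes):
    """Compute simple per-file statistics that could supplement the
    usual container-level technical metadata."""
    with wave.open(io.BytesIO(wav_bytes), "rb") as w:
        sampwidth = w.getsampwidth()
        rate = w.getframerate()
        nframes = w.getnframes()
        frames = w.readframes(nframes)
    return {
        "duration_s": nframes / rate,           # exact length in seconds
        "peak": audioop.max(frames, sampwidth),  # max absolute sample value
        "rms": audioop.rms(frames, sampwidth),   # overall RMS level
    }

# Hypothetical input: a 1-second 440 Hz mono 16-bit tone at 8 kHz,
# built in memory so the example is self-contained.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(8000)
    samples = [int(20000 * math.sin(2 * math.pi * 440 * t / 8000))
               for t in range(8000)]
    w.writeframes(struct.pack("<%dh" % len(samples), *samples))

print(audio_stats(buf.getvalue()))
```

Two re-digitizations of the same source should produce very similar numbers here, while any two different 10-minute 24/96 stereo WAVs will almost certainly differ, which is what makes such stats useful both for telling files apart and for sanity-checking that File #2 lines up with File #1.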