Re: [Audacity-devel] Sample-level time accuracy
From: Martyn S. <mar...@gm...> - 2010-09-30 00:31:22
Hi Roger,

Thanks for your input here, food for thought.

On 25/09/2010 14:26, Roger Dannenberg wrote:
>> This does rather beg the question: Why is mOffset (of a Clip) a
>> double, not a sampleCount?
>
> One reason might be that labels and note track data are not quantized
> like samples.

I have been ignoring Label tracks in this, as I know they should keep
time values as doubles; they are at the 'project' level. Since I haven't
looked into note tracks, I have, by default, been ignoring them in this
investigation as well. I'm really looking at WaveTracks and WaveClips,
probably just WaveClips.

I appreciate the rest of your comments and will consider them further.

TTFN
Martyn

> NoteTracks now use mOffset too. It could be useful for labels to
> represent, for example, interpolated subsample times of signal peaks.
> In signal processing terms, there is nothing that prevents a shift in
> time by a fraction of a sample -- it's computationally expensive and
> generally the wrong thing to do, but suppose you were going to upsample
> a 22K track to 44K anyway. An mOffset equivalent to 0.5 samples at 22K
> would just mean that you insert an initial zero sample after upsampling
> to 44K.
>
> It would be good to formulate some invariants or principles that answer
> questions like this (and I guess this is really the question). E.g. in
> Nyquist, the design principle is the following: the start time in the
> environment is computed as a double representing continuous time in
> seconds. When a behavior (returning a sound) is evaluated within an
> environment, the samples generated by the behavior are shifted in time
> to the nearest sample time based on the sound's sample rate. If the
> behavior is, for example, a sequence of other behaviors, the start/end
> times of the behaviors are maintained as un-quantized doubles to avoid
> the accumulation of rounding errors. A consequence is that you can
> splice together N grains, each with a specified duration D, and the
> resulting sound will have round(N*D*samplerate) samples, even if
> D*sample_rate is not an integer. I won't promise this is correctly
> implemented everywhere, but at least there's a principle or
> specification to decide if the implementation is correct or not.
>
> The analogy in Audacity, I think, would be that audio clips start at
> arbitrary times. When sound is rendered to a file or for playback, clips
> are resampled as necessary, and the resulting sample sequence is shifted
> +/- 0.5 sample periods to conform to the sample times of the output
> stream.
>
> -Roger
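
To make the Nyquist rounding principle above concrete, here is a minimal
C++ sketch. This is an illustration only, not Nyquist or Audacity code;
the helper ToSample, the rate, and the grain duration are invented for
the example. It compares rounding each grain's duration independently
against keeping grain boundaries as un-quantized doubles and rounding
only when placing samples.

    #include <cmath>
    #include <cstdio>

    // Hypothetical helper: map a continuous time in seconds to the
    // nearest sample index at a given rate.
    long long ToSample(double seconds, double rate)
    {
        return std::llround(seconds * rate);
    }

    int main()
    {
        const double rate = 44100.0;
        const double D = 0.0101;   // grain duration; D * rate = 445.41, not an integer
        const int N = 1000;

        // Rounding each grain's duration independently: the error accumulates.
        long long perGrain = 0;
        for (int i = 0; i < N; ++i)
            perGrain += std::llround(D * rate);          // 1000 * 445 = 445000

        // Keeping boundaries as doubles: grain i occupies
        // [ToSample(i*D), ToSample((i+1)*D)), so the total is round(N*D*rate).
        long long withDoubles = ToSample(N * D, rate);   // 445410

        std::printf("per-grain rounding: %lld samples\n", perGrain);
        std::printf("double boundaries:  %lld samples\n", withDoubles);
    }

With per-grain rounding the splice comes out 410 samples (about 9 ms)
short; with double boundaries the total matches round(N*D*samplerate).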
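
And a similarly hedged sketch of the Audacity analogy Roger describes:
a clip keeps its start time as a double (as a WaveClip's mOffset does),
and only when rendering to an output stream is that time snapped to the
nearest output sample, i.e. shifted by at most half a sample period. The
struct and helper names here are invented for illustration, not actual
Audacity API.

    #include <cmath>

    struct ClipSketch {
        double offsetSeconds;   // continuous start time, analogous to mOffset
        double rate;            // the clip's own sample rate
    };

    // Hypothetical helper: the output sample index where the clip's first
    // sample lands; the implied shift is within +/- 0.5 output sample periods.
    long long StartSampleInOutput(const ClipSketch& clip, double outputRate)
    {
        return std::llround(clip.offsetSeconds * outputRate);
    }

    // Roger's 22K -> 44K example: an offset of half a sample at 22050 Hz
    // (0.5 / 22050 s) is exactly one whole sample at 44100 Hz, so rendering
    // at 44100 Hz starts the clip one sample later -- an initial zero sample.
    // StartSampleInOutput({0.5 / 22050, 22050}, 44100) == 1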