Thread: [Audacity-devel] Sample-level time accuracy
From: Martyn S. <mar...@gm...> - 2010-09-12 23:47:03
Hi there,

I have recently been looking into sample-level accuracy of envelopes, clips and the like, and made some progress, I feel. Feel free to contradict me.

WaveClips have an offset stored as a double in the code, and to 8 decimal places in an AUP file. Either way, it is not always a whole number of samples (especially if you paste into a track at a different rate). And so everything to do with copying/pasting/displaying/rendering WaveClips should be using QUANTIZED_TIME at the appropriate 'rate'?

(Aside: should the definition of QUANTIZED_TIME have (double)(time) instead of (time), in case both the passed-in 'time' and 'rate' are integers? I don't know.)

TTFN
Martyn
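For readers without the source open, here is a minimal sketch of what such a quantization macro plausibly looks like. The exact definition lives in the Audacity headers and may differ in detail, so treat this as illustrative only; the (double) cast is the one Martyn's aside proposes.

   #include <cmath>   // for floor()

   // Hedged sketch of a round-to-nearest-sample macro: convert a time in
   // seconds to a whole number of samples at 'rate', then back to seconds.
   // Casting 'time' to double up front keeps the arithmetic in floating
   // point even when both arguments happen to be integer types.
   #define QUANTIZED_TIME(time, rate) \
       (floor(((double)(time)) * (rate) + 0.5) / (rate))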
From: Al D. <bus...@gm...> - 2010-09-13 01:38:53
On Sunday, September 12, 2010 16:46:54 Martyn Shaw wrote:
> I have recently been looking into sample-level accuracy of
> envelopes, clips and the like, and made some progress, I feel.
> Feel free to contradict me.

Thanks for working on this. I've liked what I've seen from your changes, though I haven't spent much time testing them. For what it's worth, I've read a lot of it, and it makes sense to me.

> WaveClips have an offset stored as a double in the code, and to 8
> decimal places in an AUP file. Either way, it is not always a whole
> number of samples (especially if you paste into a track at a different
> rate). And so everything to do with copying/pasting/displaying/rendering
> WaveClips should be using QUANTIZED_TIME at the appropriate 'rate'?

Are we going to try to guarantee that a clip's offset falls "at" (very close to) a sample boundary for the track's sample rate? If so, then we could use QUANTIZED_TIME for all the operations and be pretty sure it was the right thing to do.

> (Aside: should the definition of QUANTIZED_TIME have (double)(time)
> instead of (time), in case both the passed-in 'time' and 'rate' are
> integers? I don't know.)

Can't hurt.

- Al
From: Vaughan J. <va...@au...> - 2010-09-13 18:53:14
On 9/12/2010 6:38 PM, Al Dimond wrote:
> <snip>
>> (Aside: should the definition of QUANTIZED_TIME have (double)(time)
>> instead of (time), in case both the passed-in 'time' and 'rate' are
>> integers? I don't know.)
>
> Can't hurt.

+1.

A shortcoming of using macros is that we can't strongly type the parameters, so generally it's up to the developer to do the type-cast at the usage site; casting in the macro definition ensures it.

- V
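To make the hazard concrete, here is a small demonstration built on the hypothetical macro sketch above (so it illustrates the class of bug, not necessarily the shipped macro): without the cast, an int * int multiplication runs first and can overflow long before anything is promoted to double.

   #include <cmath>
   #include <cstdio>

   // Without the cast, (time) * (rate) is evaluated as int * int, which
   // overflows a 32-bit int for long tracks (undefined behaviour) before
   // the + 0.5 promotes anything to double.
   #define QUANTIZED_TIME_NOCAST(time, rate) \
       (floor((time) * (rate) + 0.5) / (rate))
   #define QUANTIZED_TIME(time, rate) \
       (floor(((double)(time)) * (rate) + 0.5) / (rate))

   int main()
   {
       int time = 100000;   // 100,000 seconds of audio, as an int
       int rate = 44100;    // samples per second, as an int

       // 100000 * 44100 = 4.41e9, which does not fit in 32 bits.
       printf("no cast: %f\n", QUANTIZED_TIME_NOCAST(time, rate)); // garbage
       printf("cast   : %f\n", QUANTIZED_TIME(time, rate));        // 100000.0
       return 0;
   }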
From: Martyn S. <mar...@gm...> - 2010-09-14 00:30:51
Committed.

On 13/09/2010 19:53, Vaughan Johnson wrote:
> <snip>
> A shortcoming of using macros is that we can't strongly type the
> parameters, so generally it's up to the developer to do the type-cast
> at the usage site; casting in the macro definition ensures it.
From: Martyn S. <mar...@gm...> - 2010-09-14 00:25:29
On 13/09/2010 02:38, Al Dimond wrote:
> Thanks for working on this. I've liked what I've seen from your
> changes, though I haven't spent much time testing them. For what it's
> worth, I've read a lot of it, and it makes sense to me.

Ta

> Are we going to try to guarantee that a clip's offset falls "at" (very
> close to) a sample boundary for the track's sample rate? If so, then
> we could use QUANTIZED_TIME for all the operations and be pretty sure
> it was the right thing to do.

So use QUANTIZED_TIME to round every t in every method that uses a passed-in time in WaveTrack? And then the same for WaveClip? It sounds like a plan, but is it reasonable? I'm thinking so.

Scanning down WaveTrack.h, that would be (first lines only):

   virtual void SetOffset (double o);
   virtual bool Cut (double t0, double t1, Track **dest);
   virtual bool Copy (double t0, double t1, Track **dest);
   virtual bool Clear(double t0, double t1);
   virtual bool Paste(double t0, Track *src);
   virtual bool ClearAndPaste(double t0, double t1,
   virtual bool Silence(double t0, double t1);
   virtual bool InsertSilence(double t, double len);
   virtual bool SplitAt(double t);
   virtual bool Split( double t0, double t1 );
   virtual bool CutAndAddCutLine(double t0, double t1, Track **dest);
   virtual bool ClearAndAddCutLine(double t0, double t1);
   virtual bool SplitCut (double t0, double t1, Track **dest);
   virtual bool SplitDelete(double t0, double t1);
   virtual bool Join (double t0, double t1);
   virtual bool Disjoin (double t0, double t1);
   virtual bool Trim (double t0, double t1);
   bool HandleClear(double t0, double t1, bool addCutLines, bool
   virtual bool SyncLockAdjust(double oldT1, double newT1);
   bool IsEmpty(double t0, double t1);

and we perhaps have to take care with:

   void GetEnvelopeValues(double *buffer, int bufferLen,
   bool GetMinMax(float *min, float *max,
   bool GetRMS(float *rms, double t0, double t1);
   bool CanOffsetClip(WaveClip* clip, double amount, double
   bool ExpandCutLine(double cutLinePosition, double* cutlineStart
   bool RemoveCutLine(double cutLinePosition);

Similarly in WaveClip:

   void SetOffset(double offset);
   void Offset(double delta) { SetOffset(GetOffset() + delta); }
   bool WithinClip(double t) const;
   bool BeforeClip(double t) const;
   bool AfterClip(double t) const;
   bool CreateFromCopy(double t0, double t1, WaveClip* other);
   bool GetWaveDisplay(float *min, float *max, float *rms, int* bl,
   bool GetSpectrogram(float *buffer, sampleCount *where,
   bool GetMinMax(float *min, float *max, double t0, double t1);
   bool GetRMS(float *rms, double t0, double t1);
   bool Clear(double t0, double t1);
   bool ClearAndAddCutLine(double t0, double t1);
   bool Paste(double t0, WaveClip* other);
   bool InsertSilence(double t, double len);
   bool FindCutLine(double cutLinePosition,
   bool ExpandCutLine(double cutLinePosition);
   bool RemoveCutLine(double cutLinePosition);
   void OffsetCutLines(double t0, double len);

I'm guessing that there shouldn't be other places to worry about this; for example, when a project gets read in it presumably uses these methods to place tracks/clips into the project. Maybe in Effects?

So what do you think?
TTFN
Martyn
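The plan above amounts to a short guard at the top of each time-taking method. A minimal sketch, with a stub class standing in for WaveTrack (GetRate() exists in the real sources; everything else here is illustrative):

   #include <cmath>

   #define QUANTIZED_TIME(time, rate) \
       (floor(((double)(time)) * (rate) + 0.5) / (rate))   // sketch, as above

   // Minimal stand-in for WaveTrack, just enough to show the pattern.
   class WaveTrack
   {
   public:
       explicit WaveTrack(double rate) : mRate(rate) {}
       double GetRate() const { return mRate; }

       // The proposed change: snap passed-in times to this track's sample
       // grid on entry, then run the existing body on the snapped values.
       bool Clear(double t0, double t1)
       {
           t0 = QUANTIZED_TIME(t0, GetRate());
           t1 = QUANTIZED_TIME(t1, GetRate());
           // ... existing clearing logic, unchanged, now sample-aligned ...
           return t0 <= t1;
       }

   private:
       double mRate;
   };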
From: Al D. <bus...@gm...> - 2010-09-14 06:49:28
On Monday, September 13, 2010 17:25:23 Martyn Shaw wrote:
> <snip>
> So use QUANTIZED_TIME to round every t in every method that uses a
> passed-in time in WaveTrack? And then the same for WaveClip? It
> sounds like a plan, but is it reasonable? I'm thinking so.

I haven't really thought this through very well... my biggest concern with using something like QUANTIZED_TIME to get "sample accuracy" is that if clips aren't placed at quantized times it's wrong for every sample. So the places where I'm concerned with rounding to QUANTIZED_TIME are any places where clips are directly moved. If clips are directly moved in all those places, then I guess they'd all need to have certain times quantized.

- Al
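Al's concern can be put in numbers (invented for illustration): if a clip itself sits at a non-integral sample offset, snapping the query times does nothing, because the misalignment is a property of the clip's position, not of the query.

   #include <cstdio>

   int main()
   {
       double rate = 44100.0;
       double clipOffset = 1.5 / rate;   // clip parked half a sample off-grid

       // Even perfectly quantized query times land mid-sample relative to
       // the clip's own samples.
       for (int n = 0; n < 3; ++n) {
           double queryTime = n / rate;                        // on the grid
           double posInClip = (queryTime - clipOffset) * rate; // clip samples
           printf("track sample %d -> clip sample %.1f\n", n, posInClip);
       }
       // Prints -1.5, -0.5, 0.5: never a whole clip sample, no matter how
       // the query times are rounded.
       return 0;
   }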
From: Vaughan J. <va...@au...> - 2010-09-14 21:37:37
That's sure a lot of places to add QUANTIZED_TIME. Does it make sense to add class member vars in some cases, to also store the quantized values, rather than calculate repeatedly with QUANTIZED_TIME? For example, ViewInfo::sel0 and ViewInfo::quantizedSel0, AudioIO::mT1 and AudioIO::mQuantizedT1? Or is it better (clearer code and coding ease) to keep time values all as the real values and consistently convert?

- Vaughan

On 9/13/2010 5:25 PM, Martyn Shaw wrote:
> So use QUANTIZED_TIME to round every t in every method that uses a
> passed-in time in WaveTrack? And then the same for WaveClip? It
> sounds like a plan, but is it reasonable? I'm thinking so.
> <snip>
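A sketch of the cached-member idea, with hypothetical names modelled on the suggestion above (quantizedSel0 does not exist in the sources; the point is keeping the cache in sync through a single setter):

   #include <cmath>

   #define QUANTIZED_TIME(time, rate) \
       (floor(((double)(time)) * (rate) + 0.5) / (rate))

   // Hypothetical caching variant: store the real time and its quantized
   // twin side by side, updating both in one place so they cannot drift.
   class ViewInfo
   {
   public:
       void SetSel0(double t, double rate)
       {
           sel0 = t;                                 // real value, as before
           quantizedSel0 = QUANTIZED_TIME(t, rate);  // cached, computed once
       }
       double sel0 = 0.0;
       double quantizedSel0 = 0.0;
   };

The trade-off is the one Martyn raises in reply: the 'right' rate is a per-track property, so a single cached value at this level can only be correct for one rate.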
From: Martyn S. <mar...@gm...> - 2010-09-22 00:06:38
Hi Vaughan

I'm awaiting your response on this one, but I'm sure you have other things on as well.

I'm willing to go through all these methods and check that they do as they should with real time vs samples. I'll put in the minimal number of QUANTIZED_TIME calls that I think are needed, if you don't think it will slow down processing too much. This isn't a sample-by-sample thing, just at the start of each WT/WC method.

I think the problem that I saw last night is probably also related.

TTFN
Martyn

On 15/09/2010 00:31, Martyn Shaw wrote:
> Well there's the problem. Which class? ViewInfo::sel0 is a
> high-level thing (part of Project?) where 'time' is real. It only
> needs to / should be quantised in Tracks and Clips when we use it, and
> when we display 'samples' in the Project (Selection Toolbar with
> 'samples' selected).
>
> Each Track could have a different rate to the project, and to each
> other, so we need to maintain the real values until we actually do
> most of these operations, as we may be doing them in Tracks/Clips of
> different rates.
>
> <snip>
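Why the real value must survive until track level, in numbers (rates and times invented for illustration): the same project time quantizes to different values on tracks of different rates, so no single pre-quantized value can serve both.

   #include <cmath>
   #include <cstdio>

   #define QUANTIZED_TIME(time, rate) \
       (floor(((double)(time)) * (rate) + 0.5) / (rate))

   int main()
   {
       double sel0 = 1.00002;   // a selection boundary in real project time

       // The same real time snaps to different grids on different tracks.
       printf("44100 Hz track: %.9f\n", QUANTIZED_TIME(sel0, 44100.0));
       //  -> 1.000022676 (sample 44101)
       printf(" 8000 Hz track: %.9f\n", QUANTIZED_TIME(sel0, 8000.0));
       //  -> 1.000000000 (sample 8000)
       return 0;
   }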
From: Vaughan J. <va...@au...> - 2010-09-22 00:40:47
Sorry, Martyn. Didn't realize you were waiting on me. My silence meant I didn't have any further comment. :-) I just wanted to raise the idea of storing it where possible, to reduce computation, but I think you have a good grasp of what's going on.

Regarding arbitrary offsets, though, do those need to be maintained for tracks/clips at different rates, e.g., saving a track at a particular rate (especially a non-standard one) and importing it to another project?

- Vaughan

On 9/21/2010 5:06 PM, Martyn Shaw wrote:
> <snip>
From: Martyn S. <mar...@gm...> - 2010-09-14 23:32:03
Well there's the problem. Which class? ViewInfo::sel0 is a high-level thing (part of Project?) where 'time' is real. It only needs to / should be quantised in Tracks and Clips when we use it, and when we display 'samples' in the Project (Selection Toolbar with 'samples' selected).

Each Track could have a different rate to the project, and to each other, so we need to maintain the real values until we actually do most of these operations, as we may be doing them in Tracks/Clips of different rates.

(Sorry, I think I just said the same thing twice.)

QUANTIZED_TIME is only little, and these methods are only used a 'few' times, when we are doing UI-type operations, not when playing audio and/or running time-consuming effects, I think (although I haven't checked this). So any run-time timing considerations shouldn't be an issue.

In response to Al's concern, I think that Clips in a Track should always be at 'sample accuracy' (to double precision) and not allowed to have arbitrary offsets (like they are now(?)). Any reason why they should have arbitrary offsets? I don't think we use that when rendering / playing them.

TTFN
Martyn

On 14/09/2010 22:37, Vaughan Johnson wrote:
> That's sure a lot of places to add QUANTIZED_TIME. Does it make sense
> to add class member vars in some cases, to also store the quantized
> values, rather than calculate repeatedly with QUANTIZED_TIME?
> <snip>
From: Richard A. <ri...@au...> - 2010-09-15 19:26:03
On Wed, 2010-09-15 at 00:31 +0100, Martyn Shaw wrote:
> In response to Al's concern, I think that Clips in a Track should
> always be at 'sample accuracy' (to double precision) and not allowed
> to have arbitrary offsets (like they are now(?)). Any reason why they
> should have arbitrary offsets? I don't think we use that when
> rendering / playing them.

There is only any point in allowing it if it works ...

I have one use case for offsetting audio by less than one sample, which is correcting recordings made on old Sony hardware (PCM F1/501/701), which used a single A-to-D converter running at 88.2 kHz to sample both the Left and Right channels (alternately), and did the same on playback. This was fine if you played on the original hardware, but if you get the data stream out digitally, one channel is 1/2 a sample out of sync, which can have implications if you subsequently sum to mono.

There are ways round this, however, the easiest (crudest) being to upsample to 88.2 kHz and move by one sample. If you are really fussed (and trust your resampler), only do it to one channel and leave the other untouched - that's the best you can do in any case, because the shifted channel has to be resampled to get sub-sample offsets.

The last point is a good reason to make clips always sit on the track's sample points - as otherwise a lot of (largely unnecessary) re-sampling would be needed.

Richard
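The crude correction Richard describes might look like the sketch below. The linear-interpolation upsampler is a placeholder purely to keep the example self-contained; it is not a recommendation, and a real fix would use a proper resampler, as he says.

   #include <vector>

   // Double the rate (44.1 kHz -> 88.2 kHz) by inserting midpoint samples.
   // Stand-in for a real resampler; linear interpolation is the crudest
   // possible choice.
   std::vector<float> UpsampleX2(const std::vector<float>& in)
   {
       std::vector<float> out;
       out.reserve(in.size() * 2);
       for (size_t i = 0; i + 1 < in.size(); ++i) {
           out.push_back(in[i]);
           out.push_back(0.5f * (in[i] + in[i + 1]));  // midpoint sample
       }
       if (!in.empty())
           out.push_back(in.back());
       return out;
   }

   // One 88.2 kHz sample is half a 44.1 kHz sample, so dropping a single
   // upsampled sample from the late channel lines the two channels up.
   std::vector<float> ShiftHalfSample(const std::vector<float>& channel)
   {
       std::vector<float> up = UpsampleX2(channel);
       if (!up.empty())
           up.erase(up.begin());
       return up;   // caller keeps, or downsamples, the result
   }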
From: Martyn S. <mar...@gm...> - 2010-09-22 00:13:48
On 15/09/2010 20:25, Richard Ash wrote:
>> In response to Al's concern, I think that Clips in a Track should
>> always be at 'sample accuracy' (to double precision) and not allowed
>> to have arbitrary offsets (like they are now(?)).
>
> There is only any point in allowing it if it works ...

True enough, and you are implying that it does not currently work properly. So I think we should force people to take the simple approach that you detail.

> <snip>
>
> The last point is a good reason to make clips always sit on the track's
> sample points - as otherwise a lot of (largely unnecessary) re-sampling
> would be needed.

Thanks. I think it's the way to be. We must take care of legacy AUPs as well, though.

TTFN
Martyn
From: Martyn S. <mar...@gm...> - 2010-09-23 00:49:01
For anybody that's still with this...

On 14/09/2010 01:25, Martyn Shaw wrote:
> So use QUANTIZED_TIME to round every t in every method that uses a
> passed-in time in WaveTrack? And then the same for WaveClip? It sounds
> like a plan, but is it reasonable? I'm thinking so.
>
> Scanning down WaveTrack.h, that would be (first lines only):
>
>    virtual void SetOffset (double o);

well, this just calls...

> Similarly in WaveClip:
>
>    void SetOffset(double offset);

and I see that it's WaveClips that actually have an offset, so I figure that I'll fix it there only.

What should we do with overlapping WaveClips in an AUP? It's easy to create them with Audacity and a text editor (just set the <waveclip offset=" attributes to overlapping values), and they are probably 'out there' in legacy files. We draw them OK, but possibly fail elsewhere.

TTFN
Martyn
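An illustrative fragment of the kind of hand-edited project Martyn means. The waveclip element and offset attribute appear in this thread; the surrounding structure is abbreviated and the numbers are invented. The second clip starts half a second into the first, so if the first clip is longer than 0.5 s the two overlap:

   <!-- Hypothetical, abbreviated AUP fragment with overlapping clips. -->
   <wavetrack name="Audio Track" rate="44100">
     <waveclip offset="0.00000000">
       <!-- ... sequence holding ~1 s of samples ... -->
     </waveclip>
     <waveclip offset="0.50000000">
       <!-- ... starts inside the first clip ... -->
     </waveclip>
   </wavetrack>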
From: Vaughan J. <va...@au...> - 2010-09-24 22:03:44
On 9/22/2010 5:48 PM, Martyn Shaw wrote:
> <snip>
> and I see that it's WaveClips that actually have an offset, so I
> figure that I'll fix it there only.

Yes, I think that's right.

Also, I took a bit closer look at your original list in this thread of WaveTrack and WaveClip methods that probably need changing. I think you'll run into lots of cases similar to WaveTrack::SetOffset() and WaveClip::SetOffset(), where many do not need to have the QUANTIZED_TIME change, as the real work is done at a lower level. For example, WaveTrack::Cut() just calls WaveTrack::Copy() and WaveTrack::Clear() (which just calls WaveTrack::HandleClear()), and so the changes need to be in WaveTrack::Copy() and WaveTrack::HandleClear(), not in WaveTrack::Cut(). Of course, those workhorses are the more complicated methods to fix!

> What should we do with overlapping WaveClips in an AUP? It's easy to
> create them with Audacity and a text editor (just set the
> <waveclip offset=" attributes to overlapping values),

I question whether we should support such perversion. AUP files should be written by Audacity, not text editors. :-)

> and they are probably 'out there' in legacy files. We draw them OK,
> but possibly fail elsewhere.

I think legacy files have no notion of clips.

Dominic, have you been following this thread? As the original architect, I think he might have valuable input on this whole thread, so if he doesn't respond here, Martyn, I recommend summarizing it and emailing him directly. Maybe Markus, too, as he did the adaptation to clips. That applies to your question in the "r10691 committed" email about why mOffset (of a Clip) is a double, not a sampleCount. I think your reasoning on all this is right, but there may be something I'm missing.

- Vaughan
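The delegation Vaughan describes, sketched with signatures from the method listing earlier in the thread (HandleClear's fourth parameter name is assumed; the bodies are illustrative stubs, not the real implementations):

   class Track;

   // Cut() owns no time arithmetic of its own, so quantization belongs in
   // the workhorses it delegates to, not in Cut() itself.
   class WaveTrack
   {
   public:
       bool Cut(double t0, double t1, Track **dest)
       {
           // No QUANTIZED_TIME here: just delegate.
           return Copy(t0, t1, dest) && Clear(t0, t1);
       }
       bool Clear(double t0, double t1)
       {
           return HandleClear(t0, t1, false, false);   // thin wrapper too
       }
       bool Copy(double t0, double t1, Track **dest);  // workhorse: quantize here
       bool HandleClear(double t0, double t1,
                        bool addCutLines, bool split);  // workhorse: quantize here
   };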
From: Martyn S. <mar...@gm...> - 2010-09-30 00:20:14
Hi there

On 24/09/2010 23:04, Vaughan Johnson wrote:
>> and I see that it's WaveClips that actually have an offset, so I
>> figure that I'll fix it there only.
>
> Yes, I think that's right.

That's still my conclusion, although I have a nagging doubt. I'm no longer convinced that the last change I made to WaveClip.cpp (r10691) is 'correct' and don't intend on making further changes until I figure that out.

And I've happened across (again) related problems in Envelope (which has been an ongoing issue for years IIRC, but I may now have a handle on it).

> Also, I took a bit closer look at your original list in this thread of
> WaveTrack and WaveClip methods that probably need changing.
> <snip>

This is true, and I meant that I would drill down into those methods to find where QUANTIZED_TIME should be used, and where it isn't needed (which may be most of them in WT, as you say).

> I question whether we should support such perversion. AUP files should
> be written by Audacity, not text editors. :-)

true but...

> I think legacy files have no notion of clips.

By 'legacy' I meant 'any AUP files written by any previous version (including betas)'. We have allowed "random" <waveclip offset="x.xx"> things in AUPs before; we shouldn't fail to give an acceptable result (or at least not a project that will crash) on reading them in the future. I have 'belief' but not 'proof' (I haven't tried) that earlier versions could create a series of clips that don't fit into the time available for true (sample accurate) non-overlapping clips. For example:

   Clip 1: 2 samples, offset 0
   Clip 2: 2 samples, offset 1.5 samples
   Clip 3: 2 samples, offset 3 samples

or something like that, I don't know. Maybe I'll try and create that without r10691.

> Dominic, have you been following this thread?
> <snip>

I want to do that but can't, at present, formulate a sensible summary / question.

TTFN
Martyn
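Quantizing Martyn's example shows why such layouts are unrecoverable: three 2-sample clips at offsets 0, 1.5 and 3 samples pack six samples of audio into five samples of time, so no sample-aligned, non-overlapping placement exists. A small check (macro as sketched earlier):

   #include <cmath>
   #include <cstdio>

   #define QUANTIZED_TIME(time, rate) \
       (floor(((double)(time)) * (rate) + 0.5) / (rate))

   int main()
   {
       double rate = 44100.0;
       // Offsets in seconds for three clips of 2 samples each.
       double offsets[3] = { 0.0 / rate, 1.5 / rate, 3.0 / rate };

       for (int i = 0; i < 3; ++i) {
           double q = QUANTIZED_TIME(offsets[i], rate);
           printf("clip %d: offset %.1f samples -> %.1f samples\n",
                  i + 1, offsets[i] * rate, q * rate);
       }
       // Clip 2 snaps from 1.5 to 2.0 samples and now collides with clip 3
       // (both claim sample 3); rounding down instead would collide with
       // clip 1. Quantization forces a choice between honoring the sample
       // counts and honoring the offsets.
       return 0;
   }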
From: Al D. <bus...@gm...> - 2010-09-30 06:11:18
On Wednesday, September 29, 2010 17:20:09 Martyn Shaw wrote:
> <snip>
> I have 'belief' but not 'proof' (I haven't tried) that earlier versions
> could create a series of clips that don't fit into the time available
> for true (sample accurate) non-overlapping clips.

If tracks like these can currently exist, be loaded without incident, and play without incident, then we could probably even have the same situation but with clip 2 quantized, also without incident. There might be inconsistencies -- some code might honor one clip's value and some code another. But I haven't seen any evidence that it causes big problems for Audacity.

If we are going to start imposing order onto these old projects, there are two choices: try our best to honor the sample counts, or try our best to honor the offsets. To determine which is correct, we have to find out, if Audacity is indeed generating such projects, whether the sample counts or the offsets are in reality closer to correct.

It would be a lot easier to verify project loading, and make sure that track edits make sense *within* WaveTrack functions.

- Al
From: Vaughan J. <va...@au...> - 2010-09-30 21:37:15
|
On 9/29/2010 11:11 PM, Al Dimond wrote: > On Wednesday, September 29, 2010 17:20:09 Martyn Shaw wrote: >> Hi there >> >> On 24/09/2010 23:04, Vaughan Johnson wrote: >>> On 9/22/2010 5:48 PM, Martyn Shaw wrote: >> <snip> >> >>>> and I see that it's WaveClips that actually have an offset, so I >>>> figure that I'll fix it there only. >>> >>> Yes, I think that's right. >> >> That's still my conclusion, although I have a nagging doubt. I'm >> no longer convinced that the last change I made to WaveClip.cpp >> (r10691) is 'correct' and don't intend on making further changes >> until I figure that out. I now think it isn't correct. See below. >> >> And I've happened across (again) related problems in Envelope >> (which has been an ongoing issue for years IIRC, but I may now >> have a handle on). >> >>> Also, I took a bit closer look at your original list in this >>> thread of WaveTrack and WaveClip methods that probably need >>> changing. I think you'll run into lots of cases similar to >>> WaveTrack::SetOffset() and WaveClip::SetOffset(), where many do >>> not need to have the QUANTIZED_TIME change, as the real work is >>> done at a lower level. For example, WaveTrack::Cut() just calls >>> WaveTrack::Copy() and WaveTrack::Clear() (which just calls >>> WaveTrack::HandleClear()), and so the changes need to be in >>> WaveTrack::Copy() and WaveTrack::HandleClear(), not in >>> WaveTrack::Cut(). Of course, those workhorses are the more >>> complicated methods to fix! >> >> This is true, and I meant that I would drill down into those >> methods to find where QUANTIZED_TIME should be used, and where it >> isn't needed (which may be most of them in WT, as you say). >> >>>> What should we do with overlapping waveclips in an aup??? >>>> Easily possible to create with Audacity and a text editor (just >>>> set the <waveclip offset=" to overlapping values), >>> >>> I question whether we should support such perversion. AUP files >>> should be written by Audacity, not text editors. :-) >> >> true but... >> >>>> and probably 'out there' in >>>> legacy files. We draw them OK, but possibly fail elsewhere. >>> >>> I think legacy files have no notion of clips. >> >> By 'legacy' I meant 'any AUP files written by any previous versions >> (including betas)'. We have allowed "random" <waveclip >> offset="x.xx"> things in AUPs before, we shouldn't fail to give an >> acceptable result (or at least not a project that will crash) on >> reading them in the future. I have 'belief' but not 'proof' (I >> haven't tried) that earlier versions could create a series of >> clips that don't fit into the time available for true (sample >> accurate) non-overlapping clips. For example: >> Clip 1: 2 samples, offset 0. >> Clip 2: 2 samples, offset 1.5 samples >> Clip 3: 2 samples, offset 3 samples >> or something like that, I don't know. Maybe I'll try and create >> that without r10691. >> > > If tracks like these can currently exist, be loaded without incident, > and play without incident, then we could probably even have the same > situation but with clip 2 quantized, also without incident. There > might be inconsistencies -- some code might honor one clip's value and > some code another. But I haven't seen any evidence that it causes big > problems for Audacity. > > If we are going to start imposing order onto these old projects, there > are two choices: try our best to honor the sample counts, or try our > best to honor the offsets. 
> To determine which is correct, we have to find, if Audacity is
> indeed generating such projects, whether the sample counts or the
> offsets are in reality closer to correct.
>
> It would be a lot easier to verify project loading, and make sure
> that track edits make sense *within* WaveTrack functions.

I agree with what I think Al's saying. :-) That is, I think we should
store all the time values in AUP files as done previously, and use
QUANTIZED_TIME only at run time. And since mOffset gets written to the
file, it shouldn't be set to a QUANTIZED_TIME value. Rather, the places
where it's used should be QUANTIZED_TIME'd, or we should store another
member variable that doesn't get written to the AUP, mQuantizedOffset.

If we do need to change the format of AUP files, we probably should
change AUDACITY_FILE_FORMAT_VERSION (currently "1.3.0") in Audacity.h,
so XML reading methods can know what they're dealing with. *And*, I
think, fix the other places where its value is currently hard-coded,
e.g., line 2836 in AudacityProject::WriteXMLHeader() in Project.cpp:

   wxString dtdName = wxT("-//audacityproject-1.3.0//DTD//EN");

Can anybody confirm or deny those are supposed to match? I'll go ahead
and fix it if no denials.

- Vaughan

> <snip>
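
PS: a rough sketch of the mQuantizedOffset idea, just so we're talking
about the same thing. mQuantizedOffset and GetQuantizedOffset() are
proposed names only, not current code:

   // Sketch: mOffset remains the value read from / written to the
   // AUP; the quantized value is derived at run time and never saved.
   class WaveClip {
   public:
      void SetOffset(double offset)
      {
         mOffset = offset;                                 // goes to the AUP
         mQuantizedOffset = QUANTIZED_TIME(offset, mRate); // run time only
      }
      double GetOffset() const          { return mOffset; }
      double GetQuantizedOffset() const { return mQuantizedOffset; }

   private:
      double mOffset;          // written by WriteXML() with 8 dps
      double mQuantizedOffset; // derived, never written to the file
      int    mRate;
   };
|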
From: ChengGwo <ste...@co...> - 2010-10-03 06:51:25
|
I don't think you can achieve sample-level time accuracy unless
sample-rate dependencies are deferred until the last computation made.
I think QUANTIZED_TIME should be computed only once, in one place, and
last. In other words, fractions of a second should always be computed
without reference to the sample rate. Just as in Nyquist, treat the
wave as a theoretically continuous wave rather than a digitally
sampled one. Otherwise, of course, you will run into accumulated
rounding errors.

Also, it is nerve-wracking to think that multiple methods must be
changed in the same way. That defies conventional wisdom about good
programming style, does it not? The methods shouldn't depend on things
like this in the first place.
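
For example, here is a self-contained sketch, assuming a
round-to-nearest quantization of the kind QUANTIZED_TIME does (the
exact macro may differ):

   // Snapping every intermediate result to the sample grid lets the
   // error accumulate without bound; snapping only the final result
   // keeps it within half a sample.
   #include <cmath>
   #include <cstdio>

   int main()
   {
      const double rate = 44100.0;
      const double step = 0.4 / rate;   // 0.4 of a sample, in seconds

      double perStep = 0.0, exact = 0.0;
      for (int i = 0; i < 10; ++i) {
         perStep = floor((perStep + step) * rate + 0.5) / rate; // snap now
         exact += step;                                         // snap later
      }
      double once = floor(exact * rate + 0.5) / rate;

      // perStep never advances: each 0.4-sample step rounds back down.
      // 'once' lands within half a sample of the true 4-sample total.
      printf("per-step: %g s, once: %g s, exact: %g s\n",
             perStep, once, exact);
      return 0;
   }
|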
From: Michael C. <mc...@gm...> - 2010-10-05 13:45:41
|
Hi All,

I read this thread on two previous occasions and wanted to reply but
didn't feel like I could. I think that's because it's hard to
visualize all the cases where we mix integer and double time
representations. After the third time through I thought I should just
spit out my thoughts.

Deferring quantized time might be intuitive, but without thinking it
through it might not cover some use cases. Do all cases work with
this? For example, it could require that we never save a quantized
value: when we are dealing with a duration, we may want to use a value
quantized to the destination track. Maybe someone has a clue on this?

I really prefer just having all tracks quantized to some integral time
base (the sample rate), and having "floating point" time be only a
function of the view, when the user needs to interact.
Conversion/translation between two tracks would then require a
resampling that would always end up on the same integer sample on all
platforms. Labels and other tracks might currently have a
time-as-double representation, but it wouldn't be hard to make them
integral at the project rate or finer, and I don't think this would
affect the user in any way.

On the other hand, the use cases mentioned by Richard and Roger make
me realize there are some things I'm not very familiar with. When we
currently have a 0.5-sample offset, do we actually do anything special
for playback or mixdown? I don't think we do, but if so I would like
to be educated, if someone knows where this happens in the code.

Michael
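
PS: to illustrate what I mean (made-up names; a sketch of the idea,
not a proposal for actual code):

   #include <cmath>
   #include <cstdint>

   typedef int64_t sampleCount;   // stand-in for Audacity's sampleCount

   // Positions stored as integers at the track's rate -- no doubles.
   struct ClipPosition {
      sampleCount offset;
      sampleCount length;
   };

   // Doubles appear only at the view boundary, when the user interacts.
   double ToViewTime(sampleCount pos, double rate)
   {
      return pos / rate;
   }

   // Translating a position between tracks at different rates: one
   // rounding step (half away from zero), so every platform should
   // land on the same integer sample.
   sampleCount ConvertRate(sampleCount pos, double srcRate, double dstRate)
   {
      return (sampleCount)std::llround((double)pos * dstRate / srcRate);
   }
|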