Re: [Audacity-devel] refactoring
From: Dominic M. <do...@mi...> - 2003-10-29 02:40:59
Hi Josh,

Looks like you've made some great progress. I'm still behind on catching up on your Ogg progress; I've been working on 1.2.0 issues and playing with Mac OS X 10.3 "Panther" and Xcode (both of which totally rock, BTW).

On Oct 25, 2003, at 10:34 PM, Joshua Haberman wrote:

> I've been working a lot today on the beginning stages of what will
> become the long-discussed Track rewrite. I don't have anything ready
> to show yet, but I wanted to throw some ideas out that I've been
> thinking about through these initial changes.
>
> My first task has been reworking TrackList. The two main changes are
> that it now uses wxList, so we don't have to maintain our own linked
> list implementation, and we get the benefit of more methods already
> being written.

That sounds good. Before it's too late, what would you think about making it a dynamic array instead of a list? Conceptually a list seems okay, but realistically certain operations, like move up/down, are a pain to implement for lists.

> The more beneficial change is a new type-safe interface for getting
> specific kinds of tracks from the TrackList:
>
>   WX_DEFINE_ARRAY(WaveTrack*, WaveTrackArray)
>   WX_DEFINE_ARRAY(NoteTrack*, NoteTrackArray)
>   WX_DEFINE_ARRAY(LabelTrack*, LabelTrackArray)
>
>   WaveTrackArray GetWaveTracks(bool selectedOnly);
>   NoteTrackArray GetNoteTracks(bool selectedOnly);
>   LabelTrackArray GetLabelTracks(bool selectedOnly);

Those are great. The fact that all of these methods return arrays makes me even more sure that it would be better if TrackList were really TrackArray.

> Not only does this make code simpler, since you don't have to iterate
> over a list of tracks checking t->GetKind(), but it's safer because
> you no longer have to statically downcast from Track* to the more
> specific type.
> This will actually become crucial once we create the new hierarchy we
> have planned, because for some reason (that I don't yet understand)
> the compiler won't let me statically downcast from a base to a
> virtually derived class. As a refresher, we are planning to virtually
> derive from Track so that GuiTrack and WaveTrack both share the same
> copy of Track.
>
> You may wonder how we can downcast at all if not statically. Well, the
> answer is to introduce dynamic_cast. We haven't used any RTTI to this
> point (AFAIK), but I think we are beginning to need it. I know it's
> supported by both VC++ and g++, so there's no reason to avoid it that
> I know of. In reality we were already using a sort of ad-hoc RTTI by
> having the virtual GetKind() method for GetTrack; now we're just
> letting the compiler do the work for us, and benefiting from the
> extra safety.
>
> Back to TrackList for a second: the intention is that TrackList (as
> opposed to {Wave,Note,Label}TrackArray) is still the primary structure
> for passing around lists of tracks, but that it is easy to select the
> more specific tracks at any time. For example, I think passing a
> WaveTrackArray to AudioIO is a bad idea, because (for example) AudioIO
> may want different kinds of tracks as well. It already wants the
> TimeTrack, and in the future it may want NoteTracks. So prefer
> TrackList for parameters to functions, using
> {Wave,Note,Label}TrackArray temporarily in localized algorithms.
>
> ---------
>
> A shortcoming of our current approach that has been clear for a while
> is the way we deal with stereo tracks. Making all code that deals with
> lists of tracks aware of the "linked" flag is not great abstraction,
> and it makes for some ugly code. We've discussed having a StereoTrack
> that simply contains two WaveTracks and calls methods on both of them.
> However, I am beginning to think this is the wrong approach.
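To make the virtual-derivation point concrete, here's a rough sketch of how I picture it. Plain C++ with std::vector standing in for wxList/WX_DEFINE_ARRAY, and all the names simplified, so treat this as illustrative, not as the actual interface:

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Illustrative only: std::vector stands in for the wx containers,
// and these are stripped-down versions of the real classes.
struct Track {
    virtual ~Track() {}
    bool selected = false;
};

// The planned virtual derivation: GuiTrack and WaveTrack both share
// one copy of the Track base inside a GuiWaveTrack.
struct GuiTrack : virtual Track {};
struct WaveTrack : virtual Track {};
struct GuiWaveTrack : GuiTrack, WaveTrack {};

struct TrackArray {
    std::vector<Track*> tracks;

    // Type-safe accessor in the spirit of GetWaveTracks(bool).
    // Note: static_cast<WaveTrack*>(t) would not even compile here,
    // because the offset from a virtual base is only known at run
    // time; dynamic_cast consults RTTI and returns nullptr on mismatch.
    std::vector<WaveTrack*> GetWaveTracks(bool selectedOnly) const {
        std::vector<WaveTrack*> result;
        for (Track* t : tracks)
            if (WaveTrack* w = dynamic_cast<WaveTrack*>(t))
                if (!selectedOnly || w->selected)
                    result.push_back(w);
        return result;
    }

    // Move up/down is a pain on a linked list but a cheap swap on an
    // array, which is part of why I'd prefer a dynamic array.
    void MoveUp(std::size_t i) {
        if (i > 0 && i < tracks.size())
            std::swap(tracks[i - 1], tracks[i]);
    }
};
```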
> First of all, it introduces more complexity into what will already be
> a plenty complex inheritance graph. Also, it just doesn't make sense
> based on the way we look at tracks:
>
> a track:
>  * has one label area and set of controls to control it
>  * is the smallest selectable entity
>  * is the unit of exchange between core functions of the
>    infrastructure
>
> Based on these characteristics, it doesn't make any sense to have one
> track "contain" another. For example, the two GuiWaveTracks contained
> by a GuiStereoTrack are both going to want to draw the label and
> receive label mouse clicks, because that's what GuiWaveTracks do.
> What I think makes sense is to have only one kind of WaveTrack, which
> can contain one (mono), two (stereo) or more complex arrangements
> (5.1) of channels. Each channel is then a sequence.
>
> While we're at it, we could get even more ambitious and make Audacity
> region-capable. Then the model would be:
>
>   WaveTrack HAS MANY
>     Channel(s) HAVE MANY
>       Region(s) at arbitrary (but nonoverlapping) offsets
>
> This would make each region a Sequence. How fortunate that Sequence
> has already been separated from WaveTrack! :)

I like that idea; it would keep WaveTrack simpler. I also think there's no question that we should move towards support for regions. For the short term, I'd suggest that we have one region per channel and get all of that logic working (but go ahead and create data structures for multiple regions per channel; just don't worry about the UI or the logic at this time).

> To consider for a second the effect this would have on other parts of
> the program:
>
> AudioIO would get a list of WaveTracks to play, and would break them
> apart into their channels to play them. It already has to do about
> the same amount of work, because it has to iterate over the list and
> make decisions based on whether each track is a LeftChannel,
> RightChannel or MonoChannel and whether or not it is linked.

Agreed.
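As a sketch, the containment model you propose might come out as data structures like the following (again, placeholder names, not the real classes):

```cpp
#include <cassert>
#include <vector>

// Sketch of the proposed model: a WaveTrack owns its channels
// (1 = mono, 2 = stereo, 6 = 5.1), each channel owns regions at
// non-overlapping offsets, and each region is backed by a Sequence.
struct Sequence {};  // already separated from WaveTrack

struct Region {
    double offset;     // start time of the region within its channel
    Sequence samples;  // each region is a Sequence
};

struct Channel {
    std::vector<Region> regions;  // kept sorted and non-overlapping
};

struct WaveTrack {
    std::vector<Channel> channels;

    bool IsMono() const   { return channels.size() == 1; }
    bool IsStereo() const { return channels.size() == 2; }
};
```

For the short term we'd keep regions.size() == 1 per channel, but the structure is already there when the multi-region logic and UI arrive.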
Perhaps that should be a method of TrackList (or TrackArray): return a WaveChannelArray of all of the WaveChannels.

> Exporters would also get a list of WaveTracks to export, and would
> break them down to decide what kind of file to write (some output
> formats may actually support things like 5.1) and proceed accordingly.

Sounds good. Keep in mind that most of the logic for this is in Mix.cpp; both Audio I/O and all exporters make heavy use of this class to figure out what to do with a collection of WaveTracks. So just make Mix smart enough to handle this and you're basically done.

> Effects are the part I'm the least sure about because I'm the least
> familiar with that code.

About half of the effects, plus all of the plug-in effects, are pretty straightforward in the way they access tracks: they just read in chunks to be processed and write back the same-size chunks. But the other effects do things like copy Sequence Blocks around (Repeat), output a different amount of audio than they input (Change Pitch/Tempo), access special methods (Amplify), etc., and those would need to be rewritten.

- Dominic

> Thoughts?
>
> Josh
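P.S. A quick sketch of the WaveChannelArray idea I mentioned, so it's clear what I mean (placeholder names, std::vector instead of a wx array):

```cpp
#include <cassert>
#include <vector>

// Hypothetical helper: flatten every WaveTrack's channels into one
// array, so AudioIO and Mix never have to reason about left/right/
// linked flags themselves.
struct Channel {};
struct WaveTrack {
    std::vector<Channel> channels;
};

std::vector<Channel*> GetWaveChannels(std::vector<WaveTrack*>& tracks) {
    std::vector<Channel*> out;
    for (WaveTrack* t : tracks)
        for (Channel& c : t->channels)
            out.push_back(&c);
    return out;
}
```

AudioIO would then just iterate over the flattened channels, whatever mix of mono, stereo, and 5.1 tracks the project contains.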