Re: [Audacity-devel] Real-time effects stacks
From: Richard A. <ri...@au...> - 2008-05-06 20:48:57
On Mon, 2008-05-05 at 22:14 -0400, Jorge Rodriguez wrote:
> I also agree with LRN's eight bullet points. I honestly can't really
> find any way to improve them, and if nobody else disagrees, we could
> use them as a basis for moving forward. However, that looks like a
> whole lot of work, and I think we need to break the task into chunks
> that can be tackled one at a time.

That was the point of what I said about deriving from the existing undo architecture (at an internal level - how it is interfaced is a separate issue, and certainly CTRL+Z should always undo the last item, but that doesn't mean there aren't other operations which turn out to use the undo stack underneath).

There is already a lot of very complex, very well written code underlying the Audacity project system, which enables it not only to perform edits quickly but to maintain a loss-less, reference-counting undo system that is disk-efficient. This structure is capable of supporting all the operations that have been discussed without major changes. The render changes will alter how some code accesses the project structure, but fundamentally the whole system remains a large, tree-based, copy-on-write, disk-backed data buffer in which all the audio data is kept.

The missing feature is out-of-order stack access, which I believe to be a matter of implementation rather than design - it could be added relatively easily now that a demand exists. Ditto an option not to discard history when a project is closed - this would be useful for most users, but is not in any way inherent in the design.

What I'm trying to avoid is replicating the way other software works just because that's the way other software works, rather than because it's actually a good solution. For most audio tasks, the changes most likely to need undoing or altering are those made most recently.
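To make the idea concrete, here is a minimal Python sketch of a reference-counting, copy-on-write undo stack of the kind described above. It is an illustration of the principle only, not Audacity's actual code: all class and method names (Block, ProjectState, UndoStack) are invented for this example, and real Audacity blocks live on disk rather than in memory.

```python
class Block:
    """An immutable chunk of audio samples, shared between states."""
    def __init__(self, samples):
        self.samples = tuple(samples)  # immutable once written

class ProjectState:
    """One entry on the undo stack: an ordered list of block references."""
    def __init__(self, blocks):
        self.blocks = list(blocks)

class UndoStack:
    def __init__(self, initial_blocks):
        self.states = [ProjectState(initial_blocks)]
        self.position = 0

    def current(self):
        return self.states[self.position]

    def push_edit(self, block_index, new_samples):
        """Copy-on-write: only the edited block is replaced; every other
        block in the new state is shared with the previous state, so
        deep history stays cheap on disk."""
        blocks = list(self.current().blocks)   # shallow copy of references
        blocks[block_index] = Block(new_samples)
        self.states = self.states[: self.position + 1]
        self.states.append(ProjectState(blocks))
        self.position += 1

    def undo(self):
        if self.position > 0:
            self.position -= 1

# Usage: after an edit, the untouched block is shared, not copied.
b0, b1 = Block([0.0, 0.1]), Block([0.2, 0.3])
stack = UndoStack([b0, b1])
stack.push_edit(1, [0.4, 0.5])
assert stack.current().blocks[0] is b0   # unchanged block is shared
stack.undo()
assert stack.current().blocks[1] is b1   # undo restores the old reference
```

Because states only hold references, "out-of-order stack access" in a scheme like this amounts to reading any `ProjectState` in the list, not just the one CTRL+Z points at.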
This means that any version control system (which is essentially what we are aiming at here - version management within an Audacity project) should store the most recent changes efficiently, at the expense of older changes. The approach taken by other software - "master recordings are the ultimate, everything else is secondary" - is a hang-over from analogue, tape-based systems where generation loss was a major concern. Audacity is not some kind of digital tape recorder. It has been a non-linear, random-access edit system from day one, and the design reflects that. So to make effect layering possible, we should leverage what is already there, not try to splice a design from somewhere else on top.

The interface may hide the design completely - for the sake of migrating users we may want to emulate a linear, pseudo-tape environment in the user interface - but that shouldn't dominate how we design the data structures that deliver it. We also shouldn't add obstacles that don't need to exist. Why should a user have to understand which effects are deemed "linear" enough to be added to stacks and which aren't? Where do you draw the line between a 2% speed correction on a track from tape and a double-speed guitar solo?

Technically, none of the Audacity effects except invert and amplify can be applied to a sample in isolation - every other effect requires a buffer of samples on either side to be calculated effectively. This is why effects are applied in a linear manner at the moment. Without the constraint of rendering on the fly, the linear / non-linear / latency battle is meaningless, so we shouldn't re-introduce it just because other people's architectural models can't cope without it. A large amount of software design is plagued by a continual desire to clone other software rather than develop something new.
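The per-sample vs. buffered distinction above can be shown with a toy example. This is a hypothetical illustration, not code from the Audacity source: amplify touches each sample in isolation, while even a trivial smoothing filter needs neighbouring samples, so its output at position i depends on a window of input around i.

```python
def amplify(samples, gain):
    # Per-sample effect: output[i] depends only on input[i],
    # so it could be applied to any sample in isolation.
    return [s * gain for s in samples]

def smooth(samples):
    # 3-point moving average: output[i] needs input[i-1] and input[i+1],
    # so the effect cannot be computed one sample at a time.
    out = []
    n = len(samples)
    for i in range(n):
        window = samples[max(0, i - 1): min(n, i + 2)]
        out.append(sum(window) / len(window))
    return out

print(amplify([1.0, 2.0, 3.0], 0.5))  # [0.5, 1.0, 1.5]
print(smooth([0.0, 3.0, 0.0]))        # middle sample pulls in both neighbours
```

Real effects (EQ, reverb, noise reduction) need far wider windows than three samples, which is exactly why on-the-fly rendering forces the linear / latency trade-offs the text argues against.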
This is at its worst when the people doing the design have had long careers in the field - they design what they know how to use, not what is actually logical, or easy to use if you are starting from scratch. I don't want Audacity to become another DAW, mainly because I find the pseudo-tapedeck approach inflexible and unhelpful for what I do. Most of the time, that isn't creating linear tonal transformations of musical performances - it's highly non-linear re-working of mixed recordings. Frequently my workflow looks more like a modern video edit session than a traditional audio session, yet I'm doing essentially the same thing - transforming locally time-linear media.

Why does a "professional" audio package look so different from the sound section of a "professional" video edit package? I see it mostly as an accident of what the people who design these systems were brought up on - open-reel magnetic tape in one case, storyboards fed into tape-to-tape edit controllers in the other. Thankfully, I've never had to use either in anger, and probably never will. I'd prefer my software tools not to ape them either.

Sorry about the rant, but I've spent weeks reading proposals from people with no idea how Audacity works internally who want to make more drastic changes in 3 months than the entire project has achieved in 3 years. Along the way, a fair number have given the impression that they not only don't know how Audacity works now, but don't care, and have no intention of trying to understand or fit in: we should throw it out and follow their master plan. One of the problems with open source is that code usually precedes design, which can prevent coherent thinking from emerging.

Richard