Re: [Audacity-devel] Normalize effect extended
From: Dominic M. <do...@au...> - 2006-04-05 17:59:03
Markus Meyer wrote:

> Vaughan,
>
> first, thank you for your clear words. I know that sometimes I can be a bit "snappy" (is this the right word for it?) in discussions. Also, my incomplete knowledge of the English language does not always allow me to phrase things as politely as I should. I hope this is not misunderstood.

I hope there have never been any misunderstandings. I think that strong debate on the list is a good thing, so in general I think it is better to speak your mind and explain why you think one thing is a good or bad idea, rather than hold back for fear of offending anyone. Plus, compared to many other open-source mailing lists, I think the Audacity lists are always very friendly.

> You're right that I'm a proponent for beginners here. In fact, I think Audacity is (becoming) the perfect tool for beginners to edit audio. This is especially important as there are no real alternatives in the "beginner" market. For example, though it is a good thing if a professional sound studio can use Audacity for mixing and mastering, they will have no problem buying ProTools (or Samplitude or anything else). But even if the beginner spends the money to buy ProTools/Samplitude/whatever, it doesn't help, because it is too complicated for casual use. So Audacity is filling a real gap here.
>
> Thinking about this a bit more, we should really try to come up with ways to make audio editing easier. For example, when making a presentation with PowerPoint, I can say "make this page slide in from the left". I do not have to say, "set the x offset of the main layer to -screen_size_x, then start a timer with t=50ms, and every time the timer ticks, add 1 to the x offset of the main layer; then, when the x offset is at 0 again, stop the timer". But with audio programs I can't say "make this vocal sound brighter", "make this instrument appear in the background", or "give this lead guitar solo more pressure".
> All I can do is say things like "I want to add 3 dB to all frequencies in the 1.5 kHz to 3 kHz range", "I want a reverb applied with this-and-that predelay", etc. All of this is totally unrelated to the actual tasks I want to perform. I'm not saying a professional shouldn't have access to all those parameters. But I shouldn't have to take a course in signal processing just to make a vocal sound brighter and more present.

Very much agreed. However, I would like to say that I think the real strength of Audacity is not just that it's "easy" for beginners, but that it is easy without trying to "dumb it down". Therefore, while I'm a big fan of making things clearer, to disagree with Vaughan, I don't think that using the word "distorting" instead of "clipping" helps. I'd prefer to have a "?" icon - a context-sensitive help button - next to that word and let people click on it to learn what "clipping" means. More below:

> Sorry for digressing, back to the actual discussion:
>
> Vaughan Johnson wrote:
>
>> * Move "Remove DC Offset" to Amplify, or to its own single-purpose effect (somewhat less desirable because the Effects menu is already long). I know lots of novices wonder what DC Offset is.
>> * Change Normalize to have no controls other than Preview, Cancel, and OK buttons, and static text that says "Normalize maximum amplitude to 0dB", or maybe even "Make selection as loud as possible without distorting." (avoiding the word "clipping"). It doesn't need a checkbox because all it can do is Normalize.
>
> This sounds fine to me, though I think we should not have an Effect dialog that just contains Preview/Cancel/OK and nothing else. But we could make it behave like the Fade-In/Fade-Out effects: it would just be applied without showing a dialog at all.
>
> I also think that it is a good idea to always apply the DC offset removal when normalizing (Martyn suggested that). I'm not familiar with the DC offset removal algorithm.
> Is this something that "just works" or is it a heuristic algorithm that can easily fail on certain input data? If the latter, we shouldn't perform it automatically every time we do a normalize.

Consider also that I'm hoping to move in the direction of support for real-time effects. In that case there is a clearer distinction: normalizing and removing DC offset require analyzing the entire audio, so they can't be done in real time. Amplify, though, doesn't require any pre-analysis and could be real-time.

- Dominic
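The full-pass requirement Dominic describes can be sketched quickly. The following is a minimal illustration, not Audacity's actual implementation (the function names are made up for this example): removing DC offset and normalizing each need a statistic (the mean, the peak) that is only known after reading every sample, whereas Amplify touches each sample independently.

```python
def remove_dc_and_normalize(samples, target_peak=1.0):
    """Remove DC offset (subtract the mean), then scale the peak to target_peak.

    Both steps require a complete pass over the audio before any output
    sample can be produced, which is why this can't run in real time.
    """
    n = len(samples)
    if n == 0:
        return []
    # Pass 1: the DC offset is the arithmetic mean of all samples.
    dc = sum(samples) / n
    centered = [s - dc for s in samples]
    # Pass 2: find the peak absolute amplitude of the centered signal.
    peak = max(abs(s) for s in centered)
    if peak == 0:
        return centered  # pure silence: nothing to normalize
    gain = target_peak / peak
    return [s * gain for s in centered]


def amplify(samples, gain):
    """Amplify needs no pre-analysis: each output sample depends only on
    the corresponding input sample, so it could run in real time."""
    return [s * gain for s in samples]
```

For example, the signal `[0.5, 0.7, 0.9]` has a DC offset of 0.7; after centering it becomes roughly `[-0.2, 0.0, 0.2]`, and normalizing scales the peak up to 1.0, giving roughly `[-1.0, 0.0, 1.0]`.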