From: robs <aq...@ya...> - 2012-03-11 23:04:01
----- Original Message -----
> From: Ulrich Klauer <ul...@ch...>
>
> I tend to agree, seeing how much processing time is burned by
> SOX_ROUND_CLIP_COUNT alone. On the other hand, format conversion
> without applying any effects would take longer in most cases if all
> data had to take a floating-point transit.

Yes, clipping detection is slow, but it can be useful to know that clipping has occurred in a particular effect: if you're pretty sure that your effect chain shouldn't clip, then it's a flag to let you know you've mistyped something (or some other process has gone wrong) at, or before, the point of clipping.

It can also be a conscious decision to allow clipping to occur in an effect (acting as a crude limiter) in order to avoid normalisation (which might push down the volume of the whole track for the sake of one rogue peak).

Without clipping detection, the signal is effectively uncontrolled and might be at an arbitrary level at the input to effects that work on absolute levels (compand, for example), so other control mechanisms might be necessary. AGC might be possible, but we can't rely on a normalisation stage, at least for streamed audio (e.g. recording), and ultimately, out-of-range samples have to be brought in range when writing to an integer-based format. So playing "fast and loose" with levels in the chain may just be storing up problems for later?

It might be instructive to see what successful commercial tools do here, or to look at the design of standard effects plug-in architectures (VST, LV2, etc.); it has been suggested in the past to ditch the current effects arch. and replace it with a standard one.

Food for thought :)

Cheers,
Rob