On 28/01/2013 02:21, Steve the Fiddle wrote:
> Thanks for reviewing the patch Martyn,
> On 28 January 2013 00:39, Martyn Shaw <martynshaw99@...> wrote:
>> Hi Steve
>> I've had a look over your patch and done a few tests of my own. The patch
>> is basically good, but is it needed / wanted?
> I'd say yes, which is why I wrote it :=)
Fair enough :)
> I know that I previously raised a question about possible speaker
> damage if the low frequency range was extended too far, but that was
> considering an extreme case of Brownian noise with a low apparent
> "loudness" but high amplitude sub-sonic frequencies (or even DC). With
> this patch I don't believe there is a problem because the
> "constrained" brown noise has a reasonable loudness and the sub-sonic
> content is not extreme.
>> The pink noise differences do extend the bandwidth of 'pink' down an octave
>> or two, but does anyone really care about that or want it? Or would it harm
>> them? I think that was part of your original question. Putting the
>> roll-off an octave or two below my hearing seems a bit mad to me.
> The bandwidth of the pink noise is not really much different from now,
> but the roll-off is a little more precise. This is probably the least
> "necessary" change, but there is a measurable improvement in quality
> with a negligible impact on speed, so why not?
OK, I can go with that.
>> On the Brown/Brownian noise, I don't like the clipping test and would rather
>> have a lower overall level.
> I was unsure at first about the clipping test, but after a lot of
> thought I decided to go with it.
> There is the theoretical / philosophical argument that perhaps we
> shouldn't mess with the randomness, but I don't think that this
> criticism is justified. When considering Brownian motion of water
> molecules, it does not suddenly cease to be "Brownian" just because
> the water is in a container, and yet the walls of the container impose
> a limit on the motion of the particles in similar fashion to the
> clipping test. Essentially, sample values "bounce off" the 0 dB limit
> rather than being clipped.
Hmm. OK. But why have the container so small? We can be closer to
'ideal' (no container) if we make the container bigger (== the signal
smaller). It's always going to be a compromise. I'd rather have a
smaller percentage of samples hitting the limit (and I have been
testing with 10 million samples).
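For illustration, the "bounce off the limit" idea could look something like this (a hypothetical helper sketch, not the code from the patch):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch of the "bounce" idea: rather than hard-clipping
// a sample that would exceed +/-1.0 (0 dB), the excess is reflected
// back inside the range, like a particle bouncing off the wall of a
// container.
double reflectAtLimit(double sample)
{
    if (sample > 1.0)
        return 2.0 - sample;    // bounce down off the ceiling
    if (sample < -1.0)
        return -2.0 - sample;   // bounce up off the floor
    return sample;              // in range: leave untouched
}
```

A sample that would land at 1.2 comes back at 0.8, so values near the limit are biased back inward rather than flattened to a constant.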
> We have no qualms about constraining the absolute values of white
(This is a misconception about 'white' noise. White noise has no
correlation from sample to sample but has no restriction on the pdf.
I used to use AWGN (additive white Gaussian noise) in a project I
worked on, since it was the most appropriate; independent noise
samples but with a Gaussian (Normal) distribution. It's still 'white'.)
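To illustrate the point (a throwaway sketch, not from any patch): independent Gaussian samples are still 'white' even though their pdf is unbounded:

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Illustrative sketch: "white" constrains the sample-to-sample
// correlation, not the probability distribution. Independent
// Gaussian samples (AWGN) are just as white as uniform ones, and
// because the Gaussian pdf is unbounded, occasional samples will
// exceed +/-1.
std::vector<double> makeAWGN(std::size_t n, unsigned seed)
{
    std::mt19937 gen(seed);
    std::normal_distribution<double> dist(0.0, 1.0);  // mean 0, sd 1
    std::vector<double> noise(n);
    for (auto &s : noise)
        s = dist(gen);
    return noise;
}
```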
> All we are actually doing is biasing the random increment for a
> tiny percentage of sample values so that the waveform is kept within
> reasonable bounds.
We are arguing about the definition of 'tiny' here, I think. Your
implementation does 'look' clipped to me (and Gale).
> Without such a test there is a small but finite
> chance that the waveform could drift off into space.
> so we have put a
> lid on it to prevent such "evaporation".
Indeed, and the method seems reasonable, in the sense that it is easy
and I can't think of anything better at the moment.
> The patched version is a lot less constrained than Brownian noise from
I have not looked at that.
> but even with the very modest amount used in this patch the
> overall "loudness" can be increased by more than 4 dB. I think that
> the benefit of not clipping heavily outweighs arguments against, as
> any clipping will create much more "damage" than constraining the
> absolute sample values for a few peaks.
That seems reasonable. But again it's just a matter of at what level
we do something.
>> And again I see no real reason to extend the LF
>> cutoff much below 20Hz.
> I agree, which is why the 6 dB/octave slope is "only" extended to 20 Hz.
>> We could change that in the original code by
>> setting fc at a different value, I think.
> I did try that, but to achieve a similar degree of linearity down to
> 20 Hz, the amount of sub-sonic frequencies was about 3 dB greater than
> the current patch.
OK, thanks. You have spent much more time on this than me, and I
trust your observations.
>> The fixes for the limitations at 2^18 samples are good, from what I can see.
>> Would you like to make a patch just for that, so I can commit it?
Oh, go on! Just for a start!
>> For the 'brown' (brownian) noise, the original code had
>> and we can see how the 'magic numbers' were generated.
>> is an example. I am inclined to move that fc to 20Hz, and readjust the
>> normalisation factor, but I haven't tried that.
> I did try that, and also a more complex low pass filter, but neither
> worked as well as the leaky integrator method. One problem with
> filtering white noise is that the amplitude is highly dependent on the
sample rate. For a track sample rate of 192 kHz the peak amplitude of
Brown noise is about 3 dB too low, whereas for a sample rate of 8000
Hz the peak level is about 10 dB too high.
OK, but where do you get the 'magic numbers' (0.997f and 0.05) from?
If they are in some way derived from / related to the sample rate then
we should include that in the code so that it works at all sample
rates. If they are experimental and derived at 44100, could we scale
them by fs/44100?
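If it helps, here is one way the scaling could work, assuming 0.997 is the leaky-integrator pole chosen at 44100 Hz. This is back-of-envelope only; the function name and the implied ~21 Hz cutoff are my own reading, not anything in the patch:

```cpp
#include <cmath>

// Back-of-envelope sketch: a one-pole leaky integrator
//   y[n] = a * y[n-1] + g * x[n]
// has its cutoff at roughly fc = -fs * ln(a) / (2*pi).
// With a = 0.997 at fs = 44100 that works out to about 21 Hz, so a
// sample-rate-independent version could derive the pole from the
// cutoff instead of hard-coding 0.997:
double leakyPoleForRate(double sampleRate)
{
    const double pi = 3.14159265358979323846;
    const double fc = 21.0;  // cutoff implied by 0.997 at 44.1 kHz
    return std::exp(-2.0 * pi * fc / sampleRate);
}
```

At 44100 this reproduces roughly 0.997; at 192000 it gives a pole much closer to 1, keeping the roll-off anchored near 20 Hz instead of drifting upward with the sample rate.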
> The only adverse effect that sample rate has on the patched version is
> that at high sample rates the bandwidth does not extend as low as at
> 44.1 kHz, but even at 192 kHz sample rate it is no worse than the
> current version. Even this limitation has a benefit, in that if a user
> requires low frequency Brownian noise, they can produce it by
> generating with a low sample rate.
> The current Audacity "Brown" noise is definitely too light on the bass
> when listening on good speakers, so I think that this improvement is
> definitely justified. Now that the potential risk of excessive
> sub-sonic frequencies has been alleviated I'm very much inclined to go
> with the previously stated opinion of giving the user what it says on
> the tin.
I am 'for' changes along these lines. It is a minority sport and
possibly won't get revisited for a while, so I'm just looking to get
it the best we can at the moment.
Thanks for all your input here Steve.
>> Perhaps a solution to this would be to have an extra option on the noise
>> generator dialog to set the LF roll-off of these generators to some
>> frequency. That way users could get what they want. This would be a
>> 'rather advanced' option however. Are there any user requests for this?
>> PS Steve sent the following useful Nyquist code to run at the "Nyquist
>> Prompt" effect
>> ;; print the % of clipped samples (mono tracks only)
>> (setq clipped
>>   (do ((count 0)
>>        (val (snd-fetch s) (snd-fetch s)))
>>       ((not val) count)
>>     (if (> (abs val) 1) (setq count (1+ count)))))
>> (print (* 100 (/ clipped len)))
>> On 26/01/2013 20:58, Steve the Fiddle wrote:
>>> Patch to upgrade pink noise and brown (Brownian) noise.
>>> The pink noise uses Paul Kellet's high quality pink noise filter.
>>> "Brown" noise has been renamed "Brownian" and uses "leaky
>>> integration". Peaks are constrained to within 0dB (a little trick that
>>> I picked up from the SoX code) so the Brownian noise is now a bit
>>> louder than it would otherwise be, without any clipping.
>>> The glitch that was occurring each 2^18 samples in both pink and brown
>>> noise has been fixed.
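For anyone following along, a minimal sketch of Paul Kellet's "refined" pink filter as published on musicdsp.org (coefficients from that posting; the 0.11 output gain here is a rough illustrative choice of mine, not taken from the patch):

```cpp
// Minimal sketch of Paul Kellet's "refined" pink-noise filter:
// several leaky one-pole sections summed, fed with white noise.
// Coefficients as published on musicdsp.org; the 0.11 output gain
// is a rough illustrative choice, not from the patch.
class PinkFilter
{
    double b0 = 0, b1 = 0, b2 = 0, b3 = 0, b4 = 0, b5 = 0, b6 = 0;

public:
    // white: one white-noise sample, roughly in [-1, 1]
    double process(double white)
    {
        b0 = 0.99886 * b0 + white * 0.0555179;
        b1 = 0.99332 * b1 + white * 0.0750759;
        b2 = 0.96900 * b2 + white * 0.1538520;
        b3 = 0.86650 * b3 + white * 0.3104856;
        b4 = 0.55000 * b4 + white * 0.5329522;
        b5 = -0.7616 * b5 - white * 0.0168980;
        double pink = b0 + b1 + b2 + b3 + b4 + b5 + b6 + white * 0.5362;
        b6 = white * 0.115926;
        return pink * 0.11;
    }
};
```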
>>> audacity-devel mailing list