When running sox --i on a WAV file that uses WAVE_FORMAT_EXTENSIBLE and contains 32-bit floating-point data, I get this:
Input File     : '.\sample.wav'
Channels       : 1
Sample Rate    : 48000
Precision      : 25-bit
Duration       : 00:00:08.00 = 384000 samples ~ 600 CDDA sectors
File Size      : 1.54M
Bit Rate       : 1.54M
Sample Encoding: 32-bit Floating Point PCM
Shouldn't the precision for a 32-bit float be 24 bits, not 25?
http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html#810
No, for the same reason that -2^15 to 2^15 - 1 is called 16-bit precision, not 15.
That's not correct for floats.
If we're defining precision as "range of numbers expressed", then you'd
also have to include the exponent bits of the float, giving you 32 bits of
precision, since there are 2^32 possible combinations.

Precision of a float is generally meant as "how close to this fraction can
I get", which is the number of fraction bits plus the "hidden" bit: floats
are stored in scientific notation with an assumed leading 1 (the value is
always of the form 1.xxx), so you pick up one extra bit from that implicit
digit. For a 32-bit float there are 23 bits of fraction, which gives us 24
bits of precision.
Regards,
Sean
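
A minimal C sketch of the 24-bit figure, using only standard-library facts: FLT_MANT_DIG counts the stored fraction bits plus the hidden bit, and 2^24 is the first integer whose successor a 32-bit float cannot represent.

    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
        /* FLT_MANT_DIG counts the 23 stored fraction bits plus the hidden bit. */
        printf("FLT_MANT_DIG = %d\n", FLT_MANT_DIG);          /* 24 */

        /* 2^24 is the first integer whose successor a 32-bit float cannot
           represent: 16777217 has no exact encoding and rounds back down. */
        float f = 16777216.0f;   /* 2^24 */
        float g = f + 1.0f;      /* rounds back to 2^24 */
        printf("2^24 + 1 == 2^24 ? %s\n", (g == f) ? "yes" : "no");
        return 0;
    }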
The reported value can be understood as the number of bits a signed fixed-point format would require to represent the range -1 to +1 with the same worst-case precision. For floating-point formats, the worst precision is at the ends of this range. Starting at +1.0 (0x3f800000), the next smaller value is 1 - 2^-24 (0x3f7fffff); expressing that as signed fixed-point takes 24 fraction bits to resolve the 2^-24 gap, plus a sign bit, so it does indeed require 25 bits.
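
This worst case is easy to spot-check; a minimal C sketch (compile with -lm where needed for nextafterf):

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* The float immediately below +1.0 is 1 - 2^-24 (pattern 0x3f7fffff). */
        float below_one = nextafterf(1.0f, 0.0f);
        uint32_t bits;
        memcpy(&bits, &below_one, sizeof bits);
        printf("float below 1.0 = %.8f (0x%08x)\n", below_one, (unsigned)bits);

        /* The gap to 1.0 is exactly 2^-24, so a signed fixed-point format
           needs 24 fraction bits plus a sign bit (25 total) to tell them apart. */
        printf("gap = %g (2^-24 = %g)\n",
               (double)(1.0f - below_one), 1.0 / 16777216.0);
        return 0;
    }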
Alternatively, consider the 24-bit signed integer representation. Its maximum value (0x7fffff) as a 32-bit float is 0x4afffffe, and the adjacent float values there differ by 0.5, i.e. half an integer step. It follows that the floating-point format has a precision equivalent to a 25-bit signed integer.
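
The same kind of check works for this argument; a minimal C sketch:

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* 0x7fffff = 8388607, the largest 24-bit signed integer value. */
        float f = 8388607.0f;
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);
        printf("8388607.0f = 0x%08x\n", (unsigned)bits);      /* 0x4afffffe */

        /* Adjacent floats at this magnitude are 0.5 apart: half an integer
           step, i.e. one extra bit of resolution beyond a 24-bit integer. */
        printf("ulp = %g\n", nextafterf(f, INFINITY) - f);    /* 0.5 */
        return 0;
    }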