That's not correct for floats. If we're defining precision as "range of numbers expressed," then you'd also have to count the exponent bits of the float, giving you 32 bits of precision, since there are 2^32 possible bit patterns. Precision for a float generally means "how close to this fraction can I get," which is the number of fraction bits plus the "hidden" bit: floats are stored in binary scientific notation, so a normalized value is assumed to be of the form 1.xxx, and you pick up one bit from the implicit leading 1. For a 32-bit IEEE 754 float, that's 23 explicit fraction bits plus the hidden bit, for 24 bits of significand precision.
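You can see both pieces of this directly in code. A minimal Python sketch (using the standard struct module to reinterpret a float32 as its raw bits; the variable names are just for illustration):

```python
import struct

# Reinterpret 1.0 as its 32-bit IEEE 754 pattern:
# 1 sign bit | 8 exponent bits | 23 fraction bits.
bits = struct.unpack(">I", struct.pack(">f", 1.0))[0]
sign = bits >> 31
exponent = (bits >> 23) & 0xFF
fraction = bits & 0x7FFFFF
print(f"sign={sign} exponent={exponent:08b} fraction={fraction:023b}")
# 1.0 is stored as 1.0 * 2^0: biased exponent 01111111 (bias 127), fraction
# all zeros, because the leading 1 of the significand is the implicit "hidden" bit.

# 23 + 1 = 24 bits of precision means consecutive integers are only exact
# up to 2^24; beyond that, float32 can no longer distinguish n from n + 1.
f = struct.unpack(">f", struct.pack(">f", 2.0**24 + 1))[0]
print(f == 2.0**24)  # True: 16777217 rounds back to 16777216 in float32
```

The round-trip through struct.pack(">f", ...) forces the value through a real float32, which is why 2^24 + 1 collapses to 2^24 even though Python's native doubles would keep them distinct.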