Do reduced binary encoding on floats
Brought to you by:
joe_fowler
Add the ability to do reduced binary encoding on floating-point numbers. Start with 32-bit floats, perhaps treating each value internally as two separate channels? This probably requires a completely different algorithm type within the code. Imagine one sub-channel encoding the 8-bit exponent and another handling the 24 bits of sign + mantissa?
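The two-channel split described above could be sketched as follows. This is a hypothetical illustration, not code from the project; the function names `split_float32` and `join_float32` are invented for the example.

```python
import struct

def split_float32(x: float) -> tuple[int, int]:
    """Split a 32-bit float into two channels: the 8-bit biased
    exponent, and the 24 bits of sign + mantissa combined.
    (Hypothetical sketch, not part of the existing codebase.)"""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    exponent = (bits >> 23) & 0xFF           # 8-bit biased exponent
    sign = bits >> 31                        # 1 sign bit
    mantissa = bits & 0x7FFFFF               # 23 mantissa bits
    sign_mantissa = (sign << 23) | mantissa  # 24-bit combined channel
    return exponent, sign_mantissa

def join_float32(exponent: int, sign_mantissa: int) -> float:
    """Inverse operation: reassemble the float from the two channels."""
    sign = (sign_mantissa >> 23) & 1
    mantissa = sign_mantissa & 0x7FFFFF
    bits = (sign << 31) | (exponent << 23) | mantissa
    return struct.unpack("<f", struct.pack("<I", bits))[0]
```

Each channel could then be fed to the existing reduced-binary encoder independently; exponents tend to vary slowly, so that channel should compress well on its own.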
Or possibly, if the exponent varies over no more than 8 consecutive values, the mantissa could be scaled up to a 32-bit integer and stored by reduced binary as a single channel? An exponent outside that range would then cause an overflow.
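The single-channel idea might look like this sketch. It assumes normal (non-zero, non-subnormal) floats and a known minimum biased exponent `e0` for the block; with the implicit leading 1 the mantissa is 24 bits, so a shift of up to 7 keeps the magnitude within 31 bits, fitting a signed 32-bit integer. The function name `to_fixed32` and the use of an exception to represent overflow are assumptions for illustration.

```python
import struct

def to_fixed32(x: float, e0: int) -> int:
    """Scale a normal float32's mantissa into a signed 32-bit integer,
    assuming its biased exponent lies in [e0, e0+7].
    (Hypothetical sketch; an out-of-window exponent stands in for
    the 'overflow' case mentioned in the ticket.)"""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    e = (bits >> 23) & 0xFF
    shift = e - e0
    if not 0 <= shift <= 7:
        raise OverflowError("exponent outside the 8-value window")
    mant = (1 << 23) | (bits & 0x7FFFFF)  # restore the implicit leading 1
    value = mant << shift                 # at most 31 bits of magnitude
    return -value if bits >> 31 else value
```

For example, with `e0 = 127` (the bias, i.e. values in [1, 256)), 1.0 maps to 2^23 and 8.0 maps to 2^26, preserving exact ratios as fixed-point integers.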