> -----Original Message-----
> From: dri-devel-admin@...
[mailto:dri-devel-admin@...] On Behalf Of Allen Akin
> Sent: Monday, October 02, 2000 8:58 AM
> To: Philip Brown
> Cc: dri-devel@...
> Subject: Re: [Dri-devel] Who is working on Texture Compression (other
> On Mon, Sep 25, 2000 at 11:26:03AM -0700, Philip Brown wrote:
> | After all, increasing color depth from 8 to 16 to 32 bits stopped at
> | 32 bits because that's all the eye can see.
> Lots of 12-bit-per-color-channel (48-bit RGBA) systems exist, and even
> 16-bit gray-scale systems are useful. Glossing over *many* details,
> the reason for this is that the eye's response is logarithmic and a
> linear brightness scale with 256 entries doesn't approximate the eye's
> response curve very effectively. This is particularly apparent at the
> low end of the brightness scale, where banding is obvious on 32-bit
> RGBA displays but far less so on 48-bit RGBA displays. Gamma
> correction helps, but is not sufficient to eliminate this problem;
> only using more color bits solves it.
> So in short, there are circumstances where the eye can perceive more
> than 8 bits per color channel, and these cases are even commercially
> significant.
Agreed Allen, the 32-bit impasse is mostly economic. In the simulation
training industry we often had to settle for 10 bits because of the
practicality of commodity RAMDACs; 10 is considered a compromise in the
striving for realism in low-light and sensor simulation scenarios. At least
16 bits of internal precision was required, as well as a 24-bit or better
gray-scale channel.
Of course the color operations have to be accurate, including being
perspective-correct, and there were nasty corner cases to deal with. Only
recently have game cards been improving in their internal accuracy, but
the cost in gates is huge. It doesn't seem long ago that most game cards
were 16-bit. Oddly enough, the industry I have seen pushing for color
depth now is the broadcast industry; as I understand it, they are behind
the push for the 64-bit pixel.
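For anyone curious, the low-end banding Allen describes is easy to see
numerically. A rough Python sketch (my own illustration, not from either
mail; the 1%-of-full-scale luminance and the choice of bit depths are
just representative assumptions). Since the eye responds roughly
logarithmically, what matters is the *ratio* between adjacent code
values, and for a linear encoding that ratio in deep shadow shrinks only
as you add bits:

```python
def relative_step(bits, luminance_frac):
    """Relative brightness jump between adjacent code values at a given
    fraction of full-scale luminance, assuming a linear encoding.

    The code value representing that luminance is luminance_frac * max_code,
    and the step to the next code is 1 count, so the ratio is 1/code.
    """
    code = luminance_frac * (2 ** bits - 1)
    return 1.0 / code

for bits in (8, 10, 12, 16):
    # 1% of full brightness, taken here as a stand-in for "deep shadow"
    step = relative_step(bits, 0.01)
    print(f"{bits:2d}-bit linear: {step:6.1%} step at 1% luminance")
```

On an 8-bit linear ramp the jump between adjacent dark levels is tens of
percent, which the eye sees as a band; at 12 bits it drops to a few
percent, which is roughly why gamma correction alone helps but only more
bits make the banding go away.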