Thread: RE: [Algorithms] Gamma correction an HDR images?
From: <c.s...@ph...> - 2005-06-01 21:39:14
Technically, an HDR texture is just a collection of numbers with enough precision and range to go beyond 0..1. Whether it is in gamma space or linear space, only the content creator knows.

PC display cards have a history of outputting voltages proportional to RGB values. Due to the physics of CRT circuitry, the resulting light power is in effect raised to a power near 2, so greyscale 128 is about 1/4 the intensity of greyscale 255. You can verify this by comparing a checkerboard pattern against a solid greyscale of 181, which corresponds to sqrt(1/2); they should look ~about~ equal.

The gamma phenomenon has the convenient side effect of allocating more precision to the dark colors in precision-limited 8-bit RGB. The darkest non-zero grey representable in 8-bit gamma space is 1/255, which corresponds to roughly 1/65025 in linear space, a value that would need 16-bit precision to represent linearly. So gamma is also a form of compression: you will get ugly banding if you try to output an 8-bit precision framebuffer "linearly".

All our infrastructure today (Photoshop, digital cameras, scanners, etc.) evolved around exchange of information in gamma space, so we have to deal with it. The bit-exact gamma value is not that important (it differs from device to device anyway); the thing to remember is that gamma is, to first order, a power of 2, which also makes things easy to calculate. Conversion to linear before summing pixels is done by squaring (^2), and re-gamma after summing is done by sqrt().

-----Original Message-----
From: gda...@li... [mailto:gda...@li...] On Behalf Of Bra...@Pl...
Sent: Wednesday, June 01, 2005 8:00 PM
To: gda...@li...
Subject: [Algorithms] Gamma correction an HDR images?

So I'm continuing work on our texture pipeline, doing gamma-corrected (linear space) box filtering to generate mip maps, and everything is grand. However, now there's the issue of HDR images, and whether or not to gamma correct them before mip mapping.

Now, I've always been under the impression that HDR images are already in linear space and don't need to be gamma corrected. However, how does the graphics card "know" that it is in linear space? For example, is an HDR texture that only contains pixels in the range of 0.0 to 1.0 (an LDR texture that happens to have an HDR pixel format) somehow treated differently than a regular LDR texture? I feel like I'm overlooking some vital piece of information here.

So, should I or should I not gamma-correct HDR images before mipping?

Thanks, and sorry if I munged the terminology in posing the question. My understanding of linear vs. gamma space and related things is still somewhat cursory.

Brad...

_______________________________________________
GDAlgorithms-list mailing list
GDA...@li...
https://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
Archives: http://sourceforge.net/mailarchive/forum.php?forum_id=6188
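[Editor's sketch] The arithmetic in the reply above can be checked with a few lines of Python. This uses the post's first-order power-of-2 approximation, not any display's exact gamma:

```python
# Greyscale 128 displays at roughly 1/4 the light power of 255
# when the display response is approximated as a power of 2.
print((128 / 255) ** 2)            # ~0.252

# The darkest non-zero 8-bit grey, taken into linear space:
print((1 / 255) ** 2)              # ~1/65025; needs ~16-bit linear precision

# Solid grey matching a 50% black/white checkerboard is sqrt(1/2),
# which lands next to the 181 mentioned in the post.
print(round(255 * (0.5 ** 0.5)))   # 180
```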
From: James M. <Jam...@vi...> - 2005-06-02 12:05:20
The Myst games used to always do this. Riven certainly did.
--
James Milne

> -----Original Message-----
> From: g...@li... [mailto:gda...@li...] On Behalf Of Jonathan Blow
> Sent: 02 June 2005 12:45
> To: g...@li...
> Subject: Re: [Algorithms] Gamma correction an HDR images?
>
> > I think the only useful thing in a game would be to include a
> > "gamma correction wizard" for the player that is about to play.
> > At least he takes responsibility if he's going to play in complete
> > darkness or in a sea of bright pixels. :)))
>
> There have been a couple of games that do this, using a system similar
> to what a previous poster mentioned (having the user compare a solid
> rectangle to a stippled rectangle and adjust brightness until they are
> equal). I don't remember which games though. Some of them are recent,
> but I remember this being done a very long time ago, like Commodore 64
> days.
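[Editor's sketch] The core calculation behind such a calibration wizard can be written in a few lines; `matching_grey` is a hypothetical name, not taken from any of the games mentioned:

```python
def matching_grey(gamma):
    """8-bit solid grey whose displayed light output equals a 50%
    black/white stipple pattern on a display with the given gamma.
    A wizard can show both side by side and let the player adjust
    the gamma/brightness setting until they match."""
    return round(255.0 * 0.5 ** (1.0 / gamma))

print(matching_grey(2.0))  # 180, near the 181 figure earlier in the thread
print(matching_grey(2.2))  # 186
```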
From: Jay S. <Ja...@va...> - 2005-06-07 19:01:07
> So the steps are:
> 1. Convert from 2.2 to 1.0 gamma (You do this by raising the pixels
> to the 2.2 power).
> 2. Generate mips.
> 3. Convert from 1.0 gamma to 2.2 gamma (Do this by raising the pixels
> to the 1.0/2.2 power).
>
> Correct? Or did I invert the 2.2 vs. 1.0/2.2 bit? I think someone
> already said this, but I want to be sure.

This is all fine, but realize that you can't store a texture as RGB888 in gamma 1.0 without losing a lot of information. So at Valve we do this by expanding the gamma 1.0 textures to float[3], generating mips on the floats, and then quantizing back down to 8 bits when we convert back to gamma 2.2.

> Now, how do you deal with an environment where you have diffuse
> textures stored in LDR 2.2 gamma space and HDR cube maps stored in
> 1.0 gamma (linear) space? Obviously, combining texels from both
> sources with any math is going to be incorrect. So, I had this idea:

This is what the sRGB stuff in the pixel shaders is for. You want to convert on texture sample (ideally before bilinear filtering, but that doesn't always happen) to linear format (gamma 1.0 is linear). Then all of your pixel math is done in linear (with a float per component again) until you write to the framebuffer and convert back to 2.2. Usually in HDR renderers you'll want the framebuffer to use more precision than RGB888 as well and not convert back on write, though that may not be necessary unless you need to do multi-pass or deferred rendering.

> 1. Convert LDR images from 2.2 to 1.0 gamma space and generate mips.
> 2. *Leave* them in 1.0 gamma on disk. That way they are in the same
> space as your HDR textures.
> 3. All your pixel shading / blending / whatever takes place in linear
> space.
> 4. In your last full-screen pixel shader pass, raise every pixel to
> the 1.0/2.2 power to go back to gamma space.
>
> Will this work? Or will I get nasty banding or other junk going on?

Sure, except now they are a float per channel. Otherwise you'll get lots of ugly quantization errors, like extreme color banding in the darker colors.

> Also, what gamma should artists calibrate their monitors to?
> 2.2 or 1.0?

2.2. We assume our textures are authored at a 2.2 gamma (which is unfortunate for artists with LCD monitors, since those are typically gamma 1.7 - 1.9).

> It kinda seems like if they worked in gamma 1.0 then everything would
> be rosy, but I don't think the hardware is set up that way and you
> have to calibrate to something around 2.2. Right?

Sure, except for 1.0 you'd need more than 8 bits per component to get reasonable image quality at the expected dynamic range of the monitor. Gamma is there to fix this problem. An intuitive way to think of it is as perceptual compression/encoding. The reason monitors are not at gamma 1.0 is that the human visual system isn't either. It's really hard to tell the difference between incrementally brighter pixels at the bright end of the spectrum, but pretty easy at the dark end, so gamma 2.2 lets you use more bits at the dark end.

Jay
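[Editor's sketch] The "expand to floats, mip, quantize back" pipeline described above, in Python; the function names are illustrative, not Valve's:

```python
def decode(c8, gamma=2.2):
    """8-bit gamma-space texel -> linear float."""
    return (c8 / 255.0) ** gamma

def encode(lin, gamma=2.2):
    """Linear float -> 8-bit gamma-space texel (quantize on the way out)."""
    return min(255, max(0, round(255.0 * lin ** (1.0 / gamma))))

def next_mip(row, gamma=2.2):
    """2x box filter over a 1-D row of gamma-space 8-bit texels:
    expand to linear floats, average there, re-encode once at the end."""
    return [encode((decode(row[i], gamma) + decode(row[i + 1], gamma)) / 2.0,
                   gamma)
            for i in range(0, len(row) - 1, 2)]

# Alternating black/white averages to linear 0.5, not gamma-space 127:
print(next_mip([0, 255, 0, 255]))  # [186, 186]
```

Averaging the raw 8-bit values would give 127, a mip level that displays far too dark; doing the average in linear space gives 186, which displays at half the light output.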
From: Jonathan B. <jo...@nu...> - 2005-06-07 19:08:31
> Sure, except for 1.0 you'd need more than 8 bits per component to get
> reasonable image quality at the expected dynamic range of the monitor.
> Gamma is there to fix this problem. An intuitive way to think of it is
> like perceptual compression/encoding. The reason monitors are not at
> gamma 1.0 is because the human visual system isn't either. It's really
> hard to tell the difference between incrementally brighter pixels at
> the bright end of the spectrum, but pretty easy at the dark end, so
> gamma 2.2 lets you use more bits at the dark end.

Which is also a good point. What I was saying, that monitors would ideally output at gamma=1 if they actually had the power to physically produce enough dynamic range, sort of comes from some worldview of non-scarcity where it doesn't matter how many bits per pixel you are storing. That doesn't seem to be true yet, or for a while: we keep raising resolutions and don't even have enough video RAM for full HDR rendering on one of the next-generation consoles without doing icky tiling (ahem, ahem).
From: <c.s...@ph...> - 2005-06-08 01:32:01
> 2. *Leave* them in 1.0 gamma on disk. That way they are in the same
> space as your HDR textures.

It is tempting to do this, because it sounds sooo convenient. But be aware of the storage requirements: RGB8 isn't going to cut it for the linear range, so you need at least 16-bit integer textures. There are no compression formats for high-precision textures that are as easy as DXT, so essentially your storage requirement goes x4 or x8!

Textures which were authored in 2.2 should be left on disk in that space. The expansion should be done in the shader, either via the SRGBTEXTURE sampler state or a squaring operation, like:

half3 colorMap = tex2D( colorTexture, IN.texcoord );
colorMap *= colorMap; // expand to linear space
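[Editor's sketch] For reference, here is the exact sRGB decode that an SRGBTEXTURE sampler state performs (this is the standard sRGB transfer function, not something from the post) next to the x*x shortcut; the squaring is a usable first-order approximation but not bit-exact:

```python
def srgb_to_linear(c):
    """Exact sRGB electro-optical transfer function (IEC 61966-2-1):
    a linear toe near black, a 2.4-power curve elsewhere."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

for c in (0.25, 0.5, 0.75):
    print(f"{c}: square={c * c:.4f}  pow2.2={c ** 2.2:.4f}  "
          f"sRGB={srgb_to_linear(c):.4f}")
```

At mid-grey the three decodes land at roughly 0.25, 0.22, and 0.21 respectively, so the cheap square overshoots slightly but stays in the right neighborhood.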
From: Jay S. <Ja...@va...> - 2005-06-08 20:32:18
> > The reason monitors are not at gamma 1.0 is because the human visual
> > system isn't either. It's really hard to tell the difference between
> > incrementally brighter pixels at the bright end of the spectrum, but
> > pretty easy at the dark end, so gamma 2.2 lets you use more bits at
> > the dark end.
>
> Which is also a good point, what I was saying about monitors
> outputting at gamma=1 ideally if they actually had the power to
> produce enough dynamic range physically, sort of comes from some
> worldview of non-scarcity where it doesn't matter how many bits per
> pixel you are storing. Which doesn't seem to be true yet, or for a
> while (since we keep raising resolutions and don't even have enough
> video RAM for full HDR rendering on one of the next-generation
> consoles, without doing icky tiling, ahem ahem.)

It's worth mentioning that floating point makes exactly the same tradeoff as gamma encoding: numbers closer to zero use their bits to represent more precise quantities. So with only a few more bits per channel than 8, you should be able to do good-quality linear HDR pixels, provided each channel is stored in floating point. Linear integers using the same number of bits would still have unacceptable artifacts.

Jay
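[Editor's sketch] The point that floating point spends its bits the way gamma does can be illustrated in a few lines of Python. `math.ulp` shows the spacing of adjacent doubles; fp16 render targets have the same shape, just coarser:

```python
import math

def gamma_step(c8, gamma=2.2):
    """Linear-light difference between adjacent 8-bit codes when the
    codes are gamma-encoded: tiny near black, coarse near white."""
    return ((c8 + 1) / 255.0) ** gamma - (c8 / 255.0) ** gamma

print(gamma_step(1))     # ~2e-05, much finer than the linear step 1/255
print(gamma_step(200))   # ~6e-03, coarser than 1/255

# Floating point makes the same tradeoff: representable values are
# packed more densely near zero than near one.
print(math.ulp(0.001))   # spacing of adjacent doubles near 0.001
print(math.ulp(0.9))     # much larger spacing near 1.0
```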