Hello,
I tried to analyse Stephan's gamma correction when blending colors. Things turn
out to be much simpler and much more obvious once you understand the essence.
Forget those discussions about nonlinearity in sRGB, electronic circuits,
CRT phosphors, eyes and brain. We don't need them. This "nonlinearity" can be
derived from pure geometry. Take the traditional blending formula:
C' = Csrc * Asrc + (1 - Asrc) * Cdst
(for simplicity we assume RGB, not RGBA). It means that if we blend Red
(1,0,0) over Green (0,1,0) with alpha 0.5 we get 0.5 green plus 0.5 red,
that is (0.5, 0.5, 0). That is totally wrong and a complete failure!
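A quick numeric sketch of that straight formula (Python, names are my own
illustration, not from any actual library):

```python
def blend_naive(csrc, cdst, alpha):
    """Traditional per-channel blend: C' = Csrc*A + (1-A)*Cdst."""
    return tuple(s * alpha + (1 - alpha) * d for s, d in zip(csrc, cdst))

red = (1.0, 0.0, 0.0)
green = (0.0, 1.0, 0.0)

# Blending red over green at alpha 0.5 yields (0.5, 0.5, 0.0).
print(blend_naive(red, green, 0.5))
```

The result (0.5, 0.5, 0.0) is exactly the case described above, and, as argued below, it is visibly darker than it should be.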
Imagine an RGB color space as a cube. In this cube any color is a vector from
(0,0,0) to some point in 3D space, and we assume that the luminosity is
proportional to the length of the vector. I know that Red, Green and Blue
contribute differently to the perceptual luminosity, but that's not important
here. What is important is that the visual luminosity is proportional to the
length of the vector. That's not quite correct either, but we just assume it.
Also, for the sake of clarity, let's assume we blend Red over Green, so we can
graphically use a plane instead of 3D space:
http://antigrain.com/research/alpha_blending/alpha_blending01.gif
Here all possible resulting colors fall on the straight line between (1,0)
and (0,1). We assume a "normalized" color space where G=1 means
Luminosity=1, and so on; for R=G=B=1 the length is sqrt(1+1+1)=1.732. I
believe it's now obvious why the antialiased edges of a red polygon over a
green one look darker than they are supposed to.
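To see the darkening numerically, measure the vector length (our stand-in for
luminosity) of the endpoints and of the naive midpoint on that straight line:

```python
import math

def length(color):
    """Vector length of a color, our simplified stand-in for luminosity."""
    return math.sqrt(sum(c * c for c in color))

print(length((1.0, 0.0)))  # pure red:  length 1.0
print(length((0.0, 1.0)))  # pure green: length 1.0
print(length((0.5, 0.5)))  # naive 50/50 blend: length sqrt(0.5) ~ 0.707
```

Both endpoints have luminosity 1, yet the midpoint drops to about 0.707, which is exactly the darkening visible on antialiased edges.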
We simply need to use an arc instead of a straight line:
http://antigrain.com/research/alpha_blending/alpha_blending02.gif
And the modified formula looks like this:
C' = sqrt(Csrc^2 * Asrc + (1 - Asrc) * Cdst^2)
How to achieve it? Very easily. If we use Stephan's gamma correction with
gamma=2.0 we get exactly this result! There is a quantization problem at
small values, but I have solved that too, using a high-resolution reverse
gamma LUT (essentially the same as in Stephan's original code).
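Here is a sketch (my own, in floating point, so no LUT needed) showing that
blending through a gamma-2.0 decode/encode is identical to the square-root
formula, and that it keeps red-over-green at full luminosity:

```python
import math

def blend_sqrt(csrc, cdst, alpha):
    """C' = sqrt(Csrc^2 * A + (1-A) * Cdst^2), per channel."""
    return tuple(math.sqrt(s * s * alpha + (1 - alpha) * d * d)
                 for s, d in zip(csrc, cdst))

def blend_gamma(csrc, cdst, alpha, gamma=2.0):
    """Decode with gamma, blend linearly, re-encode with 1/gamma."""
    decode = lambda c: c ** gamma
    encode = lambda c: c ** (1.0 / gamma)
    return tuple(encode(decode(s) * alpha + (1 - alpha) * decode(d))
                 for s, d in zip(csrc, cdst))

red = (1.0, 0.0, 0.0)
green = (0.0, 1.0, 0.0)

a = blend_sqrt(red, green, 0.5)   # each channel sqrt(0.5) ~ 0.707
b = blend_gamma(red, green, 0.5)  # identical for gamma = 2.0

# The result lies on the arc: its vector length is sqrt(0.5 + 0.5) = 1.0.
print(a, b, math.sqrt(sum(x * x for x in a)))
```

With integer 8-bit channels this is where the quantization problem appears: squaring crushes small values to zero, hence the high-resolution reverse LUT mentioned above.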
Then it's not that important that green, red and blue contribute differently.
It simply means that the color space is not normalized, but everything works
in the very same way. We just get elliptical arcs instead of circular ones,
or ellipsoidal surfaces in 3D:
http://antigrain.com/research/alpha_blending/alpha_blending03.gif
Remember that "floating point linear space" Bill Spitzak invented? I believe
his space is simply equivalent to what I described here. Image filtering is
essentially the same blending; Gaussian blur and sharpen filters are blending
too. And this is why a "Normal Blur" makes the image darker:
http://www.cinenet.net/~spitzak/conversion/blur.html
I believe this is applicable to any other color interpolation as well, such
as gradients and Gouraud shading.
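For instance, a gradient can interpolate in the squared (gamma-2.0) space the
same way. A minimal sketch, assuming the same simplified two-channel space as
above:

```python
import math

def gradient(c0, c1, steps):
    """Ramp from c0 to c1, interpolating channels in squared space."""
    ramp = []
    for i in range(steps):
        t = i / (steps - 1)
        ramp.append(tuple(math.sqrt((1 - t) * a * a + t * b * b)
                          for a, b in zip(c0, c1)))
    return ramp

# Red-to-green ramp: every intermediate color stays at full luminosity.
for color in gradient((1.0, 0.0), (0.0, 1.0), 5):
    print(color, math.sqrt(sum(c * c for c in color)))
```

A straight lerp would sag toward the dark diagonal in the middle of the ramp; interpolating the squares keeps the ramp on the arc.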
McSeem
