From: Stephan A. <sup...@gm...> - 2004-06-16 19:40:21
Hello,
for my application, I tried to find a solution to the problem of
blending RGB colors. The problem is that the RGB color space is not
perceptually uniform, which means that the human eye disagrees with the
mathematical result of mixing two colors.
I got very interested in the Lab color space, because it is supposed to
be perceptually uniform. During my first attempts to implement the
RGB->Lab->RGB conversion, I missed the critical detail of converting the
normal RGB colors to "linear" RGB first, that is, applying a gamma
conversion function before doing the Lab conversion. Without the gamma
conversion, blending colors in Lab had the same problems as blending in
normal RGB. So I figured, hey, let's try and see what happens when
blending in "linear" RGB, and to my surprise, it looked much better, and
in fact the same as when doing the Lab conversion correctly. The problem
is that if you use linear RGB internally in your app to store bitmaps,
you lose many of the colors of your display color space, which is normal
RGB. The solution I have found so far is to apply the gamma conversion
function on the fly when blending, using a lookup table that maps
[0..255] to [0..25500]. For the conversion back, I use a lookup table
with 25501 entries mapping [0..25500] to [0..255], produced by rounding.
This gives satisfactory results in that the colors actually stay the
same (more stable results with 100 times the precision) and the wrong
intermediate colors are eliminated.
I have attached two screenshots to show the difference between blending
in linear and in normal RGB space; I hope everyone will agree that the
linear one looks much better. So much better that it pays to take the
performance hit. And performance is the interesting bit, which motivates
this email. I would like to know if others have had similar concerns and
how they solved them. Please consider that blending takes place in a lot
of cases, not only alpha compositing: image transformations with
filters, a blur filter, whenever you take two colors and compute some
intermediate color. Here is some code of what I do. I think it may
contain flaws, and it could definitely use speeding up, so I would be
grateful for any suggestions.
const float kGamma = 2.2;
const float kInverseGamma = 1.0 / kGamma;

uint16* kGammaTable = NULL;
uint8* kInverseGammaTable = NULL;

void
init_gamma_blending()
{
    // init LUT R'G'B' [0...255] -> RGB [0...25500]
    if (!kGammaTable)
        kGammaTable = new uint16[256];
    for (uint32 i = 0; i < 256; i++)
        kGammaTable[i] = (uint16)(powf((float)i / 255.0, kGamma)
            * 25500.0 + 0.5);

    // init LUT RGB [0...25500] -> R'G'B' [0...255]
    if (!kInverseGammaTable)
        kInverseGammaTable = new uint8[25501];
    for (uint32 i = 0; i < 25501; i++)
        kInverseGammaTable[i] = (uint8)(powf((float)i / 25500.0,
            kInverseGamma) * 255.0 + 0.5);
}
And here is a blending routine that uses the LUTs:
inline void
blend_gamma(uint16 b1, uint16 b2, uint16 b3, uint8 ba, // bottom components
            uint16 t1, uint16 t2, uint16 t3, uint8 ta, // top components
            uint8* d1, uint8* d2, uint8* d3, uint8* da) // dest components
{
    if (ba == 255) {
        // opaque bottom layer: simple weighted mix in linear space
        uint32 destAlpha = 255 - ta;
        *d1 = kInverseGammaTable[(b1 * destAlpha + t1 * ta) / 255];
        *d2 = kInverseGammaTable[(b2 * destAlpha + t2 * ta) / 255];
        *d3 = kInverseGammaTable[(b3 * destAlpha + t3 * ta) / 255];
        *da = 255;
    } else {
        uint8 alphaRest = 255 - ta;
        uint32 alphaTemp = 65025 - alphaRest * (255 - ba);
        if (alphaTemp == 0) {
            // both layers fully transparent; avoid dividing by zero
            *d1 = *d2 = *d3 = *da = 0;
            return;
        }
        uint32 alphaDest = ba * alphaRest;
        uint32 alphaSrc = 255 * ta;
        *d1 = kInverseGammaTable[(b1 * alphaDest + t1 * alphaSrc) / alphaTemp];
        *d2 = kInverseGammaTable[(b2 * alphaDest + t2 * alphaSrc) / alphaTemp];
        *d3 = kInverseGammaTable[(b3 * alphaDest + t3 * alphaSrc) / alphaTemp];
        *da = alphaTemp / 255;
    }
}
I haven't talked about associated alpha yet, but my subject line says
I'm going to... These functions assume non-associated alpha and
*produce* non-associated alpha. I have tried to look into the alpha
blending AGG does. When I replace my normal RGB blending code with the
AGG code, I get a significant speed-up: the AGG blending does only four
multiplies and a couple of shifts. Here are some numbers:

Blending 800x600 pixels, bottom layer with alpha = 50, top layer with
alpha = 50:

My normal RGB code: 41 ms
My linear RGB code: 54 ms
AGG code: 11 ms

However, I'm not sure whether the alpha is still unassociated *after*
the blending has been done in the AGG case. To spare you looking up the
code, here is the AGG version of the above function:
inline void
blend(uint16 b1, uint16 b2, uint16 b3, uint8 ba, // bottom components
      uint16 t1, uint16 t2, uint16 t3, uint8 ta, // top components
      uint8* d1, uint8* d2, uint8* d3, uint8* da) // dest components
{
    if (ta) {
        if (ta == 255) {
            *d1 = t1;
            *d2 = t2;
            *d3 = t3;
            *da = ta;
        } else {
            int r = b1;
            int g = b2;
            int b = b3;
            int a = ba;
            *d1 = (uint8)((((t1 - r) * ta) + (r << 8)) >> 8);
            *d2 = (uint8)((((t2 - g) * ta) + (g << 8)) >> 8);
            *d3 = (uint8)((((t3 - b) * ta) + (b << 8)) >> 8);
            *da = (uint8)((((ta + a) << 8) - ta * a) >> 8);
        }
    } else {
        *d1 = b1;
        *d2 = b2;
        *d3 = b3;
        *da = ba;
    }
}
I hope I haven't confused this and copied the wrong part from the AGG
blending...
Hm. When I think about it... is the above AGG code expecting the bottom
pixel to be completely opaque? It looks flawed somehow: the bottom
pixel's alpha is only considered in the calculation of the resulting
alpha?
Anyway, I hope to have given some food for thought, especially the bit
where I say that even image filters and the like would have to be done
in linear RGB space. Maybe this is all already possible with AGG and I
have just missed it... I'm not sure. :-)
Best regards,
-Stephan