|
From: Maxim S. <mcs...@ya...> - 2004-06-17 15:45:08
|
Hi Stephan,
Yes, your "linear" alpha-blending does look much better than what we use now in
AGG. I suppose the difference in performance is because you use "v/255" rather
than "v>>8"; division is much slower. The problem is that "v>>8" is really
"v/256", which is theoretically incorrect, although in practice the difference
is hardly noticeable. The only case we must handle separately is alpha == 255:
since 255*255/256 is 254, the error would accumulate even when drawing opaque
polygons. That is the main reason for the "if(alpha == 255*255)..." check; the
second reason is a bit of optimization.
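For what it's worth, the exact "v/255" can also be done with shifts and adds, so we don't have to choose between a division and the 254 drift. A minimal sketch (the helper name div255 is mine, not anything from AGG):

```cpp
#include <cassert>
#include <cstdint>

// Exact, rounded v/255 for v in [0, 255*255], using only shifts and adds.
// Based on the identity (t + (t >> 8)) >> 8 == round(v / 255.0) for
// t = v + 128, which holds over this input range.
inline uint8_t div255(uint32_t v)
{
    uint32_t t = v + 128;
    return (uint8_t)((t + (t >> 8)) >> 8);
}
```

The division in the blending inner loop can then be replaced by this helper without losing the exact result at alpha == 255.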
I think your idea of applying gamma to each channel is great! It is definitely
worth having a modified version of the pixel format renderers. I'm not sure we
need a LUT of 25500 values while the actual precision of the color space
remains 8 bits per channel, but I'll need to experiment with it. You multiply
your "top" colors by 100, right? Why not by 256 then?
Actually, it's possible to implement any color space you want. For example, you
could use agg::rgba (with doubles inside) but work with an 8-bit-per-channel
canvas. AFAIU that's similar to your values multiplied by 100.
BTW, with this LUT we can avoid divisions and still obtain a more correct
result with the shift operation; we just need to account for that when
initializing the LUT.
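For example (a hypothetical sketch with names of my own, not code from AGG or from your mail): if the forward LUT is scaled to [0..255*256] and the 8-bit alpha is rescaled to [0..256], the weighted sum stays in table range after a plain ">> 8", so the per-pixel division disappears entirely:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

const double kGamma = 2.2;

// Forward LUT scaled to [0..255*256] so that blending can use ">> 8"
// instead of "/ 255". Names are illustrative only.
static uint16_t sToLinear[256];
static uint8_t  sFromLinear[255 * 256 + 1];

void init_shift_luts()
{
    for (int i = 0; i < 256; i++)
        sToLinear[i] = (uint16_t)(std::pow(i / 255.0, kGamma) * 65280.0 + 0.5);
    for (int i = 0; i <= 255 * 256; i++)
        sFromLinear[i] = (uint8_t)(std::pow(i / 65280.0, 1.0 / kGamma) * 255.0 + 0.5);
}

// Opaque-bottom blend: alpha is rescaled from [0..255] to [0..256]
// so the division by 255 becomes a shift.
inline uint8_t blend_shift(uint8_t bottom, uint8_t top, uint8_t alpha)
{
    uint32_t a = alpha + (alpha >> 7);   // 255 -> 256, 0 -> 0
    uint32_t b = sToLinear[bottom];
    uint32_t t = sToLinear[top];
    return sFromLinear[(b * (256 - a) + t * a) >> 8];
}
```

The rescaling "alpha + (alpha >> 7)" keeps the endpoints exact: alpha 0 leaves the bottom untouched and alpha 255 produces exactly the top color.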
Where did you get that 2.2 gamma value? Is it some kind of average standard for
human vision?
McSeem
--- Stephan Assmus <sup...@gm...> wrote:
> Hello,
>
> for my application, I tried to find a solution to the problem of
> blending RGB colors. The problem is that the RGB color space is not
> perceptually uniform. This means that the human eye disagrees with the
> mathematical results of mixing two colors.
> I got very interested in the Lab color space, because it is supposed to
> be perceptually uniform. During my first attempts to implement the
> RGB->Lab->RGB conversion, I missed the critical detail of converting the
> normal RGB colors to "linear" RGB colors first, that is, applying a gamma
> conversion function before doing the Lab conversion. Without the gamma
> conversion, blending colors in Lab had the same problems as blending in
> normal RGB. So I figured, hey, let's try and see what happens when
> blending in "linear" RGB, and to my surprise it looked much better, and
> in fact the same as when doing the Lab conversion correctly. The problem
> is that if you use linear RGB internally in your app to store bitmaps,
> you lose many of the colors of your display color space, which is normal
> RGB. The solution I have found so far is to apply the gamma conversion
> function on the fly when blending, using a [0..25500] lookup table. For
> the conversion back, I use a lookup table with 25501 entries and values
> [0..255], produced by rounding. This gives satisfactory results in that
> the colors actually stay the same (more stable results with 100 times
> the precision) and the wrong intermediate colors are eliminated.
> I have attached two screenshots showing the difference between blending
> in linear and in normal RGB space; I hope everyone will agree that the
> linear one looks much better. So much better that it pays to take the
> performance hit. And performance is the interesting bit that motivates
> this email. I would like to know whether others have had similar
> concerns and how they solved them. Please consider that blending takes
> place in a lot of cases, not only alpha compositing: image
> transformations with filters, a blur filter, whenever you take two
> colors and compute some intermediate color. Here is some code showing
> what I do. I think it may contain flaws, and it could definitely use
> some speed-up, so I would be grateful for any suggestions.
>
> const float kGamma = 2.2;
> const float kInverseGamma = 1.0 / kGamma;
>
> uint16* kGammaTable = NULL;
> uint8* kInverseGammaTable = NULL;
>
> void
> init_gamma_blending()
> {
>     // init LUT R'G'B' [0...255] -> RGB [0...25500]
>     if (!kGammaTable)
>         kGammaTable = new uint16[256];
>     for (uint32 i = 0; i < 256; i++)
>         kGammaTable[i] = (uint16)(powf((float)i / 255.0, kGamma) * 25500.0 + 0.5);
>
>     // init LUT RGB [0...25500] -> R'G'B' [0...255]
>     if (!kInverseGammaTable)
>         kInverseGammaTable = new uint8[25501];
>     for (uint32 i = 0; i < 25501; i++)
>         kInverseGammaTable[i] = (uint8)(powf((float)i / 25500.0, kInverseGamma) * 255.0 + 0.5);
> }
>
> And here is a blending routine, that uses the LUTs:
>
> inline void
> blend_gamma(uint16 b1, uint16 b2, uint16 b3, uint8 ba,  // bottom components
>             uint16 t1, uint16 t2, uint16 t3, uint8 ta,  // top components
>             uint8* d1, uint8* d2, uint8* d3, uint8* da) // dest components
> {
>     if (ba == 255) {
>         uint32 destAlpha = 255 - ta;
>         *d1 = kInverseGammaTable[(b1 * destAlpha + t1 * ta) / 255];
>         *d2 = kInverseGammaTable[(b2 * destAlpha + t2 * ta) / 255];
>         *d3 = kInverseGammaTable[(b3 * destAlpha + t3 * ta) / 255];
>         *da = 255;
>     } else {
>         uint8 alphaRest = 255 - ta;
>         uint32 alphaTemp = 65025 - alphaRest * (255 - ba);
>         uint32 alphaDest = ba * alphaRest;
>         uint32 alphaSrc = 255 * ta;
>         *d1 = kInverseGammaTable[(b1 * alphaDest + t1 * alphaSrc) / alphaTemp];
>         *d2 = kInverseGammaTable[(b2 * alphaDest + t2 * alphaSrc) / alphaTemp];
>         *d3 = kInverseGammaTable[(b3 * alphaDest + t3 * alphaSrc) / alphaTemp];
>         *da = alphaTemp / 255;
>     }
> }
>
> I have not talked about associated alpha yet, but my subject says I'm
> going to... These functions assume non-associated alpha and *produce*
> non-associated alpha. I have tried to look into the alpha blending AGG
> does. When I replace my normal RGB blending code with the AGG code, I
> can achieve a significant speed-up: AGG blending does only 4 multiplies
> and a couple of shifts. Here are some numbers:
>
> Blending 800x600 pixels, bottom layer with alpha = 50, top layer with
> alpha = 50:
>
> My normal RGB code: 41 ms
> My linear RGB code: 54 ms
> AGG code: 11 ms
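[Incidentally, a multiply-and-shift count like AGG's comes almost for free once you store associated (premultiplied) alpha, where "over" needs no division of color by alpha at all. A minimal sketch of premultiplied "over", assuming all inputs are already premultiplied; this is my illustration, not AGG's actual code:]

```cpp
#include <cassert>
#include <cstdint>

// Exact rounded (a*b)/255 using only shifts and adds.
inline uint8_t mul255(uint32_t a, uint32_t b)
{
    uint32_t t = a * b + 128;
    return (uint8_t)((t + (t >> 8)) >> 8);
}

// "Over" for associated (premultiplied) alpha: d = t + b * (1 - ta).
// Every channel, including alpha, uses the same cheap formula, and the
// bottom alpha is handled correctly with no special cases.
inline void blend_premul(uint8_t b1, uint8_t b2, uint8_t b3, uint8_t ba,
                         uint8_t t1, uint8_t t2, uint8_t t3, uint8_t ta,
                         uint8_t* d1, uint8_t* d2, uint8_t* d3, uint8_t* da)
{
    uint8_t inv = 255 - ta;
    *d1 = t1 + mul255(b1, inv);
    *d2 = t2 + mul255(b2, inv);
    *d3 = t3 + mul255(b3, inv);
    *da = ta + mul255(ba, inv);
}
```

[Because premultiplied components never exceed their alpha, the sums above cannot overflow 255, which is what makes the branch-free form possible.]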
>
> However, I'm not sure whether alpha is unassociated *after* the blending
> has been done in the AGG case. To spare you looking up the code, here is
> the AGG version of the above function:
>
> inline void
> blend(uint16 b1, uint16 b2, uint16 b3, uint8 ba,  // bottom components
>       uint16 t1, uint16 t2, uint16 t3, uint8 ta,  // top components
>       uint8* d1, uint8* d2, uint8* d3, uint8* da) // dest components
> {
>     if (ta) {
>         if (ta == 255) {
>             *d1 = t1;
>             *d2 = t2;
>             *d3 = t3;
>             *da = ta;
>         } else {
>             int r = b1;
>             int g = b2;
>             int b = b3;
>             int a = ba;
>             *d1 = (uint8)((((t1 - r) * ta) + (r << 8)) >> 8);
>             *d2 = (uint8)((((t2 - g) * ta) + (g << 8)) >> 8);
>             *d3 = (uint8)((((t3 - b) * ta) + (b << 8)) >> 8);
>             *da = (uint8)((((ta + a) << 8) - ta * a) >> 8);
>         }
>     } else {
>         *d1 = b1;
>         *d2 = b2;
>         *d3 = b3;
>         *da = ba;
>     }
> }
>
> I hope I haven't confused this and copied the wrong part from AGG
> blending...
>
> Hm. When I think about this... is the above AGG code expecting the
> bottom pixel to be completely opaque? It looks flawed somehow. Bottom
> pixel alpha is only considered in the calculation of the resulting
> alpha?
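[That suspicion is easy to check numerically: the color part of the quoted blend is a plain lerp by the top alpha only, so the bottom color leaks through even when the bottom pixel is fully transparent. A one-channel reduction of that formula (my reduction, not AGG source):]

```cpp
#include <cassert>
#include <cstdint>

// One color channel of the AGG blend quoted above: a lerp between
// bottom and top weighted by the top alpha only; the bottom alpha
// never enters the color computation.
inline uint8_t agg_channel(int b, int t, int ta)
{
    return (uint8_t)(((t - b) * ta + (b << 8)) >> 8);
}
```

[With a fully transparent bottom (ba = 0) and ta = 128, a correct unassociated "over" should return the top color unchanged, yet the lerp still pulls the result halfway toward the invisible bottom color. Note also that feeding ta = 255 through the lerp yields 254 rather than 255 because of the ">> 8", which is presumably why the ta == 255 branch is special-cased.]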
>
> Anyway, I hope to have given some food for thought, especially the bit
> where I say that even image filters and the like would have to operate
> in linear RGB space. Maybe this is all already possible with AGG and I
> have just missed it... I'm not sure. :-)
>
> Best regards,
> -Stephan
>
> ATTACHMENT part 2.1 image/png name=LinearRGB.png
> ATTACHMENT part 2.2 application/x-be_attribute name=BeOS Attributes
> ATTACHMENT part 3.1 image/png name=NormalRGB.png
> ATTACHMENT part 3.2 application/x-be_attribute name=BeOS Attributes
|