From: Bernhard H. <bh...@in...> - 2002-04-13 19:11:38
David Boddie <da...@sl...> writes:

> On Thursday 11 Apr 2002 9:39 pm, Bernhard Herzog wrote:
> > David Boddie <da...@sl...> writes:
> > > - Currently, the alpha component of the pixels in the destination pixmap
> > >   are ignored when blending and I overwrite them with the alpha
> > >   components of pixels in the source pixmap.

Rereading this, I realized that I only replied to the last part
(overwriting the target alpha value) and not the first (blending).

> I suppose that the overall effect depends on the product of the
> translucencies of the two images, but it's possibly not something you
> have to consider as you are rendering the images in two stages:
>
> 1. When the first image is painted onto the canvas then the contents of
>    the canvas are modified by the image depending on its translucency.
>
> 2. When the second image is painted, the first image is part of the image
>    on the canvas so only the translucency of the second image is important.
>
> The result is an image on the canvas with no inherent translucency.
> The individual images are translucent, but the result is not. I may have
> misunderstood what you meant. :-)

We're talking about slightly different things, as mentioned in my reply
above. When you're rendering on an opaque canvas, the canvas should
indeed stay opaque. Whatever formula you use to compute the alpha
channel of the canvas, it has to produce an opaque alpha channel for the
result.

For the moment, rendering on an opaque canvas is all we need. It would
be interesting, though, to use the libart based renderer to export
raster images from Sketch instead of relying on ghostscript. For raster
export, an option to produce an image with an alpha channel, so that
e.g. all pixels that aren't covered by an object are completely
transparent, would be nice.

> > This second meaning is used in libart to achieve anti-aliasing. E.g.
> > at the edges the alpha from libart's SVP is somewhere between full
> > opacity and full transparency. [...]
> > Combining two alpha values of this kind means that you take the
> > maximum of the two values, I guess.
>
> I would have thought that we would use the product of the two alpha
> values in this case.

The usage of alpha here is more like a generalized clip-mask, and
combining clip-masks usually means computing their intersection or
union. Here this would mean the minimum (for the intersection) or the
maximum (for the union) of the two alpha values.

> > Things become very interesting and complex if you try to combine the
> > two, which may mean that we have to keep two alpha channels around for
> > the result image and possible temporary images.
>
> I assume that the destination pixmap is effectively the canvas. If this
> is the case, then I don't believe that we need to worry about temporary
> images.

At the moment, no. When more transparency support is introduced, though,
it may become necessary. Imagine being able to specify transparency for
a group. For that you effectively have to render the group into a
temporary image which is then composited with the canvas.

> > I must admit that I copied the alpha table from libart or one of
> > libart's example programs and I'm not 100% sure on this. It's used or at
> > least supposed to be used to gamma-correct the translucency values for
> > the antialiasing.
>
> In which case I can probably ignore it for the time being.

Yes. Keep in mind, too, that all this transparency work started out to
implement simple GIF-style bilevel transparency, which wouldn't be
affected by gamma correction anyway.

> > > I suspect that a value of 255 doesn't actually cause the
> > > source image to be opaque.
> >
> > Not sure what you mean here.
> This line from skrender.c which puts the red component of a pixel into
> the destination pixmap might indicate what I mean:
>
> v[0] = v[0] + (((fg[0] - v[0]) * fg[3] + 0x80) >> 8);

I took that directly from libart :), probably thinking that Raph knows
what he's doing and that the formula is basically correct, assuming that
(foo + 0x80) >> 8 is equivalent to foo / 255 (which apparently it isn't,
see below).

> Rewriting for clarity:
>
> dest_red = dest_red + (((src_red - dest_red) * src_alpha + 0x80) >> 8)
>
> Ignoring the scaling down of the value, which I think I've done incorrectly
> anyway, I believe that this roughly translates to
>
> dest_red = dest_red + (src_alpha * (src_red - dest_red))
>
> where 0.0 <= src_alpha <= 1.0. So, when src_alpha = 1.0, only the src_red
> is shown. However, this isn't exactly equivalent to the first line I gave.

Indeed. The first version is off by 1 in exactly a quarter of all cases
when src_alpha == 255 (the equivalent of 1.0). Tested with a small
C program:

    #include <stdio.h>

    int
    main(void)
    {
        int id, is;
        unsigned char d, s, a;

        a = 255;
        for (id = 0; id < 256; id++)
        {
            d = id;
            for (is = 0; is < 256; is++)
            {
                int result1, result2;
                s = is;
                result1 = d + (((s - d) * a + 0x80) >> 8);
                result2 = d + (((s - d) * a) / 255);
                /*if (result1 != result2)*/
                if (result1 != s)
                {
                    printf("%02x, %02x: result1 = %02x, result2 = %02x\n",
                           d, s, result1, result2);
                }
            }
        }
        return 0;
    }

result2 is always correct, though, at least in this particular case;
there might be other round-off errors in other cases.

What does that mean for skrender.c? Well, we have to decide whether it's
OK to produce a slightly incorrect result quickly, or a correct result
more slowly. It might be a good idea to wrap the actual blending
operation into a macro so that it's easier to choose between the
different possibilities at compile time. E.g.

    dest = BLEND_COLOR(dest, src, alpha);

Bernhard

-- 
Intevation GmbH                            http://intevation.de/
Sketch                                     http://sketch.sourceforge.net/
MapIt!                                     http://www.mapit.de/