From: Maxim Shemanarev <mcseemagg@ya...> - 2004-06-22 23:10:44

> Sorry for the bandwidth. I understand better now; I just found Stephan's
> first email and I am starting to understand it... Sorry for the bandwidth
> again. Just trying to learn what's being discussed.

Marc, what are you sorry for?! This is an open discussion and you are always very welcome to take part in it. So, please don't hesitate to ask any questions here.

In fact, I'm going to try your method of blending with simultaneous multiplications (if you don't mind, of course); the only thing is that I want to make it work right first, then make it work fast.

McSeem
From: Marc Van Olmen <marcvanolmen@ea...> - 2004-06-22 21:42:58

Sorry for the bandwidth. I understand better now; I just found Stephan's first email and I am starting to understand it... Sorry for the bandwidth again. Just trying to learn what's being discussed.

mvo

on 6/22/04 5:18 PM, Marc Van Olmen at marcvanolmen@... wrote:

> Hi,
>
> I'm interested in this because I want to design the best-looking results
> for my real-time video application, so I'm interested in this whole
> gamma/linear discussion.
>
> I'm not as knowledgeable as you guys about this subject and I'm still
> trying to grasp everything being discussed here. But speaking about
> luminance, the most common formula to convert RGB to YUV (where Y is
> luminance) is the following:
>
> RGB [0..255] -> YUV
>
>   | Y |   |  0.256788  0.504129  0.097906 |   | R |   |  16 |
>   | U | = | -0.148223 -0.290993  0.439216 | x | G | + | 128 |
>   | V |   |  0.439216 -0.367789 -0.071427 |   | B |   | 128 |
>
> or
>
>   | Y |   |  8414  16519   3208 |   | R |           |  16 |
>   | U | = | -4857  -9535  14392 | x | G | / 32768 + | 128 |
>   | V |   | 14392 -12052  -2341 |   | B |           | 128 |
>
> Sources I used for this: http://www.fourcc.org/, but also Apple's
> Ice Floe pages:
>
> http://developer.apple.com/quicktime/icefloe/dispatch019.html
>
> As you can see, R and G are more important than B for luminance.
>
> I have written AltiVec (PowerPC) routines to do this kind of thing
> quickly, so that's my experience.
>
> I really want to learn why you say RGB is linear. Any URL that you can
> give me that would explain this in more depth?
>
> Marc
>
> PS: Tonight before going to bed I will read some chapters about gamma in
> my bible <http://www.poynton.com/DVAI/index.html>; maybe that will help
> me understand this issue better.
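The integer matrix Marc quotes can be turned into a small fixed-point conversion routine. This is a sketch under my own assumptions: the function name, the clamping helper, and the rounding offset (adding 16384 before the shift by 15) are mine, not from the mail or from any particular library.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical helper: clamp an intermediate result to [0, 255].
static inline int clamp255(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }

// RGB (0..255) -> YUV using the fixed-point BT.601-style matrix from the
// mail: coefficients scaled by 32768, rounded before the shift, then the
// (16, 128, 128) offsets are added.
void rgb_to_yuv(int r, int g, int b, int& y, int& u, int& v)
{
    y = clamp255((( 8414 * r + 16519 * g +  3208 * b + 16384) >> 15) +  16);
    u = clamp255(((-4857 * r -  9535 * g + 14392 * b + 16384) >> 15) + 128);
    v = clamp255(((14392 * r - 12052 * g -  2341 * b + 16384) >> 15) + 128);
}
```

With these coefficients, white (255, 255, 255) lands on the video-range Y of 235 with neutral chroma, and black on Y = 16, which matches the 16/128 offsets in the quoted matrix.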
From: Marc Van Olmen <marcvanolmen@ea...> - 2004-06-22 21:18:19

Hi,

I'm interested in this because I want to design the best-looking results for my real-time video application, so I'm interested in this whole gamma/linear discussion.

I'm not as knowledgeable as you guys about this subject and I'm still trying to grasp everything being discussed here. But speaking about luminance, the most common formula to convert RGB to YUV (where Y is luminance) is the following:

RGB [0..255] -> YUV

  | Y |   |  0.256788  0.504129  0.097906 |   | R |   |  16 |
  | U | = | -0.148223 -0.290993  0.439216 | x | G | + | 128 |
  | V |   |  0.439216 -0.367789 -0.071427 |   | B |   | 128 |

or

  | Y |   |  8414  16519   3208 |   | R |           |  16 |
  | U | = | -4857  -9535  14392 | x | G | / 32768 + | 128 |
  | V |   | 14392 -12052  -2341 |   | B |           | 128 |

Sources I used for this: http://www.fourcc.org/, but also Apple's Ice Floe pages:

http://developer.apple.com/quicktime/icefloe/dispatch019.html

As you can see, R and G are more important than B for luminance.

I have written AltiVec (PowerPC) routines to do this kind of thing quickly, so that's my experience.

I really want to learn why you say RGB is linear. Any URL that you can give me that would explain this in more depth?

Marc

PS: Tonight before going to bed I will read some chapters about gamma in my bible <http://www.poynton.com/DVAI/index.html>; maybe that will help me understand this issue better.

on 6/22/04 4:45 PM, Stephan Assmus at superstippi@... wrote:

> Hi Maxim,
>
> thanks for continuing to take time for this!
>
>>> Alpha blending is just one form of linear interpolation. My
>>> experiments have shown that calculating the linear interpolation of
>>> two colors in the "normal" RGB space leads to "wrong" intermediate
>>> colors. The reason is simply that the "normal" RGB space is nonlinear
>>> itself, so you cannot expect correct results from linear interpolation
>>> of two nonlinear values. To get correct results, you have to
>>> "linearize" the RGB colors by applying the gamma function, then do the
>>> linear interpolation, then apply the reverse gamma function.
>>
>> That's correct. I achieved the correct result only when I did that
>> (direct gamma before and the reverse gamma after):
>>
>> int r = m_dir_gamma[p[Order::R]];
>> int g = m_dir_gamma[p[Order::G]];
>> int b = m_dir_gamma[p[Order::B]];
>> r = m_inv_gamma[(((m_dir_gamma[c.r] - r) * alpha) + (r << 16)) >> 16];
>> g = m_inv_gamma[(((m_dir_gamma[c.g] - g) * alpha) + (g << 16)) >> 16];
>> b = m_inv_gamma[(((m_dir_gamma[c.b] - b) * alpha) + (b << 16)) >> 16];
>>
>> The first question is about the difference between the plain and
>> premultiplied color spaces. In the plain one, all the color values are
>> always in the range 0...255 (regardless of alpha); in the premultiplied
>> one they are "condensed" to 0...alpha. I suspect we'll have to expand
>> the colors back before applying the direct gamma, that is, to perform
>> any gamma correction in the plain color space, which can be too
>> expensive and, in fact, compromises the whole idea of premultiplied
>> colors.
>
> Yes, gamma-corrected blending and premultiplied blending should be
> mutually exclusive.
>
>> The other question: when we filter an image we just sum the RGBA
>> values of 4, 16, 36, etc. pixels with their respective weights
>> (according to the filtering function). You want to say that for each
>> component we need to apply gamma, then multiply the components by their
>> weights, then sum them, calculate the resulting value, and then apply
>> the inverse gamma. Is that correct?
>
> Actually, yes, exactly this! :)
>
> The point is: an image library should be usable for different
> applications. For a game, I might only be interested in the fastest
> possible way to do alpha blending, and I would want premultiplied
> alpha. In an image application, however, I might be interested in
> non-premultiplied alpha, since I might have to implement blending of
> different layers with entirely different modes; for example, the
> "multiply" mode of Photoshop needs access to normal colors. At least I
> think it does (that's how I implemented it in my application).
> Finally, I'm one of those guys occupied with the idea of ultimate
> quality; that's why I would go for blending in the gamma-corrected
> linear RGB space. I don't think I can have both the speed of
> premultiplied alpha and the visually correct results.
>
> So the question is not that of finding the ideal solution to alpha
> blending or lerping, but of offering a level of flexibility in AGG as a
> multi-purpose image library. Your existing concept of pixel formats
> covers this fairly well, I must say, so there is no problem. I just
> tried to promote offering three different options (pixel formats) for
> different needs.
>
>> And the third question is: what do we do with the alpha channel when
>> blending on the destination canvas and when filtering images?
>
> Do you mean the existing alpha channel that you blend on top of? For
> premultiplied alpha there is no problem, as the color values can be
> assumed to already be multiplied with the existing alpha.
> For the non-premultiplied case I insist that you need an additional
> case for when the bottom alpha != 255. In that case you need to
> multiply the bottom color by the bottom alpha; I posted one version of
> the code for that in an earlier mail.
>
>>> I think that the problem of "wrong" colors stems from the fact that
>>> green is much more important for luminance than red, which is in turn
>>> much more important than blue. So depending on how red, green and blue
>>> are mixed, the problem of lerping two colors with different mixes of
>>> red, green and blue in their nonlinear state can be more or less
>>> visible.
>>
>> Hmm, maybe you are right, but I'm not sure I can agree with that,
>> because this gamma is applied to all R, G, and B components uniformly.
>
> Yes, but since the different components contribute unevenly to the
> perceived luminance, you have a problem when lerping from, say, red to
> green. The intermediate browns will be too dark, as can be seen in the
> images I posted. I can't really describe it with strict-looking logic,
> but the luminance of the components is distributed logarithmically(?)
> across the [0..255] range; by applying the gamma function you make the
> luminance linear, so you can actually calculate a linear transition
> from one color to another with correct luminance of the resulting
> color. Otherwise you end up with a color that has the correct relation
> of the components, but the wrong overall luminance.
>
>> Different luminance of the components is important when converting a
>> color image to B&W, or, when converting it to another color space, to
>> display it on another type of device.
>
> I think it also plays an important role in lerping colors. Here is
> another piece of logic: if doing the alpha blending with gamma-corrected
> colors solved the problem of wrong intermediate colors (specifically
> "too dark" intermediate colors), then it must also solve the same
> problem in image filtering, since the operation in filtering an image
> and in alpha blending is essentially the same, namely lerping one color
> to another. I will make an image that shows the problem exists and post
> it to the list.
>
> Regards,
> Stephan
>
> _______________________________________________
> Vector-agg-general mailing list
> Vector-agg-general@...
> https://lists.sourceforge.net/lists/listinfo/vector-agg-general
From: Stephan Assmus <superstippi@gm...> - 2004-06-22 21:09:44

Hi again,

I attached two images that demonstrate the "wrong" colors produced by image filtering without gamma correction. Especially between the red and green stripes, there are intermediate browns that are definitely darker than either the green or the red of the original image. But the purple between the red and blue is too dark as well.

Regards,
Stephan
From: Stephan Assmus <superstippi@gm...> - 2004-06-22 20:45:43

Hi Maxim,

thanks for continuing to take time for this!

>> Alpha blending is just one form of linear interpolation. My experiments
>> have shown that calculating the linear interpolation of two colors in
>> the "normal" RGB space leads to "wrong" intermediate colors. The reason
>> is simply that the "normal" RGB space is nonlinear itself, so you
>> cannot expect correct results from linear interpolation of two
>> nonlinear values. To get correct results, you have to "linearize" the
>> RGB colors by applying the gamma function, then do the linear
>> interpolation, then apply the reverse gamma function.
>
> That's correct. I achieved the correct result only when I did that
> (direct gamma before and the reverse gamma after):
>
> int r = m_dir_gamma[p[Order::R]];
> int g = m_dir_gamma[p[Order::G]];
> int b = m_dir_gamma[p[Order::B]];
> r = m_inv_gamma[(((m_dir_gamma[c.r] - r) * alpha) + (r << 16)) >> 16];
> g = m_inv_gamma[(((m_dir_gamma[c.g] - g) * alpha) + (g << 16)) >> 16];
> b = m_inv_gamma[(((m_dir_gamma[c.b] - b) * alpha) + (b << 16)) >> 16];
>
> The first question is about the difference between the plain and
> premultiplied color spaces. In the plain one, all the color values are
> always in the range 0...255 (regardless of alpha); in the premultiplied
> one they are "condensed" to 0...alpha. I suspect we'll have to expand
> the colors back before applying the direct gamma, that is, to perform
> any gamma correction in the plain color space, which can be too
> expensive and, in fact, compromises the whole idea of premultiplied
> colors.

Yes, gamma-corrected blending and premultiplied blending should be mutually exclusive.

> The other question: when we filter an image we just sum the RGBA values
> of 4, 16, 36, etc. pixels with their respective weights (according to
> the filtering function). You want to say that for each component we
> need to apply gamma, then multiply the components by their weights,
> then sum them, calculate the resulting value, and then apply the
> inverse gamma. Is that correct?

Actually, yes, exactly this! :)

The point is: an image library should be usable for different applications. For a game, I might only be interested in the fastest possible way to do alpha blending, and I would want premultiplied alpha. In an image application, however, I might be interested in non-premultiplied alpha, since I might have to implement blending of different layers with entirely different modes; for example, the "multiply" mode of Photoshop needs access to normal colors. At least I think it does (that's how I implemented it in my application). Finally, I'm one of those guys occupied with the idea of ultimate quality; that's why I would go for blending in the gamma-corrected linear RGB space. I don't think I can have both the speed of premultiplied alpha and the visually correct results.

So the question is not that of finding the ideal solution to alpha blending or lerping, but of offering a level of flexibility in AGG as a multi-purpose image library. Your existing concept of pixel formats covers this fairly well, I must say, so there is no problem. I just tried to promote offering three different options (pixel formats) for different needs.

> And the third question is: what do we do with the alpha channel when
> blending on the destination canvas and when filtering images?

Do you mean the existing alpha channel that you blend on top of? For premultiplied alpha there is no problem, as the color values can be assumed to already be multiplied with the existing alpha. For the non-premultiplied case I insist that you need an additional case for when the bottom alpha != 255. In that case you need to multiply the bottom color by the bottom alpha; I posted one version of the code for that in an earlier mail.

>> I think that the problem of "wrong" colors stems from the fact that
>> green is much more important for luminance than red, which is in turn
>> much more important than blue. So depending on how red, green and blue
>> are mixed, the problem of lerping two colors with different mixes of
>> red, green and blue in their nonlinear state can be more or less
>> visible.
>
> Hmm, maybe you are right, but I'm not sure I can agree with that,
> because this gamma is applied to all R, G, and B components uniformly.

Yes, but since the different components contribute unevenly to the perceived luminance, you have a problem when lerping from, say, red to green. The intermediate browns will be too dark, as can be seen in the images I posted. I can't really describe it with strict-looking logic, but the luminance of the components is distributed logarithmically(?) across the [0..255] range; by applying the gamma function you make the luminance linear, so you can actually calculate a linear transition from one color to another with correct luminance of the resulting color. Otherwise you end up with a color that has the correct relation of the components, but the wrong overall luminance.

> Different luminance of the components is important when converting a
> color image to B&W, or, when converting it to another color space, to
> display it on another type of device.

I think it also plays an important role in lerping colors. Here is another piece of logic: if doing the alpha blending with gamma-corrected colors solved the problem of wrong intermediate colors (specifically "too dark" intermediate colors), then it must also solve the same problem in image filtering, since the operation in filtering an image and in alpha blending is essentially the same, namely lerping one color to another. I will make an image that shows the problem exists and post it to the list.

Regards,
Stephan
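The "linearize, weight, sum, de-linearize" recipe agreed on above can be sketched in a few lines. This is only an illustration under my own assumptions: a pure power-law gamma of 2.2 and the helper names are mine; AGG's actual filter pipeline is not shown here.

```cpp
#include <cassert>
#include <cmath>

// Sketch of gamma-correct filtering as described in the thread:
// linearize each sample, apply the filter weights, sum, then re-apply
// gamma. Gamma 2.2 and the function names are assumptions.
static double to_linear(double c)   { return std::pow(c / 255.0, 2.2); }
static double from_linear(double l) { return 255.0 * std::pow(l, 1.0 / 2.2); }

// Weighted sum of one component over n samples (weights should sum to 1).
double filter_linear(const double* s, const double* w, int n)
{
    double acc = 0.0;
    for (int i = 0; i < n; ++i)
        acc += w[i] * to_linear(s[i]);
    return from_linear(acc);
}
```

Halfway between a red stripe and a green stripe, the red component is a 50/50 mix of 255 and 0: filtering in linear space gives about 186, where the naive average would give 127.5 — exactly the "too dark" brown Stephan describes.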
From: Maxim Shemanarev <mcseemagg@ya...> - 2004-06-22 14:47:02

> Alpha blending is just one form of linear interpolation. My experiments
> have shown that calculating the linear interpolation of two colors in
> the "normal" RGB space leads to "wrong" intermediate colors. The reason
> is simply that the "normal" RGB space is nonlinear itself, so you
> cannot expect correct results from linear interpolation of two
> nonlinear values. To get correct results, you have to "linearize" the
> RGB colors by applying the gamma function, then do the linear
> interpolation, then apply the reverse gamma function.

That's correct. I achieved the correct result only when I did that (direct gamma before and the reverse gamma after):

int r = m_dir_gamma[p[Order::R]];
int g = m_dir_gamma[p[Order::G]];
int b = m_dir_gamma[p[Order::B]];
r = m_inv_gamma[(((m_dir_gamma[c.r] - r) * alpha) + (r << 16)) >> 16];
g = m_inv_gamma[(((m_dir_gamma[c.g] - g) * alpha) + (g << 16)) >> 16];
b = m_inv_gamma[(((m_dir_gamma[c.b] - b) * alpha) + (b << 16)) >> 16];

The first question is about the difference between the plain and premultiplied color spaces. In the plain one, all the color values are always in the range 0...255 (regardless of alpha); in the premultiplied one they are "condensed" to 0...alpha. I suspect we'll have to expand the colors back before applying the direct gamma, that is, to perform any gamma correction in the plain color space, which can be too expensive and, in fact, compromises the whole idea of premultiplied colors.

The other question: when we filter an image we just sum the RGBA values of 4, 16, 36, etc. pixels with their respective weights (according to the filtering function). You want to say that for each component we need to apply gamma, then multiply the components by their weights, then sum them, calculate the resulting value, and then apply the inverse gamma. Is that correct?

And the third question is: what do we do with the alpha channel when blending on the destination canvas and when filtering images?

> I think that the problem of "wrong" colors stems from the fact that
> green is much more important for luminance than red, which is in turn
> much more important than blue. So depending on how red, green and blue
> are mixed, the problem of lerping two colors with different mixes of
> red, green and blue in their nonlinear state can be more or less
> visible.

Hmm, maybe you are right, but I'm not sure I can agree with that, because this gamma is applied to all R, G, and B components uniformly. Different luminance of the components is important when converting a color image to B&W, or, when converting it to another color space, to display it on another type of device.

> BTW, by "linear interpolation" I mean the following calculation:
> interpolate a to b by weight c in [0..1]: result = a*(1-c) + c*b
> So even a sinc filter uses linear interpolation, only the function for
> getting the "weights" is the sinc function.

Right, we calculate a weighted average of a number of pixels.

McSeem
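The m_dir_gamma/m_inv_gamma snippet quoted in this mail can be fleshed out into a self-contained sketch. The surrounding machinery is guessed: 8-bit lookup tables, gamma 2.2, and a 16-bit fixed-point alpha in [0, 65536] are my assumptions, not AGG's actual internals (8-bit linear tables also lose precision in the darks, so treat this purely as an illustration of the structure).

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Sketch of the table-driven gamma-corrected lerp from the mail.
// Assumptions (mine): 8-bit tables, gamma 2.2, alpha in [0, 65536].
struct gamma_lut
{
    uint8_t dir[256]; // gamma-encoded value -> linear
    uint8_t inv[256]; // linear -> gamma-encoded value

    explicit gamma_lut(double g = 2.2)
    {
        for (int i = 0; i < 256; ++i) {
            dir[i] = uint8_t(std::pow(i / 255.0, g)       * 255.0 + 0.5);
            inv[i] = uint8_t(std::pow(i / 255.0, 1.0 / g) * 255.0 + 0.5);
        }
    }
};

// Lerp dst toward src by alpha (0..65536), as in the quoted snippet:
// linearize both ends, fixed-point lerp, then de-linearize the result.
uint8_t blend_component(const gamma_lut& lut, uint8_t dst, uint8_t src, int alpha)
{
    int d = lut.dir[dst];
    int s = lut.dir[src];
    return lut.inv[(((s - d) * alpha) + (d << 16)) >> 16];
}
```

With this, the halfway blend of black and white comes out around 186 rather than the naive 127 — the same effect the thread attributes to linearizing before the lerp.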
From: Stephan Assmus <superstippi@gm...> - 2004-06-22 09:38:35

Hi Maxim,

>> Can the additional ColorT parameter for the image transformer be used
>> so that the image filters implement linear interpolation of two colors
>> using my gamma-converted method?
>
> Well, I'm not sure if we really need to use gamma when filtering images
> (but we might do). The situation looks similar to premultiplied colors,
> that is, we need to know if that gamma correction is already *in* there
> or not. And something tells me that when interpolating (take a linear
> filter, for example), there should be just a plain average. It will be
> "gamma-corrected" afterwards, when compositing with the canvas. But I
> may be mistaken.

Alpha blending is just one form of linear interpolation. My experiments have shown that calculating the linear interpolation of two colors in the "normal" RGB space leads to "wrong" intermediate colors. The reason is simply that the "normal" RGB space is nonlinear itself, so you cannot expect correct results from linear interpolation of two nonlinear values. To get correct results, you have to "linearize" the RGB colors by applying the gamma function, then do the linear interpolation, then apply the reverse gamma function.

I think that the problem of "wrong" colors stems from the fact that green is much more important for luminance than red, which is in turn much more important than blue. So depending on how red, green and blue are mixed, the problem of lerping two colors with different mixes of red, green and blue in their nonlinear state can be more or less visible.

BTW, by "linear interpolation" I mean the following calculation: interpolate a to b by weight c in [0..1]: result = a*(1-c) + c*b. So even a sinc filter uses linear interpolation, only the function for getting the "weights" is the sinc function.

Best regards,
Stephan
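Stephan's definition can be written down directly; alpha blending is the two-tap case, and a filter such as sinc just supplies different weights. A one-line sketch (the function name is mine):

```cpp
#include <cassert>

// Stephan's definition: interpolate a to b by weight c in [0..1].
// Alpha blending is exactly this with c = alpha; a general filter is a
// sum of such terms whose weights come from the filter function.
double lerp(double a, double b, double c)
{
    return a * (1.0 - c) + c * b;
}
```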
From: Maxim Shemanarev <mcseemagg@ya...> - 2004-06-22 01:56:06

> I haven't used AGG for anything fancy yet, so I don't know if I would
> regret it later, but I vote for option (2).

I would do that too, but my reason is different. The plain and premultiplied color spaces are essentially different. Technically, there can be any number of color spaces and so, any number of types to represent colors. But the policy is that any color type must be constructible from the agg::rgba type (4 doubles in the range 0...1); that is, it must have a constructor taking "const rgba&". That allows for rendering controls in any possible color space (and in general it unifies the cases where the advantage of a particular color space is not critical, but we just need to draw something simple). With a uniform agg::rgba8 for plain and premultiplied colors there is no way to distinguish between them.

> Can the additional ColorT parameter for the image transformer be used
> so that the image filters implement linear interpolation of two colors
> using my gamma-converted method?

Well, I'm not sure if we really need to use gamma when filtering images (but we might do). The situation looks similar to premultiplied colors, that is, we need to know if that gamma correction is already *in* there or not. And something tells me that when interpolating (take a linear filter, for example), there should be just a plain average. It will be "gamma-corrected" afterwards, when compositing with the canvas. But I may be mistaken.

McSeem
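Maxim's policy — distinct color types per color space, each constructible from an rgba of four doubles — can be sketched as below. The type names rgba8_plain and rgba8_pre are hypothetical, chosen only for this illustration; AGG's actual types differ.

```cpp
#include <cassert>
#include <cstdint>

// Per the mail: the universal color type is 4 doubles in 0..1.
struct rgba { double r, g, b, a; };

// Plain (non-premultiplied) 8-bit color: components always span 0..255.
struct rgba8_plain
{
    uint8_t r, g, b, a;
    explicit rgba8_plain(const rgba& c)
        : r(uint8_t(c.r * 255.0 + 0.5)),
          g(uint8_t(c.g * 255.0 + 0.5)),
          b(uint8_t(c.b * 255.0 + 0.5)),
          a(uint8_t(c.a * 255.0 + 0.5)) {}
};

// Premultiplied 8-bit color: components are "condensed" to 0..alpha.
struct rgba8_pre
{
    uint8_t r, g, b, a;
    explicit rgba8_pre(const rgba& c)
        : r(uint8_t(c.r * c.a * 255.0 + 0.5)),
          g(uint8_t(c.g * c.a * 255.0 + 0.5)),
          b(uint8_t(c.b * c.a * 255.0 + 0.5)),
          a(uint8_t(c.a * 255.0 + 0.5)) {}
};
```

Making them two distinct types is exactly the point of the mail: with a single rgba8 used for both, nothing in the type system says whether the components have already been multiplied by alpha.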
From: Stephan Assmus <superstippi@gm...> - 2004-06-22 00:15:40

Hi Maxim,

> So, what would you prefer, (1) or (2)?

I haven't used AGG for anything fancy yet, so I don't know if I would regret it later, but I vote for option (2). Can the additional ColorT parameter for the image transformer be used so that the image filters implement linear interpolation of two colors using my gamma-converted method?

Regards,
Stephan