From: Nathaniel Hoffman <naty@io...>  2009-07-30 16:39:08

> To be honest, all three approaches (m+2)/(2pi), (m+8)/(8pi), and (m+2)/(8pi)
> have produced nice results for me, so it is basically down to what works
> visually for us I guess since they all seem to have some legitimate
> grounding in theory.

Well... if you are using Blinn-Phong (N.H), then dividing by (2pi) is unambiguously wrong (it is correct for Phong (R.V), though). I agree that the two with division by (8pi) both have theoretical legitimacy.

Note, however, that most game engines use units that cause the (pi) factor to be removed, so you would just be dividing by 8.

Naty
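Naty's closing point (fold the 1/pi into the light units and just divide by 8) amounts to a one-line specular term. A minimal sketch, assuming punctual lights and illustrative names; this is not code from the thread:

```python
def normalized_blinn_phong(n_dot_h, n_dot_l, m):
    """Blinn-Phong specular term with the (m+8)/8 normalization.

    The 1/pi that would appear in the full (m+8)/(8*pi) BRDF constant
    is assumed folded into the light units, as Naty describes for most
    game engines."""
    if n_dot_l <= 0.0:
        return 0.0
    return (m + 8.0) / 8.0 * (max(n_dot_h, 0.0) ** m) * n_dot_l

# A higher exponent gives a tighter but brighter peak; the
# normalization keeps the total reflected energy roughly constant.
print(normalized_blinn_phong(1.0, 1.0, 16))   # peak value 3.0
print(normalized_blinn_phong(1.0, 1.0, 64))   # peak value 9.0
```

The peak value scaling linearly with m is exactly the behavior the normalization factor exists to provide: narrower lobes must be proportionally brighter.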
From: Joe Meenaghan <joe@ga...>  2009-07-30 15:33:57

First, let me thank you guys for the interesting discussion surrounding the normalization factor and its derivations. In the end I've decided to go with Naty's suggestion and (continue) using the m+2 approach, as indeed I am including an approximate geometry/visibility term of 1 / (4 * L.H^2) (i.e., Kelemen/Szirmay-Kalos) and treating the cosine power function as an NDF.

To be honest, all three approaches (m+2)/(2pi), (m+8)/(8pi), and (m+2)/(8pi) have produced nice results for me, so it is basically down to what works visually for us, I guess, since they all seem to have some legitimate grounding in theory.

So with that bit of business pretty much wrapped up, that leaves me with just the outstanding issues on dynamic white point computation (vs. ditching it altogether) and practical advice on empirical vs. physically meaningful inputs for my direct light sources. Any insights on either of these two issues would of course be accepted gratefully.

Thanks again!

Joe
From: Zafar Qamar <zafar.qamar@co...>  2009-07-30 10:49:24

Hi,

I don't have procedural stuff going on - I just want to play with art textures and do some shader programming to just give the appearance of lava-lampy kind of effects. The cloud-map stuff using the noise approach that Megan mentioned is particularly the sort of thing I'm going to now try.

Thanks very much for all your responses so far. Some interesting reading on your posted links - cheers!

Zafar Qamar

________________________________
From: Robin Green [mailto:robin.green@...]
Sent: 29 July 2009 21:14
To: Game Development Algorithms
Subject: Re: [Algorithms] Trickly Water

[...]
********************************************************************************** Disclaimer The information and attached documentation in this email is intended for the use of the addressee only and is confidential. If you are not the intended recipient please delete it and notify us immediately by telephoning or emailing the sender. Please note that without Codemasters’ prior written consent any form of distribution, copying or use of this communication or the information in it is strictly prohibited and may be unlawful. Attachments to this email may contain software viruses. You are advised to take all reasonable precautions to minimise this risk and to carry out a virus check on any documents before they are opened. Any offer contained in this communication is subject to Codemasters’ standard terms & conditions and must be signed by both parties. Except as expressly provided otherwise all information and attached documentation in this email is subject to contract and Codemasters’ board approval. Any views or opinions expressed are solely those of the author and do not necessarily represent those of Codemasters. This footnote also confirms that this email message has been swept by SurfControl for the presence of computer viruses. ********************************************************************************** 
From: Robin Green <robin.green@gm...>  2009-07-29 20:14:04

I think the initial question was about droplets on a camera surface. Water droplets coalescing into larger droplets is more of a Poisson points problem than a noise one. As two droplets come into contact they will generate a new droplet at the mean point between the two centers with a radius proportional to their two volumes. Making them run down the picture plane under gravitational acceleration means animating its center and altering its mass as it picks up surrounding droplets. Trails could probably be faked using a small heightfield that you decrement each frame.

The droplets themselves have a very specific shape depending on the surface tension and hydrophilic properties of the surface they land on, specified by the contact angle and the contact radius. These subtle curves around the contact edge of the droplet are important cues to interpreting what is being dropped onto what.

http://physicaplus.org.il/zope/home/en/1185176174/water_elect_en

Trying to find a formula for the shape, ISTR it's a pretty regular analytic polynomial.

- Robin Green.

On Wed, Jul 29, 2009 at 11:41 AM, Megan Fox <shalinor@...> wrote:
> When it comes to morphing effects, your best bet is to start with noise.
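Robin's coalescence rule can be sketched in a few lines. This is an illustrative toy, not code from the thread: the combined radius conserves volume (r^3 = ra^3 + rb^3), and the new center is taken as the volume-weighted mean of the two centers (a specific reading of "mean point", so the weighting is an assumption):

```python
import math
from dataclasses import dataclass

@dataclass
class Droplet:
    x: float
    y: float
    r: float  # radius; volume scales as r**3

def merge(a: Droplet, b: Droplet) -> Droplet:
    """Coalesce two droplets, conserving total volume."""
    va, vb = a.r ** 3, b.r ** 3
    v = va + vb
    return Droplet(
        x=(a.x * va + b.x * vb) / v,  # volume-weighted mean center
        y=(a.y * va + b.y * vb) / v,
        r=v ** (1.0 / 3.0),           # r^3 = ra^3 + rb^3
    )

def touching(a: Droplet, b: Droplet) -> bool:
    # Trigger a merge when the droplets' footprints overlap.
    return math.hypot(a.x - b.x, a.y - b.y) <= a.r + b.r

big = merge(Droplet(0.0, 0.0, 1.0), Droplet(1.0, 0.0, 1.0))
print(big.r)  # 2**(1/3) ~ 1.26: volume doubles, radius grows only ~26%
print(big.x)  # 0.5: equal volumes give the midpoint
```

The sublinear radius growth is why coalescing sprays read as "trickling": many small drops absorb into a few large ones that then have enough mass to run.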
From: Megan Fox <shalinor@gm...>  2009-07-29 18:42:18

When it comes to morphing effects, your best bet is to start with noise. Start by reading this:

http://freespace.virgin.net/hugo.elias/models/m_clouds.htm

... which is a great primer, and generally a good way to do clouds, period. You could get a lava-lamp appearance by simply adjusting the low/high-pass on his output.

Past that, consider the source of your noise. The usual idea is to go with non-patterned noise, it being... well, noise... but you can get some very, very interesting results when you treat a non-noisy data set as noise. An image's color values, for instance.

(Sorry I can't offer specifics, but this is at the core of the water tech I made for LU - NDAs and all that, etc. Still, the above should get you started.)

On Wed, Jul 29, 2009 at 11:32 AM, Zafar Qamar <zafar.qamar@...> wrote:
> Just thought of another way of describing the animation...
>
> The animation effect I'm after is a bit like a lava-lamp.
> Hope that clarifies at least slightly the look I'm after.
>
> Cheers
> Zafar Qamar

--
Megan Fox
http://www.shalinor.com/
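The cloud recipe Megan points to (sum octaves of interpolated noise at doubling frequencies and halving amplitudes, then threshold or band-pass the result) might be sketched like this. Purely illustrative, using numpy; the lattice sizes and octave count are arbitrary choices:

```python
import numpy as np

def value_noise_octave(size, freq, rng):
    """One octave: random values on a coarse (freq+1 x freq+1) lattice,
    bilinearly upsampled to size x size."""
    lattice = rng.random((freq + 1, freq + 1))
    xs = np.linspace(0, freq, size, endpoint=False)
    x0 = xs.astype(int)
    t = xs - x0
    # Interpolate along x, then along y.
    rows = (1 - t)[None, :] * lattice[:, x0] + t[None, :] * lattice[:, x0 + 1]
    return (1 - t)[:, None] * rows[x0, :] + t[:, None] * rows[x0 + 1, :]

def fractal_noise(size=128, octaves=4, seed=0):
    """Sum octaves with doubling frequency and halving amplitude,
    normalized back to [0, 1]."""
    rng = np.random.default_rng(seed)
    total = np.zeros((size, size))
    amp, freq, norm = 1.0, 4, 0.0
    for _ in range(octaves):
        total += amp * value_noise_octave(size, freq, rng)
        norm += amp
        amp *= 0.5
        freq *= 2
    return total / norm

clouds = fractal_noise()
# Thresholding the smooth field gives blobby, lava-lamp-like regions;
# animating the lattice values over time would morph the blobs.
blobs = clouds > 0.55
print(clouds.shape)
```

For Zafar's normal-map use case, the gradient of a field like this (e.g. via `np.gradient`) could serve directly as an animated bump/normal source.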
From: Zafar Qamar <zafar.qamar@co...>  2009-07-29 18:18:24

Hi,

I'm trying to create the effect of water spray landing on a camera with a small lens size, thus making the drops rather large. The bit I really need is how to generate animated normal maps that resemble big blobs of water that warp and trickle to form new shapes.

Imagine I've drawn a few blobs of normal map in Photoshop and put them onto the screen as a post-process effect. I now need to warp and move them. Are there any utils that could generate this kind of thing?

Any ideas and suggestions at all would be most welcome.

Cheers
Zafar Qamar
From: Zafar Qamar <zafar.qamar@co...>  2009-07-29 18:09:59

Just thought of another way of describing the animation...

The animation effect I'm after is a bit like a lava-lamp. Hope that clarifies at least slightly the look I'm after.

Cheers
Zafar Qamar

-----Original Message-----
From: Zafar Qamar
Sent: 29 July 2009 18:30
To: 'Game Development Algorithms'
Subject: Trickly Water

Hi,

I'm trying to create the effect of water spray landing on a camera with a small lens size, thus making the drops rather large. The bit I really need is how to generate animated normal maps that resemble big blobs of water that warp and trickle to form new shapes. Imagine I've drawn a few blobs of normal map in Photoshop and put them onto the screen as a post-process effect. I now need to warp and move them. Are there any utils that could generate this kind of thing?

Any ideas and suggestions at all would be most welcome.

Cheers
Zafar Qamar
From: Nathaniel Hoffman <naty@io...>  2009-07-27 05:16:05

There are two ways to get a normalized BRDF with a cosine power term. One is to construct the BRDF empirically (e.g. as a modified Blinn-Phong), and then normalize the BRDF by computing an upper bound for the directional-hemispherical reflectance and dividing by it. The other is to derive the BRDF from physical principles (e.g. microfacet theory), treating the cosine power term as a normal distribution function (NDF) and normalizing it as such. The derivation on page 446 of "Physically Based Rendering" (PBR) relates to the second way. "Real-Time Rendering, 3rd edition" (RTR3) does both kinds of derivations and compares them to each other.

The empirical approach results in the (m+8)/8pi term (page 257 of RTR3). Unfortunately, we did not include the derivation in the book; we did the exact same derivation as Fabian Giesen (see page 2 of http://www.farbrausch.de/~fg/articles/phong.pdf), but approximated the integral with a simple function rather than using it directly. Since we were concerned with real-time rendering and not global illumination, we did not try to make the BRDF strictly energy-conserving (integral always less than 1) but just approximately so (integral close to 1; it doesn't matter if slightly above or slightly below). We chose an approximation which was relatively accurate for low specular powers, which in our opinion is perceptually important.

The physically based derivation starts with the derivation of microfacet BRDFs included in Ashikhmin, Shirley and Premoze's 2000 SIGGRAPH paper and "plugs in" a cosine power NDF (page 259 of RTR3). The result, like PBR's, includes a (m+2) term, which is not surprising since the same normalization by projected solid angle is included in the derivation (the 8pi instead of 2pi term in the denominator is the result of converting between incident and half-vector angles). I wouldn't necessarily say (m+2) is "more correct" since it results from a physical derivation. Both are equally "correct", just based on different assumptions.

In my own work, using relatively simple BRDFs, I have had good results with the (m+8)/8pi term (in most game engines the pi cancels out, so this is just (m+8)/8). If you are trying to include shadowing/masking and foreshortening terms and otherwise closely emulate a microfacet BRDF, you might want to go with (m+2)/8 instead.

Thanks,
Naty Hoffman

> From: Matt Pharr <matt@...>
> Date: Sat, Jul 25, 2009 at 4:29 PM
> Subject: Re: [Algorithms] Lighting (HDR)
> To: Game Development Algorithms <gdalgorithmslist@...>
> [...]
From: Fabian Giesen <f.giesen@49...>  2009-07-25 23:36:14

> That's sort of an oversimplification. For example, the Torrance-Sparrow
> BRDF has both 1/cos theta_i and 1/cos theta_o terms, not to make the
> old shading models match with the new reflection equation, but
> intuitively to account for the fact that as you approach grazing
> angles, the total projected area of all microfacets seen along a ray
> increases (and goes to infinity at grazing, when you hit the point of
> your ray going through all of the microfacets!). (It is also
> reciprocal.)

Yes, if you divide by both cos(theta_i) and cos(theta_o) it's reciprocal again (since it's symmetrical in both angles), but you cannot divide by just one of them.

-Fabian
From: Matt Pharr <matt.pharr@gm...>  2009-07-25 23:23:42

On Jul 25, 2009, at 3:49 PM, Fabian Giesen wrote:
>> A related note is that the extra cos theta factor doesn't come from it
>> being a BRDF, but comes from the geometry of microfacets, i.e. the
>> projection of a microfacet with angle theta_h from the actual surface
>> normal projects to a differential area scaled by cos theta_h on the
>> actual surface.
>
> I didn't mean to imply that you tack on an additional factor of
> cos(theta) because it's a BRDF, but rather that the reflection equation
> (and also the rendering equation) contain the cos(theta) normalization
> term, while the original Phong and Blinn-Phong shading models don't, so
> the BRDF for these original formulations has to divide through by
> cos(theta) to get rid of the term in the integral, violating
> reciprocity in the process (and being generally physically
> implausible). Which is why modern formulations don't do this.

That's sort of an oversimplification. For example, the Torrance-Sparrow BRDF has both 1/cos theta_i and 1/cos theta_o terms, not to make the old shading models match with the new reflection equation, but intuitively to account for the fact that as you approach grazing angles, the total projected area of all microfacets seen along a ray increases (and goes to infinity at grazing, when you hit the point of your ray going through all of the microfacets!). (It is also reciprocal.)

However, one generally includes a microfacet geometric attenuation term which (lo and behold) accounts for microfacets shadowing each other and in turn makes things behave properly at grazing angles. (At least 'modern formulations' in my world do, for what that's worth.)

(I'm still not following what this other normalization term is attempting to achieve or how exactly it was derived.)

matt

> Anyway, Naty already explained the discrepancy between this and the
> formula from RTR in the comments of the article I linked to. [...]
From: Fabian Giesen <f.giesen@49...>  2009-07-25 22:49:53

> I think there is a bug in the Blinn-Phong normalization there. In
> particular, I don't think that the first step of going from an
> integral over cos theta_h to an integral of cos theta/2 is right (or
> needed; the microfacet distribution can be normalized fine in theta_h
> land.)

theta_h is not theta/2 in general, correct. That's why I specifically note that L=N and hence all angles are in the same plane; for general configurations, the halfway angle might lie in a different plane than the angle between N and L, so the result depends on the second angle (phi) as well and the whole process gets a lot more complex.

> A related note is that the extra cos theta factor doesn't come from it
> being a BRDF, but comes from the geometry of microfacets, i.e. the
> projection of a microfacet with angle theta_h from the actual surface
> normal projects to a differential area scaled by cos theta_h on the
> actual surface.

I didn't mean to imply that you tack on an additional factor of cos(theta) because it's a BRDF, but rather that the reflection equation (and also the rendering equation) contain the cos(theta) normalization term, while the original Phong and Blinn-Phong shading models don't, so the BRDF for these original formulations has to divide through by cos(theta) to get rid of the term in the integral, violating reciprocity in the process (and being generally physically implausible). Which is why modern formulations don't do this.

Anyway, Naty already explained the discrepancy between this and the formula from RTR in the comments of the article I linked to. I'll quote, first from Rory:

"I did manage to talk to Naty about this at GDC. He said that a few people have asked him about the derivation of the specular factor in the book, and that he had gone through it himself and got the exact same answer as Fabian. The value they mention in the book is just an approximation of that result."

And then from Naty:

"About the approximation we chose, we were not trying to be strictly conservative (that is important for multi-bounce GI solutions to converge, but not for rasterization). We were trying to choose a cheap approximation which is close to 1, and we thought it more important to be close for low specular powers. Low specular powers have highlights that cover a lot of pixels and are unlikely to be saturating past 1."

-Fabian "ryg" Giesen
From: Matt Pharr <matt@ph...>  2009-07-25 20:43:42

(below)

On Jul 25, 2009, at 12:00 PM, Fabian Giesen wrote:
> Joe Meenaghan wrote:
>> 1. I have differing information about the normalization term for the
>> Blinn-Phong BRDF and I'd like to know which is correct. [...]

Unfortunately RTR doesn't seem to have a derivation, so it's hard to see precisely where the difference comes from. One general note is that the PBR book is normalizing a microfacet distribution, but RTR is (from a quick skim) normalizing a BRDF. So I think that the 2pi vs 8pi difference in the denominator comes from the fact that 2pi works out to be the right denominator for a normalized microfacet distribution, but if you fold in the 1/4 term that comes in the Torrance-Sparrow BRDF (discussed on p. 442 of the PBR book), then that covers the difference.

The (m+2) vs (m+8) stuff in the numerator I don't have any insight on. As far as I know the PBR derivation is correct, and I just wrote a short program to verify the result numerically and it all came out as expected. Hopefully Naty can chime in?

> I went through the same thing a few months ago. See the discussion in
> the comments here:
>
> http://www.rorydriscoll.com/2009/01/25/energy-conservation-in-games/
>
> (Including a comment from Naty Hoffman, who did the derivation
> mentioned in RTR). The exact normalization factor obviously depends
> on what variant of the BRDF you use. I did the computation for
> typical variants of Phong and Blinn-Phong here:
>
> http://www.farbrausch.de/~fg/articles/phong.pdf

I think there is a bug in the Blinn-Phong normalization there. In particular, I don't think that the first step of going from an integral over cos theta_h to an integral of cos theta/2 is right (or needed; the microfacet distribution can be normalized fine in theta_h land.)

A related note is that the extra cos theta factor doesn't come from it being a BRDF, but comes from the geometry of microfacets, i.e. the projection of a microfacet with angle theta_h from the actual surface normal projects to a differential area scaled by cos theta_h on the actual surface.

> (also referenced in the discussion above). Hope that helps!
>
> Cheers,
> Fabian "ryg" Giesen
>
> _______________________________________________
> GDAlgorithms-list mailing list
> GDAlgorithmslist@...
> https://lists.sourceforge.net/lists/listinfo/gdalgorithmslist
> Archives:
> http://sourceforge.net/mailarchive/forum.php?forum_name=gdalgorithmslist
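Matt mentions verifying the (m+2)/(2pi) result with a short program. A sketch of such a check (not his actual code) can numerically integrate both quantities discussed in the thread: the cosine-lobe NDF normalization from PBR, and the directional-hemispherical reflectance behind RTR3's empirical (m+8)/(8pi) factor, evaluated at normal incidence (L = N, where theta_h = theta/2):

```python
import math

def integrate(f, a, b, n=20000):
    """Simple midpoint-rule quadrature."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

m = 16.0

# NDF check: the hemispherical integral of D(theta_h) * cos(theta_h),
# with D = (m+2)/(2pi) * cos^m(theta_h), should be exactly 1.
ndf = integrate(
    lambda t: (m + 2) / (2 * math.pi)
    * math.cos(t) ** m * math.cos(t) * math.sin(t) * 2 * math.pi,
    0.0, math.pi / 2)
print(round(ndf, 6))  # 1.0

# Empirical check: directional-hemispherical reflectance of the lobe
# k * cos^m(theta_h) at normal incidence, where theta_h = theta/2.
# With k = (m+8)/(8pi) this is close to (but not exactly) 1, matching
# RTR3's "approximately energy-conserving" goal.
k = (m + 8) / (8 * math.pi)
refl = integrate(
    lambda t: k * math.cos(t / 2) ** m
    * math.cos(t) * math.sin(t) * 2 * math.pi,
    0.0, math.pi / 2)
print(round(refl, 3))  # ~1.07 for m = 16
```

The first integral coming out at exactly 1 while the second hovers near 1 is the whole dispute in miniature: (m+2)/(2pi) normalizes the distribution, while (m+8)/(8pi) approximately normalizes the reflectance.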
From: Fabian Giesen <f.giesen@49...>  2009-07-25 19:01:21

Joe Meenaghan wrote:
> Hey All,
>
> Just a few random questions pertaining primarily to HDR lighting that
> I'm hoping to get some insight on.
>
> 1. I have differing information about the normalization term for the
> Blinn-Phong BRDF and I'd like to know which is correct. In Real-Time
> Rendering the authors suggest (m + 8) / (8 * pi) based on a derivation
> from Sloan and Hoffman. However, in Pharr and Humphreys (Physically
> Based Rendering, p. 446) they calculate (m + 2) / (2 * pi). The
> definite integral solution demonstrated in the latter appears
> reasonable to me, so I'm uncertain. Normally I've associated the m+2
> version as the normalization term for plain Phong, yet Pharr and
> Humphreys are very explicitly referring to the Blinn BRDF and are
> using the half vector rather than the reflection vector in their
> discussion, and they are convincing. Is there something I'm missing?

I went through the same thing a few months ago. See the discussion in the comments here:

http://www.rorydriscoll.com/2009/01/25/energy-conservation-in-games/

(Including a comment from Naty Hoffman, who did the derivation mentioned in RTR.) The exact normalization factor obviously depends on what variant of the BRDF you use. I did the computation for typical variants of Phong and Blinn-Phong here:

http://www.farbrausch.de/~fg/articles/phong.pdf

(also referenced in the discussion above). Hope that helps!

Cheers,
Fabian "ryg" Giesen
From: Joe Meenaghan <joe@ga...>  20090725 18:38:33

Hey All, Just a few random questions pertaining primarily to HDR lighting that I'm hoping to get some insight on. 1. I have differing information about the normalization term for the BlinnPhong BRDF and I'd like to know which is correct. In RealTime Rendering the authors suggest (m + 8) / (8 * pi) based on a derivation from Sloan and Hoffman. However, in Pharr and Humphrey (Physically Based Rendering, p.446) they calculate (m + 2) / (2 * pi). The definite integral solution demonstrated in the latter appears reasonable to me, so I'm uncertain. Normally I've associated the m+2 version as the normalization term for plain Phong, yet, Pharr and Humphrey are very explicitly referring to the Blinn BRDF and are using the half vector rather than the reflection vector in their discussion, and they are convincing. Is there something I'm missing? 2. Despite a variety of different ideas tried, I have yet to come up with a satisfactory method for dynamically estimating the white point term for use with Reinhard's Photographic Tone Reproduction tone mapper (global). I'm using Krawczyk et al. for computation of the estimated key and it works well, but for white point, I currently just wind up setting some arbitrary value until it looks good. Of course, this ultimately has implications for the variety of light intensities we can use and still benefit from having the term (e.g., the very nice contrast it can introduce). I'd like to avoid having to hand place regions of tonemapping inputs throughout the scene to account for different lighting conditions. Has anyone come up with a way of estimating this parameter without the need for such manual intervention? Reinhard's paper on parameter estimation has not worked well for me in practice, although I am open to the possibility that I am doing something improperly (see next question). 
While I don't have a runtime histogram, I do have access to arithmetic and log-average luminance as well as min/max luminance, so hopefully there's a way to cobble something together here that works? Arguably, I could ditch the white point altogether (or just always use values close to some percentage of the scene-wide maximum lighting intensity) and use our post-processing pipeline to manipulate contrast and so on, but I figured I'd see if anyone has any suggestions before I throw in the towel.

3. Somewhat related to the last question is the range of the irradiance values used in the first place. In our system, we've got a light source color (speaking only about direct sources like point, spot, etc., not IBL textures) and a separate HDR intensity scalar per light. The idea is that the former can be used in LDR lighting scenarios as is (it is a float3, so it can technically go outside [0,1], although we rarely go past 2 or so for LDR) and that the intensity scalar can adjust it as desired for HDR situations (in HDR our framebuffer is RGBA16F). However, based on my troubles with trying to automatically compute reasonable key and white point values, I am wondering if the difficulties I am having are perhaps related to the fact that many of the algorithms I've tried to experiment with are likely working with proper real-world data, and as such the resulting luminances are more ideally suited to yield plausible behavior. I am considering whether our light intensities need to be more carefully tuned to match their real-world counterparts (admittedly, under certain conditions the 16-bit range concerns me a little if that is the case), so I'd like to better understand what sort of values work well in practice - i.e., what is the tradeoff between physical realism, where all values have to be very precise (SI units, etc.), and artistic control?
Certainly there's an argument to be made for something that visually looks like it is illuminated by, say, a 60-watt bulb, and if that is the desired end result then so be it, regardless of what intensities were used to create the effect. But if this sort of arbitrary artistic experimentation is preventing me from implementing algorithms correctly, as per above, that would be good to know. Ultimately, I guess I'm just looking for a bit of guidance as to what inputs work well in real-time HDR situations :)

As always, my sincere thanks in advance for any assistance.

Joe Meenaghan
joe@...
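For reference, a minimal sketch of the two pieces Joe describes: Krawczyk et al.'s empirical key estimate from the log-average luminance, and Reinhard's global operator with a white point. Tying the white point to the (key-scaled) scene maximum when none is supplied is an assumption of mine, not something from either paper - it simply guarantees the brightest pixel maps to 1.0, which is one plausible automatic choice:

```python
import math

def estimate_key(log_avg_lum):
    # Krawczyk et al. 2005: empirical key (exposure) estimate from the
    # log-average luminance of the frame.
    return 1.03 - 2.0 / (2.0 + math.log10(log_avg_lum + 1.0))

def reinhard_global(lum, log_avg_lum, l_max=None):
    # Reinhard's global photographic operator. If l_max (scene max
    # luminance) is given, the white point is tied to its key-scaled
    # value (my assumption, not from the paper), so the brightest
    # pixel maps exactly to 1.0.
    key = estimate_key(log_avg_lum)
    l = key / log_avg_lum * lum        # scale input to the estimated key
    if l_max is None:
        return l / (1.0 + l)           # no white point: whites never burn out
    l_white = key / log_avg_lum * l_max
    return l * (1.0 + l / (l_white * l_white)) / (1.0 + l)
```

With the scene maximum as the white point, anything at l_max tone-maps to exactly 1.0 and everything below compresses smoothly, which may be a workable starting point when only min/max and average luminances are available.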
From: Richard Mitton <mitton@tr...> - 2009-07-22 23:04:36

As you may have guessed, this wasn't meant to go here. Outlook Express sucks.

 ()()    Richard Mitton
 ('.')
(")_(")  Beard Without Portfolio :: Treyarch

----- Original Message -----
From: "Richard Mitton" <mitton@...>
To: "GD Algorithms" <gdalgorithms-list@...>
Sent: Wednesday, July 22, 2009 3:20 PM
Subject: [Algorithms] Removing Public Folders

> Hi,
> Is there any way to remove the ATVI Public Folders from my IMAP account?
> (and I mean actually remove, not just hide them)
>
>  ()()    Richard Mitton
>  ('.')
> (")_(")  Beard Without Portfolio :: Treyarch
>
> _______________________________________________
> GDAlgorithms-list mailing list
> GDAlgorithms-list@...
> https://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
> Archives:
> http://sourceforge.net/mailarchive/forum.php?forum_name=gdalgorithms-list
From: Richard Mitton <mitton@tr...> - 2009-07-22 22:47:16

Hi,

Is there any way to remove the ATVI Public Folders from my IMAP account? (and I mean actually remove, not just hide them)

 ()()    Richard Mitton
 ('.')
(")_(")  Beard Without Portfolio :: Treyarch
From: Robin Green <robin.green@gm...> - 2009-07-20 06:25:25

On Fri, Jul 17, 2009 at 7:06 PM, Jorge Rodriguez <jrodriguez@...> wrote:
> On Fri, Jul 17, 2009 at 12:10 PM, Sam Martin <sam.martin@...> wrote:
>>
>> Eek. Have to go. Office is flooding with water.
>
> ?

Yeah. That's, like, so off topic. Moderator!

- R.
From: Sam Martin <sam.martin@ge...> - 2009-07-18 09:31:33

Yeah, I hit send a bit quickly on this one. It might convey the rough idea, but I can poke holes in it already and need to actually bottom out the maths once and for all. It might take a bit of time, but if I complete it I'll write it up.

We had torrential rain in Cambridge (UK) yesterday. The rear entrance to our office is slightly underground. A submerged pump either broke down or was blocked, causing the area to suddenly fill up with water and pour under the back door. Half the team bailed the water out while the other half rescued all the equipment. Some carpet destroyed, but otherwise we're fine! Lesson learnt: never put your UPS on the bottom rung of your server rack.

ta,
Sam

-----Original Message-----
From: Sam Martin [mailto:sam.martin@...]
Sent: Fri 17/07/2009 17:10
To: Game Development Algorithms
Subject: Re: [Algorithms] Gaussian blur kernels

Thanks, that's clearer. I think there's an extra detail here that might explain things. Warning: this contains some hand-wavey maths.

Let's imagine for a second that the space your original line lives in is some real domain, R^2. Instead of doing the integration as we did before, we apply (convolve with) a box filter and produce a new function, also in R^2. This would look like a blurry line. Note that it doesn't look like the result we got with the projection into the pixel basis. It would instead look a bit like the bokeh effect you get on a camera, but square.

If we now *point-sample* this new signal we could produce a nice-looking reconstruction with a pixel basis. I believe (subject to resolving some further fiddly details) the issues with the previous solution will have gone away - I think. No integration per pixel was required this time. We had already band-limited the signal with the previous continuously-applied box filter.
Incidentally, I believe we could also perfectly reconstruct the R^2 blurry line from the point-sampled pixel basis, but I'm not sure exactly what the reconstruction filter would be, and whether it requires the box filter to be twice the size of the original pixel or not. It's one of those things I've wanted to find out for a while, but the maths is a bit heavy.

Eek. Have to go. Office is flooding with water.

Ta,
Sam

-----Original Message-----
From: Simon Fenney [mailto:simon.fenney@...]
Sent: 17 July 2009 14:59
To: Game Development Algorithms
Subject: Re: [Algorithms] Gaussian blur kernels

Sam Martin wrote:
> I'm not sure I quite understand what you mean by antialiasing in your
> experiment?

Sorry, I thought that was obvious from the context :( Pragmatically, I meant take NxN samples per pixel, apply the (trivial) weighting due to a box filter (i.e. 1/N^2), and either choose a "large enough" N or take the limit as N -> infinity. The latter should then be your integral below.

> To remove the aliasing completely you can't blur after sampling*.
> You'd need to rasterise the line by doing the integral over each
> pixel properly, rather than point sample. This will produce an
> alias-free result, but isn't exactly easy in general :).

I wasn't intending it to be easy :). I just wanted to show that a box filter is not ideal.

Cheers
Simon

------------------------------------------------------------------------------
Enter the BlackBerry Developer Challenge
This is your chance to win up to $100,000 in prizes! For a limited time, vendors submitting new applications to BlackBerry App World(TM) will have the opportunity to enter the BlackBerry Developer Challenge. See full prize details at: http://p.sf.net/sfu/Challenge
_______________________________________________
GDAlgorithms-list mailing list
GDAlgorithms-list@...
https://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
Archives: http://sourceforge.net/mailarchive/forum.php?forum_name=gdalgorithms-list
From: Joe Meenaghan <joe@ga...> - 2009-07-18 04:34:59

Hey Guys,

First, let me start by thanking you for all of the very helpful responses. While I've encountered many of these ideas over the years, admittedly I'd been putting off a proper study of DSP and this discussion really inspired me to just buckle down and give it a go. I spent a good chunk of time yesterday browsing the book that Sam recommended and it's fantastic (and free online)! I even went ahead and ordered a more recent edition from Amazon, which just arrived today, and am looking forward to sitting down with it over the weekend.

I also spent a fair amount of time experimenting with Photoshop's Gaussian Blur (in hindsight, I probably should have done this to begin with) and discovered many of the things that were mentioned here (the relationship between downsampling and kernel sizes, how multiple passes fit in, etc.). Interestingly, the Photoshop implementation is a bit odd - what they indicate as a radius in pixels (the only input to the function) actually appears to be treated as a half radius which is subsequently doubled behind the scenes, so you wind up with kernel sizes that are twice as large as one would expect given the input. That one had me scratching my head as I agonized over how they were able to get results that were so much blurrier than mine with such small kernels :) Once I figured it out, I was able to learn a lot about its behavior under different conditions.

Anyway, I think I'm all set at this stage, and I thank you all again for the insights.

Cheers,
Joe
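Photoshop's internals aren't public, so the doubling Joe observed can't be confirmed from the outside, but the usual way a single-sided "radius" turns into a kernel is illustrative: a discrete Gaussian is typically truncated at around three standard deviations per side, so the full support ends up roughly twice whatever one-sided number you started from. A sketch (generic construction, not Photoshop's actual code):

```python
import math

def gaussian_kernel_1d(sigma, truncate=3.0):
    # Normalized 1D Gaussian, truncated at `truncate` standard deviations
    # per side. The full kernel width is 2*ceil(truncate*sigma) + 1 taps,
    # i.e. roughly double the single-sided "radius".
    radius = int(math.ceil(truncate * sigma))
    taps = [math.exp(-0.5 * (x / sigma) ** 2)
            for x in range(-radius, radius + 1)]
    norm = sum(taps)
    return [t / norm for t in taps]
```

Since the Gaussian is separable, a 2D blur is then just this kernel applied twice, once horizontally and once vertically, which is why the kernel's one-dimensional extent is what matters for cost.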
From: Jorge Rodriguez <jrodriguez@ma...> - 2009-07-18 02:33:36

On Fri, Jul 17, 2009 at 12:10 PM, Sam Martin <sam.martin@...> wrote:

> Eek. Have to go. Office is flooding with water.

?

--
Jorge Rodriguez
From: Jon Watte <jwatte@gm...> - 2009-07-17 16:11:53

Simon Fenney wrote:
> Oh, I agree that there is a difference between what is 'best' for audio
> and graphics** but a box filter is far from an ideal way of downsampling
> graphics. I had a test image which was rendered with 10k samples per
> pixel but down-filtered*** using a box filter. The amount of aliasing
> surprised people.

Didn't we discuss this like three years ago, in the context of "what are textures, really?" Once you get into those details, you start having to make assumptions about what a sample is. Is it a point sample of a band-limited function? Is it an area average for a picture sensor? If so, what is the shape of that area filter?

Sincerely,
jw

--
Revenge is the most pointless and damaging of human desires.
From: Sam Martin <sam.martin@ge...> - 2009-07-17 16:11:52

Thanks, that's clearer. I think there's an extra detail here that might explain things. Warning: this contains some hand-wavey maths.

Let's imagine for a second that the space your original line lives in is some real domain, R^2. Instead of doing the integration as we did before, we apply (convolve with) a box filter and produce a new function, also in R^2. This would look like a blurry line. Note that it doesn't look like the result we got with the projection into the pixel basis. It would instead look a bit like the bokeh effect you get on a camera, but square.

If we now *point-sample* this new signal we could produce a nice-looking reconstruction with a pixel basis. I believe (subject to resolving some further fiddly details) the issues with the previous solution will have gone away - I think. No integration per pixel was required this time. We had already band-limited the signal with the previous continuously-applied box filter.

Incidentally, I believe we could also perfectly reconstruct the R^2 blurry line from the point-sampled pixel basis, but I'm not sure exactly what the reconstruction filter would be, and whether it requires the box filter to be twice the size of the original pixel or not. It's one of those things I've wanted to find out for a while, but the maths is a bit heavy.

Eek. Have to go. Office is flooding with water.

Ta,
Sam

-----Original Message-----
From: Simon Fenney [mailto:simon.fenney@...]
Sent: 17 July 2009 14:59
To: Game Development Algorithms
Subject: Re: [Algorithms] Gaussian blur kernels

Sam Martin wrote:
> I'm not sure I quite understand what you mean by antialiasing in your
> experiment?

Sorry, I thought that was obvious from the context :( Pragmatically, I meant take NxN samples per pixel, apply the (trivial) weighting due to a box filter (i.e. 1/N^2), and either choose a "large enough" N or take the limit as N -> infinity. The latter should then be your integral below.

> To remove the aliasing completely you can't blur after sampling*.
> You'd need to rasterise the line by doing the integral over each
> pixel properly, rather than point sample. This will produce an
> alias-free result, but isn't exactly easy in general :).

I wasn't intending it to be easy :). I just wanted to show that a box filter is not ideal.

Cheers
Simon
From: Simon Fenney <simon.fenney@po...> - 2009-07-17 13:59:12

Sam Martin wrote:
> I'm not sure I quite understand what you mean by antialiasing in your
> experiment?

Sorry, I thought that was obvious from the context :( Pragmatically, I meant take NxN samples per pixel, apply the (trivial) weighting due to a box filter (i.e. 1/N^2), and either choose a "large enough" N or take the limit as N -> infinity. The latter should then be your integral below.

> To remove the aliasing completely you can't blur after sampling*.
> You'd need to rasterise the line by doing the integral over each
> pixel properly, rather than point sample. This will produce an
> alias-free result, but isn't exactly easy in general :).

I wasn't intending it to be easy :). I just wanted to show that a box filter is not ideal.

Cheers
Simon
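Simon's construction - NxN midpoint samples per pixel, each weighted 1/N^2, converging on the box-filter integral as N grows - is easy to sanity-check numerically. This is just an illustration of the limit argument; the `inside` predicate is a stand-in for whatever coverage test you care about:

```python
def box_filter_coverage(inside, n):
    # NxN midpoint samples within a unit pixel, each weighted 1/N^2.
    # As n grows, this converges on the box-filter integral of the
    # indicator function `inside` over the pixel.
    hits = 0
    for i in range(n):
        for j in range(n):
            x = (i + 0.5) / n
            y = (j + 0.5) / n
            if inside(x, y):
                hits += 1
    return hits / (n * n)
```

For example, for the half-plane x + y < 1 clipped to a unit pixel, the exact box-filtered value is 0.5, and the NxN estimate homes in on it as N increases.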
From: Sam Martin <sam.martin@ge...> - 2009-07-17 13:16:44

I'm not sure I quite understand what you mean by antialiasing in your experiment?

If you have a line which you rasterise using the usual point sampling, you will immediately produce aliasing (jaggies). Blurring the result with any filter may make the output more acceptable - and some filters may be more pleasant than others - but it doesn't remove the aliasing. It just reduces the objectionable high frequencies.

To remove the aliasing completely you can't blur after sampling*. You'd need to rasterise the line by doing the integral over each pixel properly, rather than point sample. This will produce an alias-free result, but isn't exactly easy in general :).

There are further details here though. When working with a pixel basis we aren't in exactly the same territory as when we point-sample. There is a more general theory of sampling worked out largely by Papoulis and another dude I forget. The concepts of band-limited signals and the reconstruction filters change as your basis changes.

Cheers,
Sam

* In general - but using blue noise can help push the aliasing into high frequencies, which helps.

-----Original Message-----
From: Simon Fenney [mailto:simon.fenney@...]
Sent: 17 July 2009 11:51
To: Game Development Algorithms
Subject: Re: [Algorithms] Gaussian blur kernels

Olivier Galibert wrote:
> On Thu, Jul 16, 2009 at 09:56:58AM +0100, Simon Fenney wrote:
>> Jon Watte wrote:
>>
>>> Actually, they are both low-pass filters, so
>>> downsampling-plus-upsampling is a form of blur. However, box
>>> filtering (the traditional downsampling function) isn't actually all
>>> that great at low-pass filtering, so it can generate aliasing, which
>>> a Gaussian filter does not.
>>
>> AFAICS a box filter won't "generate" aliasing, it's just that it's
>> not very good at eliminating the high frequencies which cause
>> aliasing. A Gaussian, OTOH, has better behaviour in this respect...
>
> Aren't we talking about graphics here? Vision is area sampling,
> making box filters perfect for integer downsampling. It's sound that
> is point sampling and hence sensitive to high frequency aliasing.
>
> OG.

Oh, I agree that there is a difference between what is 'best' for audio and graphics** but a box filter is far from an ideal way of downsampling graphics. I had a test image which was rendered with 10k samples per pixel but down-filtered*** using a box filter. The amount of aliasing surprised people.

To show why a box is inadequate, as a thought experiment, *temporarily* consider the pixels as little squares (apologies to Alvy Ray Smith), and draw, say, a 1/8th-pixel-wide, *nearly* horizontal white line on a black background. Now imagine antialiasing it with a box filter. Now consider either animating the line, shifting it vertically by a fraction of a pixel per frame (and watch it jump), OR alternatively just consider a static frame. Assuming we do the latter, start from a point where the line crosses from one scanline into the next and run along the row to the next crossing. With the box filter, except for the areas where the line actually crosses the pixel boundaries, there will be a large number of pixels with exactly the same intensity. AFAICS, your brain expects to see more variation as the line sweeps across the pixels.

Simon

** I have my own hypothesis on what is the root cause of this difference, which is being hotly debated here in our research office. I now just need to install a better mathematical brain in my head to prove it :)

*** In a linear colour space. This is V. important.
From: Simon Fenney <simon.fenney@po...> - 2009-07-17 10:50:50

Olivier Galibert wrote:
> On Thu, Jul 16, 2009 at 09:56:58AM +0100, Simon Fenney wrote:
>> Jon Watte wrote:
>>
>>> Actually, they are both low-pass filters, so
>>> downsampling-plus-upsampling is a form of blur. However, box
>>> filtering (the traditional downsampling function) isn't actually all
>>> that great at low-pass filtering, so it can generate aliasing, which
>>> a Gaussian filter does not.
>>
>> AFAICS a box filter won't "generate" aliasing, it's just that it's
>> not very good at eliminating the high frequencies which cause
>> aliasing. A Gaussian, OTOH, has better behaviour in this respect...
>
> Aren't we talking about graphics here? Vision is area sampling,
> making box filters perfect for integer downsampling. It's sound that
> is point sampling and hence sensitive to high frequency aliasing.
>
> OG.

Oh, I agree that there is a difference between what is 'best' for audio and graphics** but a box filter is far from an ideal way of downsampling graphics. I had a test image which was rendered with 10k samples per pixel but down-filtered*** using a box filter. The amount of aliasing surprised people.

To show why a box is inadequate, as a thought experiment, *temporarily* consider the pixels as little squares (apologies to Alvy Ray Smith), and draw, say, a 1/8th-pixel-wide, *nearly* horizontal white line on a black background. Now imagine antialiasing it with a box filter. Now consider either animating the line, shifting it vertically by a fraction of a pixel per frame (and watch it jump), OR alternatively just consider a static frame. Assuming we do the latter, start from a point where the line crosses from one scanline into the next and run along the row to the next crossing. With the box filter, except for the areas where the line actually crosses the pixel boundaries, there will be a large number of pixels with exactly the same intensity. AFAICS, your brain expects to see more variation as the line sweeps across the pixels.

Simon

** I have my own hypothesis on what is the root cause of this difference, which is being hotly debated here in our research office. I now just need to install a better mathematical brain in my head to prove it :)

*** In a linear colour space. This is V. important.
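Simon's thought experiment can be reproduced directly: box-filter a 1/8th-pixel-wide, nearly horizontal band across a row of pixel squares and the coverage comes out identical for every pixel the band doesn't exit, which is exactly the flat run of intensities he describes. A small sketch, with dense midpoint sampling standing in for the exact area integral:

```python
def pixel_coverage(px, slope, width, n=128):
    # Box-filter coverage of the thin band 0 <= y - slope*x <= width over
    # the unit pixel [px, px+1) x [0, 1), estimated on an n x n midpoint
    # grid. The box filter reduces to the fractional area covered.
    hits = 0
    for i in range(n):
        for j in range(n):
            x = px + (i + 0.5) / n
            y = (j + 0.5) / n
            if 0.0 <= y - slope * x <= width:
                hits += 1
    return hits / (n * n)
```

With slope 1/64 and width 1/8, the first eight pixels of the row all land on the same coverage value (1/8): no variation at all until the band finally crosses a scanline boundary, at which point the intensity changes over just a couple of pixels and then goes flat again.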