gdalgorithms-list Mailing List for Game Dev Algorithms (Page 30)
| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2000 |  |  |  |  |  |  | 390 | 767 | 940 | 964 | 819 | 762 |
| 2001 | 680 | 1075 | 954 | 595 | 725 | 868 | 678 | 785 | 410 | 395 | 374 | 419 |
| 2002 | 699 | 501 | 311 | 334 | 501 | 507 | 441 | 395 | 540 | 416 | 369 | 373 |
| 2003 | 514 | 488 | 396 | 624 | 590 | 562 | 546 | 463 | 389 | 399 | 333 | 449 |
| 2004 | 317 | 395 | 136 | 338 | 488 | 306 | 266 | 424 | 502 | 170 | 170 | 134 |
| 2005 | 249 | 109 | 119 | 282 | 82 | 113 | 56 | 160 | 89 | 98 | 237 | 297 |
| 2006 | 151 | 250 | 222 | 147 | 266 | 313 | 367 | 135 | 108 | 110 | 220 | 47 |
| 2007 | 133 | 144 | 247 | 191 | 191 | 171 | 160 | 51 | 125 | 115 | 78 | 67 |
| 2008 | 165 | 37 | 130 | 111 | 91 | 142 | 54 | 104 | 89 | 87 | 44 | 54 |
| 2009 | 283 | 113 | 154 | 395 | 62 | 48 | 52 | 54 | 131 | 29 | 32 | 37 |
| 2010 | 34 | 36 | 40 | 23 | 38 | 34 | 36 | 27 | 9 | 18 | 25 |  |
| 2011 | 1 | 14 | 1 | 5 | 1 |  |  | 37 | 6 | 2 |  |  |
| 2012 |  | 7 |  | 4 |  | 3 |  |  | 1 |  |  | 10 |
| 2013 |  | 1 | 7 | 2 |  |  | 9 |  |  |  |  |  |
| 2014 | 14 |  | 2 |  | 10 |  |  |  |  |  | 3 |  |
| 2015 |  |  |  |  |  |  |  |  |  | 12 |  | 1 |
| 2016 |  | 1 | 1 | 1 |  | 1 |  | 1 |  |  |  |  |
| 2017 |  |  |  |  |  |  | 1 |  |  |  |  |  |
| 2022 |  |  |  |  |  |  |  |  |  |  | 2 |  |
|
From: Zafar Q. <zaf...@co...> - 2009-07-30 10:49:24
|
Hi, I don't have procedural stuff going on - I just want to play with art textures and do some shader programming to give the appearance of lava-lampy kind of effects. The cloud-map stuff using the noise approach that Megan mentioned is particularly the sort of thing I'm now going to try. Thanks very much for all your responses so far. Some interesting reading on your posted links - cheers! Zafar Qamar |
|
From: Robin G. <rob...@gm...> - 2009-07-29 20:14:04
|
I think the initial question was about droplets on a camera surface. Water droplets coalescing into larger droplets is more of a Poisson points problem than a noise one. As two droplets come into contact they will generate a new droplet at the mean point between the two centers with a radius proportional to their two volumes. Making them run down the picture plane under gravitational acceleration means animating its center and altering its mass as it picks up surrounding droplets. Trails could probably be faked using a small heightfield that you decrement each frame. The droplets themselves have a very specific shape depending on the surface tension and hydrophilic properties of the surface they land on, specified by the contact angle and the contact radius. These subtle curves around the contact edge of the droplet are important cues to interpreting what is being dropped onto what. http://physicaplus.org.il/zope/home/en/1185176174/water_elect_en Trying to find a formula for the shape, ISTR it's a pretty regular analytic polynomial. - Robin Green. On Wed, Jul 29, 2009 at 11:41 AM, Megan Fox <sha...@gm...> wrote: > When it comes to morphing effects, your best bet is to start with noise. |
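A minimal C++ sketch of the coalescence rule described above, not taken from any actual implementation: the Droplet struct and all names are hypothetical, the new center is the mean point between the two old centers, and "radius proportional to their two volumes" is interpreted here as conserving the combined volume (r^3 sums).

```cpp
#include <cmath>
#include <vector>

// Hypothetical droplet on the picture plane (2D); illustrative only.
struct Droplet {
    float x, y;   // center on the image plane
    float r;      // contact radius
};

static bool touching(const Droplet& a, const Droplet& b) {
    float dx = a.x - b.x, dy = a.y - b.y;
    float d = a.r + b.r;
    return dx * dx + dy * dy <= d * d;
}

static Droplet merge(const Droplet& a, const Droplet& b) {
    Droplet m;
    m.x = 0.5f * (a.x + b.x);                 // mean point between the two centers
    m.y = 0.5f * (a.y + b.y);
    m.r = std::cbrt(a.r * a.r * a.r +         // radius from the combined volume
                    b.r * b.r * b.r);
    return m;
}

// One naive O(n^2) coalescence pass; fine for a handful of drops per frame.
void coalesce(std::vector<Droplet>& drops) {
    for (size_t i = 0; i < drops.size(); ++i) {
        for (size_t j = i + 1; j < drops.size(); ) {
            if (touching(drops[i], drops[j])) {
                drops[i] = merge(drops[i], drops[j]);
                drops[j] = drops.back();      // remove the absorbed droplet
                drops.pop_back();
            } else {
                ++j;
            }
        }
    }
}
```

Running down the picture plane, per the description, would then just be integrating each droplet's position under gravity each frame and calling coalesce() afterwards.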
|
From: Megan F. <sha...@gm...> - 2009-07-29 18:42:18
|
When it comes to morphing effects, your best bet is to start with noise. Start by reading this: http://freespace.virgin.net/hugo.elias/models/m_clouds.htm ... which is a great primer, and generally a good way to do clouds, period. You could get a lava-lamp appearance by simply adjusting the low/high-pass on his output. Past that, consider the source of your noise. The usual idea is to go with non-patterned noise, it being... well, noise... but you can get some very, very interesting results when you treat a non-noisy data set as noise. An image's color values, for instance. (Sorry I can't offer specifics, but this is at the core of the water tech I made for LU - NDAs and all that, etc. Still, the above should get you started.) On Wed, Jul 29, 2009 at 11:32 AM, Zafar Qamar <zaf...@co...> wrote: > Just thought of another way of describing the animation... > > The animation effect I'm after is a bit like a lava-lamp. > Hope that clarifies at least slightly the look I'm after. > > Cheers > Zafar Qamar -- Megan Fox http://www.shalinor.com/ |
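A rough C++ sketch of the summed-octave ("clouds") idea referenced above, with a soft low/high band applied to the output to push it toward blobby, lava-lamp-like shapes. The hash-based value noise, the octave count, the scroll speeds, and the 0.45-0.6 band are all arbitrary illustrative choices, not taken from the linked article or from any shipped water tech.

```cpp
#include <cmath>
#include <cstdint>

// Cheap hash-based value noise in [0,1]; purely illustrative.
static float hashNoise(int x, int y) {
    uint32_t n = uint32_t(x) * 374761393u + uint32_t(y) * 668265263u;
    n = (n ^ (n >> 13)) * 1274126177u;
    return float(n & 0xFFFFFF) / float(0xFFFFFF);
}

static float smoothLerp(float a, float b, float t) {
    t = t * t * (3.0f - 2.0f * t);            // smoothstep interpolation
    return a + (b - a) * t;
}

static float valueNoise(float x, float y) {
    int xi = int(std::floor(x)), yi = int(std::floor(y));
    float xf = x - xi, yf = y - yi;
    float n00 = hashNoise(xi, yi),     n10 = hashNoise(xi + 1, yi);
    float n01 = hashNoise(xi, yi + 1), n11 = hashNoise(xi + 1, yi + 1);
    return smoothLerp(smoothLerp(n00, n10, xf), smoothLerp(n01, n11, xf), yf);
}

// Summed octaves ("clouds"), then a soft threshold band to get blobby shapes.
// Animating the octaves at different rates over time gives the slow morphing.
float lavaLamp(float x, float y, float time) {
    float sum = 0.0f, amp = 0.5f, freq = 1.0f;
    for (int octave = 0; octave < 4; ++octave) {
        sum += amp * valueNoise(x * freq + time * 0.10f * (octave + 1),
                                y * freq - time * 0.07f * (octave + 1));
        amp *= 0.5f;
        freq *= 2.0f;
    }
    // Soft low/high-pass band: below 0.45 -> 0, above 0.60 -> 1.
    float t = (sum - 0.45f) / 0.15f;
    t = t < 0.0f ? 0.0f : (t > 1.0f ? 1.0f : t);
    return t * t * (3.0f - 2.0f * t);
}
```

The same output could drive the normal-map warp from the original question, e.g. by treating the gradient of lavaLamp() as a perturbation of the blob normals.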
|
From: Zafar Q. <zaf...@co...> - 2009-07-29 18:18:24
|
Hi, I'm trying to create the effect of water spray landing on a camera with a small lens size, thus making the drops rather large. The bit I really need is how to generate animated normal maps that resemble big blobs of water that warp and trickle to form new shapes. Imagine I've drawn a few blobs of normal-map in Photoshop and put them onto the screen as a post-process effect. I now need to warp and move them. Are there any utils that could generate this kind of thing? Any ideas and suggestions at all would be most welcome. Cheers Zafar Qamar |
|
From: Zafar Q. <zaf...@co...> - 2009-07-29 18:09:59
|
Just thought of another way of describing the animation... The animation effect I'm after is a bit like a lava-lamp. Hope that clarifies at least slightly the look I'm after. Cheers Zafar Qamar |
|
From: Nathaniel H. <na...@io...> - 2009-07-27 05:16:05
|
There are two ways to get a normalized BRDF with a cosine power term. One is to construct the BRDF empirically (e.g. as a modified Blinn-Phong), and then normalize the BRDF by computing an upper bound for the directional-hemispherical reflectance and dividing by it. The other is to derive the BRDF from physical principles (e.g. microfacet theory), treating the cosine power term as a normal distribution function (NDF) and normalizing it as such. The derivation on page 446 of "Physically Based Rendering" (PBR) relates to the second way. "Real-Time Rendering, 3rd edition" (RTR3) does both kinds of derivations and compares them to each other.

The empirical approach results in the (m+8)/8pi term (page 257 of RTR3). Unfortunately, we did not include the derivation in the book; we did the exact same derivation as Fabian Giesen (see page 2 of http://www.farbrausch.de/~fg/articles/phong.pdf), but approximated the integral with a simple function rather than using it directly. Since we were concerned with real-time rendering and not global illumination, we did not try to make the BRDF strictly energy-conserving (integral always less than 1) but just approximately so (integral close to 1; it doesn't matter if it is slightly above or slightly below). We chose an approximation which was relatively accurate for low specular powers, which in our opinion is perceptually important.

The physically based derivation starts with the derivation of microfacet BRDFs included in Ashikhmin, Shirley and Premoze's 2000 SIGGRAPH paper and "plugs in" a cosine power NDF (page 259 of RTR3). The result, like PBR, includes a (m+2) term, which is not surprising since the same normalization by projected solid angle is included in the derivation (the 8pi instead of 2pi term in the denominator is the result of converting between incident and half-vector angles). I wouldn't necessarily say (m+2) is "more correct" since it results from a physical derivation. Both are equally "correct", just based on different assumptions.

In my own work, using relatively simple BRDFs, I have had good results with the (m+8)/8pi term (in most game engines the pi cancels out, so this is just (m+8)/8). If you are trying to include shadowing / masking and foreshortening terms and otherwise closely emulate a microfacet BRDF, you might want to go with (m+2)/8 instead. Thanks, Naty Hoffman |
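As a hedged sketch only (generic C++, not any particular engine's shader code), a specular term using the empirical (m+8)/8 factor described above might look like this, with the 1/pi assumed to be folded into the light intensity as is common in game engines; swap in (m+2)/8 for the microfacet-style variant. The Vec3 helpers and names are illustrative.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Normalized Blinn-Phong specular term with the empirical (m+8)/8 factor.
// n, l, v are unit vectors; m is the specular power. Multiply the result by
// the light color/intensity (which here is assumed to carry the 1/pi).
float blinnPhongSpecular(Vec3 n, Vec3 l, Vec3 v, float m) {
    Vec3 h = normalize({ l.x + v.x, l.y + v.y, l.z + v.z });
    float nDotH = std::max(dot(n, h), 0.0f);
    float nDotL = std::max(dot(n, l), 0.0f);
    float norm  = (m + 8.0f) / 8.0f;     // use (m + 2.0f) / 8.0f to emulate the
                                         // microfacet-derived normalization
    return norm * std::pow(nDotH, m) * nDotL;
}
```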
|
From: Fabian G. <f.g...@49...> - 2009-07-25 23:36:14
|
> That's sort of an over simplification. For example, The Torrance- > Sparrow BRDF has both 1/cos theta_i and 1/cos theta_o terms, not to > make the old shading models match with the new reflection equation, > but intuitively to account for the fact that as you approach grazing > angles, the total projected area of all microfacets seen along a ray > increases (and goes to infinity at grazing when you hit the point of > your ray going through all of the microfacets!) (It is also reciprocal.) Yes, if you divide both by cos(theta_i) and cos(theta_o) it's reciprocal again (since it's symmetrical in both angles), but you cannot divide by just one of them. -Fabian |
|
From: Matt P. <mat...@gm...> - 2009-07-25 23:23:42
|
On Jul 25, 2009, at 3:49 PM, Fabian Giesen wrote: >> >> A related note is that the extra cos theta factor doesn't come from >> it >> being a BRDF, but comes from the geometry of microfacets--i.e. the >> projection of a microfacet with angle theta_h from the actual surface >> normal projects to a differential area scaled by cos theta_h on the >> actual surface. > > I didn't mean to imply that you tack on an additional factor of > cos(theta) because it's a BRDF, but rather that the reflection > equation > (and also the rendering equation) contain the cos(theta) normalization > term, while the original Phong and Blinn-Phong shading models don't, > so > the BRDF for these original formulations has to divide through by > cos(theta) to get rid of the term in the integral, violating > reciprocity > in the process (and being generally physically implausible). Which is > why modern formulations don't do this. That's sort of an over simplification. For example, The Torrance- Sparrow BRDF has both 1/cos theta_i and 1/cos theta_o terms, not to make the old shading models match with the new reflection equation, but intuitively to account for the fact that as you approach grazing angles, the total projected area of all microfacets seen along a ray increases (and goes to infinity at grazing when you hit the point of your ray going through all of the microfacets!) (It is also reciprocal.) However, one generally includes a microfacet geometric attenuation term which (lo and behold) accounts for microfacets shadowing each other and in turn makes things behave properly at grazing angles. (At least 'modern formulations' in my world do, for what that's worth.) (I'm still not following what this other normalization term is attempting to achieve or how exactly it was derived.) -matt > > Anyway, Naty already explained the discrepancy between this and the > formula from RTR in the comments of the article I linked to. I'll > quote, > first from Rory: > > "I did manage to talk to Naty about this at GDC. He said that a > few people have asked him about the derivation of the specular > factor in the book, and that he had gone through it himself and > got the exact same answer as Fabian. The value they mention in > the book is just an approximation of that result." > > And then from Naty: > > "About the approximation we chose, we were not trying to be strictly > conservative (that is important for multi-bounce GI solutions to > converge, but not for rasterization). We were trying to choose a > cheap > approximation which is close to 1, and we thought it more > important to > be close for low specular powers. Low specular powers have > highlights > that cover a lot of pixels and are unlikely to be saturating past > 1." > > -Fabian "ryg" Giesen |
|
From: Fabian G. <f.g...@49...> - 2009-07-25 22:49:53
|
> I think there is a bug in the Blinn-Phong normalization there. In > particular, I don't think that the first step of going from an > integral over cos theta h to an integral of cos theta/2 is right (or > needed--the microfacet distribution can be normalized fine in theta_h > land.) theta_h is not theta/2 in general, correct. That's why I specifically note that L=N and hence all angles are in the same plane; for general configurations, the halfway angle might lie in a different plane than the angle between N and L, so the result depends on the second angle (phi) as well and the whole process gets a lot more complex. > A related note is that the extra cos theta factor doesn't come from it > being a BRDF, but comes from the geometry of microfacets--i.e. the > projection of a microfacet with angle theta_h from the actual surface > normal projects to a differential area scaled by cos theta_h on the > actual surface. I didn't mean to imply that you tack on an additional factor of cos(theta) because it's a BRDF, but rather that the reflection equation (and also the rendering equation) contain the cos(theta) normalization term, while the original Phong and Blinn-Phong shading models don't, so the BRDF for these original formulations has to divide through by cos(theta) to get rid of the term in the integral, violating reciprocity in the process (and being generally physically implausible). Which is why modern formulations don't do this. Anyway, Naty already explained the discrepancy between this and the formula from RTR in the comments of the article I linked to. I'll quote, first from Rory: "I did manage to talk to Naty about this at GDC. He said that a few people have asked him about the derivation of the specular factor in the book, and that he had gone through it himself and got the exact same answer as Fabian. The value they mention in the book is just an approximation of that result." And then from Naty: "About the approximation we chose, we were not trying to be strictly conservative (that is important for multi-bounce GI solutions to converge, but not for rasterization). We were trying to choose a cheap approximation which is close to 1, and we thought it more important to be close for low specular powers. Low specular powers have highlights that cover a lot of pixels and are unlikely to be saturating past 1." -Fabian "ryg" Giesen |
|
From: Matt P. <ma...@ph...> - 2009-07-25 20:43:42
|
(below) On Jul 25, 2009, at 12:00 PM, Fabian Giesen wrote: > Joe Meenaghan wrote: >> >> 1. I have differing information about the normalization term for the >> Blinn-Phong BRDF and I'd like to know which is correct. In Real-Time >> Rendering the authors suggest (m + 8) / (8 * pi) based on a >> derivation from Sloan and Hoffman. However, in Pharr and Humphrey >> (Physically Based Rendering, p.446) they calculate (m + 2) / (2 * >> pi). >> The definite integral solution demonstrated in the >> latter appears reasonable to me, so I'm uncertain. Normally I've >> associated the m+2 version as the normalization term for plain Phong, >> yet, Pharr and Humphrey are very explicitly referring to the Blinn >> BRDF >> and are using the half vector rather than the reflection vector in >> their >> discussion, and they are convincing. Is there something I'm missing? Unfortunately RTR doesn't seem to have a derivation, so it's hard to see precisely where the difference comes from. One general note is that the PBR book is normalizing a microfacet distribution, but RTR is (from a quick skim) normalizing a BRDF. So I think that the 2pi vs 8pi difference in the denominator comes from the fact that 2pi works out to be the right denominator for a normalized microfacet distribution, but if you fold in the 1/4 term that comes in the Torrance-Sparrow BRDF (discussed on p442 of the PBR book), then that covers that difference. The (m+2) vs (m+8) stuff in the numerator I don't have any insight on. As far as I know the PBR derivation is correct and I just wrote a short program to verify the result numerically and it all came out as expected. Hopefully Naty can chime in? > > I went through the same thing a few months ago. See the discussion in > the comments here: > > http://www.rorydriscoll.com/2009/01/25/energy-conservation-in-games/ > > (Including a comment from Naty Hoffman who did the derivation > mentioned > in RTR). The exact normalization factor obviously depends on what > variant of the BRDF you use. I did the computation for typical > variants > of Phong and Blinn-Phong here: > > http://www.farbrausch.de/~fg/articles/phong.pdf I think there is a bug in the Blinn-Phong normalization there. In particular, I don't think that the first step of going from an integral over cos theta h to an integral of cos theta/2 is right (or needed--the microfacet distribution can be normalized fine in theta_h land.) A related note is that the extra cos theta factor doesn't come from it being a BRDF, but comes from the geometry of microfacets--i.e. the projection of a microfacet with angle theta_h from the actual surface normal projects to a differential area scaled by cos theta_h on the actual surface. > > (also referenced in the discussion above). Hope that helps! > > Cheers, > -Fabian "ryg" Giesen |
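For readers who want to reproduce the kind of quick numerical check mentioned above (this is not the actual program from the thread, just an illustrative C++ sketch): integrate ((m+2)/(2*pi)) * cos^m(theta_h) against the projected-area factor cos(theta_h) over the hemisphere and confirm it comes out to 1 for a range of exponents.

```cpp
#include <cmath>
#include <cstdio>

// Midpoint-rule check that the PBR-style microfacet normalization
// D(theta_h) = ((m+2)/(2*pi)) * cos^m(theta_h) integrates to 1 over the
// hemisphere when weighted by cos(theta_h). Illustrative sketch only.
int main() {
    const double pi = 3.14159265358979323846;
    for (double m = 1.0; m <= 1024.0; m *= 4.0) {
        const int steps = 4096;
        double sum = 0.0;
        for (int i = 0; i < steps; ++i) {
            double theta = (i + 0.5) * (pi / 2.0) / steps;
            double d = (m + 2.0) / (2.0 * pi) * std::pow(std::cos(theta), m);
            sum += d * std::cos(theta) * std::sin(theta) * (pi / 2.0) / steps;
        }
        sum *= 2.0 * pi;   // the phi integral contributes a factor of 2*pi
        std::printf("m = %6.0f  integral = %.6f (expect 1)\n", m, sum);
    }
    return 0;
}
```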
|
From: Fabian G. <f.g...@49...> - 2009-07-25 19:01:21
|
Joe Meenaghan wrote: > Hey All, > > Just a few random questions pertaining primarily to HDR lighting that > I'm hoping to get some insight on. > > 1. I have differing information about the normalization term for the > Blinn-Phong BRDF and I'd like to know which is correct. In Real-Time > Rendering the authors suggest (m + 8) / (8 * pi) based on a > derivation from Sloan and Hoffman. However, in Pharr and Humphrey > (Physically Based Rendering, p.446) they calculate (m + 2) / (2 * pi). > The definite integral solution demonstrated in the > latter appears reasonable to me, so I'm uncertain. Normally I've > associated the m+2 version as the normalization term for plain Phong, > yet, Pharr and Humphrey are very explicitly referring to the Blinn BRDF > and are using the half vector rather than the reflection vector in their > discussion, and they are convincing. Is there something I'm missing? I went through the same thing a few months ago. See the discussion in the comments here: http://www.rorydriscoll.com/2009/01/25/energy-conservation-in-games/ (Including a comment from Naty Hoffman who did the derivation mentioned in RTR). The exact normalization factor obviously depends on what variant of the BRDF you use. I did the computation for typical variants of Phong and Blinn-Phong here: http://www.farbrausch.de/~fg/articles/phong.pdf (also referenced in the discussion above). Hope that helps! Cheers, -Fabian "ryg" Giesen |
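For readers without the linked note to hand, here is my own reworking of the integral under the worst-case l = n assumption discussed in this thread (so theta_h = theta/2); double-check it against the note before relying on it. The directional-hemispherical reflectance of f = C cos^m(theta_h) is

$$\int_{\Omega} C\,\cos^{m}\theta_h\,\cos\theta\,d\omega \;=\; 2\pi C\int_{0}^{\pi/2}\cos^{m}\!\tfrac{\theta}{2}\,\cos\theta\,\sin\theta\,d\theta \;=\; \frac{8\pi C\left(m + 2^{-m/2}\right)}{(m+2)(m+4)},$$

so requiring the reflectance to be at most 1 gives the exact normalization

$$C \;=\; \frac{(m+2)(m+4)}{8\pi\left(m + 2^{-m/2}\right)},$$

which the $(m+8)/(8\pi)$ factor approximates: the two agree exactly at $m = 0$ and the approximation sits slightly above the exact value at large $m$, consistent with the "close to 1, slightly above or below" comment quoted above.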
|
From: Joe M. <jo...@ga...> - 2009-07-25 18:38:33
|
Hey All, Just a few random questions pertaining primarily to HDR lighting that I'm hoping to get some insight on.

1. I have differing information about the normalization term for the Blinn-Phong BRDF and I'd like to know which is correct. In Real-Time Rendering the authors suggest (m + 8) / (8 * pi) based on a derivation from Sloan and Hoffman. However, in Pharr and Humphrey (Physically Based Rendering, p.446) they calculate (m + 2) / (2 * pi). The definite integral solution demonstrated in the latter appears reasonable to me, so I'm uncertain. Normally I've associated the m+2 version as the normalization term for plain Phong, yet, Pharr and Humphrey are very explicitly referring to the Blinn BRDF and are using the half vector rather than the reflection vector in their discussion, and they are convincing. Is there something I'm missing?

2. Despite a variety of different ideas tried, I have yet to come up with a satisfactory method for dynamically estimating the white point term for use with Reinhard's Photographic Tone Reproduction tone mapper (global). I'm using Krawczyk et al. for computation of the estimated key and it works well, but for white point, I currently just wind up setting some arbitrary value until it looks good. Of course, this ultimately has implications for the variety of light intensities we can use and still benefit from having the term (e.g., the very nice contrast it can introduce). I'd like to avoid having to hand place regions of tonemapping inputs throughout the scene to account for different lighting conditions. Has anyone come up with a way of estimating this parameter without the need for such manual intervention? Reinhard's paper on parameter estimation has not worked well for me in practice, although I am open to the possibility that I am doing something improperly (see next question). While I don't have a runtime histogram, I do have access to arithmetic and log average luminance as well as min/max luminance, so hopefully there's a way to cobble something together here that works? Arguably, I can ditch the white point altogether (or just always use values close to some percentage of the scenewide max lighting intensity) and use our post-processing pipeline to manipulate contrast and so on, but I figured I'd see if anyone has any suggestions before I throw in the towel.

3. Somewhat related to the last question is the range of the irradiance values used in the first place. In our system, we've got a light source color (speaking only about direct sources like point, spot, etc. not IBL textures) and a separate HDR intensity scalar per light. The idea is that the former can be used in LDR lighting scenarios as is (it is a float3, so it can technically go outside [0,1] although we rarely go past 2 or so for LDR) and that the intensity scalar can adjust it as desired for HDR situations (in HDR our framebuffer is RGBA16F). However, based on my troubles with trying to automatically compute reasonable key and white point values, I am wondering if the difficulties I am having are perhaps related to the fact that many of the algorithms I've tried to experiment with are likely working with proper real world data and as such the resulting luminances are more ideally suited to yield plausible behavior.
I am considering whether our light intensities need to be more carefully tuned to match their real world counterparts (admittedly, under certain conditions the 16-bit range concerns me a little if that is the case) so I'd like to better understand what sort of values work well in practice -- i.e., what is the tradeoff between physical realism where all values have to be very precise (SI units, etc.) and artistic control? Certainly there's an argument to be made for something that visually looks like it is illuminated by say, a 60 watt bulb, and if that is the desired end result then so be it, regardless of what intensities were used to create the effect. But if this sort of arbitrary artistic experimentation is preventing me from implementing algorithms correctly, as per above, that would be good to know. Ultimately, I guess I'm just looking for a bit of guidance as to what inputs work well in real-time HDR situations :) As always, my sincere thanks in advance for any assistance. Joe Meenaghan jo...@ga... |
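For concreteness, a small C++ sketch of the global Reinhard operator with the white-point term that question 2 refers to. Parameter names and the handling of the key and log-average luminance are illustrative only; see Reinhard's paper (and Krawczyk et al. for key estimation) for the actual definitions.

```cpp
#include <cmath>

// Reinhard's global photographic operator with the L_white extension.
// pixelLum  : world/scene luminance of the pixel
// avgLogLum : exp(mean(log(delta + luminance))) over the frame
// key       : scene key (e.g. estimated per Krawczyk et al.)
// Lwhite    : smallest scaled luminance mapped to pure white
float tonemapReinhard(float pixelLum, float avgLogLum, float key, float Lwhite) {
    float L  = key * pixelLum / avgLogLum;                     // scaled luminance
    float Ld = L * (1.0f + L / (Lwhite * Lwhite)) / (1.0f + L);
    return Ld;                                                 // display value in [0,1]
}
```

One crude, hand-wavy option for the automation being asked about is to tie Lwhite to a statistic that is already available (for example some multiple of the max or log-average luminance), but that is exactly the arbitrary choice the question is trying to avoid, so treat it only as a starting point.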
|
From: Richard M. <mi...@tr...> - 2009-07-22 23:04:36
|
As you may have guessed, this wasn't meant to go here.
Outlook Express sucks.
--
()() Richard Mitton
( '.')
(")_(") Beard Without Portfolio :: Treyarch
----- Original Message -----
From: "Richard Mitton" <mi...@tr...>
To: "GD Algorithms" <gda...@li...>
Sent: Wednesday, July 22, 2009 3:20 PM
Subject: [Algorithms] Removing Public Folders
> Hi,
> Is there any way to remove the ATVI Public Folders from my IMAP account?
> (and I mean actually remove, not just hide them)
>
> --
> ()() Richard Mitton
> ( '.')
> (")_(") Beard Without Portfolio :: Treyarch
|
|
From: Richard M. <mi...@tr...> - 2009-07-22 22:47:16
|
Hi,
Is there any way to remove the ATVI Public Folders from my IMAP account?
(and I mean actually remove, not just hide them)
--
()() Richard Mitton
( '.')
(")_(") Beard Without Portfolio :: Treyarch
|
|
From: Robin G. <rob...@gm...> - 2009-07-20 06:25:25
|
On Fri, Jul 17, 2009 at 7:06 PM, Jorge Rodriguez<jro...@ma...> wrote: > On Fri, Jul 17, 2009 at 12:10 PM, Sam Martin <sam...@ge...> > wrote: >> >> Eek. Have to go. Office is flooding with water. > > ? Yeah. That's, like, so off topic. Moderator! - R. |
|
From: Sam M. <sam...@ge...> - 2009-07-18 09:31:33
|
Yeah, I hit send a bit quickly on this one. It might convey the rough idea, but I can poke holes in it already and need to actually bottom out the maths once and for all. It might take a bit of time, but if I complete it I'll write it up. We had torrential rain in Cambridge (UK) yesterday. The rear entrance to our office is slightly underground. A submerged pump either broke down or was blocked, causing the area to suddenly fill up with water and pour under the back door. Half the team bailed the water out while the other half rescued all the equipment. Some carpet destroyed, but otherwise we're fine! Lesson learnt: never put your UPS on the bottom rung of your server rack. ta, Sam |
|
From: Joe M. <jo...@ga...> - 2009-07-18 04:34:59
|
Hey Guys, First, let me start by thanking you for all of the very helpful responses. While I've encountered many of these ideas over the years, admittedly I'd been putting off a proper study of DSP and this discussion really inspired me to just buckle down and give it a go. I spent a good chunk of time yesterday browsing the book that Sam recommended and it's fantastic (and free online)! I even went ahead and ordered a more recent edition from Amazon which just arrived today and am looking forward to sitting down with it over the weekend. I also spent a fair amount of time experimenting with Photoshop's Gaussian Blur (in hindsight, I probably should have done this to begin with) and discovered many of the things that were mentioned here (the relationship between downsampling and kernel sizes, how multiple passes fit in, etc.). Interestingly, the Photoshop implementation is a bit odd -- what they indicate as a radius in pixels (the only input to the function) actually appears to be treated as a half radius which is subsequently doubled behind the scenes, so you wind up with kernel sizes that are twice as large as one would expect given the input. That one had me scratching my head as I agonized over how they were able to get results that were so much blurrier than mine with such small kernels :) Once I figured it out, I was able to learn a lot about its behavior under different conditions. Anyway, I think I'm all set at this stage, and I thank you all again for the insights. Cheers, Joe |
|
From: Jorge R. <jro...@ma...> - 2009-07-18 02:33:36
|
On Fri, Jul 17, 2009 at 12:10 PM, Sam Martin <sam...@ge...>wrote: > Eek. Have to go. Office is flooding with water. > ? -- Jorge Rodriguez |
|
From: Jon W. <jw...@gm...> - 2009-07-17 16:11:53
|
Simon Fenney wrote: > Oh, I agree that there is a difference between what is 'best' for audio > and graphics** but a box filter is far from an ideal way of downsampling > graphics. I had a test image which was rendered with 10k samples per > pixel but downfiltered*** using a box filter. The amount of aliasing > surprised people. > Didn't we discuss this like three years ago, in the context of "what are textures, really?" Once you get into those details, you start having to make assumptions about what a sample is. Is it a point sample of a band-limited function? Is it an area average for a picture sensor? If so, what is the shape of that area filter? Sincerely, jw -- Revenge is the most pointless and damaging of human desires. |
|
From: Sam M. <sam...@ge...> - 2009-07-17 16:11:52
|
Thanks, that's clearer. I think there's an extra detail here that might explain things. Warning: this contains some hand-wavey maths. Let's imagine for a second that the space your original line lives in is some real domain, R^2. Instead of doing the integration as we did before we apply (convolve with) a box filter and produce a new function, also in R^2. This would look like a blurry line. Note that it doesn't look like the result we got with the projection into the pixel basis. It would instead look a bit like the bokeh effect you get on a camera, but square. If we now *point-sample* this new signal we could produce a nice looking reconstruction with a pixel basis. I believe (subject to resolving some further fiddly details) the issues with the previous solution will have gone away - I think. No integration per-pixel was required this time. We had already band-limited the signal with the previous continuously-applied box filter. Incidentally, I believe we could also perfectly reconstruct the R^2 blurry line from the point-sampled pixel basis, but I'm not sure exactly what the reconstruction filter would be, and whether it requires the box filter to be twice the size of the original pixel or not. It's one of those things I've wanted to find out for a while, but the maths is a bit heavy. Eek. Have to go. Office is flooding with water. Ta, Sam |
|
From: Simon F. <sim...@po...> - 2009-07-17 13:59:12
|
Sam Martin wrote: > I'm not sure I quite understand what you mean by antialiasing in your > experiment? Sorry, I thought that was obvious from the context :-( Pragmatically, I meant take NxN samples per pixel, apply the (trivial) weighting due to a box filter (i.e. 1/N^2), and either choose a "large enough" N or take the limit as N->infinity. The latter should then be your integral below. > To remove the aliasing completely you can't blur after sampling*. > You'd need to rasterise the line by doing the integral over each > pixel properly, rather than point sample. This will produce an > alias-free result, but isn't exactly easy in general :). I wasn't intending it to be easy :-). I just wanted to show that a box filter is not ideal. Cheers Simon |
|
From: Sam M. <sam...@ge...> - 2009-07-17 13:16:44
|
I'm not sure I quite understand what you mean by antialiasing in your experiment? If you have a line which you rasterise using the usual point sampling, you will immediately produce aliasing (jaggies). Blurring the result with any filter may make the output more acceptable - and some filters may be more pleasant than others - but it doesn't remove the aliasing. It just reduces the objectionable high frequencies. To remove the aliasing completely you can't blur after sampling*. You'd need to rasterise the line by doing the integral over each pixel properly, rather than point sample. This will produce an alias-free result, but isn't exactly easy in general :). There are further details here though. When working with a pixel-basis we aren't in exactly the same territory as when we point-sample. There is a more general theory of sampling worked out largely by Papoulis and another dude I forget. The concepts of band limited signals and the reconstruction filters change as your basis changes. Cheers, Sam * In general - but using blue noise can help push the aliasing into high frequencies which helps. |
|
From: Simon F. <sim...@po...> - 2009-07-17 10:50:50
|
Olivier Galibert wrote: > On Thu, Jul 16, 2009 at 09:56:58AM +0100, Simon Fenney wrote: >> Jon Watte wrote: >> >>> Actually, they are both low-pass filters, so >>> downsampling-plus-upsampling is a form of blur. However, box >>> filtering (the traditional downsampling function) isn't actually all >>> that great at low-pass filtering, so it can generate aliasing, which >>> a Gaussian filter does not. >> >> AFAICS a box filter won't "generate" aliasing, it's just that it's >> not very good at eliminating the high frequencies which cause >> aliasing. A Gaussian, OTOH, has better behaviour in this respect... > > Aren't we talking about graphics here? Vision is area sampling, > making box filters perfect for integer downsampling. It's sound that > is point sampling and hence sensitive to high frequency aliasing. > > OG. Oh, I agree that there is a difference between what is 'best' for audio and graphics** but a box filter is far from an ideal way of downsampling graphics. I had a test image which was rendered with 10k samples per pixel but downfiltered*** using a box filter. The amount of aliasing surprised people. To show why a box is inadequate, as a thought experiment, *temporarily* consider the pixels as little squares (apologies to Alvy Ray Smith), and draw, say, a 1/8th pixel wide, *nearly* horizontal white line on a black background. Now imagine antialiasing it with a box filter. Now consider either animating the line, shifting it vertically by a fraction of a pixel per frame (and watch it jump), OR alternatively just consider a static frame. Assuming we do the latter, start from a point where the line crosses from one scanline into the next and run along the row to the next crossing. With the box filter, except for the areas where the line actually crosses the pixel boundaries, there will be a large number of pixels with exactly the same intensity. AFAICS, your brain expects to see more variation as the line sweeps across the pixels. Simon ** I have my own hypothesis on what is the root cause of this difference which is being hotly debated here in our research office. I now just need to install a better mathematical brain in my head to prove it :) ***In a linear colour space. This is V. important. |
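A throwaway C++ version of this thought experiment, with arbitrary constants: box-filter coverage of a 1/8-pixel-wide, nearly horizontal line, computed with N x N samples per pixel along one pixel row, prints the same value for every pixel between scanline crossings, which is the run of identical intensities described above.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const float width = 0.125f;   // line width in pixels (1/8)
    const float slope = 0.01f;    // nearly horizontal
    const float y0    = 0.3f;     // line center at x = 0, within row 0
    const int   N     = 64;       // N x N box-filter samples per pixel

    for (int px = 0; px < 16; ++px) {
        int hits = 0;
        for (int sy = 0; sy < N; ++sy) {
            for (int sx = 0; sx < N; ++sx) {
                float x = px + (sx + 0.5f) / N;     // sample position in pixel space
                float y = (sy + 0.5f) / N;          // row 0 only
                float lineY = y0 + slope * x;
                if (std::fabs(y - lineY) <= 0.5f * width) ++hits;
            }
        }
        std::printf("pixel %2d coverage = %.4f\n", px, float(hits) / (N * N));
    }
    return 0;
}
```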
|
From: Danny K. <dr...@we...> - 2009-07-16 22:43:30
|
> In practice, you can encode any finite or countably infinite digital > system state in one number (that's where names like Godel start to > fuse). So, on a turing-ish tape, that means two bits are enough. > Converting the program to run on the 1-bit limited version is the hard > part though. I have no idea whether it's actually possible. That's exactly where I was heading with the question, yes - thanks for reassuring me that I'm not being completely crazy. I envision a system where the distance between the first two boxes represents the simulated TM itself, the next distance represents its current state, the next represents the state of the tape, and the last represents the current position on the tape. (as you say you could further compress this into a single number, but that has the disadvantage that you have to do a lot more processing to work out what it's actually doing). Then any further calculations would be done with other boxes that come after these initial five. It seems to me that this ought to be doable with a finite number of boxes and a finite number of states, but I'm not sure. I know it's a totally pointless and academic question, but it intrigues me strangely. Danny |
|
From: Jon W. <jw...@gm...> - 2009-07-16 21:59:36
|
Fabian Giesen wrote: > versions of both the signal and the filter. But the details don't > matter, since we don't have to write it in terms of convolutions here! > > That was the whole point of my previous mail. There's no point computing > the 5x5 Gaussian convolution at full res only to throw away 3 out of > Oh, yeah, of course. I mis-understood your meaning, because the point was too obvious :-) So, yes, the down-sampling operation is, at the end, a single pass (assuming you do a 2D Gaussian in a single pass), and it samples some number of samples from the source, per output pixel. I guess even considering the "filter THEN discard" implementation is just too alien for those of us who have been actually implementing these things for so long that the details blend with memories of college beer bashes in the distant past... Sincerely, jw -- Revenge is the most pointless and damaging of human desires. |
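To make the single-pass point concrete, a hedged C++ sketch of a 2x Gaussian downsample that gathers source taps directly per output pixel, so the full-resolution filtered image is never materialized. Grayscale floats with clamped borders; the sigma and the 5x5 footprint are arbitrary illustrative choices, not from any particular engine.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Each output pixel gathers a small Gaussian-weighted neighbourhood of source
// pixels centred on its footprint, combining filter and decimation in one pass.
std::vector<float> downsample2xGaussian(const std::vector<float>& src,
                                        int srcW, int srcH, float sigma = 1.0f) {
    int dstW = srcW / 2, dstH = srcH / 2;
    std::vector<float> dst(size_t(dstW) * dstH, 0.0f);
    const int radius = 2;                                  // 5x5 taps per output pixel
    for (int y = 0; y < dstH; ++y) {
        for (int x = 0; x < dstW; ++x) {
            float cx = 2.0f * x + 1.0f;                    // output center in source space
            float cy = 2.0f * y + 1.0f;
            float sum = 0.0f, wsum = 0.0f;
            for (int dy = -radius; dy <= radius; ++dy) {
                for (int dx = -radius; dx <= radius; ++dx) {
                    int sx = std::min(std::max(int(cx) + dx, 0), srcW - 1);
                    int sy = std::min(std::max(int(cy) + dy, 0), srcH - 1);
                    float fx = sx + 0.5f - cx, fy = sy + 0.5f - cy;
                    float w = std::exp(-(fx * fx + fy * fy) / (2.0f * sigma * sigma));
                    sum  += w * src[size_t(sy) * srcW + sx];
                    wsum += w;
                }
            }
            dst[size_t(y) * dstW + x] = sum / wsum;        // renormalize clamped taps
        }
    }
    return dst;
}
```

In practice one would usually split this into separable horizontal and vertical passes; the point here is only that the filter is evaluated at output-pixel positions, never at every source pixel.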