From: JSeb <jseb@na...> - 2009-11-12 10:18:18

Hello,

I've been investigating for quite a long time in search of a good HDR tonemapping formula. For the time being, we're using the following:

a) A classic global operator defined by:

LinearLDR.rgb = pow(1 - exp(-UserScale * HDR.rgb * AutoScale), UserGamma)
SrgbLDR.rgb = LinearToSrgb(LinearLDR.rgb)   (display gamma conversion)

with AutoScale = clamp(MiddleGray / AverageLuminance, MinScale, MaxScale). This formula works quite well with dynamic adaptation of AverageLuminance.

b) Yet we have some situations where the dynamic range of the HDR input is too high and the tone-mapped output burns out too much (for instance, when coming out of a dark tunnel, the outside locally burns). I strongly suspect a local operator could behave better in case (b). I made an earlier attempt with a home-made multiscale Gaussian blur of intensity (with downsizes/upscales to reach large scales) which drove another "AutoScale" factor, but the results were not very good.

c) I recently implemented GPU histograms, first to visualize the HDR input; then I tried a naive histogram equalization in log space, with very poor results. I'm drawn to histogram mappings such as:
c.1) "Black & White 2" style (minIntens (<1%) => 0, maxIntens (>99%) => 1, midIntens (50%) controlling a gamma), yet mapping minIntens to black is quite odd.
c.2) A smarter histogram equalization (Ward's histogram adjustment).
Yet I have the feeling histograms won't really solve case (b). (pdf: 2007, "Efficient Histogram Generation Using Scattering on GPUs")

d) More recently I implemented an approximation of the Reinhard local operator (with box filters computed via summed-area tables). The results are beginning to be interesting, but I have some artifacts, the biggest being banding and aliasing; I fear these are inherent in the operator... And even though the default values of the alpha and theta parameters work (0.05 and 8), other values (like the 0.025 proposed by the paper) don't work so well. In addition, very high luminance colors don't go to white and instead produce "pure" saturated colors (but this defect may be tweakable). (pdf: 2008, "Real-Time Photographic Local Tone Reproduction Using Summed-Area Tables", Slomp/Oliveira, CGI 2008)

Have you any advice, or do you know of local operators which really work for high-dynamic-range images?

Many thanks,
JSeb
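A minimal sketch of the global operator in (a), written in scalar Python rather than shader code; the default parameter values here (middle gray 0.18, the scale clamp range) are placeholder assumptions, not values from the thread:

```python
import math

def srgb_encode(c):
    # Standard linear-to-sRGB transfer function ("LinearToSrgb" above)
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1.0 / 2.4) - 0.055

def tonemap_global(hdr, avg_luminance, user_scale=1.0, user_gamma=1.0,
                   middle_gray=0.18, min_scale=0.1, max_scale=10.0):
    """Exposure-style global operator: LDR = (1 - exp(-k * HDR))^gamma,
    with k driven by a clamped middle-gray / average-luminance ratio."""
    auto_scale = max(min_scale, min(middle_gray / avg_luminance, max_scale))
    linear_ldr = [(1.0 - math.exp(-user_scale * c * auto_scale)) ** user_gamma
                  for c in hdr]
    return [srgb_encode(c) for c in linear_ldr]
```

The `1 - exp(-x)` curve maps any non-negative HDR value into [0, 1) and never clips, which is why the operator only misbehaves when the scene's range overwhelms the single global scale, as described in (b).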
From: Nathaniel Hoffman <naty@io...> - 2009-11-11 21:17:14

The half-vector doesn't break down for very smooth surfaces; you just get a very peaked NDF that is vanishingly small outside a very small solid angle, so there are only reflections when H is very close to N. In the limit you get the perfect mirror behavior, where there are only reflections when N == H (which is the same as saying V == R). In other words, the H formulation's behavior converges cleanly to the R formulation in the limit.

Getting "half-vector"-like behavior out of an environment map (which is inherently R-based) is possible, but more expensive than traditional methods. There are a few approaches in the literature (note that I haven't tried out any of these myself):

1) The paper "Antialiasing of Environment Maps" by Andreas Schilling has an approach that could be implemented using TEXDD (if hardware supports anisotropic filtering of cubemaps) or with manual filtering.

2) The body of work on prefiltered environment maps could be used for this as well, though there are complex fitting procedures to follow. The relevant papers here are "A Unified Approach to Prefiltered Environment Maps" by Kautz et al., "Approximation of Glossy Reflection with Prefiltered Environment Maps" by Kautz and McCool, and (much newer) "Efficient Reflectance and Visibility Approximations for Environment Map Rendering" by Green et al.

3) BRDF importance sampling approaches have also been applied to environment maps. Colbert and Krivanek's work is most relevant here: a SIGGRAPH sketch ("Real-Time Shading with Filtered Importance Sampling") and a GPU Gems 3 chapter ("GPU-Based Importance Sampling").

4) The SIGGRAPH Asia 2009 paper "All-Frequency Rendering of Dynamic, Spatially-Varying Reflectance" by Wang et al. warps reflectance lobes from H-space into L-space for use with prefiltered environment maps. Their technique can even work with a single lookup (in this case it just shrinks the lobe rather than distorting it, but this is still better than the naive approach, which results in grossly oversized highlights in some cases).

Naty Hoffman
From: Jeff Russell <jeffdr@8m...> - 2009-11-11 19:25:04

The progression Jon mentioned between a rough surface and a smooth one is interesting to me though. The use of the half-vector does break down for very smooth surfaces, it seems. It could be worth considering what that means exactly.

I just finished a game where we used cube maps for specular lighting contributions from the sun and sky. Several cube images were used, each "pre-blurred" to a set specular power around the reflection vector. Half-vector was not an option, I think (every pixel in the map is a light source). Is there a way to get this "half-vector-like" behavior out of a cube lookup? Feels like the answer is no...

--
Jeff Russell
Engineer, 8monkey Labs
http://www.8monkeylabs.com
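A common way to implement the pre-blurred cube map scheme Jeff describes is to store each blur level in successive mips of one cube map and select the mip from the material's specular power. A rough sketch of that selection; the calibration constant and the power-halves-per-mip convention are assumptions for illustration, not details from this thread:

```python
import math

def mip_for_specular_power(power, num_mips, power_at_mip0=2048.0):
    """Map a Phong specular power to a mip level of a pre-blurred cube map.

    Assumes each successive mip was pre-convolved for half the specular
    power of the previous one, with mip 0 calibrated to power_at_mip0
    (an arbitrary choice for this sketch). Returns a fractional level so
    the hardware can blend between two pre-blurred mips."""
    power = max(power, 1.0)
    mip = math.log2(power_at_mip0 / power)
    return min(max(mip, 0.0), num_mips - 1.0)
```

A shader would then sample the cube map along the reflection vector with an explicit LOD (e.g. `textureLod`) set to this value, which is exactly why the result stays R-based: the blur is baked around R, not around H.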
From: Nathaniel Hoffman <naty@io...> - 2009-11-11 18:32:12

Jon,

It is inarguable that H produces much more realistic results: a simple observation of light streaks on wet roads and similar scenes, compared against renderings of the two formulations, proves this without a doubt.

However, there are also good, fundamental, theoretical reasons to prefer H. I seem to be explaining this poorly; I will give it another try, but first, here are some pointers to other explanations:

There is a good diagram illustrating the difference in behavior of the two vectors in Figure 7 of this paper:
http://people.csail.mit.edu/addy/research/ngan05_brdf_eval.pdf

There is also some discussion about it in Appendix A of this paper:
http://graphics.stanford.edu/courses/cs448-05-winter/Schilling1997TechRep.pdf

"Real-Time Rendering, 3rd edition" also has some discussion of it on pages 249-251. If you don't have a copy of the book, you can "look inside" at Amazon:
http://www.amazon.com/Real-Time-Rendering-Third-Tomas-Akenine-Moller/dp/1568814240
Click on "search inside this book" and look for "half vector" (in quotes); you will get a link to page 249.

OK, now I'll have another go. As we both agree, the reflection vector is fundamental for a perfectly flat mirror. Imagine a directional or point light shining on the mirror. There are only visible reflections when V == R(L, N) (the view vector is equal to the reflection of the light vector around the surface normal).

Now how should we treat a surface which is not perfectly flat? A good model (which comes to us from fields outside graphics but has been very successful in graphics) is to treat such a surface as a statistical collection of stochastically oriented perfect mirrors, each one too tiny to be individually visible. A useful description of such a surface for purposes of rendering is a normal distribution function, or NDF, which gives the statistical distribution of the microfacet normals relative to the overall macroscopic normal.

Given a light direction L and a view direction V, how bright will we observe the surface to be? Let's assume for simplicity that each of these mirrors is 100% reflective at all angles (silver comes close to that). Then it is clear that the brightness is proportional to the percentage of microfacets from which there are visible reflections, in other words those for which V == R(L, N_u) (here I use N_u for the microfacet normal, to distinguish it from the overall surface normal N).

It is simple to demonstrate that this is equivalent to N_u == H. Therefore we should "plug" H into the microfacet distribution function, which yields the (N dot H) formulation for isotropic surfaces.

I can think of no similarly principled way to derive the reflection vector formulation, and none has appeared in the literature.

I hope this has convinced you that the H formulation is superior to R, both in terms of realism and theoretical soundness.

Thanks,

Naty Hoffman
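The equivalence Naty asserts (V == R(L, N_u) exactly when N_u == H) is easy to check numerically; a small self-contained sketch with arbitrary example directions:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def reflect(l, n):
    # Reflect direction l about unit normal n: R = 2(N.L)N - L
    d = sum(a * b for a, b in zip(l, n))
    return tuple(2.0 * d * ni - li for li, ni in zip(l, n))

# Arbitrary unit light and view directions
L = normalize((0.3, 0.2, 1.0))
V = normalize((-0.5, 0.1, 1.0))

# The half vector bisects L and V
H = normalize(tuple(a + b for a, b in zip(L, V)))

# A microfacet whose normal equals H reflects L exactly into V
R = reflect(L, H)
assert all(abs(a - b) < 1e-9 for a, b in zip(R, V))
```

The identity falls out algebraically: with unit L and V, 2(H.L)H = (L + V), so reflecting L about H yields exactly V, which is why H is the vector to feed into the NDF.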
From: Ben Yeoh <shuttlecork@gm...> - 2009-11-11 14:05:16

Okay, figured out the issue. PIX saved me on this one. So multiplying the log visibility coefficients by the number of spheres N and fitting to the single-sphere case to derive the AB table actually does work. And all this time I was wondering why it didn't. Turns out that I was also stupidly storing the post-multiplied log coefficients in the sphere visibility lookup table instead of the original log values... Sigh.
From: Ben Yeoh <shuttlecork@gm...> - 2009-11-11 04:26:26

Sorry, hit enter a bit prematurely. Regarding Figure 3, I wanted to also ask how the values for AB were derived for multiple overlapping spheres. It's basically what I'm trying to accomplish...

On Wed, Nov 11, 2009 at 12:04 PM, Ben Yeoh <shuttlecork@...> wrote:
> Hi Peter,
>
> Thanks for chiming in! Okay, let me elaborate a bit.
>
> Basically, I'm trying to avoid doing any factorization to reduce the log magnitude (besides DC isolation) in the pixel shader, for performance reasons. It still looks prohibitively expensive even with 3rd-order SHs and John's code generator. I was hoping that the artifacts arising with just the OL approach with overlapping sphere occluders would be similar to what is shown in the bunny example in Figure 9 in the paper, which looks to me like fainter shadows. I think I'm willing to live with that if that's the case...
>
> So the pure OL approach (fit to a single sphere) with one sphere occluder in the scene works fine. But when I have multiple overlapping spheres, as in the bunny example, there are some pretty objectionable "shadow saturation"/ringing artifacts when those overlapping spheres are "close" to the receiver. In fact, it looks very different compared to the Figure 9 example. I'm guessing that's because the OL approximation in Figure 9 was fit to multiple spheres (i.e., 63 spheres for the bunny?), whereas I was only fitting to a single sphere. Am I right to assume this?
>
> Now, going along this line, I've tried a couple of things to "fit" the AB table to multiple sphere occluders (which didn't work):
>
> 1. Multiply the log visibility coeffs by 2 (for 2 sphere occluders) and use that to find AB instead.
>
> 2. Do a triple product on the sphere visibility coeffs (i.e., F * F) and use that to find the log and AB.
>
> On Wed, Nov 11, 2009 at 2:29 AM, Peter-Pike Sloan <peter_pike_sloan@...> wrote:
>
>> I'm not really sure what you are talking about. You might want to email John directly; I think we just computed a bunch of pairs and then built the table as a function of log magnitude (after DC isolation).
>>
>> You could pose computing the AB texture itself as a least-squares problem, and include training examples that were the result of multiple spheres (instead of just single spheres), but I don't think we did that...
>>
>> Are you referring to Figure 3 in the paper? It is really just showing that the OL pretty much just works as long as the magnitude is small enough...
>>
>> Peter-Pike Sloan
>>
>> Date: Tue, 10 Nov 2009 16:58:04 +0800
>> From: shuttlecork@...
>> To: gdalgorithmslist@...
>> Subject: [Algorithms] More SH exponentiation questions
>>
>> Has anyone implemented SH exponentiation AND managed to approximate the optimal linear (OL) values for 2 or more sphere occluders?
>>
>> The SH paper briefly mentioned/implied that the authors managed to fit the OL approximation to multiple spheres, which still has some artifacts with inaccurate/lighter occlusion, but is still preferable to the single-sphere approximation when dealing with multiple sphere occluders, which is common in most "practical" cases.
>>
>> If anyone has some idea how the fitting to multiple spheres is done, that'll be awesome.
>>
>> _______________________________________________
>> GDAlgorithms-list mailing list
>> GDAlgorithmslist@...
>> https://lists.sourceforge.net/lists/listinfo/gdalgorithmslist
>> Archives: http://sourceforge.net/mailarchive/forum.php?forum_name=gdalgorithmslist
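The reason accumulating blockers in log space works at all is that multiplying visibility functions corresponds to adding their logs. A scalar sketch of that core identity on raw direction samples; this is deliberately not actual SH math (the paper's SH projection, DC isolation, and OL table are all omitted), just the accumulation step Ben and Peter-Pike are discussing:

```python
import math

# Small epsilon instead of 0 for occluded directions, so the log is finite.
EPS = 1e-3

def sphere_visibility(directions, blocker_dir, angular_radius):
    """Binary visibility of a single sphere blocker along unit sample
    directions: EPS inside the blocker's cone, 1.0 outside."""
    vis = []
    for d in directions:
        cos_angle = sum(a * b for a, b in zip(d, blocker_dir))
        vis.append(EPS if cos_angle > math.cos(angular_radius) else 1.0)
    return vis

def accumulate_log(vis_list):
    """Sum per-blocker log visibilities, then exponentiate once.
    exp(sum(log v_i)) == product(v_i), so many blockers accumulate
    with cheap additions and a single exponentiation at the end."""
    n = len(vis_list[0])
    log_sum = [sum(math.log(v[i]) for v in vis_list) for i in range(n)]
    return [math.exp(s) for s in log_sum]
```

In the SH setting the same addition happens on log-visibility coefficient vectors, and the expensive part is the final SH exponentiation, which is what the a/b (OL) table approximates as a function of the log magnitude.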
From: Jon Watte <jwatte@gm...>  20091111 04:24:42

But that's equally true for the reflection vector! If all the micro-mirrors were perfectly flat, then an infinite specular power would be applied, and you'd get a perfect reflection of the lighting environment; in fact, this is what environment mapping gives you. As the mirrors start deviating from the perfectly flat state, the specular power would decrease, and the specular reflection area would grow in size. I don't see how you can say that the half-angle formulation is more meaningful. We're still talking about reflected light. In the perfectly reflected case, clearly the reflection vector is 100% meaningful and accurate, and any other formulation would be less meaningful. I don't see how "meaningfulness" would change as smoothness goes from 100% to 99.9% or 95% or 50%. I do agree that the math gives you a different assumed microfacet distribution in the case of the reflection formulation versus the half-angle formulation. Both are approximations, of course. However, what I don't get is why the reflection vector approximation is considered so inferior to the more expensive half-angle vector approximation. Does it have anything to do with the space integral of the reflection cone formed by the vector in question spread out by the power function? If so, how? Sincerely, jw On Sun, Nov 8, 2009 at 9:41 AM, Nathaniel Hoffman <naty@...> wrote: > The half-angle formulation is not just more physically correct than the > reflection-vector formulation, it is fundamentally more meaningful. ... > The half-vector comes from microfacet theory. Imagine that the surface is > actually a large collection of tiny flat mirrors when viewed under > magnification. Recall that a mirror only reflects light in the reflection > direction. For given light vector L and view vector V, only mirrors which ...
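For readers following the thread, the two formulations under discussion can be compared directly. A minimal NumPy sketch (illustrative only; the function names and exponent m are placeholders, not anyone's actual shader code):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong_term(N, L, V, m):
    # reflection-vector formulation: (R . V)^m
    R = 2.0 * np.dot(N, L) * N - L   # reflect L about the surface normal
    return max(np.dot(R, V), 0.0) ** m

def blinn_term(N, L, V, m):
    # half-angle formulation: (N . H)^m
    H = normalize(L + V)
    return max(np.dot(N, H), 0.0) ** m
```

In the exact mirror configuration (V along R) both terms evaluate to 1; the disagreement Jon and Naty are debating shows up off-axis, and especially at grazing angles, where the Blinn lobe stretches while the Phong lobe stays circular.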
From: Ben Yeoh <shuttlecork@gm...>  20091111 04:04:24

Hi Peter, Thanks for chiming in! Okay, let me elaborate a bit. Basically, I'm trying to avoid doing any factorization to reduce the log magnitude (besides DC isolation) in the pixel shader for performance reasons. It still looks prohibitively expensive even with 3rd order SHs and John's code generator. I was hoping that the artifacts arising with just the OL approach with overlapping sphere occluders would be similar to what is shown in the bunny example in Figure 9 in the paper, which looks to me like fainter shadows. I think I'm willing to live with that if that's the case... So the pure OL approach (fit to a single sphere) with 1 sphere occluder in the scene works fine. But when I have multiple overlapping spheres though as in the bunny example, there's some pretty objectionable "shadow saturation"/ringing artifacts when those overlapping spheres are "close" to the receiver. In fact, it looks very different compared to the Figure 9 example. I'm guessing that's because the OL approximation in Figure 9 was fit to multiple spheres (i.e., 63 spheres for the bunny?), whereas I was only fitting to a single sphere. Am I right to assume this? Now, going along this line, I've tried a couple of things to "fit" the AB table to multiple sphere occluders (which didn't work): 1. Multiply the log visibility coeffs by 2 (for 2 sphere occluders) and use that to find AB instead. 2. Do a triple product on the sphere visibility coeffs (i.e., F * F) and use that to find the log and AB. On Wed, Nov 11, 2009 at 2:29 AM, Peter-Pike Sloan < peter_pike_sloan@...> wrote: > > I'm not really sure what you are talking about. You might want to email > John directly, I think we just computed a bunch of pairs and then built the > table as a function of log magnitude (after DC isolation.) 
> > You could pose computing the ab texture itself as a least squares problem, > and include training examples that were the result of multiple spheres > (instead of just single spheres), but I don't think we did that... > > Are you referring to figure 3 in the paper? It is really just showing the > OL pretty much just works as long as the magnitude is small enough... > > PeterPike Sloan > >  > Date: Tue, 10 Nov 2009 16:58:04 +0800 > From: shuttlecork@... > To: gdalgorithmslist@... > Subject: [Algorithms] More SH exponentiation questions > > Has anyone implemented SH exponentiation AND managed to approximate the > optimal linear (OL) values for 2 or more sphere occluders? > > The SH paper briefly mentioned/implied that the authors managed to fit the > OL approximation to multiple spheres, which still has some artifacts with > inaccurate/lighter occlusion, but is still preferable to the single sphere > approximation when dealing with multiple sphere occluders, which is common > in most "practical" cases. > > If anyone has some idea how the fitting to multiple spheres thing is done  > that'll be awesome. > > > >  > Let Crystal Reports handle the reporting  Free Crystal Reports 2008 30Day > trial. Simplify your report design, integration and deployment  and focus > on > what you do best, core application coding. Discover what's new with > Crystal Reports now. http://p.sf.net/sfu/bobjjuly > _______________________________________________ > GDAlgorithmslist mailing list > GDAlgorithmslist@... > https://lists.sourceforge.net/lists/listinfo/gdalgorithmslist > Archives: > http://sourceforge.net/mailarchive/forum.php?forum_name=gdalgorithmslist > 
From: PeterPike Sloan <peter_pike_sloan@ho...>  20091110 18:30:12

I'm not really sure what you are talking about. You might want to email John directly, I think we just computed a bunch of pairs and then built the table as a function of log magnitude (after DC isolation.) You could pose computing the ab texture itself as a least squares problem, and include training examples that were the result of multiple spheres (instead of just single spheres), but I don't think we did that... Are you referring to figure 3 in the paper? It is really just showing the OL pretty much just works as long as the magnitude is small enough... PeterPike Sloan Date: Tue, 10 Nov 2009 16:58:04 +0800 From: shuttlecork@... To: gdalgorithmslist@... Subject: [Algorithms] More SH exponentiation questions Has anyone implemented SH exponentiation AND managed to approximate the optimal linear (OL) values for 2 or more sphere occluders? The SH paper briefly mentioned/implied that the authors managed to fit the OL approximation to multiple spheres, which still has some artifacts with inaccurate/lighter occlusion, but is still preferable to the single sphere approximation when dealing with multiple sphere occluders, which is common in most "practical" cases. If anyone has some idea how the fitting to multiple spheres thing is done  that'll be awesome. 
From: Ben Yeoh <shuttlecork@gm...>  20091110 08:58:20

Has anyone implemented SH exponentiation AND managed to approximate the optimal linear (OL) values for 2 or more sphere occluders? The SH paper briefly mentioned/implied that the authors managed to fit the OL approximation to multiple spheres, which still has some artifacts with inaccurate/lighter occlusion, but is still preferable to the single sphere approximation when dealing with multiple sphere occluders, which is common in most "practical" cases. If anyone has some idea how the fitting to multiple spheres thing is done  that'll be awesome. 
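For context, the OL ("optimal linear") approximation the thread is discussing reduces, per the SH exponentiation paper, to: isolate the DC term of the log-visibility vector (the DC part exponentiates exactly), then approximate the exponential of the residual as a*1 + b*f̂, with a and b looked up from a table indexed by the residual's magnitude. A rough sketch of that evaluation for an order-3 (9-coefficient) vector; `ab_table` here is a hypothetical stand-in for the paper's fitted lookup, not the real table:

```python
import numpy as np

SQRT_4PI = np.sqrt(4.0 * np.pi)

def sh_exp_ol(f, ab_table):
    """Optimal-linear approximation of exp() applied to an SH log-visibility
    vector f. ab_table(m) -> (a, b) is a placeholder for the fitted table."""
    f_hat = np.array(f, dtype=float)
    # DC isolation: the constant part of f exponentiates exactly.
    dc = f_hat[0]
    f_hat[0] = 0.0
    scale = np.exp(dc / SQRT_4PI)        # f0 * Y00 is the constant f0 / sqrt(4*pi)
    # OL step: exp(f_hat) ~= a * (constant 1) + b * f_hat,
    # with (a, b) a function of the residual log magnitude ||f_hat||.
    a, b = ab_table(np.linalg.norm(f_hat))
    g = b * f_hat
    g[0] += a * SQRT_4PI                  # SH coefficients of the constant function 1
    return scale * g
```

With f = 0 and a = b = 1 this returns the SH vector of the constant function 1, as it should; the interesting part, and the subject of Ben's question, is how the (a, b) table is fitted when the log vector comes from several overlapping spheres rather than one.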
From: Jason Hughes <jhughes@st...>  20091108 22:32:08

Since you're not looking for precision, there's a hacky solution that might send you off in a fruitful direction (or not). You could do a little preprocessing to pack a bunch of spheres inside your level geometry that are equal-sized. These would represent your volume. When you split the geometry by a plane, you simply dot-product each sphere to see if it's on one side or the other. You can trade off performance vs. precision by having more or fewer spheres. The volume of spheres won't be correct, but they're representative of an equal portion. You could figure out the actual volume of the geometry offline, then assign each sphere one share of that amount... The point of using spheres is there are (many) algorithms to pack them in a space, and the dot product to determine side of a splitting plane is cheap. Maybe a bad idea, but it's relatively runtime friendly. :) JH Jason Hughes President Steel Penny Games, Inc. Austin, TX Juan Linietsky wrote: > Well, i was wondering about doing it realtime and without having to > create polygons.. > but it doesn't sound so simple after all. I guess the fastest way to > do it "correctly" is to traverse the BSP and "save" the planes > intersecting the polygon along the way. In the end i guess i'd end up > with a set of convex objects delimited by the planes, but computing > area from that seems a little expensive to me. > > Since what i need to know does not need to be super precise, I was > thinking doing something similar to a Monte Carlo method should be > best, as in.. throwing N random points inside the convex volume i want > to test against the BSP tree, and use them to traverse the BSP. At the > end, the number of points inside vs outside should give me an > approximation of the area taken up by the intersection.. but i was > wondering if there could be a better way to do something like this. 
> > Juan Linietsky > > On Sun, Nov 8, 2009 at 7:21 PM, Alen Ladavac <alenlml@...> wrote: > >> Juan, >> >> There are ways to make a boolean intersection of two objects >> represented by BSP trees and get the resulting volume in form of a >> polygonsoup. It is not trivial, but it not overly complex either. And >> from the resulting polygonsoup you can determine volume or area of >> the intersection. Is that what you need? >> >> Cheers, >> Alen >> >> >> Sunday, November 8, 2009, 5:41:57 PM, you wrote: >> >> >>> Hi! Here's another question about an algorithm i've been wondering >>> since a few days, to implement an idea i had about interior >>> rendering.. >>> >>> Basically, take a BSP tree of a closed, concave shape that encloses an >>> area, and also a convex object (that provides a support function), how >>> could the area of the convex object that is inside the concave object >>> represented by a BSP tree be calculated? >>> Finding if they intersect is easy, but it seems to me that calculating >>> how much of the convex object is inside the BSP tree area is not so >>> simple.. but maybe i'm missing something? >>> Also maybe there is another structure that best fits this problem than >>> a BSP tree? >>> >>> Cheers >>> >>> Juan Linietsky >>> >>>  >>> Let Crystal Reports handle the reporting  Free Crystal Reports 2008 30Day >>> trial. Simplify your report design, integration and deployment  and focus on >>> what you do best, core application coding. Discover what's new with >>> Crystal Reports now. http://p.sf.net/sfu/bobjjuly >>> _______________________________________________ >>> GDAlgorithmslist mailing list >>> GDAlgorithmslist@... >>> https://lists.sourceforge.net/lists/listinfo/gdalgorithmslist >>> Archives: >>> http://sourceforge.net/mailarchive/forum.php?forum_name=gdalgorithmslist >>> >>  >> Best regards, >> Alen mailto:alenlml@... >> >> >> > >  > Let Crystal Reports handle the reporting  Free Crystal Reports 2008 30Day > trial. 
Simplify your report design, integration and deployment  and focus on > what you do best, core application coding. Discover what's new with > Crystal Reports now. http://p.sf.net/sfu/bobjjuly > _______________________________________________ > GDAlgorithmslist mailing list > GDAlgorithmslist@... > https://lists.sourceforge.net/lists/listinfo/gdalgorithmslist > Archives: > http://sourceforge.net/mailarchive/forum.php?forum_name=gdalgorithmslist > > 
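Jason's sphere-packing hack is cheap to sketch. The approximation classifies each sphere by which side of the plane its center falls on (radius ignored) and gives every sphere an equal share of the precomputed volume; all names here are hypothetical:

```python
import numpy as np

def split_sphere_volume(centers, total_volume, n, d):
    """Estimate the volume on each side of the plane n.x + d = 0 by counting
    equal-share sphere centers. Each sphere stands for an equal portion of
    the precomputed total volume; sphere radius is deliberately ignored."""
    side = np.dot(np.asarray(centers, dtype=float), n) + d >= 0.0
    share = total_volume / len(centers)
    return side.sum() * share, (~side).sum() * share
```

The per-split cost is one dot product per sphere, which is the point of the trick; accuracy improves simply by packing more spheres.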
From: Juan Linietsky <reduzio@gm...>  20091108 22:19:57

Well, i was wondering about doing it realtime and without having to create polygons.. but it doesn't sound so simple after all. I guess the fastest way to do it "correctly" is to traverse the BSP and "save" the planes intersecting the polygon along the way. In the end i guess i'd end up with a set of convex objects delimited by the planes, but computing area from that seems a little expensive to me. Since what i need to know does not need to be super precise, I was thinking doing something similar to a Monte Carlo method should be best, as in.. throwing N random points inside the convex volume i want to test against the BSP tree, and use them to traverse the BSP. At the end, the number of points inside vs outside should give me an approximation of the area taken up by the intersection.. but i was wondering if there could be a better way to do something like this. Juan Linietsky On Sun, Nov 8, 2009 at 7:21 PM, Alen Ladavac <alenlml@...> wrote: > Juan, > > There are ways to make a boolean intersection of two objects > represented by BSP trees and get the resulting volume in form of a > polygon soup. It is not trivial, but it is not overly complex either. And > from the resulting polygon soup you can determine volume or area of > the intersection. Is that what you need? > > Cheers, > Alen > > > Sunday, November 8, 2009, 5:41:57 PM, you wrote: > >> Hi! Here's another question about an algorithm i've been wondering >> since a few days, to implement an idea i had about interior >> rendering.. > >> Basically, take a BSP tree of a closed, concave shape that encloses an >> area, and also a convex object (that provides a support function), how >> could the area of the convex object that is inside the concave object >> represented by a BSP tree be calculated? >> Finding if they intersect is easy, but it seems to me that calculating >> how much of the convex object is inside the BSP tree area is not so >> simple.. but maybe i'm missing something? 
>> Also maybe there is another structure that best fits this problem than >> a BSP tree? > >> Cheers > >> Juan Linietsky > >>  >> Let Crystal Reports handle the reporting  Free Crystal Reports 2008 30Day >> trial. Simplify your report design, integration and deployment  and focus on >> what you do best, core application coding. Discover what's new with >> Crystal Reports now. http://p.sf.net/sfu/bobjjuly >> _______________________________________________ >> GDAlgorithmslist mailing list >> GDAlgorithmslist@... >> https://lists.sourceforge.net/lists/listinfo/gdalgorithmslist >> Archives: >> http://sourceforge.net/mailarchive/forum.php?forum_name=gdalgorithmslist > > > >  > Best regards, > Alen mailto:alenlml@... > > 
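Juan's Monte Carlo idea is straightforward to prototype: push each random point down the BSP to a solid/empty leaf and take the hit ratio. A toy sketch (the node layout, with boolean leaves and `(normal, d, front, back)` interior nodes, is made up for illustration):

```python
import numpy as np

def inside(node, p):
    """Traverse a BSP where leaves are booleans (True = solid interior) and
    interior nodes are (n, d, front, back) for the plane n.p + d = 0."""
    while not isinstance(node, bool):
        n, d, front, back = node
        node = front if np.dot(n, p) + d >= 0.0 else back
    return node

def solid_fraction(node, sample_point, n_samples, rng):
    # Fraction of random points from the convex volume that land in solid
    # space; multiply by the convex object's volume for the intersection.
    hits = sum(inside(node, sample_point(rng)) for _ in range(n_samples))
    return hits / n_samples
```

The standard error falls off as 1/sqrt(N), so this is only sensible because, as Juan says, the answer does not need to be precise; stratified or blue-noise samples inside the convex object would tighten the estimate for the same N.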
From: Alen Ladavac <alenlml@cr...>  20091108 21:20:58

Juan, There are ways to make a boolean intersection of two objects represented by BSP trees and get the resulting volume in form of a polygon soup. It is not trivial, but it is not overly complex either. And from the resulting polygon soup you can determine volume or area of the intersection. Is that what you need? Cheers, Alen Sunday, November 8, 2009, 5:41:57 PM, you wrote: > Hi! Here's another question about an algorithm i've been wondering > since a few days, to implement an idea i had about interior > rendering.. > Basically, take a BSP tree of a closed, concave shape that encloses an > area, and also a convex object (that provides a support function), how > could the area of the convex object that is inside the concave object > represented by a BSP tree be calculated? > Finding if they intersect is easy, but it seems to me that calculating > how much of the convex object is inside the BSP tree area is not so > simple.. but maybe i'm missing something? > Also maybe there is another structure that best fits this problem than > a BSP tree? > Cheers > Juan Linietsky  Best regards, Alen mailto:alenlml@... 
From: Nathaniel Hoffman <naty@io...>  20091108 17:41:37

Robin is correct in that the problem with reflection-vector highlights is the shape (the center of the highlight will be in the same location under both formulations), and that the problem is most noticeable at grazing angles. However, the images he gives don't show the effect as much as a flat surface would. Consider the following examples of real-life highlights: * The "golden path" formed by the setting sun on the ocean * Vertical streaks from car headlights on wet streets And similar cases. Half-angle highlights will give you the correct shape. Reflection-vector highlights will always stay circular; in these cases you will get a large circular blob instead of the narrow vertical streak you would get in real life. The half-angle formulation is not just more physically correct than the reflection-vector formulation, it is fundamentally more meaningful. This affects things other than highlight shape, for example the correct way to compute the Fresnel factor. The half-vector comes from microfacet theory. Imagine that the surface is actually a large collection of tiny flat mirrors when viewed under magnification. Recall that a mirror only reflects light in the reflection direction. For given light vector L and view vector V, only mirrors which happen to be angled just right to reflect L into V will matter for the purpose of shading. All other mirrors will be reflecting L into other directions. The intensity of the reflection is proportional to the percentage of mirrors that are angled "just right". To be angled "just right" to reflect L into V, the surface normal of the mirror has to be halfway between them; in other words, the microfacet normal needs to be equal to H. The question "what percentage of mirrors are angled just right" becomes "what percentage of mirrors have a normal equal to H". The way to answer this question is to define a microfacet normal distribution function, or NDF, for the surface. 
You can plug into this function any example direction, and it will tell you the percentage of microfacets with normals pointing in that direction (I'm glossing over a few mathematical details here). NDFs are defined in the local tangent space of the surface. In isotropic NDFs the only parameter is the elevation angle of the microfacet normal in tangent space; in other words, the angle between N and H. Since in most cases surface microstructure results from random processes, the NDF is a Gaussian-ish blob. The cosine raised to a power is simply an approximation of this blob (there should also be a normalization factor, which I don't discuss since this email is already too long). When you calculate (N.H)^m, you are actually evaluating the NDF for the case of microfacet normal equal to H, or in other words calculating the answer to the question "what percentage of microfacets are participating in the reflection of light from L to V". When you calculate (V.R)^m, you aren't calculating anything with a physical meaning. Understanding this helps with things like Fresnel. The Fresnel factor for a mirror is a function of the angle between the mirror normal and the light vector (or reflection vector). Since all microfacets participating in the reflection have their microfacet normal equal to H, the angle for Fresnel can be found by computing (L dot H) or (V dot H). This is the cosine you should plug into the Schlick Fresnel approximation, for example ((V dot N) is correct for Fresnel applied to an environment map, but not for specular highlights). I hope this clears up some of the confusion. (begin shameless plug) There is also a fairly detailed explanation of this in "Real-Time Rendering, 3rd edition" (end shameless plug). Naty Hoffman > The difference is in the shape of specular highlights. Where > Phong specular highlights at grazing angles are stretched-out > moon shapes, the Blinn half-angle highlights retain a more > circular shape. 
Real world photos of specular surfaces at > grazing angles more closely resemble Blinn shapes than Phong, > plus the Blinn model has some good physical reasoning behind > it to do with reflection from distributions of microfacets. > > http://img22.imageshack.us/img22/7/blinn.jpg > http://img526.imageshack.us/img526/758/phong.jpg > >  Robin Green. > > On Wed, Nov 4, 2009 at 10:14 AM, Jeff Russell <jeffdr@...> > wrote: >> Not to derail the conversation, but I've never really understood >> why half vectors are preferable to an actual reflection vector, >> either in terms of efficiency or realism. I've always just used >> reflection, am I missing something? 
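Naty's Fresnel point is easy to get wrong in code, so here is a minimal sketch of a Blinn-Phong specular term with Schlick's Fresnel approximation evaluated with (L dot H), as described above (normalization factor omitted, as in the thread; function and parameter names are mine, not from any particular engine):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def specular(N, L, V, m, f0):
    """Un-normalized Blinn-Phong lobe times Schlick Fresnel.
    f0 is the reflectance at normal incidence."""
    H = normalize(L + V)
    ndf = max(np.dot(N, H), 0.0) ** m   # NDF evaluated at the half-vector
    # Schlick Fresnel uses the angle between L and the *microfacet* normal H,
    # not the geometric normal N (that would be the environment-map case).
    fresnel = f0 + (1.0 - f0) * (1.0 - max(np.dot(L, H), 0.0)) ** 5
    return ndf * fresnel
```

At normal incidence (N = L = V) the term reduces to f0, and it rises toward 1 at grazing angles, which is the behavior the Schlick approximation is meant to capture.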
From: Juan Linietsky <reduzio@gm...>  20091108 16:42:12

Hi! Here's another question about an algorithm i've been wondering since a few days, to implement an idea i had about interior rendering.. Basically, take a BSP tree of a closed, concave shape that encloses an area, and also a convex object (that provides a support function), how could the area of the convex object that is inside the concave object represented by a BSP tree be calculated? Finding if they intersect is easy, but it seems to me that calculating how much of the convex object is inside the BSP tree area is not so simple.. but maybe i'm missing something? Also maybe there is another structure that best fits this problem than a BSP tree? Cheers Juan Linietsky 
From: Juan Linietsky <reduzio@gm...>  20091108 16:37:20

Hey everyone! Thanks for the answers. I was actually looking more for an attenuation curve function that retains the convex/concave properties of using pow(), although some of the examples are great 
From: pete demoreuille <pbd@po...>  20091107 01:01:11

Without getting too OT and talking about D3D...! I was not referring to PRT per se but to the texture-to-vertex resampling method it provides: ID3DXPRTEngine::ResampleBuffer (which could do something smarter than filter the texture around the vert's uv). A smarter, probably ad-hoc method to generate samples and generate/fit per-vertex values is what I am after; I can't afford the compute time to do a proper job sampling (needless to say our function is not analytic, is expensive to compute and is at times hard to analyze) or even fill out and then filter a chart where there is reasonable coverage. I'll rephrase my question a little.. Do folks who do something more clever than straight-up point/supersampling to generate per-vertex data (like fitting based on error between interpolated and reference values, or proper/ad-hoc sampling) get compellingly better results than those after the occasional touch-up by an artist? Or do you just use what comes out of Maya/Max/whatever? Likewise with solving for and using gradients when you want to spend the memory and time (I've had mixed experiences with their cost/benefit at times so don't have a good guess if they'll be worth it). Apologies if this isn't algorithmy enough.. maybe we could analyze the structure of the matrix to solve based on connectivity of the mesh! :) >> gives compelling improvements, etc. Searching on the web/archive didn't >> really find too much (besides D3DX PRT methods that don't say how they >> work, and still require a texture thus an atlas/usable parameterization, >> which I don't really want to deal with). >> > > PRT solves an undersampling problem, but a different one; instead of > figuring out what the best lighting params are for a vertex to > accurately represent a face, it figures out what the best lighting > params are for representing the entire incoming radiance (cosine > convolved, optionally bounced/scattered) at the sample location. 
> > It can be done per vertex, in which case it doesn't need a UV mapping, > but then it can suffer artefacts from interpolation or unfortunate > sample locations again; hence it solves a related but different problem > and wouldn't solve your issue (other than smoothing things out perhaps > and thus being less vulnerable to it). > > Thanks! Pete 
From: Bert Peers <bert@bp...>  20091106 23:46:32

This sounds like a standard sampling problem, so all of the literature with regards to sampling, aliasing, undersampling, filtering and reconstruction could be helpful? (including a thread IIRC about it either here or on SWeng) Ie. that may tell you how many samples you need to be artefact free, how to best position and maybe subdivide them, using gradients for better reconstruction, adding a good filter, etc. > gives compelling improvements, etc. Searching on the web/archive didn't > really find too much (besides D3DX PRT methods that don't say how they > work, and still require a texture thus an atlas/usable parameterization, > which I don't really want to deal with). PRT solves an undersampling problem, but a different one; instead of figuring out what the best lighting params are for a vertex to accurately represent a face, it figures out what the best lighting params are for representing the entire incoming radiance (cosine convolved, optionally bounced/scattered) at the sample location. It can be done per vertex, in which case it doesn't need a UV mapping, but then it can suffer artefacts from interpolation or unfortunate sample locations again; hence it solves a related but different problem and wouldn't solve your issue (other than smoothing things out perhaps and thus being less vulnerable to it). hth, bert 
From: pete demoreuille <pbd@po...>  20091106 23:01:13

Our engine (I presume, like many others) stores some lighting/misc data per mesh vertex that would really be best stored in a texture. This works reasonably well and in itself doesn't bother me terribly, but the fact that we sample the values at the vertex itself does. The artifacts, i.e. the occasional dark tri due to occlusion at the verts being much darker than the average over the face, are something our artists have gotten used to painting out. But I'd hope we could give much better baseline results. So instead of simply sampling at vertices, I was planning on generating samples on each face (probably somewhat uniformly over the outside area of the mesh) and fitting (least squares) for the vertex values. Ie try to minimize the actual deviation of the interpolated values over the face from what they should be, instead of simply ignoring the fact that they'll be interpolated.. And in addition to accuracy, it should let us handle hard edges, double-sided tris and a few more situations more accurately than they are now. I presume people have done this before, so before trying it out I thought I'd ask if anyone had positive results, rules of thumb about how densely data needs to be sampled to generate good results in most cases, if applying a similar scheme to additionally generate gradients wrt u&v gives compelling improvements, etc. Searching on the web/archive didn't really find too much (besides D3DX PRT methods that don't say how they work, and still require a texture thus an atlas/usable parameterization, which I don't really want to deal with). Thanks for any input! Pete 
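The least-squares fit Pete describes can be prototyped directly: each sample on a face contributes one row whose nonzero entries are the barycentric weights of that face's vertices, and the system is solved for the per-vertex values so that the *interpolated* values best match the sampled targets. A sketch (names and the sample format are hypothetical):

```python
import numpy as np

def fit_vertex_values(n_verts, tris, samples):
    """Least-squares per-vertex values from face samples.
    samples: iterable of (tri_index, (w0, w1, w2), target), where the w's are
    barycentric weights of the sample point and target is the value there."""
    samples = list(samples)
    A = np.zeros((len(samples), n_verts))
    b = np.zeros(len(samples))
    for row, (t, bary, target) in enumerate(samples):
        for vert, w in zip(tris[t], bary):
            A[row, vert] += w          # interpolated value = sum_i w_i * x_i
        b[row] = target
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

In practice one would add a small regularization term pulling each vertex toward its point-sampled value, so that vertices touched by few samples stay well-conditioned; hard edges and double-sided tris fall out naturally by splitting vertices before building the system.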
From: Simon Fenney <simon.fenney@po...>  20091105 11:50:28

Jon Watte wrote: > Nathaniel Hoffman wrote: >> lowly cosine power - it's not completely physically meaningless. If >> you are using Blinn-Phong (N dot H, which is much to be preferred >> over original Phong, R dot L), then using a cosine power is >> equivalent to >> > > Except you already have the reflection vector for your environment > mapping, so why not reuse it? > Ignoring any issues on what is more "physically accurate", I always assumed that Blinn introduced his model because if you... A) assume your view angle doesn't change (i.e. viewer is an infinite distance away) and B) your lights are 'parallel/infinitely far away' ...then the method is much cheaper than the "Phong" method. I think these sorts of assumptions are valid approximations for software rendering, which was the norm at the time. Once these restrictive conditions are lifted, however, it seems to me that the original Phong method is far cheaper. Simon 
From: Alen Ladavac <alenlml@cr...> - 2009-11-04 22:39:39

And in that spirit I would recommend Schlick's BRDF papers. Besides using the half-vector and deriving a really good-looking physically-based model, Schlick derived some very nice approximations based on ratios of polynomials instead of the pow() function. That approach is faster, doesn't suffer as much from precision issues, and has another nice property: the parameters can be made to fit the 0..1 domain. I consider that a big plus, since it allows for easy storage of the parameter in a texture. Such approximations are very useful even for other applications, not just specular lighting. (Which we still don't have any evidence is what the OP needs. ;) )

Alen

Wednesday, November 4, 2009, 9:00:19 PM, you wrote:
> The difference is in the shape of specular highlights. Where Phong
> specular highlights at grazing angles are stretched-out moon shapes,
> the Blinn half-angle highlights retain a more circular shape. Real
> world photos of specular surfaces at grazing angles more closely
> resemble Blinn shapes than Phong, plus the Blinn model has some good
> physical reasoning behind it to do with reflection from distributions
> of microfacets.
> http://img22.imageshack.us/img22/7/blinn.jpg
> http://img526.imageshack.us/img526/758/phong.jpg
> - Robin Green.
> On Wed, Nov 4, 2009 at 10:14 AM, Jeff Russell <jeffdr@...> wrote:
>> Not to derail the conversation, but I've never really understood why half
>> vectors are preferable to an actual reflection vector, either in terms of
>> efficiency or realism. I've always just used reflection, am I missing
>> something?
> ------------------------------------------------------------------------
> Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day
> trial. Simplify your report design, integration and deployment - and focus on
> what you do best, core application coding. Discover what's new with
> Crystal Reports now. http://p.sf.net/sfu/bobj-july
> _______________________________________________
> GDAlgorithms-list mailing list
> GDAlgorithms-list@...
> https://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
> Archives:
> http://sourceforge.net/mailarchive/forum.php?forum_name=gdalgorithms-list

--
Best regards,
Alen mailto:alenlml@...
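For reference, the kind of rational approximation Alen is describing can be sketched like this (Python for illustration; this is Schlick's approximation x^n ≈ x / (n - n·x + x) on the [0, 1] domain, here under the hypothetical name schlick_pow):

```python
def schlick_pow(x, n):
    """Schlick's rational approximation of x**n for x in [0, 1].

    Exact at the endpoints (0 -> 0, 1 -> 1), monotonic in between,
    and needs only a multiply-add and a divide instead of a pow().
    """
    return x / (n - n * x + x)
```

The shape is only roughly that of a true power curve, but the division is cheap, there is no log/exp precision cliff near x = 1, and n can be remapped from a 0..1 texture channel.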
From: Robin Green <robin.green@gm...> - 2009-11-04 20:00:36

The difference is in the shape of specular highlights. Where Phong specular highlights at grazing angles are stretched-out moon shapes, the Blinn half-angle highlights retain a more circular shape. Real-world photos of specular surfaces at grazing angles more closely resemble Blinn shapes than Phong, plus the Blinn model has some good physical reasoning behind it to do with reflection from distributions of microfacets.

http://img22.imageshack.us/img22/7/blinn.jpg
http://img526.imageshack.us/img526/758/phong.jpg

- Robin Green.

On Wed, Nov 4, 2009 at 10:14 AM, Jeff Russell <jeffdr@...> wrote:
> Not to derail the conversation, but I've never really understood why half
> vectors are preferable to an actual reflection vector, either in terms of
> efficiency or realism. I've always just used reflection, am I missing
> something?
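One quantitative connection between the two lobes (my own illustration, not from the thread): for a given geometry, if the angle between R and V is theta, the angle between N and H is theta/2, so near the highlight centre a Blinn exponent of roughly 4n matches a Phong exponent of n. A quick numeric check in Python:

```python
import math

def phong_lobe(theta, n):
    """Phong lobe as a function of the angle theta between R and V."""
    return math.cos(theta) ** n

def blinn_lobe(theta, n):
    """Blinn lobe at the same geometry: the N-H angle is theta/2,
    and an exponent of ~4n compensates for the halved angle."""
    return math.cos(theta / 2.0) ** (4 * n)
```

Near theta = 0 the two curves are almost identical; they diverge at grazing angles, which is exactly where the moon-shape versus round-shape difference shows up in the screenshots above.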
From: Jeff Russell <jeffdr@8m...> - 2009-11-04 18:14:30

Not to derail the conversation, but I've never really understood why half vectors are preferable to an actual reflection vector, either in terms of efficiency or realism. I've always just used reflection, am I missing something?

On Wed, Nov 4, 2009 at 12:06 PM, Jon Watte <jwatte@...> wrote:
> Nathaniel Hoffman wrote:
> > lowly cosine power - it's not completely physically meaningless. If you
> > are using Blinn-Phong (N dot H, which is much to be preferred over
> > original Phong - R dot L), then using a cosine power is equivalent to
>
> Except you already have the reflection vector for your environment
> mapping, so why not reuse it?
>
> As far as I can tell, cos(half-vector dot) behaves the same as cos(0.5 +
> reflection-vector dot), so you can make them equivalent by just
> adjusting the dot product value you put into the power function. Why do
> you think that the half vector is preferable?
>
> When it comes to approximating the power function for a specular
> highlight, you can go with a texture lookup - 0 .. 1 on one axis, and 0
> .. 200 on the other, for example. Or you can do the cheapest of the cheap:
>
> float cheap_pow(float x, float n)
> {
>     x = saturate(1 - (1 - x) * (n * 0.333));
>     return x * x;
> }
>
> Not very accurate, but very cheap, and still creates a softish diffuse
> highlight shape, which tends to be slightly narrower towards the edges.
> It also gets worse below power 5 or so.
>
> Sincerely,
>
> jw

--
Jeff Russell
Engineer, 8monkey Labs
http://www.8monkeylabs.com
From: Jon Watte <jwatte@gm...> - 2009-11-04 18:06:53

Nathaniel Hoffman wrote:
> lowly cosine power - it's not completely physically meaningless. If you
> are using Blinn-Phong (N dot H, which is much to be preferred over
> original Phong - R dot L), then using a cosine power is equivalent to

Except you already have the reflection vector for your environment mapping, so why not reuse it?

As far as I can tell, cos(half-vector dot) behaves the same as cos(0.5 + reflection-vector dot), so you can make them equivalent by just adjusting the dot product value you put into the power function. Why do you think that the half vector is preferable?

When it comes to approximating the power function for a specular highlight, you can go with a texture lookup - 0 .. 1 on one axis, and 0 .. 200 on the other, for example. Or you can do the cheapest of the cheap:

float cheap_pow(float x, float n)
{
    x = saturate(1 - (1 - x) * (n * 0.333));
    return x * x;
}

Not very accurate, but very cheap, and still creates a softish diffuse highlight shape, which tends to be slightly narrower towards the edges. It also gets worse below power 5 or so.

Sincerely,

jw

--
Revenge is the most pointless and damaging of human desires.
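A quick way to sanity-check how rough the approximation is (a direct Python translation of the HLSL above, with a small test harness of my own, not from the thread):

```python
def saturate(x):
    """Clamp to [0, 1], like the HLSL intrinsic of the same name."""
    return min(1.0, max(0.0, x))

def cheap_pow(x, n):
    """Direct Python translation of the HLSL cheap_pow above."""
    x = saturate(1.0 - (1.0 - x) * (n * 0.333))
    return x * x

def worst_error(n, steps=200):
    """Largest deviation from the true power function over [0, 1]."""
    return max(abs(cheap_pow(i / steps, n) - (i / steps) ** n)
               for i in range(steps + 1))
```

Running worst_error over typical specular exponents shows the approximation is exact at 0 and 1 and off by roughly a tenth in between for mid-range powers, consistent with Jon's "not very accurate, but very cheap" assessment.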