Thread: [Algorithms] approximation to pow(n,x)?
From: Juan L. <re...@gm...> - 2009-11-04 03:45:43
|
Hi guys! I was wondering if there are fast ways to approximate the curve resulting from pow(n,x), where n is in the range [0..1] and x > 0, using only floating point (without strange pointer casts, etc.).

Cheers,
Juan Linietsky |
From: Oscar F. <os...@tr...> - 2009-11-04 11:32:29
|
x^m = exp(m log x), so really you need a fast exp and log function.

Can you Newton-Raphson refine a log and exp calculation? I've never tried... perhaps someone else can help there.

Failing that you could probably use tables for the calculations, though this, obviously, limits the range of powers you can perform. |
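As an illustration of the exp/log route - a minimal sketch, not anything from the thread: compute z = x*log(n), range-reduce against ln 2, evaluate a short series for exp() on the small remainder, and scale by 2^k with ldexpf() so no pointer casts are needed. logf() is still the standard library call here; replacing it with a fast approximation is the harder half of the problem, as discussed further down the thread. The series order and constants are arbitrary choices for the sketch.

#include <math.h>

/* Sketch: pow(n, x) = exp(x * log(n)) using only float math plus
 * ldexpf for the 2^k scaling.  n is assumed to be in (0..1]. */
static float approx_pow(float n, float x)
{
    if (n <= 0.0f) return 0.0f;            /* domain guard */
    float z = x * logf(n);                 /* exponent of e, <= 0 for n in (0..1] */
    const float ln2 = 0.69314718f;
    float kf = floorf(z / ln2 + 0.5f);     /* split z = k*ln2 + r */
    float r  = z - kf * ln2;               /* |r| <= ln2/2 */
    /* 4th-order series for exp(r) on the small remainder */
    float er = 1.0f + r * (1.0f + r * (0.5f + r * (1.0f / 6.0f + r * (1.0f / 24.0f))));
    return ldexpf(er, (int)kf);            /* er * 2^k */
}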
From: Richard F. <ra...@gm...> - 2009-11-04 11:59:26
|
I'm generally having a problem with the definition of the problem. "Curve" to me suggests this is about differentiation, but I feel it's probably not. Not using clever bit ops seems pointless if you're after a lot of speed, and a lookup table might be more expensive because of the memory access. I think we need the problem defined better. Give us some context!

--
fabs();
Just because the world is full of people that think just like you, doesn't mean the other ones can't be right. |
From: Danny K. <dr...@we...> - 2009-11-04 12:50:43
|
> x^m = exp(m log x)
> So really you need a fast exp and log function.
> Failing that you could probably use tables for the calculations though
> this, obviously, limits the range of powers you can perform.

I know nothing about this, but one thing that pops into my head is that exp and log both have a lot of self-similarity features, so I wonder whether that would help.

I once experimented with using a look-up table with very few entries and Catmull-Rom interpolation between them. It was just playing, but I do remember getting surprisingly fast results on sin and cos.

Danny (aware he's out of his depth...) |
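A minimal sketch of the kind of table Danny describes - an illustration, not his original experiment: a handful of samples of the target curve, with Catmull-Rom interpolation between the entries. Note that a table like this bakes in one fixed curve, so for pow(n, x) you would need a table per exponent (or a 2D scheme); the table size and the sampled power here are arbitrary.

#include <math.h>

#define LUT_SIZE 17   /* 16 segments over [0..1], arbitrary */

static float lut[LUT_SIZE];

/* Fill the table with samples of some curve on [0..1], e.g. pow(t, power). */
static void lut_init(float power)
{
    for (int i = 0; i < LUT_SIZE; ++i)
        lut[i] = powf((float)i / (LUT_SIZE - 1), power);
}

/* Catmull-Rom interpolation through four neighbouring table entries. */
static float lut_eval(float t)
{
    if (t <= 0.0f) return lut[0];
    if (t >= 1.0f) return lut[LUT_SIZE - 1];
    float f = t * (LUT_SIZE - 1);
    int   i = (int)f;
    float u = f - (float)i;               /* position within the segment */
    int i0 = (i > 0) ? i - 1 : 0;
    int i2 = (i + 1 < LUT_SIZE) ? i + 1 : LUT_SIZE - 1;
    int i3 = (i + 2 < LUT_SIZE) ? i + 2 : LUT_SIZE - 1;
    float p0 = lut[i0], p1 = lut[i], p2 = lut[i2], p3 = lut[i3];
    return 0.5f * ((2.0f * p1) +
                   (-p0 + p2) * u +
                   (2.0f * p0 - 5.0f * p1 + 4.0f * p2 - p3) * u * u +
                   (-p0 + 3.0f * p1 - 3.0f * p2 + p3) * u * u * u);
}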
From: Robin G. <rob...@gm...> - 2009-11-04 14:32:38
|
Your real problem is that the range of x is unconstrained. Looking at how pow() is calculated:

pow(n,x) = exp(x*log(n))

You've constrained the base to [0..1], but since x is unconstrained the argument to exp() is still unbounded, so you've really gained nothing. The main problem with implementing pow() is that naive code can lose many digits of significance during the calculation if you don't calculate the log and multiply using extended precision, e.g. if you want a 24-bit result you'll need to calculate the intermediate values to 30 or more bits of accuracy. This is done by splitting the problem into a high part and a low part and combining the two at the reconstruction phase. For example, here's an implementation of powf():

http://www.koders.com/c/fidF4B379CC08D80BEE9CD9B65E01302343E03BF4A7.aspx?s=Chebyshev

The exp() is a great function to minimax as it only requires a few terms to approach 24-bit accuracy, but log() is a painfully different story, often requiring table lookups (memory hits) to function well across the full number range.

What are you using this for? If it's lighting or shading, there's no real reason to use a power function on the cosine term to get tighter specular highlights. It's just a shaping function that people use because it's easy to control and AFAICT it has no physical basis in defining a BRDF from the micropolygon point of view.

- Robin Green. |
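To see the kind of significance loss Robin describes, a throwaway harness (not from the thread) can compare the naive all-float expf(x*logf(n)) against double-precision pow(); any rounding in logf(n) is multiplied by x before the exp, so the relative error grows with the exponent.

#include <math.h>
#include <stdio.h>

int main(void)
{
    float worst = 0.0f;
    for (float n = 0.05f; n < 1.0f; n += 0.05f) {
        for (float x = 1.0f; x <= 256.0f; x *= 2.0f) {
            float  naive = expf(x * logf(n));          /* all in float */
            double exact = pow((double)n, (double)x);  /* reference    */
            if (exact > 1e-30) {                        /* skip underflowed cases */
                float rel = (float)fabs((naive - exact) / exact);
                if (rel > worst) {
                    worst = rel;
                    printf("n=%.2f x=%6.1f rel err %.3g\n", n, x, rel);
                }
            }
        }
    }
    printf("worst relative error: %.3g\n", worst);
    return 0;
}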
From: Robin G. <rob...@gm...> - 2009-11-04 14:47:44
|
Lack of coffee made me forget this - if you don't give a monkey's about numerical accuracy and only want power-like curves, go right ahead and use a fast exp() and log() to build your own power function. Details of quick'n'dirty medium-accuracy implementations are available here:

http://www.research.scea.com/gdc2003/fast-math-functions.html

or in Game Programming Gems.

- Robin Green. |
From: Nathaniel H. <na...@io...> - 2009-11-04 15:37:21
|
> What are you using this for? If it's lighting or shading, there's no
> real reason to use a power function on the cosine term to get tighter
> specular highlights. It's just a shaping function that people use
> because it's easy to control and AFAICT it has no physical basis in
> defining a BRDF from the micropolygon point of view.
>
> - Robin Green.

It's true that there is no reason to _exactly_ match a cosine power, and I agree with Robin that for this purpose, any curve which vaguely resembles a cosine power will do.

As a side note, I just wanted to say a few words in defense of the lowly cosine power - it's not completely physically meaningless. If you are using Blinn-Phong (N dot H, which is much to be preferred over original Phong - R dot L), then using a cosine power is equivalent to assuming that the microfacet normal distribution follows a cosine power curve. Now, there is no physical reason to assume that microfacet distributions necessarily follow a cosine power curve, but (except for very low powers) this curve very closely matches one that _does_ have a physical basis - the Beckmann distribution (the one used in the Cook-Torrance BRDF). The match is amazingly close considering that Bui Tuong Phong just eyeballed the function; he didn't do any curve fitting. BTW, Beckmann's behavior for very low powers is interesting - it stops behaving like a Gaussian-ish blob and starts turning inside-out (which makes sense when you look at the definition of the "m" parameter).

Beckmann isn't the last word on microfacet distributions; the EGSR 2007 paper "Microfacet Models for Refraction through Rough Surfaces" (which is a great paper overall and well worth reading for anyone interested in microfacet BRDFs) makes a good case for a different curve, with a more gradual falloff. Which again supports Robin's original point.

Naty Hoffman |
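For anyone who wants to see how close the match is, here is a rough comparison (not from the thread) of the cosine-power lobe against the Beckmann lobe, both normalized to 1 at normal incidence so the constants drop out. The mapping m = sqrt(2/(n+2)) between the Phong exponent and the Beckmann roughness is the commonly quoted one; treat it, and the chosen exponent, as assumptions of the sketch.

#include <math.h>
#include <stdio.h>

/* Lobe shapes normalized to 1 at theta = 0, so normalization factors cancel. */
static double blinn_lobe(double cos_t, double n)
{
    return pow(cos_t, n);
}

static double beckmann_lobe(double cos_t, double m)
{
    double c2   = cos_t * cos_t;
    double tan2 = (1.0 - c2) / c2;
    return exp(-tan2 / (m * m)) / (c2 * c2);
}

int main(void)
{
    const double PI = 3.14159265358979323846;
    double n = 64.0;                   /* arbitrary Phong exponent          */
    double m = sqrt(2.0 / (n + 2.0));  /* assumed mapping to Beckmann m     */
    for (int deg = 0; deg <= 45; deg += 5) {
        double c = cos(deg * PI / 180.0);
        printf("%2d deg: cos^n = %.4f  Beckmann = %.4f\n",
               deg, blinn_lobe(c, n), beckmann_lobe(c, m));
    }
    return 0;
}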
From: Jon W. <jw...@gm...> - 2009-11-04 18:06:53
|
Nathaniel Hoffman wrote:
> lowly cosine power - it's not completely physically meaningless. If you
> are using Blinn-Phong (N dot H, which is much to be preferred over
> original Phong - R dot L), then using a cosine power is equivalent to

Except you already have the reflection vector for your environment mapping, so why not re-use it?

As far as I can tell, cos(half-vector dot) behaves the same as cos(0.5 + reflection-vector dot), so you can make them equivalent by just adjusting the dot product value you put into the power function. Why do you think that the half vector is preferable?

When it comes to approximating the power function for a specular highlight, you can go with a texture lookup -- 0 .. 1 on one axis, and 0 .. 200 on the other, for example. Or you can do the cheapest of the cheap:

float cheap_pow(float x, float n)
{
    x = saturate(1 - (1 - x) * (n * 0.333));
    return x * x;
}

Not very accurate, but very cheap, and still creates a soft-ish diffuse highlight shape, which tends to be slightly narrower towards the edges. It also gets worse below power 5 or so.

Sincerely,

jw

--
Revenge is the most pointless and damaging of human desires. |
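For anyone curious how rough Jon's cheap_pow really is, a quick comparison against powf() - saturate written out for plain C, and the 0.333 factor exactly as posted:

#include <math.h>
#include <stdio.h>

static float saturate(float v) { return v < 0.0f ? 0.0f : (v > 1.0f ? 1.0f : v); }

static float cheap_pow(float x, float n)
{
    x = saturate(1.0f - (1.0f - x) * (n * 0.333f));
    return x * x;
}

int main(void)
{
    float powers[] = { 8.0f, 16.0f, 32.0f };
    for (int p = 0; p < 3; ++p) {
        for (float x = 0.90f; x <= 1.001f; x += 0.02f)
            printf("x=%.2f n=%2.0f  cheap=%.3f  powf=%.3f\n",
                   x, powers[p], cheap_pow(x, powers[p]), powf(x, powers[p]));
        printf("\n");
    }
    return 0;
}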
From: Simon F. <sim...@po...> - 2009-11-05 11:50:28
|
Jon Watte wrote:
> Except you already have the reflection vector for your environment
> mapping, so why not re-use it?

Ignoring any issues of what is more "physically accurate", I always assumed that Blinn introduced his model because if you:

A) assume your view angle doesn't change (i.e. the viewer is an infinite distance away), and
B) your lights are parallel/infinitely far away,

...then the method is much cheaper than the "Phong" method, since the half vector is constant. I think these sorts of assumptions were valid approximations for software rendering, which was the norm at the time. Once these restrictive conditions are lifted, however, it seems to me that the original Phong method is far cheaper.

Simon |
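For what it's worth, the saving Simon describes looks roughly like this in code - a sketch with hypothetical vector helpers, not anyone's renderer: with a directional light and a viewer at infinity, L and V are constants, so H is computed once per frame and the per-point work is a single dot product plus the shaping function.

#include <math.h>

typedef struct { float x, y, z; } Vec3;

static Vec3  v3_add(Vec3 a, Vec3 b) { Vec3 r = { a.x + b.x, a.y + b.y, a.z + b.z }; return r; }
static float v3_dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  v3_norm(Vec3 a)
{
    float s = 1.0f / sqrtf(v3_dot(a, a));
    Vec3 r = { a.x * s, a.y * s, a.z * s };
    return r;
}

/* With directional L and V, H is a per-frame constant. */
static Vec3 half_vector(Vec3 L, Vec3 V) { return v3_norm(v3_add(L, V)); }

/* Per-surface-point work: one dot product and the power-like shaping. */
static float blinn_specular(Vec3 N, Vec3 H, float n)
{
    float d = v3_dot(N, H);
    return d > 0.0f ? powf(d, n) : 0.0f;
}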
From: Juan L. <re...@gm...> - 2009-11-08 16:37:20
|
Hey everyone! Thanks for the answers. I was actually looking more for an attenuation curve function that retains the convex/concave properties of using pow(), although some of the examples are great. |
From: Jeff R. <je...@8m...> - 2009-11-04 18:14:30
|
Not to derail the conversation, but I've never really understood why half vectors are preferable to an actual reflection vector, either in terms of efficiency or realism. I've always just used reflection - am I missing something?

--
Jeff Russell
Engineer, 8monkey Labs
www.8monkeylabs.com |
From: Robin G. <rob...@gm...> - 2009-11-04 20:00:36
|
The difference is in the shape of specular highlights. Where Phong specular highlights at grazing angles are stretched-out moon shapes, the Blinn half-angle highlights retain a more circular shape. Real-world photos of specular surfaces at grazing angles more closely resemble Blinn shapes than Phong, plus the Blinn model has some good physical reasoning behind it to do with reflection from distributions of microfacets.

http://img22.imageshack.us/img22/7/blinn.jpg
http://img526.imageshack.us/img526/758/phong.jpg

- Robin Green. |
From: Alen L. <ale...@cr...> - 2009-11-04 22:39:39
|
And in that spirit I would recommend Schlick's BRDF papers. Besides using the half-vector and deriving a really good-looking physically-based model, Schlick derived some very nice approximations based on division of polynomials instead of the pow() function. That approach is faster, doesn't suffer from as many precision issues, and has another nice property: the parameters can be made to fit the 0..1 domain. I consider that a big plus, since it allows for easy storing of the parameter in a texture.

Such approximations are very useful even for other applications, not just specular lighting. (Which we still don't have any evidence is what the OP needs. ;) )

Alen

--
Best regards,
Alen
mailto:ale...@cr... |
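The kind of rational replacement being referred to is, from memory (so treat the exact form as an assumption), Schlick's t^n ≈ t / (n - n*t + t): exact at t = 0 and t = 1, no transcendentals, and qualitatively the same convex shape Juan asked about, though not numerically close to pow() for mid-range t.

#include <math.h>
#include <stdio.h>

/* Schlick-style rational approximation to pow(t, n) for t in [0..1].
 * Exact at t = 0 and t = 1; one divide, no exp/log. */
static float schlick_pow(float t, float n)
{
    return t / (n - n * t + t);
}

int main(void)
{
    for (float t = 0.0f; t <= 1.001f; t += 0.1f)
        printf("t=%.1f  schlick=%.3f  powf=%.3f\n",
               t, schlick_pow(t, 16.0f), powf(t, 16.0f));
    return 0;
}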
From: Nathaniel H. <na...@io...> - 2009-11-08 17:41:37
|
Robin is correct in that the problem with reflection-vector highlights is the shape (the center of the highlight will be in the same location under both formulations), and that the problem is most noticeable at grazing angles. However, the images he gives don't show the effect as much as a flat surface would. Consider the following examples of real-life highlights:

* The "golden path" formed by the setting sun on the ocean
* Vertical streaks from car headlights on wet streets

And similar cases. Half-angle highlights will give you the correct shape. Reflection-vector highlights will always stay circular - in these cases you will get a large circular blob instead of the narrow vertical streak you would get in real life.

The half-angle formulation is not just more physically correct than the reflection-vector formulation, it is fundamentally more meaningful. This affects things other than highlight shape, for example the correct way to compute the Fresnel factor.

The half-vector comes from microfacet theory. Imagine that the surface is actually a large collection of tiny flat mirrors when viewed under magnification. Recall that a mirror only reflects light in the reflection direction. For given light vector L and view vector V, only mirrors which happen to be angled just right to reflect L into V will matter for the purpose of shading. All other mirrors will be reflecting L into other directions. The intensity of the reflection is proportional to the percentage of mirrors that are angled "just right". To be angled "just right" to reflect L into V, the surface normal of the mirror has to be half-way between them - in other words the microfacet normal needs to be equal to H.

The question "what percentage of mirrors are angled just right" becomes "what percentage of mirrors have a normal equal to H". The way to answer this question is to define a microfacet normal distribution function, or NDF, for the surface. You can plug into this function any example direction, and it will tell you the percentage of microfacets with normals pointing in that direction (I'm glossing over a few mathematical details here). NDFs are defined in the local tangent space of the surface. In isotropic NDFs the only parameter is the elevation angle of the microfacet normal in tangent space - in other words the angle between N and H. Since in most cases surface microstructure results from random processes, the NDF is a Gaussian-ish blob. The cosine raised to a power is simply an approximation of this blob (there should also be a normalization factor, which I don't discuss since this email is already too long).

When you calculate (N.H)^m, you are actually evaluating the NDF for the case of microfacet normal equal to H - in other words calculating the answer to the question "what percentage of microfacets are participating in the reflection of light from L to V". When you calculate (V.R)^m, you aren't calculating anything with a physical meaning.

Understanding this helps with things like Fresnel. The Fresnel factor for a mirror is a function of the angle between the mirror normal and the light vector (or reflection vector). Since all microfacets participating in the reflection have their microfacet normal equal to H, the angle for Fresnel can be found by computing (L dot H) or (V dot H). This is the cosine you should plug into the Schlick Fresnel approximation, for example ((V dot N) is correct for Fresnel applied to an environment map, but not for specular highlights).

I hope this clears up some of the confusion.

(begin shameless plug) There is also a fairly detailed explanation of this in "Real-Time Rendering, 3rd edition". (end shameless plug)

Naty Hoffman |
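As a concrete form of the Fresnel point above - a sketch, not code from the thread: Schlick's approximation F = F0 + (1 - F0)(1 - cos)^5, where the cosine fed in depends on which reflection is being shaded.

#include <math.h>

/* Schlick's Fresnel approximation.  'cosine' is the cosine of the angle
 * between the light (or view) vector and the *relevant* normal:
 *   - (L dot H) == (V dot H) for a microfacet specular highlight
 *   - (N dot V) for a perfect-mirror environment-map lookup
 * f0 is the reflectance at normal incidence. */
static float fresnel_schlick(float f0, float cosine)
{
    float c = 1.0f - cosine;
    return f0 + (1.0f - f0) * c * c * c * c * c;
}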
From: Jon W. <jw...@gm...> - 2009-11-11 04:24:42
|
But that's equally true for the reflection vector! If all the micro-mirrors were perfectly flat, then an infinite specular power would be applied, and you'd get a perfect reflection of the lighting environment -- in fact, this is what environment mapping gives you.

As the mirrors start deviating from the perfectly flat state, the specular power would decrease, and the specular reflection area would grow in size. I don't see how you can say that the half-angle formulation is more meaningful. We're still talking about reflected light. In the perfectly reflected case, clearly the reflection vector is 100% meaningful and accurate, and any other formulation would be less meaningful. I don't see how "meaningfulness" would change as smoothness goes from 100% to 99.9% or 95% or 50%.

I do agree that the math gives you a different assumed microfacet distribution in the case of the reflection formulation versus the half-angle formulation. Both are approximations, of course. However, what I don't get is why the reflection vector approximation is considered so inferior to the more expensive half-angle vector approximation. Does it have anything to do with the space integral of the reflection cone formed by the vector in question spread out by the power function? If so, how?

Sincerely,

jw

--
Americans might object: there is no way we would sacrifice our living standards for the benefit of people in the rest of the world. Nevertheless, whether we get there willingly or not, we shall soon have lower consumption rates, because our present rates are unsustainable. |
From: Nathaniel H. <na...@io...> - 2009-11-11 18:32:12
|
Jon,

It is inarguable that H produces much more realistic results - a simple observation of light streaks on wet roads and similar scenes, with a comparison to renderings of the two formulations, proves this without a doubt. However, there are also good, fundamental, theoretical reasons to prefer H. I seem to be explaining this poorly - I will give it another try, but first, here are some pointers to other explanations:

There is a good diagram illustrating the difference in behavior of the two vectors in Figure 7 of this paper:
http://people.csail.mit.edu/addy/research/ngan05_brdf_eval.pdf

There is also some discussion about it in Appendix A of this paper:
http://graphics.stanford.edu/courses/cs448-05-winter/Schilling-1997-TechRep.pdf

"Real-Time Rendering, 3rd edition" also has some discussion of it on pages 249-251. If you don't have a copy of the book, you can "look inside" at Amazon:
http://www.amazon.com/Real-Time-Rendering-Third-Tomas-Akenine-Moller/dp/1568814240
Click on "search inside this book" and look for "half vector" (in quotes) - you will get a link to page 249.

OK, now I'll have another go. As we both agree, the reflection vector is fundamental for a perfectly flat mirror. Imagine a directional or point light shining on the mirror. There are only visible reflections when V == R(L, N) (the view vector is equal to the reflection of the light vector around the surface normal).

Now how should we treat a surface which is not perfectly flat? A good model (which comes to us from fields outside graphics but has been very successful in graphics) is to treat such a surface as a statistical collection of stochastically-oriented perfect mirrors, each one too tiny to be individually visible. A useful description of such a surface for purposes of rendering is a normal distribution function, or NDF, which gives the statistical distribution of the microfacet normals relative to the overall macroscopic normal.

Given a light direction L and a view direction V, how bright will we observe the surface to be? Let's assume for simplicity that each of these mirrors is 100% reflective at all angles (silver comes close to that). Then it is clear that the brightness is proportional to the percentage of microfacets from which there are visible reflections, in other words those for which V == R(L, N_u) (here I use N_u for the microfacet normal to distinguish it from the overall surface normal N).

It is simple to demonstrate that this is equivalent to N_u == H. Therefore we should "plug" H into the microfacet distribution function, which yields the (N dot H) formulation for isotropic surfaces.

I can think of no similarly-principled way to derive the reflection vector formulation, and none has appeared in the literature.

I hope this has convinced you that the H formulation is superior to R both in terms of realism and theoretical soundness.

Thanks,

Naty Hoffman |
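The "simple to demonstrate" step above can also be checked numerically - a throwaway sketch, not part of the original mail: reflect L about H = normalize(L + V) and confirm that V comes back to floating-point precision.

#include <math.h>
#include <stdio.h>

typedef struct { double x, y, z; } Vec3;

static Vec3   add(Vec3 a, Vec3 b)      { Vec3 r = { a.x + b.x, a.y + b.y, a.z + b.z }; return r; }
static Vec3   scale(Vec3 a, double s)  { Vec3 r = { a.x * s, a.y * s, a.z * s }; return r; }
static double dot(Vec3 a, Vec3 b)      { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3   normalize(Vec3 a)        { return scale(a, 1.0 / sqrt(dot(a, a))); }

/* Reflect direction d about unit normal n: 2*(d.n)*n - d
 * (both vectors pointing away from the surface point). */
static Vec3 reflect_about(Vec3 d, Vec3 n)
{
    return add(scale(n, 2.0 * dot(d, n)), scale(d, -1.0));
}

int main(void)
{
    Vec3 L = normalize((Vec3){  0.3, 0.8, 0.5 });   /* arbitrary unit vectors */
    Vec3 V = normalize((Vec3){ -0.6, 0.7, 0.4 });
    Vec3 H = normalize(add(L, V));
    Vec3 R = reflect_about(L, H);                   /* should equal V */
    printf("R - V = (%g, %g, %g)\n", R.x - V.x, R.y - V.y, R.z - V.z);
    return 0;
}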
From: Jeff R. <je...@8m...> - 2009-11-11 19:25:04
|
The progression Jon mentioned between a rough surface and a smooth one is interesting to me, though. The use of the half-vector does seem to break down for very smooth surfaces. It could be worth considering what that means exactly.

I just finished a game where we used cube maps for specular lighting contributions from the sun & sky. Several cube images were used, each "pre-blurred" to a set specular power around the reflection vector. The half-vector was not an option, I think (every pixel in the map is a light source). Is there a way to get this "half-vector-like" behavior out of a cube lookup? Feels like the answer is no...

--
Jeff Russell
Engineer, 8monkey Labs
www.8monkeylabs.com |
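For context, the selection step in a scheme like the one Jeff describes usually ends up looking something like the following - a guess at the shape of it, not his shipping code: each pre-blurred cube image corresponds to a known specular power, and a fractional level is picked by interpolating in the log of the exponent so the two nearest images can be blended.

#include <math.h>

/* Hypothetical setup: level 0 is the sharpest image, pre-blurred for
 * MAX_POWER; each higher level is blurred for half that power
 * (MAX_POWER/2, MAX_POWER/4, ...). */
#define MAX_POWER  512.0f
#define NUM_LEVELS 8

static float level_from_power(float power)
{
    if (power > MAX_POWER) power = MAX_POWER;
    if (power < 1.0f)      power = 1.0f;
    float lod = log2f(MAX_POWER / power);           /* log spacing of the levels */
    float max_lod = (float)(NUM_LEVELS - 1);
    return lod > max_lod ? max_lod : lod;           /* fractional level for blending */
}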
From: Nathaniel H. <na...@io...> - 2009-11-11 21:17:14
|
Half-vector doesn't break down for very smooth surfaces, you just get a very peaked NDF that is vanishingly small outside a very small solid angle - so there are only reflections when H is very close to N. In the limit you get the perfect mirror behavior, where there are only reflections when N == H (which is the same as saying V == R). In other words, the H formulation's behavior converges cleanly to the R formulation in the limit.

Getting "half-vector"-like behavior out of an environment map (which is inherently R-based) is possible, but more expensive than traditional methods. There are a few approaches in the literature (note that I haven't tried out any of these myself):

1) The paper "Antialiasing of Environment Maps" by Andreas Schilling has an approach that could be implemented using TEXDD (if hardware supports anisotropic filtering of cubemaps) or with manual filtering.

2) The body of work on prefiltered environment maps could be used for this as well, though there are complex fitting procedures to follow. The relevant papers here are "A Unified Approach to Prefiltered Environment Maps" by Kautz et al., "Approximation of Glossy Reflection with Prefiltered Environment Maps" by Kautz and McCool, and (much newer) "Efficient Reflectance and Visibility Approximations for Environment Map Rendering" by Green et al.

3) BRDF importance sampling approaches have also been applied to environment maps. Colbert and Krivanek's work is most relevant here: a SIGGRAPH sketch ("Real-Time Shading with Filtered Importance Sampling") and a GPU Gems 3 chapter ("GPU-Based Importance Sampling").

4) The SIGGRAPH Asia 2009 paper "All-Frequency Rendering of Dynamic, Spatially-Varying Reflectance" by Wang et al. warps reflectance lobes from H-space into L-space, for use with prefiltered environment maps. Their technique can even work with a single lookup (in this case it just shrinks the lobe rather than distorting it, but this is still better than the naive approach, which results in grossly oversized highlights in some cases).

Naty Hoffman |