gdalgorithms-list Mailing List for Game Dev Algorithms (Page 21)
From: Juan L. <re...@gm...> - 2009-11-08 16:37:20
|
Hey everyone! Thanks for the answers. I was actually looking more for an attenuation curve function that retains the convex/concave properties of using pow(), although some of the examples are great. |
|
From: pete d. <pb...@po...> - 2009-11-07 01:01:11
|
Without getting too OT and talking about D3D...! I was not referring to PRT per se but to the texture->vertex resampling method it provides: ID3DXPRTEngine::ResampleBuffer (which could do something smarter than filtering the texture around the vert's UV). A smarter, probably ad-hoc method to generate samples and generate/fit per-vertex values is what I am after -- I can't afford the compute time to do a proper job sampling (needless to say our function is not analytic, is expensive to compute and is at times hard to analyze) or even fill out and then filter a chart where there is reasonable coverage.

I'll rephrase my question a little.. Do folks who do something more clever than straight-up point/supersampling to generate per-vertex data (like fitting based on error between interpolated and reference values, or proper/ad-hoc sampling) get compellingly better results than those after the occasional touch-up by an artist? Or do you just use what comes out of Maya/Max/whatever? Likewise with solving for and using gradients when you want to spend the memory and time (I've had mixed experiences with their cost/benefit at times, so I don't have a good guess whether they'll be worth it).

Apologies if this isn't algorithmy enough.. maybe we could analyze the structure of the matrix to solve based on connectivity of the mesh! :)

>> gives compelling improvements, etc. Searching on the web/archive didn't
>> really find too much (besides D3DX PRT methods that don't say how they
>> work, and still require a texture thus an atlas/usable parameterization,
>> which I don't really want to deal with).
>
> PRT solves an undersampling problem, but a different one; instead of
> figuring out what the best lighting params are for a vertex to
> accurately represent a face, it figures out what the best lighting
> params are for representing the entire incoming radiance (cosine
> convolved, optionally bounced/scattered) at the sample location.
>
> It can be done per vertex, in which case it doesn't need a UV mapping,
> but then it can suffer artefacts from interpolation or unfortunate
> sample locations again; hence it solves a related but different problem
> and wouldn't solve your issue (other than smoothing things out perhaps
> and thus being less vulnerable to it).

Thanks! Pete
|
From: Bert P. <be...@bp...> - 2009-11-06 23:46:32
|
This sounds like a standard sampling problem, so all of the literature with regards to sampling, aliasing, undersampling, filtering and reconstruction could be helpful? (including a thread IIRC about it either here or on SWeng) I.e. that may tell you how many samples you need to be artefact free, how to best position and maybe subdivide them, using gradients for better reconstruction, adding a good filter, etc.

> gives compelling improvements, etc. Searching on the web/archive didn't
> really find too much (besides D3DX PRT methods that don't say how they
> work, and still require a texture thus an atlas/usable parameterization,
> which I don't really want to deal with).

PRT solves an undersampling problem, but a different one; instead of figuring out what the best lighting params are for a vertex to accurately represent a face, it figures out what the best lighting params are for representing the entire incoming radiance (cosine convolved, optionally bounced/scattered) at the sample location.

It can be done per vertex, in which case it doesn't need a UV mapping, but then it can suffer artefacts from interpolation or unfortunate sample locations again; hence it solves a related but different problem and wouldn't solve your issue (other than smoothing things out perhaps and thus being less vulnerable to it).

hth, bert
|
From: pete d. <pb...@po...> - 2009-11-06 23:01:13
|
Our engine (I presume, like many others) stores some lighting/misc data per mesh vertex that would really be best stored in a texture. This works reasonably well and in itself doesn't bother me terribly, but the fact that we sample the values at the vertex itself does. The artifacts, i.e. the occasional dark tri due to occlusion at the verts being much darker than the average over the face, are something our artists have gotten used to painting out. But I'd hope we could give much better baseline results.

So instead of simply sampling at vertices, I was planning on generating samples on each face (probably somewhat uniformly over the outside area of the mesh) and fitting (least squares) for the vertex values. I.e. try to minimize the actual deviation of the interpolated values over the face from what they should be, instead of simply ignoring the fact that they'll be interpolated. And in addition to accuracy, it should let us handle hard edges, double-sided tris and a few more situations more accurately than they are now.

I presume people have done this before, so before trying it out I thought I'd ask if anyone had positive results, rules of thumb about how densely data needs to be sampled to generate good results in most cases, whether applying a similar scheme to additionally generate gradients w.r.t. u&v gives compelling improvements, etc. Searching on the web/archive didn't really find too much (besides D3DX PRT methods that don't say how they work, and still require a texture thus an atlas/usable parameterization, which I don't really want to deal with).

Thanks for any input! Pete
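[For one triangle, the least-squares fit Pete outlines reduces to a 3x3 normal-equation solve. The sketch below is purely illustrative (the Sample type, weights and function names are invented, not from the thread); a real tool would fit all faces sharing each vertex in one global system rather than per triangle.]

#include <stddef.h>

typedef struct { float w[3]; float value; } Sample;   /* barycentric weights + reference value */

/* Solve the 3x3 system M*x = r by Cramer's rule. Returns 0 if (near) singular. */
static int solve3x3(float M[3][3], const float r[3], float x[3])
{
    float det =
        M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1]) -
        M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0]) +
        M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]);
    if (det > -1e-8f && det < 1e-8f) return 0;
    for (int c = 0; c < 3; ++c) {
        float T[3][3];
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                T[i][j] = (j == c) ? r[i] : M[i][j];
        x[c] = (T[0][0]*(T[1][1]*T[2][2] - T[1][2]*T[2][1]) -
                T[0][1]*(T[1][0]*T[2][2] - T[1][2]*T[2][0]) +
                T[0][2]*(T[1][0]*T[2][1] - T[1][1]*T[2][0])) / det;
    }
    return 1;
}

/* Fit three vertex values so that barycentric interpolation best matches the
 * sampled reference values over the face, in the least-squares sense:
 * solve (A^T A) v = A^T b where each row of A is a sample's barycentric weights. */
int fit_triangle_vertex_values(const Sample *s, size_t n, float vtx[3])
{
    float AtA[3][3] = {{0}}, Atb[3] = {0};
    for (size_t k = 0; k < n; ++k) {
        for (int i = 0; i < 3; ++i) {
            Atb[i] += s[k].w[i] * s[k].value;
            for (int j = 0; j < 3; ++j)
                AtA[i][j] += s[k].w[i] * s[k].w[j];
        }
    }
    return solve3x3(AtA, Atb, vtx);   /* caller falls back to plain vertex sampling on failure */
}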
|
From: Simon F. <sim...@po...> - 2009-11-05 11:50:28
|
Jon Watte wrote:
> Nathaniel Hoffman wrote:
>> lowly cosine power - it's not completely physically meaningless. If
>> you are using Blinn-Phong (N dot H, which is much to be preferred
>> over original Phong - R dot L), then using a cosine power is
>> equivalent to
>
> Except you already have the reflection vector for your environment
> mapping, so why not re-use it?

Ignoring any issues of what is more "physically accurate", I always assumed that Blinn introduced his model because if you..
A) assume your view angle didn't change (i.e. viewer is an infinite distance away) and
B) your lights are 'parallel/infinitely far away'
...then the method is much cheaper than the "Phong" method. I think these sorts of assumptions are valid approximations for software rendering, which was the norm at the time. Once these restrictive conditions are lifted, however, it seems to me that the original Phong method is far cheaper.

Simon
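[Simon's cost argument in code form, with invented helpers for illustration: under those two assumptions L and V are both constant, so the half-vector H can be computed once per light per frame, while Phong's reflection vector depends on N and has to be rebuilt at every shaded point.]

#include <math.h>

typedef struct { float x, y, z; } Vec3;

static Vec3  v_add(Vec3 a, Vec3 b)   { Vec3 r = { a.x+b.x, a.y+b.y, a.z+b.z }; return r; }
static Vec3  v_scale(Vec3 a, float s){ Vec3 r = { a.x*s, a.y*s, a.z*s };       return r; }
static float v_dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  v_norm(Vec3 a)          { return v_scale(a, 1.0f / sqrtf(v_dot(a, a))); }

/* Once per light per frame (L and V assumed constant): */
Vec3 blinn_half_vector(Vec3 L, Vec3 V) { return v_norm(v_add(L, V)); }

/* Per shaded point: one dot + pow for Blinn ... */
float blinn_spec(Vec3 N, Vec3 H, float n) { return powf(fmaxf(v_dot(N, H), 0.0f), n); }

/* ... versus rebuilding R = 2(N.L)N - L from N at every point for Phong. */
float phong_spec(Vec3 N, Vec3 L, Vec3 V, float n)
{
    Vec3 R = v_add(v_scale(N, 2.0f * v_dot(N, L)), v_scale(L, -1.0f));
    return powf(fmaxf(v_dot(R, V), 0.0f), n);
}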
|
From: Alen L. <ale...@cr...> - 2009-11-04 22:39:39
|
And in that spirit I would recommend Schlick's BRDF papers. Besides using the half-vector and deriving a really good-looking physically-based model, Schlick derived some very nice approximations based on a division of polynomials instead of the pow() function. That approach is faster, doesn't suffer from as many precision issues, and has another nice property: the parameters can be made to fit the 0..1 domain. I consider that a big plus, since it allows for easy storing of the parameter in a texture. Such approximations are very useful even for other applications, not just specular lighting. (Which we still don't have any evidence is what the OP needs. ;) )

Alen

Wednesday, November 4, 2009, 9:00:19 PM, you wrote:
> The difference is in the shape of specular highlights. Where Phong
> specular highlights at grazing angles are stretched out moon shapes,
> the Blinn half-angle highlights retain a more circular shape. Real
> world photos of specular surfaces at grazing angles more closely
> resemble Blinn shapes than Phong, plus the Blinn model has some good
> physical reasoning behind it to do with reflection from distributions
> of microfacets.
> http://img22.imageshack.us/img22/7/blinn.jpg
> http://img526.imageshack.us/img526/758/phong.jpg
> - Robin Green.
> On Wed, Nov 4, 2009 at 10:14 AM, Jeff Russell <je...@8m...> wrote:
>> Not to derail the conversation, but I've never really understood why half
>> vectors are preferable to an actual reflection vector, either in terms of
>> efficiency or realism. I've always just used reflection, am I missing
>> something?

--
Best regards, Alen mailto:ale...@cr...
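[The rational approximation Alen alludes to is usually quoted, e.g. from Schlick's Graphics Gems article, as t^n ~ t / (n - n*t + t). A quick illustrative sketch of that form, assuming that quoted expression; note it matches pow exactly only at t = 0 and t = 1 and is a shaping curve with similar character in between, not a pointwise match.]

#include <math.h>
#include <stdio.h>

/* Schlick-style rational replacement for pow(t, n), t in [0,1], n >= 1.
 * Monotonic, exact at the endpoints, and needs only one divide. */
static float schlick_pow(float t, float n)
{
    return t / (n - n * t + t);
}

int main(void)
{
    /* Quick comparison against the real thing at a few sample points. */
    for (float t = 0.0f; t <= 1.001f; t += 0.25f)
        printf("t=%.2f  pow=%.4f  schlick=%.4f\n", t, powf(t, 16.0f), schlick_pow(t, 16.0f));
    return 0;
}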
|
From: Robin G. <rob...@gm...> - 2009-11-04 20:00:36
|
The difference is in the shape of specular highlights. Where Phong specular highlights at grazing angles are stretched out moon shapes, the Blinn half-angle highlights retain a more circular shape. Real world photos of specular surfaces at grazing angles more closely resemble Blinn shapes than Phong, plus the Blinn model has some good physical reasoning behind it to do with reflection from distributions of microfacets.

http://img22.imageshack.us/img22/7/blinn.jpg
http://img526.imageshack.us/img526/758/phong.jpg

- Robin Green.

On Wed, Nov 4, 2009 at 10:14 AM, Jeff Russell <je...@8m...> wrote:
> Not to derail the conversation, but I've never really understood why half
> vectors are preferable to an actual reflection vector, either in terms of
> efficiency or realism. I've always just used reflection, am I missing
> something?
|
From: Jeff R. <je...@8m...> - 2009-11-04 18:14:30
|
Not to derail the conversation, but I've never really understood why half
vectors are preferable to an actual reflection vector, either in terms of
efficiency or realism. I've always just used reflection, am I missing
something?
On Wed, Nov 4, 2009 at 12:06 PM, Jon Watte <jw...@gm...> wrote:
> Nathaniel Hoffman wrote:
> > lowly cosine power - it's not completely physically meaningless. If you
> > are using Blinn-Phong (N dot H, which is much to be preferred over
> > original Phong - R dot L), then using a cosine power is equivalent to
> >
>
> Except you already have the reflection vector for your environment
> mapping, so why not re-use it?
>
> As far as I can tell, cos(half-vector dot) behaves the same as cos(0.5 +
> reflection-vector dot), so you can make them equivalent by just
> adjusting the dot product value you put into the power function. Why do
> you think that the half vector is preferable?
>
> When it comes to approximating the power function for a specular
> highlight, you can go with a texture lookup -- 0 .. 1 on one axis, and 0
> .. 200 on the other, for example. Or you can do the cheapest of the cheap:
>
> float cheap_pow(float x, float n)
> {
> x = saturate(1 - (1 - x ) * (n * 0.333));
> return x * x;
> }
>
> Not very accurate, but very cheap, and still creates a soft-ish diffuse
> highlight shape, which tends to be slightly narrower towards the edges.
> It also gets worse below power 5 or so.
>
> Sincerely,
>
> jw
>
>
> --
>
> Revenge is the most pointless and damaging of human desires.
>
>
>
--
Jeff Russell
Engineer, 8monkey Labs
www.8monkeylabs.com
|
|
From: Jon W. <jw...@gm...> - 2009-11-04 18:06:53
|
Nathaniel Hoffman wrote:
> lowly cosine power - it's not completely physically meaningless. If you
> are using Blinn-Phong (N dot H, which is much to be preferred over
> original Phong - R dot L), then using a cosine power is equivalent to
>
Except you already have the reflection vector for your environment
mapping, so why not re-use it?
As far as I can tell, cos(half-vector dot) behaves the same as cos(0.5 +
reflection-vector dot), so you can make them equivalent by just
adjusting the dot product value you put into the power function. Why do
you think that the half vector is preferable?
When it comes to approximating the power function for a specular
highlight, you can go with a texture lookup -- 0 .. 1 on one axis, and 0
.. 200 on the other, for example. Or you can do the cheapest of the cheap:
float cheap_pow(float x, float n)
{
    x = saturate(1 - (1 - x) * (n * 0.333));
    return x * x;
}
Not very accurate, but very cheap, and still creates a soft-ish diffuse
highlight shape, which tends to be slightly narrower towards the edges.
It also gets worse below power 5 or so.
Sincerely,
jw
--
Revenge is the most pointless and damaging of human desires.
|
|
From: Nathaniel H. <na...@io...> - 2009-11-04 15:37:21
|
> What are you using this for? If it's lighting or shading, there's no
> real reason to use a power function on the cosine term to get tighter
> specular highlights. It's just a shaping function that people use
> because it's easy to control and AFAICT it has no physical basis in
> defining a BRDF from the micropolygon point of view.
>
> - Robin Green.

It's true that there is no reason to _exactly_ match a cosine power, and I agree with Robin that for this purpose, any curve which vaguely resembles a cosine power will do. As a side note, I just wanted to say a few words in defense of the lowly cosine power - it's not completely physically meaningless. If you are using Blinn-Phong (N dot H, which is much to be preferred over original Phong - R dot L), then using a cosine power is equivalent to assuming that the microfacet normal distribution follows a cosine power curve. Now, there is no physical reason to assume that microfacet distributions necessarily follow a cosine power curve, but (except for very low powers) this curve very closely matches one that _does_ have a physical basis - the Beckmann distribution (the one used in the Cook-Torrance BRDF). The match is amazingly close considering that Bui-Tuong Phong just eyeballed the function; he didn't do any curve fitting.

BTW, Beckmann's behavior for very low powers is interesting - it stops behaving like a Gaussianish blob and starts turning inside-out (which makes sense when you look at the definition of the "m" parameter).

Beckmann isn't the last word on microfacet distributions; the EGSR 2007 paper "Microfacet Models for Refraction through Rough Surfaces" (which is a great paper overall and well worth reading for anyone interested in microfacet BRDFs) makes a good case for a different curve, with a more gradual falloff. Which again supports Robin's original point.

Naty Hoffman
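[The closeness Naty describes can be checked numerically with the commonly quoted conversion m = sqrt(2/(n+2)) between Blinn-Phong exponent n and Beckmann roughness m; that mapping and the normalized NDF forms below are the standard textbook ones, not something taken from this thread.]

#include <math.h>
#include <stdio.h>

static const double PI = 3.14159265358979323846;

static double blinn_phong_ndf(double cos_h, double n)   /* normalized (n+2)/(2*pi) * cos^n */
{
    return (n + 2.0) / (2.0 * PI) * pow(cos_h, n);
}

static double beckmann_ndf(double cos_h, double m)      /* exp(-tan^2/m^2) / (pi m^2 cos^4) */
{
    double c2 = cos_h * cos_h;
    double t2 = (1.0 - c2) / c2;
    return exp(-t2 / (m * m)) / (PI * m * m * c2 * c2);
}

int main(void)
{
    double n = 50.0;                      /* cosine power */
    double m = sqrt(2.0 / (n + 2.0));     /* matched Beckmann roughness */
    for (double deg = 0.0; deg <= 30.0; deg += 5.0) {
        double c = cos(deg * PI / 180.0);
        printf("theta=%4.1f  blinn=%8.4f  beckmann=%8.4f\n",
               deg, blinn_phong_ndf(c, n), beckmann_ndf(c, m));
    }
    return 0;
}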
|
From: Robin G. <rob...@gm...> - 2009-11-04 14:47:44
|
Lack of coffee made me forget this - if you don't give a monkey's
about numerical accuracy and only want power-like curves, go right
ahead and use a fast exp() and log() to build your own power function.
Details of quick'n'dirty medium accuracy implementations are available
here:
http://www.research.scea.com/gdc2003/fast-math-functions.html
or in Game Programming Gems.
- Robin Green.
On Wed, Nov 4, 2009 at 6:32 AM, Robin Green <rob...@gm...> wrote:
> Your real problem is that the range of x is unconstrained. Looking at
> how pow() is calculated:
>
> pow(n,x) = exp(log(x)*n)
>
> You've simplified the exponent to exp(y+0) .. exp(y+1), but as y is
> unconstrained you've really gained nothing. The main problem with
> implementing pow() is that naive code can lose many digits of
> significance during the calculation if you don't calculate the log and
> multiply using extended precision, e.g. if you want a 24-bit result
> you'll need to calculate the intermediate values to 30 or more bits of
> accuracy. This is done by splitting the problem into a high part and a
> low part and combining the two at the reconstruction phase. For
> example, here's an implementation of powf():
>
> http://www.koders.com/c/fidF4B379CC08D80BEE9CD9B65E01302343E03BF4A7.aspx?s=Chebyshev
>
> The exp() is a great function to minimax as it only requires a few
> terms to approach 24-bit accuracy, but log() is a painfully different
> story often requiring table lookups (memory hits) to function well
> across the full number range.
>
> What are you using this for? If it's lighting or shading, there's no
> real reason to use a power function on the cosine term to get tighter
> specular highlights. It's just a shaping function that people use
> because it's easy to control and AFAICT it has no physical basis in
> defining a BRDF from the micropolygon point of view.
>
> - Robin Green.
>
> On Tue, Nov 3, 2009 at 7:45 PM, Juan Linietsky <re...@gm...> wrote:
>> Hi guys! I was wondering if there are fast ways to approximate the
>> curve resulting from pow(n,x) where n is in range [0..1] and x > 0
>> using only floating point (without strange pointer casts/etc)..
>>
>> Cheers
>>
>> Juan Linietsky
>
|
|
From: Robin G. <rob...@gm...> - 2009-11-04 14:32:38
|
Your real problem is that the range of x is unconstrained. Looking at how pow() is calculated:

pow(n,x) = exp(log(x)*n)

You've simplified the exponent to exp(y+0) .. exp(y+1), but as y is unconstrained you've really gained nothing. The main problem with implementing pow() is that naive code can lose many digits of significance during the calculation if you don't calculate the log and multiply using extended precision, e.g. if you want a 24-bit result you'll need to calculate the intermediate values to 30 or more bits of accuracy. This is done by splitting the problem into a high part and a low part and combining the two at the reconstruction phase. For example, here's an implementation of powf():

http://www.koders.com/c/fidF4B379CC08D80BEE9CD9B65E01302343E03BF4A7.aspx?s=Chebyshev

The exp() is a great function to minimax as it only requires a few terms to approach 24-bit accuracy, but log() is a painfully different story, often requiring table lookups (memory hits) to function well across the full number range.

What are you using this for? If it's lighting or shading, there's no real reason to use a power function on the cosine term to get tighter specular highlights. It's just a shaping function that people use because it's easy to control and AFAICT it has no physical basis in defining a BRDF from the micropolygon point of view.

- Robin Green.

On Tue, Nov 3, 2009 at 7:45 PM, Juan Linietsky <re...@gm...> wrote:
> Hi guys! I was wondering if there are fast ways to approximate the
> curve resulting from pow(n,x) where n is in range [0..1] and x > 0
> using only floating point (without strange pointer casts/etc)..
>
> Cheers
>
> Juan Linietsky
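[The exp/log route can be sketched without any pointer casts by using ldexpf for the power-of-two part; this is purely illustrative, log2f is left to the library here since, as Robin notes, the log is the genuinely hard part, and the truncated series gives roughly 0.6% worst-case error.]

#include <math.h>
#include <stdio.h>

static float exp2_frac(float f)          /* 2^f for f in [0,1), truncated series, ~0.6% worst case */
{
    return 1.0f + f * (0.693147f + f * (0.240227f + f * 0.055504f));
}

static float approx_pow(float n, float x)   /* n in (0,1], x > 0 */
{
    float y = x * log2f(n);              /* n^x = 2^(x * log2 n), y <= 0 here */
    float i = floorf(y);
    return ldexpf(exp2_frac(y - i), (int)i);   /* 2^frac scaled by 2^int, no bit tricks */
}

int main(void)
{
    printf("pow(0.8, 13) = %g\n", powf(0.8f, 13.0f));
    printf("approx       = %g\n", approx_pow(0.8f, 13.0f));
    return 0;
}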
|
From: Danny K. <dr...@we...> - 2009-11-04 12:50:43
|
> x^m = exp(m log x)
>
> So really you need a fast exp and log function.
>
> Can you newton-raphson refine a log and exp calculation?
> I've never tried ... perhaps someone else can help there.
>
> Failing that you could probably use tables for the
> calculations though this, obviously, limits the range of
> powers you can perform.

I know nothing about this, but one thing that pops into my head is that exp and log both have a lot of self-similarity features, so I wonder whether that would help. I once experimented with using a look-up table with very few entries and Catmull-Rom interpolation between them. It was just playing, but I do remember getting surprisingly fast results on sin and cos.

Danny (aware he's out of his depth...)
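[Danny's small-table-plus-Catmull-Rom idea is easy to try on 2^f over [0,1]; a rough sketch, with the table size and layout chosen arbitrarily for illustration.]

#include <math.h>
#include <stdio.h>

#define LUT_N 8
static float g_exp2_lut[LUT_N + 3];      /* one extra sample each side so the spline always has 4 points */

static void exp2_lut_init(void)
{
    for (int i = 0; i < LUT_N + 3; ++i)
        g_exp2_lut[i] = exp2f((float)(i - 1) / LUT_N);
}

static float catmull_rom(float p0, float p1, float p2, float p3, float t)
{
    return 0.5f * ((2.0f * p1) +
                   (-p0 + p2) * t +
                   (2.0f * p0 - 5.0f * p1 + 4.0f * p2 - p3) * t * t +
                   (-p0 + 3.0f * p1 - 3.0f * p2 + p3) * t * t * t);
}

static float exp2_approx(float f)        /* 2^f for f in [0,1] via the table */
{
    float s = f * LUT_N;
    int   i = (int)s;
    if (i > LUT_N - 1) i = LUT_N - 1;    /* clamp so f = 1.0 stays in range */
    float t = s - (float)i;
    return catmull_rom(g_exp2_lut[i], g_exp2_lut[i + 1],
                       g_exp2_lut[i + 2], g_exp2_lut[i + 3], t);
}

int main(void)
{
    exp2_lut_init();
    for (float f = 0.0f; f <= 1.0f; f += 0.125f)
        printf("f=%.3f  exact=%.6f  approx=%.6f\n", f, exp2f(f), exp2_approx(f));
    return 0;
}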
|
From: Richard F. <ra...@gm...> - 2009-11-04 11:59:26
|
I'm generally having a problem with the definition of the problem. Curve for me implies that this is a differentiation, but I feel that it's probably not. Not using clever bit ops seems pointless if you're after a lot of speed, and a lookup table might be more expensive because of the memory access. I think we need the problem defined better. Give us some context!

2009/11/4 Oscar Forth <os...@tr...>
> x^m = exp(m log x)
>
> So really you need a fast exp and log function.
>
> Can you newton-raphson refine a log and exp calculation? I've never tried
> ... perhaps someone else can help there.
>
> Failing that you could probably use tables for the calculations though
> this, obviously, limits the range of powers you can perform.
>
> 2009/11/4 Juan Linietsky <re...@gm...>
>> Hi guys! I was wondering if there are fast ways to approximate the
>> curve resulting from pow(n,x) where n is in range [0..1] and x > 0
>> using only floating point (without strange pointer casts/etc)..
>>
>> Cheers
>>
>> Juan Linietsky

--
fabs();
Just because the world is full of people that think just like you, doesn't mean the other ones can't be right.
|
From: Oscar F. <os...@tr...> - 2009-11-04 11:32:29
|
x^m = exp(m log x)

So really you need a fast exp and log function.

Can you newton-raphson refine a log and exp calculation? I've never tried ... perhaps someone else can help there.

Failing that you could probably use tables for the calculations, though this, obviously, limits the range of powers you can perform.

2009/11/4 Juan Linietsky <re...@gm...>
> Hi guys! I was wondering if there are fast ways to approximate the
> curve resulting from pow(n,x) where n is in range [0..1] and x > 0
> using only floating point (without strange pointer casts/etc)..
>
> Cheers
>
> Juan Linietsky
|
From: Juan L. <re...@gm...> - 2009-11-04 03:45:43
|
Hi guys! I was wondering if there are fast ways to approximate the curve resulting from pow(n,x), where n is in range [0..1] and x > 0, using only floating point (without strange pointer casts/etc)..

Cheers,
Juan Linietsky
|
From: Gregory J. <gj...@da...> - 2009-10-30 01:58:51
|
Those links at the bottom of each email work.

-----Original Message-----
From: Robert Chiapperini [mailto:rob...@gm...]
Sent: Thursday, October 29, 2009 4:50 PM
To: GDA...@li...
Subject: [Algorithms] how can i be removed from this mailing list?
|
From: Steve L. <sm...@go...> - 2009-10-30 01:54:56
|
> Subject: [Algorithms] how can i be removed from this mailing list?

Well, you could start by reading the footer that is at the bottom of every single email you receive from it.
|
From: Robert C. <rob...@gm...> - 2009-10-29 23:50:12
|
|
From: Ben Y. <shu...@gm...> - 2009-10-22 08:39:05
|
On Thu, Oct 22, 2009 at 4:09 AM, Bert Peers <be...@bp...> wrote:
> Seriously though, in what context is this problem popping up?

Basically, I am trying to evaluate a better approximation to an SH exponential, as recommended by the paper I mentioned earlier. I'm getting some weird ringing artifacts with just the OL approach, even with 1 sphere blocker, that is probably approximation error. Or it could be just due to some random stupidity on my part - although I've checked my stuff and it looks like it's in order. Anyway, that is a subject for another thread...

Also, I'm doing all the computations per-pixel on the GPU, with 16 coefficients (4th order... or is that 3? :)), so a more efficient scheme than a general triple product would be indispensable to an application that is already very very very fill-rate bound.

Ben
|
From: Dan G. <dan...@gm...> - 2009-10-21 23:50:23
|
I was originally encouraged by Christer Ericson to write a replacement. I was interested in doing SH projection on PS3 for cover/threat analysis masking for a game. I was in between jobs at the time, and I wrote most of the first implementation on the plane trip because I wanted to understand how SH worked. There's no real API, just a mess of functions for orders 3, 4, 5 and 6.

My plan was to put the work in the Public Domain, since it was just an implementation of data that is publicly available. The only thing I used was some data for the triple tensor product that Peter-Pike Sloan gave me. But that's only needed for the code generator (I implemented the algorithm in Snyder's paper).

I was originally going to make this the subject of a series of blog posts, but I can't find the energy to write it up and it was over a year ago, so I've probably forgotten most of the details.

cheers
DanG

On Wed, Oct 21, 2009 at 1:09 PM, Bert Peers <be...@bp...> wrote:
> Dan Glastonbury schreef:
>> I implemented the algorithm from that paper, with the help of
>> Peter-Pike Sloan for a yet unreleased replacement of the functions in
>> DX. Would people want to see such a library?
>
> No, of course not :)
>
> Seriously though, in what context is this problem popping up? I thought
> most projects are 2nd order, 9 coefficients, which is both good enough
> and plenty fast to evaluate. Are people going with much higher orders
> these days, or have SH's been pushed into a new and much more expensive
> application, or what's up?
>
> bert

--
Dan Glastonbury, Dan dot Glastonbury at gmail dot com
`Pour encourjay lays ortras'
|
From: Bert P. <be...@bp...> - 2009-10-21 23:07:01
|
Dan Glastonbury schreef:
> I implemented the algorithm from that paper, with the help of
> Peter-Pike Sloan for a yet unreleased replacement of the functions in
> DX. Would people want to see such a library?

No, of course not :)

Seriously though, in what context is this problem popping up? I thought most projects are 2nd order, 9 coefficients, which is both good enough and plenty fast to evaluate. Are people going with much higher orders these days, or have SH's been pushed into a new and much more expensive application, or what's up?

bert
|
From: Bert P. <be...@bp...> - 2009-10-21 23:06:04
|
Gary Snethen schreef:
> Hey Gang,
>
> What are the best forums (or mailing lists, etc.) for serious
> discussions on real-time approximations to global illumination and other
> advanced rendering techniques?

Any of the stuff by Carsten Dachsbacher, Marc Stamminger, Jan Kautz is usually an instant win and well presented. Eurographics and Siggraph Asia seem to be picking up the slack from the ever crazier Siggraph. I guess those two _are_ the forum you're after. A lot of the good stuff is announced early on the realtimerendering blog and/or Huang's conference page.

> Is anyone here doing research in the area?

Crytek also has an interesting variation, discretizing space and flowing light through it in SH space. There's interesting groundwork being done right now, all sorts of approaches are being tried out; the framerate is usually down the drain, but it does work. Quite exciting!

hth
bert
|
From: Mark W. <Mwa...@to...> - 2009-10-21 23:00:48
|
Stuart,

We're using a Dynamic Bounding Volume Tree (DBVT) similar to Bullet's one. We're using this for graphics, physics and entities (3 different trees) with great success. I certainly recommend a pooled allocator of tree nodes if you go with this scheme.

Cheers,
Mark
Torus Games

________________________________
From: Stuart Golodetz [mailto:gda...@gx...]
Sent: Friday, 16 October 2009 2:28 AM
To: Game Development Algorithms
Subject: [Algorithms] Broad-phase collision detection for dynamic objects

Hi,

Hope this is the right place to ask this. I'm considering a broad-phase collision detection scheme for the dynamic objects in my game and was wondering if I could have some thoughts please? (I'm a student hobby programmer right now and this isn't something I've tackled in depth before.)

Basically I started out thinking about a hierarchical grid-type scheme (the sort you'd find in e.g. Christer Ericson's Real-Time Collision Detection book), but my understanding was that as written it works by looking only at the objects' positions at the point of collision detection, and doesn't take account of their movements over the frame. (In other words, they're positioned in the hgrid in the cells where they end up after moving.) Since I'm using a *swept* version of XenoCollide/MPR (www.xenocollide.com) for my narrow-phase collision detection, this seemed like an inappropriate approach for me to use for broad-phase, since it would cull a lot of potential collisions before they got as far as the narrow-phase detector (essentially, I'd be implementing a broad-phase algorithm which would erroneously generate a lot of false negatives - the narrow-phase detector might be able to detect that a bullet went through something, but it would never get the chance).

The alternative I'm considering is to adapt the scheme to take into account movement, essentially by doing a 3D Bresenham's algorithm on the hgrid. Normally, you might find an appropriate lowest level for the object based on object and grid sizes, and insert it in all the levels above and including that, in the cells which it overlaps (this is one of the variations described in Ericson's book - I think due to Mirtich). What I'm thinking of doing is finding the appropriate lowest level based on the size of the moving object (i.e. taking into account both its size and its movement over the frame), doing the 3D Bresenham's on this lowest layer, then propagating the overlapped cells up to higher layers and inserting the object into the overlapped cells in each layer as before. The extent of the object isn't a problem - it's just like drawing a line with a 'fat' brush in a drawing package - so the line algorithm would only need to be run for the centre of each object.

Does this seem workable? I'm concerned that all the line drawing might be costly - though it's only one line per object - and I'm not sure about the trade-off between the cost of drawing the line on a smaller grid and the cost of potentially doing more narrow-phase collision detections if the line's drawn on a larger grid. Have people done this sort of thing before? (I'll keep Googling for it if so!) Also, are there any good alternatives people could recommend, please?

Cheers,
Stuart
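[A cheaper alternative to tracing the motion cell-by-cell, and the usual trick with both hgrids and DBVTs, is to broad-phase on a swept (or "fat") AABB covering the object's start and end positions for the frame, so fast movers can't tunnel past the broad phase. A rough sketch of that idea; the types and the level-selection interface are invented for illustration, not taken from Bullet or Ericson's code.]

#include <math.h>

typedef struct { float min[3], max[3]; } AABB;

/* Union of the object's AABB at the start and end of the frame: any cell or
 * node this box overlaps becomes a broad-phase candidate, so a bullet that
 * crosses several cells in one step still reaches the narrow phase. */
AABB swept_aabb(const AABB *start, const AABB *end)
{
    AABB out;
    for (int i = 0; i < 3; ++i) {
        out.min[i] = fminf(start->min[i], end->min[i]);
        out.max[i] = fmaxf(start->max[i], end->max[i]);
    }
    return out;
}

/* For a hierarchical grid, pick the level whose cells first contain the swept
 * box, then insert/test at that level and coarser ones as usual.
 * 'finest_cell_size' and 'num_levels' are assumed parameters of the grid. */
int hgrid_level_for(const AABB *swept, float finest_cell_size, int num_levels)
{
    float extent = 0.0f;
    for (int i = 0; i < 3; ++i) {
        float e = swept->max[i] - swept->min[i];
        if (e > extent) extent = e;
    }
    int level = 0;
    float cell = finest_cell_size;
    while (extent > cell && level < num_levels - 1) {
        cell *= 2.0f;      /* each level doubles the cell size */
        ++level;
    }
    return level;
}

[The trade-off versus a Bresenham walk is that a long diagonal move overlaps many cells the object never actually touches, so this generates more narrow-phase tests, but insertion stays trivial.]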
|
From: Ben Y. <shu...@gm...> - 2009-10-20 11:52:50
|
I did have a cursory look at the code generation techniques that were presented in that paper before, but I guess I was looking (hoping?) for a more obvious or fundamental approach to factorizing SH squares that I had overlooked. Having said that, if anyone would like to share code or tips on this topic, I for one would be most interested ...