From: Akbar A. <syedali011@ea...> - 2001-01-04 23:38:10

hey, okay i've been searching on acm and google for a while now, i just can't seem to find this paper. i used to have it lying on my harddrive somewhere (i think?) but it might have been on another computer. but anyways, does anyone know where i could find the paper by Cook: "Shade Trees", Computer Graphics 18(3), 223-231 (Proc. SIGGRAPH '84)? it'd really be cool if someone had it. so far this was the best link ;/ http://www.cs.utah.edu/~schmelze/gibbon/backgrnd.htm

laterz, akbar A. ;vertexabuse.cjb.net "necessity is the mother of strange bedfellows" - ninja scroll
From: Keith Harrison <Keithhar@bt...> - 2001-01-04 22:49:47

The first time I enabled vertex lighting in Q3 I thought that there was no lighting at all (i.e. plain old decal/replace texturing instead of modulation). Turns out that the lighting was very subtle.

In my demo I read all of the light entities from the bsp and perform a lighting preprocess, taking each light in turn and casting its light onto the level geometry. Simple stuff, but I also use the vislist so that each light only casts photons to geometry that it can "see". I never got round to solving the issue of faces the vislist says can be "seen" by a light yet one or more of the face's vertices are occluded - all of the vertices have a lighting calculation performed. If you look at Plate III.6 (a) and Plate III.6 (b) of Foley & van Dam you'll see what I had in mind to do (eventually).

Regards, Keith Harrison.

----- Original Message -----
From: "Brian Hook" <bwh@...>
To: <gdalgorithms-list@...>
Sent: Thursday, January 04, 2001 7:25 PM
Subject: Re: [Algorithms] Vertex Lighting instead of Lightmaps

> At 11:27 AM 1/4/01 +0000, you wrote:
> >Anyway, with all of this talk about using vertex lighting instead of
> >lightmapping I thought some of you may wish to take a look at the demo.
>
> Or you can enable vertex lighting in Q3, assuming that feature still
> exists. When that was implemented (a concession to lower-end cards,
> specifically Permedia 2, that had poor blend mode support and/or poor fill
> rate), it was done by sampling the lightmaps at the vertices and it looked
> remarkably good. Not to mention it gave a fairly sizable performance boost.
>
> Brian
>
> _______________________________________________
> GDAlgorithms-list mailing list
> GDAlgorithms-list@...
> http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list
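The lighting preprocess Keith describes (each light entity casting onto the level geometry, minus the vislist culling) boils down to accumulating an attenuated contribution at every vertex. A minimal sketch in Python; the falloff model and function names are invented, since the demo's actual attenuation isn't given:

```python
import math

def prelight(vertices, lights, falloff=1.0):
    """Per-vertex lighting preprocess: for each vertex, sum each light's
    attenuated contribution.  The original demo also culled lights by the
    BSP vislist; that step is omitted here."""
    colors = []
    for vertex in vertices:
        intensity = 0.0
        for position, brightness in lights:
            d = math.dist(vertex, position)
            # inverse-square-ish falloff (illustrative, not Quake's model)
            intensity += brightness / (1.0 + falloff * d * d)
        colors.append(min(intensity, 1.0))  # clamp to displayable range
    return colors
```

Because this runs once at load time (or as an offline pass), the per-light cost only matters for level build times, not the frame rate.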
From: Idahosa Edokpayi <idahosaedokpayi@ms...> - 2001-01-04 20:10:36

Well, all I want is simple projectile motion, so Euler integration works just fine. I am only curious about Runge-Kutta.

Idaho

-----Original Message-----
From: gdalgorithms-list-admin@... [mailto:gdalgorithms-list-admin@...] On Behalf Of Graham Rhodes
Sent: Thursday, January 04, 2001 9:10 AM
To: gdalgorithms-list@...
Subject: RE: [Algorithms] physics integrators

Idaho,

Read Jeff Lander's response for some good advice - I'd follow his approach if your system meets the conditions he states in his first sentence. If you cannot do this, you need to be aware that the simple explicit Euler integration (what most people mean by Euler integration) will be unstable if you have any springs, no matter how weak or strong, or if your system otherwise exhibits natural oscillatory behavior such as with magnets. In this case, you will need to use something more sophisticated, and the predictor-corrector methods that Jeff suggests are more straightforward than Runge-Kutta. Euler may appear okay with very small time steps, but the integration will eventually decay and blow up unless you halt the simulation prematurely. This instability is the reason more sophisticated integrators are eventually used for nontrivial problems. I'd hate for you to become frustrated if your simulation doesn't work with the simple Euler method. The math for that method just doesn't admit oscillations and there's nothing you can do about it except switch methods. (Plug: attend my talk at GDC 2001 to find out why.)

Graham Rhodes

-----Original Message-----
From: gdalgorithms-list-admin@... [mailto:gdalgorithms-list-admin@...] On Behalf Of Idahosa Edokpayi
Sent: Wednesday, January 03, 2001 11:24 PM
To: gdalgorithms-list@...
Subject: [Algorithms] physics integrators

I realize that using fourth-order Runge-Kutta methods for my particular particle system project is probably overkill and I'll probably just use Euler integration now, but I am curious, as it was mentioned in the SIGGRAPH notes someone was so kind to link me to earlier. I understand the principles and I can do the math. (I think :) ) But I have two questions. K(i) is supposed to be dependent on K(i-1). Well, for computing position this is a little difficult, because velocity is independent of the previous position (at least it will be in my particle system). How does K(i-1) factor into K(i) if velocity (which would be F(x,t) in my situation, I believe) only needs time? My second question is: is Runge-Kutta even valid if F(x,t) is constant? If I am doing projectile motion and I am trying to find velocity, and acceleration is constant and/or independent of time (changing t and x in F(x,t) makes no difference), can I still use Runge-Kutta? How? Are there any examples I can look at?

Idaho
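On the second question: yes, Runge-Kutta remains valid when the force is constant; the extra stages just stop adding information, and the method lands exactly on the analytic ballistic result. A quick 1D sketch in plain Python (names and the state layout are mine):

```python
def rk4_step(state, deriv, h):
    """One classical fourth-order Runge-Kutta step.
    state is a tuple; deriv(state) returns the tuple of derivatives."""
    def add(s, k, scale):
        return tuple(si + scale * ki for si, ki in zip(s, k))
    k1 = deriv(state)
    k2 = deriv(add(state, k1, h / 2))
    k3 = deriv(add(state, k2, h / 2))
    k4 = deriv(add(state, k3, h))
    return tuple(s + h / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

G = -9.8

def projectile(state):
    p, v = state          # d(position)/dt = v, d(velocity)/dt = G (constant)
    return (v, G)

state = (0.0, 20.0)       # launch straight up at 20 m/s
for _ in range(100):      # integrate one second in steps of h = 0.01
    state = rk4_step(state, projectile, 0.01)
# analytic after 1 s: v = 20 - 9.8 = 10.2, p = 20 - 4.9 = 15.1
```

With a constant derivative for velocity the K stages coincide, and the position update collapses to p + h*v + 0.5*g*h^2 per step, i.e. the closed-form answer; RK4 only pays off when the force actually varies with state or time.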
From: <jack@re...> - 2001-01-04 19:59:58

I think that it was changed in light to redo the lighting at the verts, I _think_ without the shadows being taken into account. That was because of weird artifacts, with like one vert being in shadow and spreading the shadow across the whole poly for large polys.

Jack

-----Original Message-----
From: Brian Hook [mailto:bwh@...]
Sent: Thursday, January 04, 2001 1:26 PM
To: gdalgorithms-list@...
Subject: Re: [Algorithms] Vertex Lighting instead of Lightmaps

At 11:27 AM 1/4/01 +0000, you wrote:
>Anyway, with all of this talk about using vertex lighting instead of
>lightmapping I thought some of you may wish to take a look at the demo.

Or you can enable vertex lighting in Q3, assuming that feature still exists. When that was implemented (a concession to lower-end cards, specifically Permedia 2, that had poor blend mode support and/or poor fill rate), it was done by sampling the lightmaps at the vertices and it looked remarkably good. Not to mention it gave a fairly sizable performance boost.

Brian
From: Matt Godbolt <matthew@ar...> - 2001-01-04 19:35:24

> The DX7 demo to which you refer does suffer from shading
> artefacts, but this is not caused by the shading being linear (since each
> triangle is shaded perspective-correct). I believe the artefacts are caused
> because the vertex colours supplied to the HW are not consistent.

[snip] I haven't seen the demo you refer to, so apologies if I'm barking up the wrong tree, but we're using lightmaps rather than vertex lighting because a lightmap texel acts like a quad - which is /not/ the same as the two-triangle equivalent. Consider the quad:

white --- black
  |         |
black --- black

Rendered as a quad (or a bilinearly interpolated 2x2 texture) you would see a white corner fading to black at each other corner. Rendered as two triangles you would get different results depending on how you tessellated the quad; one way (with the diagonal running white to black) you would get a similar (although not quite the same) effect as the quad. Rendered the other way, the white fades off at the black-black diagonal and the bottom-left triangle is rendered totally black.

These artifacts can show up badly on all vertex-lit games, even with relatively high tessellation. Just my two penneth, and first post, so be gentle on me :)

Matt

==
Matt Godbolt
Coder, Argonaut Games PLC
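Matt's quad can be checked with a few lines of arithmetic: bilinear filtering and the two possible triangulations genuinely disagree at the centre of the white/black/black/black quad. A sketch (helper names are mine):

```python
def bilinear(c00, c10, c01, c11, u, v):
    """Bilinear interpolation over a unit quad; c00 is the (0,0) corner."""
    top = c00 * (1 - u) + c10 * u
    bot = c01 * (1 - u) + c11 * u
    return top * (1 - v) + bot * v

def tri_interp(ca, cb, cc, wa, wb, wc):
    """Barycentric (Gouraud-style) interpolation inside one triangle."""
    return ca * wa + cb * wb + cc * wc

white, black = 1.0, 0.0

# Bilinear: the white corner's influence reaches the centre.
center_bilinear = bilinear(white, black, black, black, 0.5, 0.5)      # 0.25

# Diagonal through the white corner: the centre sits on the white-black
# edge, so it averages the two endpoint colours.
center_diag_white = tri_interp(white, black, black, 0.5, 0.5, 0.0)    # 0.5

# Diagonal through the two black corners: the centre sits on a
# black-black edge, so the white corner contributes nothing at all.
center_diag_black = tri_interp(black, black, white, 0.5, 0.5, 0.0)    # 0.0
```

Three different answers at the same point (0.25, 0.5, 0.0), which is exactly why a lightmap texel and a vertex-lit triangle pair are not interchangeable.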
From: Brian Hook <bwh@wk...> - 2001-01-04 19:26:08

At 11:27 AM 1/4/01 +0000, you wrote:
>Anyway, with all of this talk about using vertex lighting instead of
>lightmapping I thought some of you may wish to take a look at the demo.

Or you can enable vertex lighting in Q3, assuming that feature still exists. When that was implemented (a concession to lower-end cards, specifically Permedia 2, that had poor blend mode support and/or poor fill rate), it was done by sampling the lightmaps at the vertices and it looked remarkably good. Not to mention it gave a fairly sizable performance boost.

Brian
From: Akbar A. <syedali011@ea...> - 2001-01-04 19:20:42

>talk about using vertex lighting instead of
>lightmapping I thought some of you may wish to take a look at the demo.

use per-pixel lighting, well worth the effort imho. i really don't know how this has swayed; bump mapping is just a good effort, it adds fake geometric complexity except on silhouette edges, and it's not like ppl pay too much attention to edges in a game.. and then the fact that it's almost free depending on how you set up your rendering code pass... there are some problems with banding (remember that old talk), and i talked to a few ppl designing their engines with this technique (blend eq to compute fragment color) and they just told me to "restrict those types of lights". this can be a really picky issue, since some ppl love to generate lightmaps (see crazy radiosity fans ;) but for me the times are just insane. and levels aren't getting smaller, we are starting to mix indoor+outdoor, and we all know radiosity can blow big chunks when you have some outdoor stuff...

well it's time for my lunch/arcade break.

laterz, akbar A. ;vertexabuse.cjb.net "necessity is the mother of strange bedfellows" - ninja scroll

-----Original Message-----
From: gdalgorithms-list-admin@... [mailto:gdalgorithms-list-admin@...] On Behalf Of Keith Harrison
Sent: Thursday, January 04, 2001 5:28 AM
To: Algorithms List
Subject: [Algorithms] Vertex Lighting instead of Lightmaps

The vertex lighting discussion in the thread about high-quality 3D characters reminded me about a demo I wrote a few years ago. Some of you may remember the Direct3D Quake Viewer that I did for Microsoft (using DX5 - shows how long ago it was). I also wrote a version that lit the Quake level geometry using the light entities in the level. Simple vertex lighting, but it looked quite good (and these were the days before the coloured lighting of Quake2). There are shading artefacts, but that's what we're interested in - why they happen and how to avoid/reduce them. One notable thing was that the lighting calculations were fast enough that I could light the geometry at load time.

Anyway, with all of this talk about using vertex lighting instead of lightmapping I thought some of you may wish to take a look at the demo. If so then mail me (don't spam the list with "Reply To"'s, please!). If there's enough interest then I'll dig it out of my archives, dust it over and post it on some webspace.

Regards, Keith Harrison.
From: Keith Harrison <Keithhar@bt...> - 2001-01-04 19:07:23

The vertex lighting discussion in the thread about high-quality 3D characters reminded me about a demo I wrote a few years ago. Some of you may remember the Direct3D Quake Viewer that I did for Microsoft (using DX5 - shows how long ago it was). I also wrote a version that lit the Quake level geometry using the light entities in the level. Simple vertex lighting, but it looked quite good (and these were the days before the coloured lighting of Quake2). There are shading artefacts, but that's what we're interested in - why they happen and how to avoid/reduce them. One notable thing was that the lighting calculations were fast enough that I could light the geometry at load time.

Anyway, with all of this talk about using vertex lighting instead of lightmapping I thought some of you may wish to take a look at the demo. If so then mail me (don't spam the list with "Reply To"'s, please!). If there's enough interest then I'll dig it out of my archives, dust it over and post it on some webspace.

Regards, Keith Harrison.
From: Keith Harrison <Keithhar@bt...> - 2001-01-04 19:07:18

----- Original Message -----
From: "Mark Wilczynski" <mwilczyn@...>
To: <gdalgorithms-list@...>
Sent: Wednesday, January 03, 2001 5:25 PM
Subject: Re: [Algorithms] Highest Quality Look for 3d Characters

<SNIP>
> shadows. If you subdivide your meshes to approximate lightmaps you
> still get that blocky look because gauraud shading is linear. (there
> was a good demo of this with the DX7 SDK - a room with lights jumping
> around inside).
<SNIP>

The bit about shading being linear is wrong. Gouraud shading has been perspective-correct on most consumer HW for years, with a per-pixel divide by homogeneous W. [No doubt some IHVs use one of their multifarious tricks though]

The DX7 demo to which you refer does suffer from shading artefacts, but this is not caused by the shading being linear (since each triangle is shaded perspective-correct). I believe the artefacts are caused because the vertex colours supplied to the HW are not consistent. It would be real nice if we could tie this down exactly, though. I sent a small radiosity-viewer demo to nVidia about three years ago and asked them about the shading artefacts on their TNT - still waiting for an answer. ;) I still have the viewer in my archives if anyone wants to see it.

Back to the thread: take a look at the OpenGL demo "md2bump" from the nVidia website. Bump map/gloss map/specular techniques. Looks fantastic, but don't forget to run at 32bpp to see all of the features.

Regards, Keith Harrison.
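For anyone unsure what "perspective-correct" interpolation means in practice: the hardware interpolates attr/w and 1/w (which are linear in screen space) and divides per pixel, rather than interpolating the attribute directly. A sketch of that arithmetic (function names are mine):

```python
def perspective_correct(attrs, ws, bary):
    """Perspective-correct interpolation of a vertex attribute over a
    triangle: interpolate attr/w and 1/w with the screen-space
    barycentric weights, then divide."""
    num = sum(b * a / w for a, w, b in zip(attrs, ws, bary))
    den = sum(b / w for w, b in zip(ws, bary))
    return num / den

def screen_linear(attrs, bary):
    """Naive screen-space linear interpolation - what "linear shading"
    would actually mean, and what the hardware does NOT do."""
    return sum(a * b for a, b in zip(attrs, bary))
```

With equal vertex w the two agree; once the w values differ, the correct result is pulled toward the near (small-w) vertices, which is the whole point of the per-pixel divide.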
From: Peter Lipson <plipson@dr...> - 2001-01-04 18:10:47

That game was a good example of being good enough to look worse; the closer you come to 'natural' looks and motion, the more jarring the mismatches become. The limited articulation of the characters made them look like a radio-controlled GI Joe - the range of motions was really good, but there weren't enough joints to look human.

Peter

-----Original Message-----
From: gdalgorithms-list-admin@... [mailto:gdalgorithms-list-admin@...] On Behalf Of Mark Wilczynski
Sent: Wednesday, January 03, 2001 8:11 PM
To: gdalgorithms-list@...

You may want to look at an old game called Die by the Sword. It did some of the things you mention. But I guess it didn't help their sales, since nobody else bothered with such features. It was the type of game only programmers can fully appreciate (inverse kinematics, accurate (non-predefined) body damage, etc).

Mark
From: Tom Forsyth <tomf@mu...> - 2001-01-04 15:50:54

> From: Corrinne Yu [mailto:corrinney@...]
>
> > From: "Tom Forsyth" <tomf@...>
> >
> > Having priority levels for your streaming system really helps smooth out the
> > bumps - at quiet times it can be loading more likely stuff in the
> > background, and "just too late" scalable systems allow really busy or
> > badly-predicted times to look bad, but at least stay playable (rather than
> > shuddering to a halt as the disk seeks all over the place).
>
> - In my experience with putting together a full streamable (and net file
> data real time in game downloadable) engine now is that priority levels make
> the system very complicated very fast. (I tried that route and ended up with a
> giant mess.)
> - Priority (perhaps your terminology is different or less strict than how I
> am using it here) deals with relative property between resources. In
> streaming, often it is a matter of:
> - 1. this is what you need
> - 2. this is when you need it (you guess when)
> - 3. this is what you don't have
> - 4. this is how long it takes to get here (always assume worst case,
> average cases will just make you happy :) )

Yes, I think we're talking about different things. My idea of priority is simply a guess of how likely you are to need something. So if your streaming thread doesn't have anything that it absolutely _needs_ to load (e.g. the player is standing still or something), then it loads the thing with the next highest priority (e.g. the next room) as its "best guess" as to what is happening next. You can always boost the priority of things if they get closer. But priorities are only used when there's nothing that is definitely needed - to do something useful with spare disk bandwidth. If something is definitely needed, it gets "maximum" priority, and the whole thing will obviously stall until the item is loaded.

> - It may be simpler to get (in a chunk) what's needed there before you need
> it, even if it is a little "out of order" - which you need a little sooner,
> which you need a little later. Have a "chunk" of all the stuff you will need
> at time N, then just stream the chunk in without worrying about whether
> texture A or mesh B needs to get there sooner or later. The logic of this is
> simpler, and you do not have the streaming logic thunking or wobbling
> between streaming in resource A then dumping resource A back and forth near
> the same time because your "priority logic" is on a borderline value
> (usually happens when running a streaming system in a low memory or low
> resource situation).

No - priorities are never used to throw stuff away - that's just a straight LRU thing. Actually, at the moment, the game is such that I never throw textures away - once you have built a room, you are likely to keep looking at that room periodically. There are cases (e.g. if the room is destroyed by the enemy), but they are so rare that it's not worth bothering with. Since the textures are managed by D3D, it handles the actual management - my stuff just decides whether to bother loading the things off disk in the first place.

> - The "chunkness" is "logical" and not "execution time" of course, or
> otherwise you will chunk every time you chunk in your data.
>
> - Another clue is that necessary resources may often be geographically
> localized, but hardly always. Can't count on your visibility algos only to
> do streaming predictions for you.
>
> - Another good thing about streaming based on "what you need" is when what
> you need changes (which is often, players are _so_ unpredictable), it is
> easy to dump it out and start a new streaming set. If things are already
> inserted into a priority scheme, one has to go through the in-queue stuff
> and push their priority back, or actually more efficiently IMO to just leave
> them alone.

True, but you only need to do management of the priority queue when a chunk is loaded, i.e. you have done a disk seek & load. So you don't get all that many of them per second, assuming that you haven't split things into chunks too finely. Fine chunks = lots of work for little benefit. I would rather go for coarse chunks, and leave the fine-grained management up to the D3D texture manager or the virtual memory manager as applicable. Obviously on a console without VM support you need to do something cleverer - I haven't given much thought to them yet.

> - (Tom, apologies if I am interpreting your priority scheme incorrectly.
> Priority with a large group of data, and not individual resources, IMO may
> work better.)

Indeed - large groups only. Though I do currently handle each texture on its own, simply because a 512x512 texture is a decently-sized chunk of data (~256k, compressed + mipmaps - bigger without compression).

> > (hmmm... I seem to have snipped Corrinne's comments out of existence - sorry
> > about that - they were good points, and I applaud her daring, I just didn't
> > have any comments on them).
>
> - Thanks, Tom. I look forward to your own engine streaming work.
> - (And looking forward to show you mine).

It's a shame I can't retrofit more streaming into the current Startopia engine, such as animations (for various crufty reasons). But just demand-loading (and indeed just-too-late-loading) the textures has almost halved loading time, which I'm happy with.

> Corrinne Yu

Tom Forsyth - purely hypothetical Muckyfoot bloke.
This email is the product of your deranged imagination, and does not in any way imply existence of the author.
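Tom's scheme - definite needs always win, otherwise spare disk bandwidth goes to the best guess - fits in a few lines around a priority queue. A sketch; the class and method names are mine, not from the Startopia engine:

```python
import heapq

class StreamQueue:
    """Background-loader queue: items that are definitely needed jump
    straight to the front; everything else is served best-guess-first.
    Priorities can be boosted later simply by re-requesting the item."""
    MUST_LOAD = 0                      # lower value = served first

    def __init__(self):
        self._heap = []
        self._seq = 0                  # tie-breaker so equal priorities
                                       # keep request order (and heapq
                                       # never compares payloads)

    def request(self, item, priority):
        heapq.heappush(self._heap, (priority, self._seq, item))
        self._seq += 1

    def require_now(self, item):
        # A definite need: maximum priority.  In a real engine the game
        # would stall here until the load completes.
        self.request(item, self.MUST_LOAD)

    def next_to_load(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

q = StreamQueue()
q.request("next_room_textures", priority=5)       # likely soon
q.request("distant_room_textures", priority=9)    # speculative
q.require_now("current_room_mesh")                # needed right now
```

The loader thread just calls `next_to_load()` in a loop: the definite need comes out first, then the guesses in likelihood order, so spare bandwidth is never idle.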
From: Graham Rhodes <grhodes@se...> - 2001-01-04 15:09:42

Idaho,

Read Jeff Lander's response for some good advice - I'd follow his approach if your system meets the conditions he states in his first sentence. If you cannot do this, you need to be aware that the simple explicit Euler integration (what most people mean by Euler integration) will be unstable if you have any springs, no matter how weak or strong, or if your system otherwise exhibits natural oscillatory behavior such as with magnets. In this case, you will need to use something more sophisticated, and the predictor-corrector methods that Jeff suggests are more straightforward than Runge-Kutta. Euler may appear okay with very small time steps, but the integration will eventually decay and blow up unless you halt the simulation prematurely. This instability is the reason more sophisticated integrators are eventually used for nontrivial problems. I'd hate for you to become frustrated if your simulation doesn't work with the simple Euler method. The math for that method just doesn't admit oscillations and there's nothing you can do about it except switch methods. (Plug: attend my talk at GDC 2001 to find out why.)

Graham Rhodes

-----Original Message-----
From: gdalgorithms-list-admin@... [mailto:gdalgorithms-list-admin@...] On Behalf Of Idahosa Edokpayi
Sent: Wednesday, January 03, 2001 11:24 PM
To: gdalgorithms-list@...
Subject: [Algorithms] physics integrators

I realize that using fourth-order Runge-Kutta methods for my particular particle system project is probably overkill and I'll probably just use Euler integration now, but I am curious, as it was mentioned in the SIGGRAPH notes someone was so kind to link me to earlier. I understand the principles and I can do the math. (I think :) ) But I have two questions. K(i) is supposed to be dependent on K(i-1). Well, for computing position this is a little difficult, because velocity is independent of the previous position (at least it will be in my particle system). How does K(i-1) factor into K(i) if velocity (which would be F(x,t) in my situation, I believe) only needs time? My second question is: is Runge-Kutta even valid if F(x,t) is constant? If I am doing projectile motion and I am trying to find velocity, and acceleration is constant and/or independent of time (changing t and x in F(x,t) makes no difference), can I still use Runge-Kutta? How? Are there any examples I can look at?

Idaho
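Graham's warning about explicit Euler and springs is easy to demonstrate numerically: on an undamped spring every explicit Euler step multiplies the system's energy by (1 + (h*omega)^2), so it grows without bound for any step size. A sketch (units and constants are mine):

```python
def euler_spring(steps, h, k=1.0, m=1.0):
    """Explicit Euler on an undamped spring (F = -k*x), starting from
    x = 1, v = 0.  Both updates use the *old* state - which is exactly
    what makes the scheme unstable here."""
    x, v = 1.0, 0.0
    for _ in range(steps):
        x, v = x + h * v, v + h * (-k / m) * x
    return x, v

def energy(x, v, k=1.0, m=1.0):
    """Total spring + kinetic energy; the exact solution conserves it."""
    return 0.5 * k * x * x + 0.5 * m * v * v

x, v = euler_spring(1000, 0.1)
# the exact solution keeps energy at 0.5 forever; Euler inflates it
```

Swapping the update order so the position step uses the freshly updated velocity (semi-implicit, a.k.a. symplectic, Euler) keeps the oscillation bounded, which is one reason simple game integrators often do exactly that.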
From: Mark Harris <harrism@cs...> - 2001-01-04 14:52:25

> extremely overkill. An Euler integrator would work fine as well.
> But if you are only doing sparks or fireworks or something like
> that, don't bother.
>
> p = p0 + v0*t - 0.5*g*t^2

Unless you are very concerned about performance - the equation above actually requires one more multiply per component (3 for 3D) than an Euler step to solve for both the new position and new velocity. Euler would take 3 * 2 multiplies + 3 * 2 adds + 3 * 2 assignments. This analytic method takes 3 * 3 multiplies + 3 * 2 adds + 3 * 1 assignment. With many particles, that one extra multiply (even traded for one less assignment) difference can be noticeable.

> Now if you want the particles to bounce down a set of stairs or
> off the head of your character, that is a different issue.

Euler works fine for me, even in the case of bouncing particles. It's higher-order forces (spring-mass, etc.) that cause problems.

> v(i+1) = v(i) + 0.5h(3f(i) - f(i-1))
>
> p(i+1) = p(i) + 0.5h(v(i+1) + v(i))
>
> This uses a two-step explicit approach for the velocity term, then
> a modified Euler (I think that's the one) equation for the
> position term. It is very robust and doesn't require additional f
> evaluations like the midpoint and RK methods.

This is nice - storing the previous f eval rather than computing temporary ones is a good win.

Mark

> Jeff
>
> Jeff Lander - Darwin 3D - http://www.darwin3d.com
> Game Technology Seminars, Jan 30 - Feb 2, 2001 - http://www.techsem.com
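The pair of update equations quoted above is small enough to sketch directly; for the constant-force projectile case it lands exactly on the analytic answer, since the velocity update degenerates to v + h*g and the trapezoidal position update is exact for constant acceleration. (Variable names are mine.)

```python
def pc_step(p, v, f_curr, f_prev, h):
    """One step of the scheme quoted above: a two-step Adams-Bashforth
    update for velocity from the current and previous force, then a
    trapezoidal position update from the old and new velocities."""
    v_next = v + 0.5 * h * (3.0 * f_curr - f_prev)
    p_next = p + 0.5 * h * (v_next + v)
    return p_next, v_next

g = -9.8
p, v = 0.0, 20.0                     # launch straight up at 20 m/s
for _ in range(100):                 # one second at h = 0.01; the force is
    p, v = pc_step(p, v, g, g, 0.01) # constant, so f(i) == f(i-1) == g
# analytic after 1 s: v = 20 - 9.8 = 10.2,  p = 20 - 4.9 = 15.1
```

The only extra state versus plain Euler is the previous force sample, which is the "good win" Mark mentions: no throwaway mid-step force evaluations.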
From: Tom Forsyth <tomf@mu...> - 2001-01-04 14:50:14

Using ID-based shadow buffers (also known as P-buffers or priority buffers) you can do it very nicely and quickly. One nice thing is that you use the same image to do the character-onto-scenery shadow as you do for the character self-shadowing, so the only cost is applying the extra shadow-buffer texture to the character. It may mean an extra pass, but there is no extra geometry to generate - it's just a new texture.

Once the various render-to-texture driver bugs get sorted out (it's only been in the API for two years - give them a chance :), this will be a very quick way to do shadows - it doesn't need silhouette-edge generation like stencil buffers.

I'm sure there's been plenty of posts about these techniques in the past - a search of the archives should do the trick. I remember a demo being posted as well. Ah yes - http://www.daionet.gr.jp/~masa/ishadowmap/

Tom Forsyth - purely hypothetical Muckyfoot bloke.
This email is the product of your deranged imagination, and does not in any way imply existence of the author.

> -----Original Message-----
> From: Daniel Bachler [mailto:DanyX@...]
> Sent: 03 January 2001 17:59
> To: gdalgorithms-list@...
> Subject: Re: [Algorithms] Highest Quality Look for 3d Characters
>
> From: "Mark Wilczynski" <mwilczyn@...>
> > objects it's a much harder problem. I still have not seen too many
> > (if any) games where there is realistic character self-shadowing, etc.
>
> Agreed. I think this is because it's a very subtle effect. IMHO you could
> only notice it at the limbs, but they usually move rather fast anyway, or
> in the face at an extreme closeup. I'd say it doesn't contribute to the
> image too much, so it's probably only reasonable to do it in games that
> depend heavily on deep shadows for setting a convincing mood (e.g. some
> film noir style game).
>
> However, from the technical aspect, I would be interested in how it could
> be done, as I couldn't come up with a solution that would be fast enough
> to do in realtime. Maybe some funky stencil technique?
>
> bye,
> Daniel
From: Pierre Terdiman <p.terdiman@wa...> - 2001-01-04 12:53:03

> near the back of the head...you doing vertex coloration to get the shifts in > skin pigment? (looks like it around the nostrils) If so, how is it working > for you? Yes, vertex lighting is very important here. The plain texture just isn't enough, it looks boring flat. Generally speaking, finding a good skin material is very difficult. Skin rendering is actually a vast and complex topic. A nice introduction to the whole thing : http://www.cmagic.co.uk/FRAMES/rb_dwn.htm The goal (as far as I am concerned) is to do that in realtime: http://www.optidigit.com/stevens/facetests.jpg And one day or another, I will :) > What are some of the numbers behind this? Is the model animated? > (probably not because of the pose she's in) If so, do you get the same fps? > What kind of poly throughput are we looking at here? (needless to say it's > an impressive amount) Just curious, are you using subdivision for the model? This model is not animated so far, that's my test model for automatic weights generation. I would also like to try creating procedural motions instead of using keyframed ones  the best idea would probably be to use both anyway. Don't trust the frame counter. It's an old shot and I don't even know if it was a release build. Basically the skinning takes time, certainly, but that's not a real problem if you mix it with progressive meshes and subdivision surfaces. I don't think the model was subdivided here, it's the full one without any special effects/techniques applied, so it must be something like 123000 faces IIRC :) Pierre 
From: Tom Forsyth <tomf@mu...> - 2001-01-04 12:41:38

> From: Matthew MacLaurin [mailto:matt@...]
>
> > IMHO, bumpmaps give the best results for the effort required to
> > implement them, and DX8 vertex/pixel shaders make them easier than ever.
>
> On the other hand, bumpmaps are problematic with multiple light sources,
> creating an opposition between nicely lighted scenes and nice-looking
> characters. Perhaps some tricks with "summarizing" multiple scene lights
> into a single best bumpmapping light will help here.

I have posted on this before - yes, it works well. Even just doing one pass of Dot3 with some summing, plus the POVL as well, looks absolutely fine. I had eight violently-coloured lights rotating around a bump-mapped sphere, and it was fine. Adding more bumpmap passes would improve quality a bit, but I think going over three bumpmap passes is fairly pointless - once you have three significant light sources on a particular vertex, a fourth one is either too dim to be visible at the bumpmap level, or too bright, and then the whole thing oversaturates anyway and most of the bumpmapping gets "whited out".

> I'd rather have (when possible) dynamic tesselation and displacement
> mapping. I think that in the long term the look you can get in the
> laboratory with bump mapping will be gotten in the real world with
> displacement and POVL (plain old vertex lighting.) There's just no
> substitute for actual geometric detail. Once you've got enough vertices,
> vertex lighting starts to look good all over again.

True, but (sticking my neck out a bit here) equivalent-visual-quality bumpmapping is always going to be faster than POVL. Bumpmapping is pixel (texel) level, and to match that with POVL means 1-pixel triangles, which is overkill. There are always approximations you can make over small pixel areas (otherwise known as triangles) for which you do some vertex calculations (i.e. the transform to surface-local space), and then do a far simpler per-pixel calculation (dot3) using that data, and beat the quality of equivalent-tri-density POVL, or even four-times-tri-density POVL. There is no need to do the full POVL calculations on every pixel - spotlight cones, attenuations, etc. can usually be calculated at low-density vertex level and then linearly interpolated across triangles perfectly happily. If the light is really stunningly close to the object, you may want to use a higher-res tri mesh, but in general it's always worth deferring a large number of the light calculations to an every-20-pixels level. Dot3 isn't the final word (e.g. it doesn't do specular bumpmaps very well), but some compromise between vertex-level calculations and pixel-level ones is certainly sensible.

Tom Forsyth - purely hypothetical Muckyfoot bloke. This email is the product of your deranged imagination, and does not in any way imply existence of the author.
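[Editor's note] The "summarize multiple lights into one Dot3 light" trick Tom endorses can be sketched as an intensity-weighted sum of the per-vertex to-light directions. All names here are illustrative, and a real version would also fold the summed colour into the light's diffuse term.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator*(float s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return (len > 0.0f) ? (1.0f / len) * v : v;
}

// Collapse several lights into one direction for a single bumpmap pass:
// sum the unit to-light vectors weighted by each light's intensity at this
// vertex, then renormalize. Dim lights barely perturb the result, which is
// why one summarized Dot3 pass plus POVL already looks fine.
Vec3 summarizeLights(const Vec3 *toLight, const float *intensity, int count) {
    Vec3 sum = {0.0f, 0.0f, 0.0f};
    for (int i = 0; i < count; ++i)
        sum = sum + intensity[i] * normalize(toLight[i]);
    return normalize(sum);
}
```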
From: Tom Forsyth <tomf@mu...> - 2001-01-04 12:18:39

Compared to the performance of the equivalent amount of polygons, performance is fantastic! :) Plus, if doing any sort of progressive meshing (another Good Thing), using bumpmaps for shading rather than standard vertex shading allows you to use much lower-tri models before they look bad - the lighting changes with vertex shading are usually far more visible (with a certain number of tris) than the silhouette and parallax errors. Using bumpmaps means there is very little noticeable shading change when the PM level changes.

Most IHVs have SDKs that include various styles of bumpmapping - check out their websites. IMHO, it is also worth using emboss bumpmapping where Dot3 isn't available - it doesn't look as good as Dot3, but then those cards are expected to have slightly lower visual quality - they're older cards. But it still looks better than just vertex shading. Converting from Dot3 bumpmapping to emboss bumpmapping is pretty simple - the maths you do is virtually identical.

I did a talk on character rendering, including bumpmapping, high-tri generation and PM, at WGDC last year. It needs an update for DX8 (since the definition of DX8 was still fairly fuzzy back then), but the principles are still sound. http://www.muckyfoot.com/downloads/tom.shtml (second item down).

Tom Forsyth - purely hypothetical Muckyfoot bloke. This email is the product of your deranged imagination, and does not in any way imply existence of the author.

> -----Original Message-----
> From: Matt Harmon [mailto:harmon@...]
> Sent: 02 January 2001 22:58
> To: 'gdalgorithms-list@...'
> Subject: RE: [Algorithms] Highest Quality Look for 3d Characters
>
> Fantastic tips! I have not used bump maps in realtime engines enough to
> make decisions about their performance. What links or demos do you know of
> where I can see more of the results of such an implementation?
>
> Best Regards,
> Matt
From: Jeff Lander <jeffl@te...> - 2001-01-04 07:31:25

For your simple scenario where there are no changing forces, such as collision, or tying two particles together with a spring, or having the particles magnetically attract each other, you could even solve the system analytically and forget about the integrators altogether. A particle shot out at some initial velocity and under the influence of constant forces such as gravity follows a parabolic arc. The position of the particle at any time can be easily solved analytically. An RK4 integrator would work but would be extreme overkill; an Euler integrator would work fine as well. But if you are only doing sparks or fireworks or something like that, don't bother:

p = p0 + v0*t - 0.5*g*t^2

where g is the gravitational acceleration, p0 is the initial position, and v0 is the initial velocity. Now if you want the particles to bounce down a set of stairs or off the head of your character, that is a different issue.

For more complex particle motion without things like really stiff springs connecting them, I currently use a predictor-corrector formula I saw in a book by Crenshaw that takes advantage of the fact that for most dynamic simulations we are really solving a second order equation via two equivalent first order equations. It takes the form:

v(i+1) = v(i) + 0.5*h*(3*f(i) - f(i-1))
p(i+1) = p(i) + 0.5*h*(v(i+1) + v(i))

This uses a two-step explicit approach for the velocity term, then a modified Euler (I think that's the one) equation for the position term. It is very robust and doesn't require additional f evaluations like the midpoint and RK methods.

Jeff

At 10:23 PM 1/3/2001 -0600, you wrote:
>I realize that using fourth order Runge-Kutta methods for my particular
>particle system project is probably overkill and I'll probably just use
>Euler integration for now, but I am curious as it was mentioned in the
>siggraph notes someone was so kind to link me to earlier. I understand
>the principles and I can do the math. (I think :) ) But I have two
>questions. K(i) is supposed to be dependent on K(i-1). Well for computing
>position this is a little difficult because velocity is independent of
>the previous position (at least it will be in my particle system). How
>does K(i-1) factor into K(i) if Velocity (which would be F(x,t) in my
>situation I believe) only needs time?
>My second question is: Is Runge-Kutta even valid if F(x, t) is constant?
>If I am doing projectile motion and I am trying to find velocity, and
>acceleration is constant and/or independent of time (changing t and x in
>F(x,t) makes no difference), can I still use Runge-Kutta? How? Are there
>any examples I can look at?
>
>Idaho
>
>_______________________________________________
>GDAlgorithms-list mailing list
>GDAlgorithms-list@...
>http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list

-- Jeff Lander
Game Technology Seminars, Darwin 3D, Jan 30 - Feb 2, 2001
http://www.darwin3d.com http://www.techsem.com
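[Editor's note] Jeff's predictor-corrector amounts to a two-step explicit (Adams-Bashforth) update of velocity followed by a trapezoidal update of position. The sketch below uses hypothetical names; a nice property worth noting is that for a constant force like gravity it reproduces the analytic parabola exactly, consistent with Jeff's point that fancy integrators are overkill for simple sparks.

```cpp
struct State { double p, v; };

// One step of the Crenshaw-style predictor-corrector Jeff describes.
// f is the acceleration at step i, fPrev the acceleration at step i-1
// (for the very first step, pass the same value twice); h is the timestep.
State pcStep(State s, double f, double fPrev, double h) {
    State out;
    out.v = s.v + 0.5 * h * (3.0 * f - fPrev);  // two-step explicit velocity
    out.p = s.p + 0.5 * h * (out.v + s.v);      // trapezoidal position
    return out;
}
```

With constant acceleration this collapses to v += h*f and p += h*v + 0.5*h^2*f, i.e. the exact constant-force update.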
From: Mark Harris <harrism@cs...> - 2001-01-04 07:00:59

Idaho,

Euler's method is simple to implement (this code is heavily based on the SIGGRAPH course notes, so it may look a little familiar ;).

// particles have Vec3's: vPosition and vVelocity
void ParticleSystem::EulerUpdate(float rDeltaT)
{
    int i, j;
    // of course in my real code I don't allocate this every time!
    Vec3 *pvDerivative = new Vec3[2 * GetNumActiveParticles()];

    // compute the derivative at each particle.
    // this consists of simply the velocity and acceleration.
    // Velocity should be stored per particle, and acceleration
    // is total force divided by mass.
    ParticleDerivative(pvDerivative);

    // euler: y_(i+1) = y_i + dydt_i * h
    //        dydt_(i+1) = dydt_i + d^2ydt^2_i * h
    for (j = 0; j < 2 * GetNumActiveParticles(); j++)
    {
        pvDerivative[j] *= rDeltaT;
    }

    i = 0;
    for (j = 0; j < GetNumActiveParticles(); j++)
    {
        pParticles[j]->vPosition += pvDerivative[i++];
        pParticles[j]->vVelocity += pvDerivative[i++];
    }
    rSimulationTime += rDeltaT;
}

RK4 is a bit more involved - the code is still compact, but it took me a couple of tries to get my head around it. The idea is that you are not just evaluating the derivative at the beginning of a timestep. You have to evaluate it, take a sub-timestep, then evaluate it again, etc., 4 times. Then you take a weighted average of the results of each of these mini-integrations (effectively Euler steps) to get the result. The effect is that although each step is a bit more expensive, you can take a larger timestep than with Euler without losing as much accuracy.

> How does K(i-1) factor into K(i) if Velocity (which would be F(x,t) in my
> situation I believe) only needs time?

I think these two questions are related (see below).

> My second question is: Is Runge-Kutta even valid if F(x, t) is
> constant? If I am doing projectile motion and I am trying to find
> velocity, and acceleration is constant and/or independent of time
> (changing t and x in F(x,t) makes no difference), can I still use
> Runge-Kutta? How? Are there any examples I can look at?

You can use Runge-Kutta anywhere you can use Euler, don't worry. ;) What I do is think of F(x, t) as just a black-box function that modifies the state of a particle (in fact, mine is F(Particle*), and can modify any state, from color to force, but that's another discussion). If you think of it this way, you realize that a force function (or functor) need not pay any attention to its arguments - you can have constant forces that always add the same amount of force to any particle. Gravity is such a force. It always adds g*m Newtons to the particles' force accumulator. Or if you are directly keeping track of acceleration, it just adds g m/s/s.

Here is my ParticleDerivative function used above, which I use in both my Euler and RK4 solvers.

void ParticleSystem::ParticleDerivative(Vec3 *derivArray)
{
    // zero the particles' force accumulators
    ZeroForces();
    // apply each "black box" operator to all particles (see above)
    ProcessOperators();

    int i = 0;
    for (int j = 0; j < GetNumActiveParticles(); j++)
    {
        derivArray[i++] = pParticles[j]->vVelocity;                     // dxdt = v
        derivArray[i++] = pParticles[j]->vForce / pParticles[j]->rMass; // dvdt = f/m
    }
}

Now, as you look into RK4 more, you'll see that the tricky part is evaluating the derivative multiple times per step: the derivative depends on particle state at any given time. Therefore, you must temporarily set the state of the particles after each "mini-Euler" step before evaluating the derivative for the next "mini-Euler" step. (Sorry for not also including the RK4 code as an example, but it's about twice as much code and this is getting long - if anyone wants it, I'll send it.)

I know I didn't directly answer your questions (I hope I didn't misunderstand them!), but by thinking about this and looking over the course notes more, I hope you can gain a better understanding.

Mark

PS: if you write your ODE integrators right, you can apply them to any ODE simulation you do - I've used mine both for particle systems and rigid-body simulation.

----- Original Message -----
From: "Idahosa Edokpayi" <idahosaedokpayi@...>
Date: Wednesday, January 3, 2001 11:23 pm
Subject: [Algorithms] physics integrators

> I realize that using fourth order Runge-Kutta methods for my
> particular particle system project is probably overkill and I'll
> probably just use Euler integration for now, but I am curious as it
> was mentioned in the siggraph notes someone was so kind to link me to
> earlier. I understand the principles and I can do the math. (I think :) )
> But I have two questions. K(i) is supposed to be dependent on K(i-1).
> Well for computing position this is a little difficult because velocity
> is independent of the previous position (at least it will be in my
> particle system). How does K(i-1) factor into K(i) if Velocity (which
> would be F(x,t) in my situation I believe) only needs time?
> My second question is: Is Runge-Kutta even valid if F(x, t) is
> constant? If I am doing projectile motion and I am trying to find
> velocity, and acceleration is constant and/or independent of time
> (changing t and x in F(x,t) makes no difference), can I still use
> Runge-Kutta? How? Are there any examples I can look at?
>
> Idaho
>
> _______________________________________________
> GDAlgorithms-list mailing list
> GDAlgorithms-list@...
> http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list
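[Editor's note] Mark offers to mail his RK4 code separately; for completeness, here is a generic sketch of the classic fourth-order step he describes - evaluate the derivative four times through sub-steps, then take a 1-2-2-1 weighted average. It is a standalone illustration with made-up names, not Mark's actual implementation.

```cpp
struct State { double x, v; };
struct Deriv { double dx, dv; };

typedef double (*AccelFn)(double x, double v, double t);

// The derivative of the state: dx/dt = v, dv/dt = a(x, v, t).
static Deriv eval(State s, double t, AccelFn accel) {
    Deriv d = { s.v, accel(s.x, s.v, t) };
    return d;
}

// Classic RK4: four derivative evaluations per step, each taken at a state
// advanced by the previous evaluation (this is the "temporarily set the
// state" bookkeeping Mark mentions), then a 1-2-2-1 weighted average.
State rk4Step(State s, double t, double h, AccelFn accel) {
    Deriv k1 = eval(s, t, accel);
    Deriv k2 = eval({ s.x + 0.5 * h * k1.dx, s.v + 0.5 * h * k1.dv }, t + 0.5 * h, accel);
    Deriv k3 = eval({ s.x + 0.5 * h * k2.dx, s.v + 0.5 * h * k2.dv }, t + 0.5 * h, accel);
    Deriv k4 = eval({ s.x + h * k3.dx, s.v + h * k3.dv }, t + h, accel);
    s.x += h / 6.0 * (k1.dx + 2.0 * k2.dx + 2.0 * k3.dx + k4.dx);
    s.v += h / 6.0 * (k1.dv + 2.0 * k2.dv + 2.0 * k3.dv + k4.dv);
    return s;
}
```

This also bears on Idaho's second question: with a constant acceleration all four velocity evaluations are equal, so RK4 reduces to the exact constant-force update - still valid, just wasteful.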
From: Akbar A. <syedali011@ea...> - 2001-01-04 06:10:59

>What's "butterfly"?

surface type. look up subdivision papers on google.com and you'll find a lot of stuff. surf through my links, i'm pretty sure there is some link to a paper on various surface types.
http://www.angelfire.com/ab3/nobody/links.html
caltech comes to mind...

laterz,
akbar A. ;vertexabuse.cjb.net
"necessity is the mother of strange bedfellows" - ninja scroll

-----Original Message-----
From: gdalgorithms-list-admin@... [mailto:gdalgorithms-list-admin@...] On Behalf Of Peter Bertok
Sent: Wednesday, January 03, 2001 9:57 PM
To: gdalgorithms-list@...
Subject: Re: [Algorithms] self-shadowing (was: Highest Quality Look for 3d Characters)

> > Agreed, definitely! Cloth & hair are my favorites.

Nice hair, but you seriously need some alpha on those eyelashes. 8D

> Ok I put a more decent snapshot online (warning, ~350K):
> http://www.codercorner.com/NextGen.jpg
>
> Now you know my secret project :)))

3D pr0n? 8D

> My ultimate goal is to:
> - animate the hair in a realistic way
> - put some cloth on her and animate it as well
> - mix skinning, VIPM & "on-the-fly Butterfly"

What's "butterfly"?

> - add realistic shadows
> - add IK

Do you have any links on doing IK bones in realtime 3D?

> - add interaction with the environment
>
> All parts are "easy" (I have all of them, except the last one); the
> challenge is to make them work together! One thing still blocking me is
> also the automatic generation of weights & bone attachments. It just
> doesn't work so far.

_______________________________________________
GDAlgorithms-list mailing list
GDAlgorithms-list@...
http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list
From: Thatcher Ulrich <tu@tu...> - 2001-01-04 06:09:56

On Jan 03, 2001 at 09:10 -0800, Matthew MacLaurin wrote:
> >> > IMHO, bumpmaps give the best results for the effort required to
> >> > implement them, and DX8 vertex/pixel shaders make them easier than ever.
>
> >> On the other hand, bumpmaps are problematic with multiple light sources,
>
> >> I'd rather have (when possible) dynamic tessellation and displacement
> >> mapping. I think that in the long term the look you can get in the
>
> > problems with tessellation/vertex-lighting as I see it. First, you
> > really need to get down to 1 triangle per pixel for it to look as good
> > as high-res textured lighting. And if you get much *more* than 1
>
> Nonsense. At much less than 1 tri per pixel, the additional geometric detail
> will have a powerful qualitative impact.

But unless you tessellate, the lighting won't be modulated down to the pixel level, which is what bump mapping gives you. Don't get me wrong; I'm not dissing displacement mapping or geometric detail. What I'm advocating is the complementary use of bump mapping and geometric detail, so that as you zoom into an object, the shading changes from low-poly with bump mapping to high-poly with real tessellated bumps (with little bump-mapped bumps on 'em to bite 'em, and so on ad infinitum...).

> Finally, I think we're talking about different generations. On current
> hardware, throwing a single bump map on top of the state-of-the-art is a
> fine thing to do. I meant more that longer term, hardware tessellation with
> displacement maps is a more general and effective technique.

Even with an infinite triangle rate, the aliasing issue remains: the triangle which happens to enclose a pixel's center determines its shade. I guess a chipset with infinite triangle rate would probably have a pretty great fill rate, and you could smother the aliasing with a lot of oversampling... given such hardware, I'll grant your point :)

-- Thatcher Ulrich <tu@...>
== Soul Ride - pure snowboarding for the PC - http://soulride.com
== real-world terrain - physics-based gameplay - no view limits
From: Idahosa Edokpayi <idahosaedokpayi@ms...> - 2001-01-04 04:25:07

I realize that using fourth order Runge-Kutta methods for my particular particle system project is probably overkill and I'll probably just use Euler integration for now, but I am curious, as it was mentioned in the siggraph notes someone was so kind to link me to earlier. I understand the principles and I can do the math. (I think :) ) But I have two questions. K(i) is supposed to be dependent on K(i-1). Well, for computing position this is a little difficult because velocity is independent of the previous position (at least it will be in my particle system). How does K(i-1) factor into K(i) if velocity (which would be F(x,t) in my situation, I believe) only needs time?

My second question is: is Runge-Kutta even valid if F(x, t) is constant? If I am doing projectile motion and I am trying to find velocity, and acceleration is constant and/or independent of time (changing t and x in F(x,t) makes no difference), can I still use Runge-Kutta? How? Are there any examples I can look at?

Idaho
From: Richard Benson <rbenson@ea...> - 2001-01-04 04:12:26

I wrote one which is called Particle Chamber. It has full source included. It isn't 100% ready to drop into an engine, as its code is more readable than optimized, but it shows a wide range of effects and allows you to change parameters in realtime. http://home.earthlink.net/~rbenson/ParticleChamber.zip

You can also find my source and some other helpful info on http://www.particlesystems.com

Richard Benson
DreamWorks Interactive - EALA

----- Original Message -----
From: Idahosa Edokpayi <idahosaedokpayi@...>
To: <gdalgorithms-list@...>
Sent: Tuesday, January 02, 2001 10:45 PM
Subject: [Algorithms] particle systems

> I have tried and tried to write my own particle system, and various things
> have conspired against me to frustrate my efforts. Does anybody have any
> advice and/or examples of how to implement a real particle system with some
> attempt to incorporate semi-realistic physics? The problems I ran into most
> were distributing my points (which I think I can solve now) and particles
> that didn't move correctly (when they did move, they vibrated up and down).
>
> Idaho
>
> _______________________________________________
> GDAlgorithms-list mailing list
> GDAlgorithms-list@...
> http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list