From: Peter Bertok <bertok@ge...> - 2001-03-06 21:27:08

----- Original Message -----
From: "Matt Harmon" <harmon@...>
To: <gdalgorithms-list@...>
Sent: Wednesday, March 07, 2001 6:05 AM
Subject: [Algorithms] realtime metal

> i am trying to get my robots in my (directx8 mostly) engine to look more
> chrome/metallic - similar to my 3d animator's render in Max.
>
> I cannot figure out what I need to do to get the effect. I'm trying to
> target the lowest speed machines as possible.
>
> (ps: my engine is not a game, just a "host" or "avatar" type app, so I have
> more polys per character since the environment is just an image)
>
> What's the trick?

Environment mapping should give a suitably chrome look. Spherical
environment mapping is easy to implement on any platform, but cubic
mapping looks a little better in many situations.

Anisotropic materials like brushed metal are much harder and require
custom vertex shaders, or tricks with cubic environment mapping to
simulate BRDFs (Bidirectional Reflectance Distribution Functions).

(That should be more than enough keywords for you to type into
http://www.google.com)
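The "easy to implement on any platform" spherical mapping Peter suggests boils down to one formula. Below is a minimal software sketch of the fixed-function sphere-map texgen math (the same thing OpenGL's GL_SPHERE_MAP computes per vertex); the struct and function names are mine, not from the thread:

```cpp
#include <cassert>
#include <cmath>

// Sphere-map texture-coordinate generation: reflect the eye-space view
// direction u about the vertex normal n, then project the reflection
// vector onto the sphere map.  This is the cheap "chrome" path that
// works even on fixed-function hardware.
struct Vec3 { double x, y, z; };

static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

void sphereMapUV(Vec3 u /*unit eye-space view dir*/,
                 Vec3 n /*unit eye-space normal*/,
                 double& s, double& t) {
    double d = dot(n, u);
    Vec3 r = { u.x - 2*d*n.x, u.y - 2*d*n.y, u.z - 2*d*n.z };  // reflection
    double m = 2.0 * std::sqrt(r.x*r.x + r.y*r.y + (r.z + 1)*(r.z + 1));
    s = r.x / m + 0.5;   // center of the sphere map is (0.5, 0.5)
    t = r.y / m + 0.5;
}
```

A surface facing the viewer head-on reflects straight back and lands at the center of the map, which is a handy sanity check.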
From: Mark Duchaineau <duchaine@ll...> - 2001-03-06 20:34:27

Tom Forsyth wrote:
> > From: Mark Duchaineau [mailto:duchaine@...]
[snip]
> > 2) The triangle bintree hierarchy has a very easy way to walk
> > the tree to get one generalized strip, which is ideal for
> > maximizing vertex-cache coherence on a GPU. This doesn't mean
> > that this order of vertices/triangles will agree with the
> > progression you had in mind for the vertex arrays.
> > It seems you are doomed to some kind of tradeoff here
> > (same story I believe with edge-collapse progressions and
> > strips). Tom, Charles: have you come up with any clever
> > ways to sidestep this tradeoff?
>
> Yes, there are plenty of methods - that's what my Gem is all about.

Hmmm... I'm skeptical, but it would be cool if you could get all of the
following at the same time:

0) O(1) cost to get to any random step in the progression
1) completely static index and vertex arrays (no writes when sliding the
   progression knob)
2) the progression order that came out of anyone's favorite optimizer,
   including by-hand
3) a very small number of array calls per progressive chunk (less than 10,
   hopefully much less)
4) at least modest triangle-count variation per chunk (>= sqrt(2) increase)
   before the chunks split
5) vertex-cache coherent order

When you and Charles filled me in a while back on progressive vertex
arrays, clearly the O(updates) cost of sliding was an issue. I went off in
a corner and worked out how to get (0-4) above, but (5) got lost for the
"interior" of the vertex array, which is dictated by the refinement order
in (2) unless you increase the number of calls (3) a lot.

So I'll re-ask my question this way: (a) are the properties I outline
above the same as what your Gem is about, and (b) in this context have you
managed to get (0-5)?

> > [snip] using AGP memory for geometry
> > where the CPU doesn't touch that information at all from frame to
> > frame (except to swap in/out a few percent of the blocks each
> > frame).
>
> The problem here is not how much you touch, but that you touch it _at all_.

Yes yes, we went through this discussion a long time ago. Are you implying
that you can't have multiple vertex arrays, and can't load up a few % of
them at the beginning of a frame with write-only ops? I'm talking about
completely static chunks, where some small % of the chunks get replaced
every frame. These all live in PC-side AGP memory. All you do is change
the first/last pointer in the array-drawing calls. I am NOT talking about
tweaking any part of a loaded chunk, ever. I'm pretty sure this is the
same *per chunk* as what you call "sliding window VIPM". I sure hope you
can swap a few % of chunks in/out per frame... I don't know what the D3D
calls look like, but OpenGL/NVIDIA supports this.

> > 7) Note that the VIPM discussions are centered on the
> > low-level details of what happens within a block, not on good
> > general algorithms for how to make optimal decisions about
> > the collection of output blocks. In a world where you have
> > a collection of objects in space, that is fine. But in a more
> > general setting the issues of blocking up large objects,
> > dealing with seams, maximizing coherence, and performing the
> > global optimization in a time budget each frame are hard
> > problems. The ROAM dual-queue, frustum culling, priority deferral
> > and stop-when-time-is-up ideas play well here.
>
> This has been discussed before - for large objects, you need some sort of
> meta-LoD anyway - animation granularity, texture paging, texture
> mipmapping, portalisation, material quality, lighting quality, etc. all
> need to be performed differently on different sections as you get closer
> to large objects. ROAM and VIPM are both far too low-level to be the only
> solution here, and as soon as you use a meta-LoD control on objects, you
> break stuff into chunks small enough for either method to be a contender.
>
> IMHO - this is still true of regular heightfield landscapes. As soon as
> you apply something like Thatcher's quadtree texturing, or things like
> detail maps that only happen to close-up things, or shadowing algorithms
> with different qualities at different distances, you break them into
> chunks as well, which levels out the playing field (oh dear - terrible
> metaphor in this context :)

Yes, there are obviously a lot of things going on at the higher levels of
an app, most of which are competing for limited CPU, memory, texture
passes, etc. I'm just saying that the ROAM paper covers a lot of the
critical mid-level optimizations in that bigger context. I see hierarchies
of chunks being needed in general, and ROAM-style optimization appears to
be effective at that level, including paging/texturing.

What other methods do a good job optimizing the order of LOD ops, exploit
frame-to-frame coherence (which is critical at the higher levels, like
paging decisions), and allow "time's up" abrupt halts to the processing
when you really don't want to drop a frame or tear? This is all the stuff
other than the triangle bintree things from ROAM, but think of them in a
bigger way and they play well.

Example: triangle bintrees turn out to be great for "binary" texture
hierarchies (factor-of-two increases in the number of texels per unit area
instead of factor-of-four for quadtrees), and the dual-queue optimizer is
ideal for prioritizing texture loads from disk, decompression, and
managing the video-memory cache. Going to the factor-of-two texture LOD is
slick because you can avoid having multiple textures and blends for LOD
transitions, because the LOD changes are twice as gradual.

Mark D.
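For readers who haven't seen it, the "split only" optimizer Mark mentions can be sketched in a few lines: keep a priority queue of triangles keyed by an error bound and greedily split the worst one until a budget is hit. This is an illustrative toy, not the paper's algorithm - the `height` function, the error metric, and all names are mine, and real ROAM adds a merge queue, frame-to-frame coherence, and forced neighbour splits to stay crack-free:

```cpp
#include <cassert>
#include <cmath>
#include <queue>
#include <vector>

// A minimal split-only refinement in the spirit of the ROAM triangle
// bintree.  Each triangle is (apex a, base b..c); splitting introduces
// the base midpoint as the apex of two children.
struct Tri { double ax, ay, bx, by, cx, cy; };

static double height(double x, double y) {       // stand-in heightfield
    return std::sin(x) * std::cos(y);
}

// "Wedgie thickness" stand-in: how far the true height at the base
// midpoint is from the interpolated height -- the error a split removes.
static double error(const Tri& t) {
    double mx = 0.5 * (t.bx + t.cx), my = 0.5 * (t.by + t.cy);
    return std::fabs(height(mx, my) -
                     0.5 * (height(t.bx, t.by) + height(t.cx, t.cy)));
}

// Always split the highest-error triangle until the triangle budget is
// reached (ROAM's "stop when time/budget is up" idea).
std::vector<Tri> refine(Tri root, size_t budget) {
    auto cmp = [](const Tri& l, const Tri& r) { return error(l) < error(r); };
    std::priority_queue<Tri, std::vector<Tri>, decltype(cmp)> q(cmp);
    q.push(root);
    while (q.size() < budget) {
        Tri t = q.top(); q.pop();
        double mx = 0.5 * (t.bx + t.cx), my = 0.5 * (t.by + t.cy);
        q.push({mx, my, t.ax, t.ay, t.bx, t.by});   // two bintree children
        q.push({mx, my, t.cx, t.cy, t.ax, t.ay});
    }
    std::vector<Tri> out;
    while (!q.empty()) { out.push_back(q.top()); q.pop(); }
    return out;
}
```

Each split pops one triangle and pushes two, so the queue grows by exactly one per iteration until the budget is met.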
From: Akbar A. <syedali011@ea...> - 2001-03-06 19:42:08

> What's the trick?
> I cannot figure out what I need to do to get the effect. I'm trying to
> target the lowest speed machines as possible.
> chrome/metallic

Use a metal-looking texture + texgen. Not this texture, though:
http://www.angelfire.com/ab3/nobody/alpha.png

If you go out and use a metallic-looking texture, I think you might be
able to get a decent-looking version of the effect you want (not metallic
in the screenshot, but like the demo):
http://www.angelfire.com/ab3/nobody/alpha.zip

This seems like one of the cheaper/less accurate ways to do it.

Hope this helps,
akbar A.

;vertexabuse.cjb.net or www.vertexabuse.com (cjb goes down once in a while)

"plastic or metallic. please pick your version of lighting ;)" - me

-----Original Message-----
From: gdalgorithms-list-admin@... [mailto:gdalgorithms-list-admin@...]
On Behalf Of Matt Harmon
Sent: Tuesday, March 06, 2001 1:06 PM
To: 'gdalgorithms-list@...'
Subject: [Algorithms] realtime metal

i am trying to get my robots in my (directx8 mostly) engine to look more
chrome/metallic - similar to my 3d animator's render in Max.

I cannot figure out what I need to do to get the effect. I'm trying to
target the lowest speed machines as possible.

(ps: my engine is not a game, just a "host" or "avatar" type app, so I have
more polys per character since the environment is just an image)

What's the trick?

Best,
Matt

_______________________________________________
GDAlgorithms-list mailing list
GDAlgorithms-list@...
http://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
From: Ratcliff, John <jratcliff@ve...> - 2001-03-06 19:27:45

Why not just set up a cubic environment map?

John

-----Original Message-----
From: Matt Harmon [mailto:harmon@...]
Sent: Tuesday, March 06, 2001 1:06 PM
To: 'gdalgorithms-list@...'
Subject: [Algorithms] realtime metal

i am trying to get my robots in my (directx8 mostly) engine to look more
chrome/metallic - similar to my 3d animator's render in Max.

I cannot figure out what I need to do to get the effect. I'm trying to
target the lowest speed machines as possible.

(ps: my engine is not a game, just a "host" or "avatar" type app, so I have
more polys per character since the environment is just an image)

What's the trick?

Best,
Matt
From: Matt Harmon <harmon@ui...> - 2001-03-06 19:07:45

i am trying to get my robots in my (directx8 mostly) engine to look more
chrome/metallic - similar to my 3d animator's render in Max.

I cannot figure out what I need to do to get the effect. I'm trying to
target the lowest speed machines as possible.

(ps: my engine is not a game, just a "host" or "avatar" type app, so I have
more polys per character since the environment is just an image)

What's the trick?

Best,
Matt
From: Charles Bloom <cbloom@cb...> - 2001-03-06 17:47:57

At 11:12 AM 3/6/2001 +0000, you wrote:
> The problem here is not how much you touch, but that you touch it _at all_.
> Touching the vertex array in any way at all can stomp on the card's
> parallelism.

Just so that others know: whenever you unlock a vertex buffer, the card
has to stall its pipeline. This is because the card's pipe is very deep,
and it all starts from the "pre-T&L cache", which is just a copy of some
vertex buffer memory in the GPU's local memory.

Now, this pre-T&L cache does NOT have all the fancy mechanisms that the
CPU cache has for knowing what's been changed in system memory, dirtying
cache lines and whatnot. In fact, there is no concept of a dirty cache
line. Thus, whenever AGP vertex buffer memory is changed (via Unlock), the
entire pre-T&L cache must be cleared. Of course, to clear the cache you
must wait for the pipeline to finish the work it's doing, which is what
leads to the full stall. This is why changing even one vertex in a VB per
frame will kill your performance (aside from the parallelism issue that
Tom pointed out).

--------------------------------------
Charles Bloom   cbloom@...   http://www.cbloom.com
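The cost difference between "touch a few verts" and "discard and rename the whole buffer" can be made concrete with a toy model. This is not driver code - `kLag`, the pool, and both functions are invented for illustration - but it shows why the buffer-renaming pattern (lock-with-discard) avoids the per-frame stall that partial updates force:

```cpp
#include <cassert>
#include <deque>

// Toy model: the GPU may still be reading a buffer for up to kLag frames
// after the CPU submitted it.  Writing into such a buffer forces the CPU
// to wait (a stall); a discard-style lock instead hands back a buffer
// from a free pool, so no wait is ever needed.
constexpr int kLag = 2;                       // frames of CPU/GPU overlap
struct Buffer { int lastFrameUsed = -1000; };

bool gpuMayStillRead(const Buffer& b, int frame) {
    return frame - b.lastFrameUsed <= kLag;
}

// Pattern A: modify a few verts of one persistent buffer every frame.
int runPartialUpdate(int frames) {
    Buffer b; int stalls = 0;
    for (int f = 0; f < frames; ++f) {
        if (gpuMayStillRead(b, f)) ++stalls;  // CPU waits for the GPU
        b.lastFrameUsed = f;                  // then writes and draws
    }
    return stalls;
}

// Pattern B: discard-style renaming with a pool of kLag+1 buffers.
int runDiscard(int frames) {
    std::deque<Buffer> pool(kLag + 1);
    int stalls = 0;
    for (int f = 0; f < frames; ++f) {
        Buffer b = pool.front(); pool.pop_front();
        if (gpuMayStillRead(b, f)) ++stalls;  // only if the pool is too small
        b.lastFrameUsed = f;
        pool.push_back(b);                    // recycled once the GPU is done
    }
    return stalls;
}
```

With a pool one deeper than the GPU's lag, pattern B never waits, while pattern A stalls on every frame after the first.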
From: Ignacio Castano <i6298@ho...> - 2001-03-06 17:35:55

Cem UZUNLAR wrote:
> which is the true solidness test for walkable areas?
> CONTENTS_SOLID
> or
> SURF_NONSOLID

I just do this:

    if(!(r_shaderrefs[brush->shaderref].content_flags & mask)) return false;

where mask is usually:

    #define MASK_PLAYERSOLID (CONTENTS_SOLID|CONTENTS_PLAYERCLIP|CONTENTS_BODY)

Note that for monsters, missiles, etc. it may be different. The
appropriate definitions are in the public Quake3 source code.

I recommend writing a trace function that returns a trace_t struct, which
would be something like this:

    typedef struct {
        bool  allsolid;
        bool  startsolid;
        float fraction;
        Vec3  endpos;
        Plane plane;
        int   surfaceFlags;
        int   contents;
    } trace_t;

There you return the fraction of the movement, the end position, the plane
that you have collided with, and the surface and content flags. The
surface flags are used for footsteps and some physics effects.

Ignacio Castano
castanyo@...
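The masking idiom Ignacio describes is plain bit intersection between a brush's content bits and the moving entity's clip mask. A self-contained illustration follows - the flag values here are placeholders for the sketch, not the real Quake3 constants:

```cpp
#include <cassert>

// Quake3-style content flags are bitmasks.  Values are illustrative;
// only the masking pattern matters.
enum : unsigned {
    CONTENTS_SOLID      = 1u << 0,
    CONTENTS_PLAYERCLIP = 1u << 1,
    CONTENTS_BODY       = 1u << 2,
    CONTENTS_WATER      = 1u << 3,
};

constexpr unsigned MASK_PLAYERSOLID =
    CONTENTS_SOLID | CONTENTS_PLAYERCLIP | CONTENTS_BODY;

// A brush blocks an entity if any of its content bits intersect the
// entity's clip mask; players, monsters and missiles just trace with
// different masks against the same brushes.
bool blocks(unsigned brushContents, unsigned mask) {
    return (brushContents & mask) != 0;
}
```

So player-clip brushes stop players but a water brush does not, without any per-entity special cases in the trace itself.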
From: Joakim Eriksson <jme@sn...> - 2001-03-06 17:27:05

I have been fighting for a long while with the problem of adding friction
to a ball so that it slides at the start, then rolls after a while, and
then stops rolling if it collides with anything. I need help. I have read
just about every paper out there that I could find, but I haven't gotten
much smarter. (That includes Baraff's, Jeff's and Chris Hecker's
papers/articles.)

The code builds on Jeff's friction code with some smaller adjustments.
'current' is the current body state. When the friction is applied I have
also added the line

    current->m_Torque += CrossProduct(current->m_LastCollisionPos, Vt);

to get the ball to roll because of the friction.

Then we have the problem with the natural roll. My solution has been to
take the magnitude of the position velocity and the rotation velocity
times the radius, and then do the following:

    if (rotVel > posVel)
        current->m_Force += ((current->m_PosVel / rb->m_OneOverMass) -
            CrossProduct(current->m_AngularMomentum,
                         current->m_LastCollisionPos)) * (rotVel - posVel);

This is to keep the ball moving in the current direction of the rolling
motion.

The problem is that when I get into the natural roll and I apply a force
to the side of the ball, the ball will decrease its speed but will
continue rolling in the last direction. I understand that this is because
I only apply a force and the natural roll condition still applies (i.e.
the ball is rotating faster than it is moving forward), so it will still
try to move in the direction of rotation. Another example is if I roll
into a wall: the ball will just keep on rolling without decreasing its
speed.

So to make the question short: how should I affect the torque of a rolling
ball so it behaves naturally?

Best regards,
Joakim E. - <www.planestate.net>
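For what it's worth, the slide-then-roll behaviour Joakim is after falls out of integrating just two equations while the contact point slips. A toy 1-D sketch, not his engine's integrator (all names are mine): for a solid sphere released with forward speed v0 and no spin, theory says it locks into natural roll at v = (5/7) v0, which the simulation reproduces.

```cpp
#include <cassert>
#include <cmath>

// Euler-integrated sliding -> rolling transition for a solid ball on a
// flat floor.  While the contact point slips (v > w*r), kinetic friction
// decelerates the linear velocity and its torque spins the ball up;
// once v == w*r the ball is in "natural roll" and (ignoring rolling
// resistance) friction stops doing work, so we stop integrating.
double rollOut(double v0, double r, double mu, double g, double dt,
               double& w /*out: final angular velocity*/) {
    double v = v0;
    w = 0.0;
    const double I_over_mr2 = 2.0 / 5.0;      // solid sphere: I = (2/5) m r^2
    while (v - w * r > 1e-9) {                // contact point still slipping
        v -= mu * g * dt;                               // a = -mu*g
        w += mu * g * r / (I_over_mr2 * r * r) * dt;    // alpha = tau/I
    }
    return v;
}
```

At the moment the slip vanishes, v and w*r agree, which is exactly the "natural roll condition" the post is wrestling with.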
From: kasper <kasper@pu...> - 2001-03-06 15:18:17

Eh, yes, I did that, but I changed it to a plane-to-two-planes (3-plane)
intersection test. Thanks anyway.

> Jack Mathews wrote:
>
> For each plane...
>
> Generate a huge quad for the plane.
> Clip it by the other planes.
>
> Hopefully this should help you get in the right direction.
>
> Jack
>
> -----Original Message-----
> From: Kasper J Wessing [mailto:k.j.wessing@...]
> Sent: Saturday, March 03, 2001 7:51 AM
> To: Game Development Algorithms
> Subject: [Algorithms] Vertex-Free Surface
>
> Does anyone know (or know some links) how to generate a convex
> polyhedron out of a set of planes? I want to use it for reducing my
> convex hull!
>
> I created a convex hull, and now I want to do "plane lifting"
> (see http://www.cs.ualberta.ca/~melax/hull/consimp.html) but I don't
> know how to reconstruct the neighboring faces!
>
> Thanks in Advance,
> Kas

--
Kasper J. Wessing - Research & Development
REMEMBER: IF IT LOOKS LIKE A LOT OF WORK - AVOID IT. TERRY GILLIAM
kasper@... - http://www.pulse.nl
pulse.interactive (progressive media)
wibautstraat 3 - 9th floor south - 1091 GH amsterdam - the netherlands
t: +31 (0)20 596 25 00 - f: +31 (0)20 596 25 03
pulse.interactive is a division of Xceed inc. (NASDAQ:XCED)
From: Graham Rhodes <grhodes@se...> - 2001-03-06 14:46:24

Chris,

I haven't scrutinized your approach and won't have time in the foreseeable
future, but I can perhaps offer additional sources of scene graph
literature. It's mainly documentation on proven implementations, but that
can give you insight into things that are known to work pretty well. In
some cases these architectures are outdated - they don't support
multitexturing, for example, or vertex blending. But here goes:

OpenGL Performer (previously IRIS Performer) documentation - *very*
thorough documentation available as PDF files:
http://www.sgi.com/software/performer/. Link to the "Developer
Information" section from the list on the left. These documents are quite
a good resource on scene graphs. There are sufficient examples that you
should be able to see why they did something a particular way. The
Intrinsic Alchemy game engine (www.intrinsic.com) was built by some folks
who were on SGI's Performer team, and I believe it is somewhat modeled
after Performer, so although Performer itself is old (nearly a decade?),
it did seem to form the basis for something brand new. (Performer is
available for free for Linux.)

Open Inventor - another SGI scene graph, see
http://www.sgi.com/software/inventor/. This one is actually *not* suitable
for games. I've used it since 1996 or so for some NASA work and found it
is just not good for large, real-time animated or simulated scenes. It's
more suitable for not-so-large still scenes, and for constructing level
editors or modeling software. *BUT* there is some documentation available.
There is a published book from Addison-Wesley called "The Inventor Mentor"
that is mainly an intro to Inventor but does also give some ideas about
scene graph structures.

Dave Eberly's book, "3D Game Engine Design", published *very* recently,
has coverage of scene graphs for games. Probably also some stuff on his
http://www.magicsoftware.com or http://www.wildmagic.com sites.

There are others, but these are some I have found useful, especially the
Performer stuff and Eberly's book.

Graham Rhodes

-----Original Message-----
From: gdalgorithms-list-admin@... [mailto:gdalgorithms-list-admin@...]
On Behalf Of Chris Brodie
Sent: Monday, March 05, 2001 10:30 PM
To: gdalgorithms-list@...
Subject: [Algorithms] My First Scene Graph

I'm attempting to take the next step forward from arrays of objects in my
engine to a scene graph. I have read David Eberly's chapter on the
subject. (Which, by the way, is all I could find on the subject; the net
seems overloaded with VRML stuff, which makes it difficult to find
anything else.)

http://members.pnc.com.au/~gimp/SceneGraph.zip

I have included a sort of template of how I intend on doing the
implementation in the link above (7K). The tree is constructed with CNode
as a backbone. Other objects are derived from this. In English, I have
these:

CNode
- Parent link
- Bounds
- List of objects
- List of states
- Static std::vector<CNode>, i.e. a database of nodes so we don't have
  nodes on the heap

CContainer
- Derived from CNode
- Like an octree node, including methods for culling, using the bounds
  inherited from CNode
- 8 child links

CLight, CTransform, CAnimation
- State classes. These are linked to a node and activated\deactivated as
  traversal passes them. Each state class will affect all sub-nodes in its
  own unique way, be that lighting objects, transforming them, or
  providing some time-based modification. I was considering having other
  states like CTexture, CBlend, etc., but I fear the amount of work that
  will be required pushing states up and down the tree. E.g. a dust storm
  consists of a number of particle systems, and a large branch is set as
  blend. Someone fires a missile through it: do we reset the blend state
  to off for the missile, or do we walk the tree and reset several blend
  states for a bunch of smaller trees? Sounds hard...

CMesh
- A mesh is a vertex array + indices + texture STs, etc. For now it has a
  'CSurface' denoting the texture and material state. This probably should
  be a normal node-derived state like the transform; until I can work out
  a good way to push states higher in the tree I'll just live with the
  rebind for each texture.

First of all, I don't know if this is a good or bad model; it's just what
I've come up with considering what I've read and my requirements.

Some things I'm concerned about are using STL vectors in the nodes for
objects\states. I fear that expanding a vector that's within the main
vector will cause a realloc... Perhaps these should be in another
location. The majority of the tree will consist of containers (octree
nodes). Perhaps I should just create them on the heap but use a manager to
pre-create a bunch of them so I can recycle previously created nodes. I'm
not too experienced with cache coherence, but I'd imagine having the nodes
in traversal order should speed things up a bit.

I have used a metric of 1.0f = 1m, but I'm unsure of a good size to stop
carving up the world at. I had planned on using a sort of Thatcher-ish
loose-octree type bound on each container (rather than a box I use
spheres), enlarging bounds by say 50-100% to help reduce the amount of
object movement. These two topics seem similar, as the size of the
base-level spheres will determine how often objects have to move between
containers. I guess the trade-off is guessing how often many objects will
be close enough that a loose cull will still cull enough to make it
worthwhile, vs. the cost of updating a tighter tree more often.

Should I render as I walk the tree, storing alpha as I go, then just
spatially sort the alpha and render? Should I just store all the objects
that are to be rendered, then depth-sort them in two lists
(opaque\transparent)? I get the feeling that I shouldn't be sorting
anything, but I can see how a loose-octree approach will give proper depth
sorting (and hence allow the z-buffer to be turned off). Should I just be
drawing everything as I traverse it into a z-buffer, then rendering the
alpha afterwards only?

Many thanks for any comments,
Chris
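One way to dodge the state-push/state-reset problem Chris worries about (the dust-storm/missile example) is to pass accumulated state down the traversal by value, so the "pop" happens automatically when the recursion unwinds and a state can never leak outside its subtree. A tiny sketch with an integer standing in for a full transform/blend state - the class names are illustrative, not his actual CNode design:

```cpp
#include <cassert>
#include <vector>

// Minimal scene-graph skeleton: each node carries a local state that is
// combined into the inherited state before its children are visited.
struct RenderState { int translate = 0; };   // stand-in for transform/blend

struct CNode {
    int localTranslate = 0;                  // "state" attached to this node
    std::vector<CNode*> children;
    std::vector<int> drawnAt;                // records world value at draw
};

// Passing st by value means each child sees the parent's accumulated
// state, and returning from the call is the implicit "state pop".
void traverse(CNode* n, RenderState st) {
    st.translate += n->localTranslate;
    n->drawnAt.push_back(st.translate);      // "render" this node
    for (CNode* c : n->children) traverse(c, st);
}
```

For heavier state objects a real engine would use an explicit push/pop stack instead of copying, but the scoping guarantee is the same.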
From: Joe Ante <Joe@Titansoft.de> - 2001-03-06 14:10:39

Mark,

> 2) The triangle bintree hierarchy has a very easy way to walk
> the tree to get one generalized strip, which is ideal for
> maximizing vertex-cache coherence on a GPU.

I am highly interested in how to create this one strip - specifically, how
do you create this one generalized strip incrementally?

Thanks in advance,

bye,
Joe
From: Tom Forsyth <tomf@mu...> - 2001-03-06 11:12:02

> From: Mark Duchaineau [mailto:duchaine@...]
>
> Chris and All,
>
> A couple of (hopefully non-religious :) points come to mind
> (hint, I tend to be on the ROAM side of the argument...):

We won't hold it against you :)

> 1) Much of the discussion on VIPM isn't really concerned
> with edge-collapse progressions so much as efficient ways
> to use vertex arrays, index lists and AGP memory for *any*
> flavor of progression. The vertex-array ideas work well
> with RTIN/triangle bintree hierarchies such as the
> one in the ROAM paper.

Indeed.

> 2) The triangle bintree hierarchy has a very easy way to walk
> the tree to get one generalized strip, which is ideal for
> maximizing vertex-cache coherence on a GPU. This doesn't mean
> that this order of vertices/triangles will agree with the
> progression you had in mind for the vertex arrays.
> It seems you are doomed to some kind of tradeoff here
> (same story I believe with edge-collapse progressions and
> strips). Tom, Charles: have you come up with any clever
> ways to sidestep this tradeoff?

Yes, there are plenty of methods - that's what my Gem is all about.

[snip]

> 5) The fastest "split only" optimizer I've written, measuring
> just the triangle creation and not the sending to the graphics
> hardware, cranks out >10 meg tris per second without any use of
> coherence on a 450MHz Pentium 3, without any blocking (that is,
> when optimizing individual triangles). If you replace each
> triangle by a block of "ROAM read-only progressive index+vertex
> arrays" then you get 100-1000x speedups with a few percent
> degradation in quality over the fine-grained optimizer. At this
> point of course you are sending triangles in the "Bloom/Forsyth
> recommended optimal form," i.e. using AGP memory for geometry
> where the CPU doesn't touch that information at all from frame to
> frame (except to swap in/out a few percent of the blocks each
> frame).

The problem here is not how much you touch, but that you touch it _at
all_. Touching the vertex array in any way at all can stomp on the card's
parallelism. The IHVs want you to drive their cards in one of two ways:

(1) No touching. Just render completely static data.

(2) No persistence. Just stream vertices through an AGP buffer. Every
vertex is written every frame, and the previous contents of the AGP buffer
are overwritten each time (which means the app doesn't care where the
buffer is put, so the driver can put it in an unused chunk).

With (1), the hardware can render with the same buffer each time, and no
one cares that the graphics card may be up to a frame and a half behind
where the app thinks it is, because the data has not changed. With (2),
the driver can simply move the buffer to an unused area of memory. As the
hardware finishes rendering with a buffer, it is returned to the unused
pool. Again, the hardware can be up to a frame and a half behind. But if
you modify a few verts per frame, that forces the CPU to wait for the card
to finish rendering with that buffer before allowing modifications.

I'm not saying you can't get this sort of stuff to go fast, but there are
a lot more pitfalls to watch out for.

> 6) As Lucas noted, we are happily using ROAM-like hierarchies
> on arbitrary manifolds.

Yes, that looks pretty interesting for dynamically-generated shapes (which
is where VIPM falls over). I'll have to have a closer look at this. ROAM
is always (mis)presented as a regular-heightfield method.

> 7) Note that the VIPM discussions are centered on the
> low-level details of what happens within a block, not on good
> general algorithms for how to make optimal decisions about
> the collection of output blocks. In a world where you have
> a collection of objects in space, that is fine. But in a more
> general setting the issues of blocking up large objects,
> dealing with seams, maximizing coherence, and performing the
> global optimization in a time budget each frame are hard
> problems. The ROAM dual-queue, frustum culling, priority deferral
> and stop-when-time-is-up ideas play well here.

This has been discussed before - for large objects, you need some sort of
meta-LoD anyway - animation granularity, texture paging, texture
mipmapping, portalisation, material quality, lighting quality, etc. all
need to be performed differently on different sections as you get closer
to large objects. ROAM and VIPM are both far too low-level to be the only
solution here, and as soon as you use a meta-LoD control on objects, you
break stuff into chunks small enough for either method to be a contender.

IMHO - this is still true of regular heightfield landscapes. As soon as
you apply something like Thatcher's quadtree texturing, or things like
detail maps that only happen to close-up things, or shadowing algorithms
with different qualities at different distances, you break them into
chunks as well, which levels out the playing field (oh dear - terrible
metaphor in this context :)

[snip]

> Mark D.

Tom Forsyth - Muckyfoot bloke.
What's he up to now? http://www.muckyfoot.com/startopia/cam.html
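Tom's option (1), completely static data, is also how sliding-window-style VIPM keeps the card happy: the index buffer is written once, ordered so that each level of detail is a contiguous range, and per-frame LoD selection changes only which range is handed to the draw call. A schematic sketch of that idea (my layout and names, not the actual Gem's):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// One immutable index buffer plus, per LoD level, the window of it to
// draw.  Changing detail never locks or writes a buffer -- only the two
// integers passed to the draw call change.
struct LodWindow { size_t first, count; };

struct StaticLodMesh {
    std::vector<unsigned> indices;     // written once, never touched again
    std::vector<LodWindow> levels;     // levels[0] = coarsest

    // The range that would be fed to glDrawElements /
    // DrawIndexedPrimitive each frame.
    LodWindow select(size_t level) const {
        if (level >= levels.size()) level = levels.size() - 1;  // clamp
        return levels[level];
    }
};
```

The whole per-frame LoD "update" is thus a table lookup, which is exactly the zero-touch behaviour the IHVs want.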
From: Slava Klimov <SlavaKlimov@gs...> - 2001-03-06 08:21:57

In your case the best solution is to use texture coordinates computed in
camera space, and a specially generated texture with faded alpha (a
90-degree sector in general) on the second stage. The method was first
described by Brian Hook; you can find it in the archives.

_____________________________________________________________
Slava Klimov, lead programmer
URL: http://www.gscgame.com
email: SlavaKlimov@...
From: Scott Bean <Scott@MadFX.com>  20010306 04:23:28

Wow, this helps lots. This seems easier to understand than some of the papers or pdf's i've been reading. To the black board I go with a chalk in both hands... Thanks Eric  Original Message  From: "Eric Lengyel" <lengyel@...> To: <gdalgorithmslist@...> Sent: Monday, March 05, 2001 10:03 PM Subject: Re: [Algorithms] DotProduct3 bumpmapping question > I happen to be writing about this topic at the moment, so I thought I'd dump > some info here for you. A more detailed explanation of what I've included > below will be published later this year in the book "Mathematics for 3D Game > Programming and Computer Graphics". > > As has already been said, a bump map stores normal vectors in such a way > that the vector pointing straight up (0,0,1) corresponds to the iterated > normal vector across the face of a triangle being rendered. In order to > calculate a useful dot product between the bump map supplied normal and the > direction to the light source at each pixel, we need to transform that light > direction into the space in which the normal vector always points along > (0,0,1). This space is called tangent space or vertex space. > > In order to understand how to find a 3x3 matrix which transforms vectors > from object space into tangent space, we consider the more intuitive problem > of transforming vectors from tangent space into object space. Since the > normal vector at a vertex corresponds to (0,0,1), we know that the zaxis of > our tangent space always gets mapped to that vertex's normal vector. > > A bump map may be applied to a triangle face in any orientation. We want > our tangent space to be aligned such that the xaxis corresponds to the s > direction in the bump map and the yaxis corresponds to the t direction in > the bump map. 
That is, if P represents a point inside the triangle, we > would like to be able to write > > P  Pa = (s  sa) * F + (t  ta) * G , > > where F and G are tangent vectors aligned to the texture map and Pa, sa, and > ta are the 3D coordinates and texture coordinates at one of the vertices of > the triangle. Finding the vectors F and G actually isn't that hard. > Suppose we have a triangle whose vertices are given by the points Pa, Pb, > and Pc and whose texture coordinates are given by (sa,ta), (sb,tb), and > (sc,tc). We'll do everything relative to Pa, so let > > P1 = Pb  Pa > and > P2 = Pc  Pa. > > Also, let > > (s1,t1) = (sb  sa, tb  ta) > and > (s2,t2) = (sc  sa, tc  ta). > > Then we need to solve the following equations for F and G. > > P1 = s1 * F + t1 * G > P2 = s2 * F + t2 * G > > This is a linear system with six unknowns (three for each F and G), and we > have six equations (the x, y, and z components of the two equations above). > We can write this in matrix form as follows. > > [ P1x P1y P1z ] [ s1 t1 ] [ Fx Fy Fz ] > [ ] = [ ] [ ] > [ P2x P2y P2z ] [ s2 t2 ] [ Gx Gy Gz ] > > Multiplying both sides by the inverse of the (s,t) matrix, we have > > [ Fx Fy Fz ] 1 [ t2 t1 ] [ P1x P1y P1z ] > [ ] =  [ ] [ ] . > [ Gx Gy Gz ] s1*t2  s2*t1 [ s2 s1 ] [ P2x P2y P2z ] > > This gives us the (unnormalized) F and G tangent vectors for the triangle > PaPbPc. To find the tangent vectors for a single *vertex*, we just > average together to tangents for all of the triangles which share that > vertex (similar to the way vertex normals are commonly calculated), and then > normalize them. In the case that neighboring triangles have discontinuous > texture mapping, vertices along the border are generally already duplicated > since they have different mapping coordinates anyway. We do not average > tangents from such triangles together since the result would be bogus. 
> Now, once we have the normal vector N and the tangent vectors F and G for a
> vertex, we can transform from tangent space into object space using the
> matrix
>
> [ Fx Gx Nx ]
> [          ]
> [ Fy Gy Ny ]   (*)
> [          ]
> [ Fz Gz Nz ]
>
> To transform in the opposite direction (from object space to tangent space,
> which is what we want to do to the light direction), we can simply use the
> inverse of this matrix. Now, it is not necessarily true that the tangent
> vectors are orthogonal to each other or to the normal vector, so the inverse
> of this matrix will not generally be its transpose. It's safe to assume,
> however, that the three vectors will at least be *close* to orthogonal, so
> using Gram-Schmidt to orthogonalize them shouldn't cause any unacceptable
> distortions. Using this process, new (unnormalized) F and G vectors are
> given by
>
> F' = F - (N dot F) N
> G' = G - (N dot G) N - (F' dot G) F'
>
> Normalizing these vectors and storing them as the tangent and binormal for a
> vertex lets one use the matrix
>
> [ Fx' Fy' Fz' ]
> [             ]
> [ Gx' Gy' Gz' ]
> [             ]
> [ Nx  Ny  Nz  ]
>
> to transform the direction to light from object space into tangent space.
> Taking the dot product of the transformed light direction with a sample from
> the bump map then produces the correct Lambertian diffuse lighting value.
>
> If you don't want to have an extra array which stores the per-vertex
> binormal, you can use the cross product N x F' to obtain mG', where m is
> positive or negative one. You need to store the handedness of the basis
> since G' obtained from this cross product may point in the wrong direction.
> The value of m is positive one if the determinant of the matrix in equation
> (*) above is positive, and negative one if that determinant is negative. I
> find it convenient to store this sign in the w-coordinate of the tangent
> vector F' and then use (N x F') * Fw' to obtain G'. This works nicely in
> vertex programs without having to specify an extra array.
> -- Eric Lengyel
>
> _______________________________________________
> GDAlgorithms-list mailing list
> GDAlgorithms-list@...
> http://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
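[Editorial note] The 2x2 inverse Eric derives above drops almost directly into code. A minimal sketch follows; the `Vec3` struct and the function name are illustrative, not from the post, and the function returns false for a degenerate (s,t) mapping:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Unnormalized tangents F (s direction) and G (t direction) for the triangle
// Pa-Pb-Pc, via  [F; G] = 1/(s1*t2 - s2*t1) [t2 -t1; -s2 s1] [P1; P2].
bool computeTriangleTangents(const Vec3& Pa, const Vec3& Pb, const Vec3& Pc,
                             float sa, float ta, float sb, float tb,
                             float sc, float tc, Vec3& F, Vec3& G)
{
    Vec3 P1 = sub(Pb, Pa);
    Vec3 P2 = sub(Pc, Pa);
    float s1 = sb - sa, t1 = tb - ta;
    float s2 = sc - sa, t2 = tc - ta;

    float det = s1 * t2 - s2 * t1;
    if (std::fabs(det) < 1e-8f) return false;  // degenerate texture mapping

    float r = 1.0f / det;
    F = { r * (t2 * P1.x - t1 * P2.x),
          r * (t2 * P1.y - t1 * P2.y),
          r * (t2 * P1.z - t1 * P2.z) };
    G = { r * (s1 * P2.x - s2 * P1.x),
          r * (s1 * P2.y - s2 * P1.y),
          r * (s1 * P2.z - s2 * P1.z) };
    return true;
}
```

Per-vertex tangents then come from summing these over the triangles sharing each vertex and normalizing, as described in the post.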
From: Scott Bean <Scott@MadFX.com> - 2001-03-06 03:37:06

Well, you were 100% right about the method I was trying out with projected cubic normal maps relative to my 1 directional light source! It looked perfect when I was flying around my object, which was wrapped cylindrically and by fluke stayed in texture space since I was pointing directly at it. So when I flew upside down, I was not aligned with the texture space per se and my bumps were inverted... that's when I totally realized I had forgotten to take the uv mapping into account... like you said, it would be fine for a wall or a FLAT projected texture on any geometry, but not for "per-vertex texture space" I guess I can call it... Yep, back to the drawing board.. Oh well, at least I can use that projected normal cubic map for good toon shading :)

Later..

Scott Bean

----- Original Message -----
From: "Tom Forsyth" <tomf@...>
To: <gdalgorithms-list@...>
Sent: Monday, March 05, 2001 8:09 AM
Subject: RE: [Algorithms] DotProduct3 bumpmapping question

> The light vector you pass in is not in any sort of sensible "space" in the
> conventional sense. It is in a weird space where +ve X points along the line
> of constant texture U, +ve Y points along the line of constant texture V,
> and +ve Z points perpendicular to them both (which is often quite close to
> the normal). So if the texture is mapped upside down, then this space
> changes handedness. It's pretty funky, and I don't claim to have any good
> intuitive grasp of what it looks like. I just do the maths.
>
> Actually, I'm not entirely sure that what I've said above is right. It might
> be that +ve X points perpendicular to the line of constant texture V, and
> +ve Y points perpendicular to the line of constant U. When the texture
> coords are mapped at right angles to each other, the results are the same,
> but I can never remember what is meant to happen if the lines of constant U
> and constant V are, say, 45 degrees to each other. As I say, I just do the
> maths and make sure the results are correct.
> It's the old "ensuring an even number of sign errors" thing. :)
>
> Tom Forsyth -- Muckyfoot bloke.
> What's he up to now?
> http://www.muckyfoot.com/startopia/cam.html
>
> > -----Original Message-----
> > From: Scott Bean [mailto:Scott@...]
> > Sent: 05 March 2001 11:38
> > To: gdalgorithms-list@...
> > Subject: [Algorithms] DotProduct3 bumpmapping question
> >
> > I've looked at most of the Dot3 bump samples from Nvidia or
> > other sites and I'm still confused about where the mesh's UV
> > coordinates come into play for the normal map. Now if the normal
> > texture is mapped upside down, this changes everything. So I assume
> > everything must be done in texture space, even if the texture is
> > mapped to a funky geometric mesh? Where exactly does the way the
> > Normal texture is mapped come into play?
> >
> > Is the way the texture is mapped combined somehow with, say, the
> > vertex normal in light space or something, then a color vector is
> > created? In effect killing two birds with one stone? Then DOT3ed
> > with the light vector in the TFACTOR as a color value?
> >
> > I'm confused as you can see... As usual... haha
>
> _______________________________________________
> GDAlgorithms-list mailing list
> GDAlgorithms-list@...
> http://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
From: Chris Brodie <Chris.Brodie@ma...> - 2001-03-06 03:30:26

I'm attempting to take the next step forward from arrays of objects in my engine to a scene graph. I have read David Eberly's chapter on the subject. (Which, by the way, is all I could find on the subject; the net seems overloaded with VRML stuff, which makes it difficult to find anything else.)

http://members.pnc.com.au/~gimp/SceneGraph.zip

I have included a sort of template of how I intend on doing the implementation in the link above (7K). The tree is constructed with CNode as a backbone. Other objects are derived from this. In English, I have these:

CNode
- Parent link
- Bounds
- List of objects
- List of states
- Static std::vector<CNode>, i.e. a database of nodes so we don't have nodes on the heap

CContainer
- Derived from CNode
- Like an octree node, including methods for culling, using the bounds inherited from CNode
- 8 child links

CLight, CTransform, CAnimation
- State classes. These are linked to a node, and Activated\Deactivated as traversal passes them. Each state class will affect all subnodes in its own unique way, that being lighting objects or transforming them or providing some time-based modification.
- I was considering having other states like CTexture, CBlend etc., but I fear the amount of work that will be required pushing states up and down the tree. E.g. a dust storm consists of a number of particle systems and a large branch is set as blend. Someone fires a missile through it; do we reset the blend state to off for the missile, or do we walk the tree and reset several blend states for a bunch of smaller trees? Sounds hard...

CMesh
- A mesh is a vertex array + indices + texture STs, etc. For now it has a 'CSurface' denoting the texture and material state.
This probably should be a normal node-derived state like the transform; until I can work out a good way to push states higher in the tree, I'll just live with the rebind for each texture.

First of all, I don't know if this is a good or bad model; it's just what I've come up with considering what I've read and my requirements. Some things I'm concerned about are using STL vectors in the nodes for objects\states. I fear that expanding a vector that's within the main vector will cause a realloc... Perhaps these should be in another location. The majority of the tree will consist of Containers (octree nodes). Perhaps I should just create them on the heap but use a manager to precreate a bunch of them so I can recycle previously created nodes. I'm not too experienced with cache coherence, but I'd imagine having the nodes in traversal order should speed things up a bit.

I have used a metric of 1.0f = 1m, but I'm unsure of a good size to stop carving up the world at. I had planned on using a sort of Thatcher'ish loose-octree type bound on each container (rather than a box I use spheres), enlarging bounds by say 50-100%, to help reduce the amount of object movement. These two topics seem similar, as the size of the base-level spheres will determine how often objects have to move between containers. I guess the trade-off is guessing how often many objects will be close enough that a loose cull will still cull enough to make it worthwhile, vs. the cost of updating a tighter tree more often.

Should I render as I walk the tree, storing alpha as I go, then just spatially sort the alpha and render? Should I just store all the objects that are to be rendered, then depth sort them in two lists (opaque\transparent)? I get the feeling that I shouldn't be sorting anything, but I can't see how a loose octree approach will give proper depth sorting (and hence allow the z-buffer to be turned off).
Should I just be drawing everything as I traverse it into a z-buffer, then rendering only the alpha afterwards?

Many thanks for any comments,

Chris
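[Editorial note] Chris's CNode/CContainer layout could be sketched roughly as below. Member details and the `allocNode` helper are guesses from the post, not his actual code; the index-based pool is one way to realize the "static std::vector of nodes" idea while sidestepping the realloc worry he raises, since indices stay valid when the vector grows:

```cpp
#include <vector>

// Sphere bound, per the post's preference for spheres over boxes.
struct Bounds { float center[3] = {0, 0, 0}; float radius = 0; };

// Backbone node. Links are indices into a shared pool instead of raw
// pointers, which keeps them valid when the pool's vector reallocates.
struct CNode {
    int parent = -1;          // -1 for the root
    Bounds bounds;
    std::vector<int> objects; // handles to attached renderables
    std::vector<int> states;  // handles to CLight/CTransform/CAnimation
};

// Octree-style container with 8 child links, as described.
struct CContainer : CNode {
    int child[8] = { -1, -1, -1, -1, -1, -1, -1, -1 };
};

// Shared pool ("database of nodes so we don't have nodes on the heap");
// storing CContainer directly avoids slicing the derived members away.
std::vector<CContainer> g_nodePool;

int allocNode(int parent) {
    CContainer n;
    n.parent = parent;
    g_nodePool.push_back(n);
    return (int)g_nodePool.size() - 1;
}
```

Allocating the pool in traversal order also gives the cache-friendly layout the post hopes for.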
From: Scott Bean <Scott@MadFX.com> - 2001-03-06 03:23:17

Cool, we seem to be on the same page. This clears things up nicely. I agree there are many flavors of doing this, and I'll sit down and think hard on which will be better and which can eventually be done nicely with vertex shaders.

> 3. Compute the uv gradient vectors for each triangle
> 4. Sum the gradients at each vertex for every triangle that shares it.
> (similar to calculating smooth normals)

Do any of the NVidia samples or others do this? That's exactly what I was afraid I would have to do, but I could never figure out if any of the samples were doing it.

Thanks Evan :)

----- Original Message -----
From: "Evan Hart" <ehart@...>
To: <gdalgorithms-list@...>
Sent: Monday, March 05, 2001 8:39 PM
Subject: RE: [Algorithms] DotProduct3 bumpmapping question

> Scott,
>
> Generally the way you deal with the problems I think you are concerned
> about is by creating a tangent space across the surface. The tangent space
> allows the normal map to have +Z (or whatever your convention is) point away
> from the surface. To calculate tangent space for an arbitrarily mapped
> surface, do the following:
>
> 1. Create surface normals obeying any faceting rules.
> 2. Separate the vertices such that each has unique tex coords and normal
> 3. Compute the uv gradient vectors for each triangle
> 4. Sum the gradients at each vertex for every triangle that shares it.
> (similar to calculating smooth normals)
> 5. Possibly orthonormalize the bases (don't change the normal)
>
> Now you have a normal, tangent, and binormal. When you bump map, transform
> the light vector at each vertex into the space defined by the normal,
> tangent, and binormal. This vector is then interpolated across the surface
> for the dp3 operation. You can use a cube map to deal with denormalization.
>
> There are many flavors of this whole process, and I do not profess to be an
> expert on all of them.
> One twist is that you use the process above to only create the tangent in
> the direction of positive U (or V) and then cross it with the normal to get
> the binormal.
>
> Finally, there is another, uglier way to do this. If your texels are all
> unique, you can map them all to object space rather than mapping the light
> to surface space.
>
> Hope this helps.
>
> Evan
>
> -----Original Message-----
> From: Scott Bean [mailto:Scott@...]
> Sent: Monday, March 05, 2001 7:59 PM
> To: gdalgorithms-list@...
> Subject: Re: [Algorithms] DotProduct3 bumpmapping question
>
> > yeah, but you have to remember, all these calculations are being done in
> > texture space.
>
> See akbar, unless I'm not following right, texture space is WARPED. If the
> geometry is complex and the UV wrap is warped and reversed and all weird,
> then there is no real texture space. And if you mean the light vector to be
> rotated into the current vertex's UV coord texture space, then how do you do
> that...
>
> For example, would your bump mapping implementation work with a torus object
> mapped properly and tiled say 8 times with a reversed (flip) wrap addressing
> mode? See, at each unit the texture space is changed... this is what I'm
> having problems with...
>
> _______________________________________________
> GDAlgorithms-list mailing list
> GDAlgorithms-list@...
> http://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
From: Eric Lengyel <lengyel@C4Engine.com> - 2001-03-06 03:03:35

I happen to be writing about this topic at the moment, so I thought I'd dump some info here for you. A more detailed explanation of what I've included below will be published later this year in the book "Mathematics for 3D Game Programming and Computer Graphics".

As has already been said, a bump map stores normal vectors in such a way that the vector pointing straight up (0,0,1) corresponds to the iterated normal vector across the face of a triangle being rendered. In order to calculate a useful dot product between the bump-map-supplied normal and the direction to the light source at each pixel, we need to transform that light direction into the space in which the normal vector always points along (0,0,1). This space is called tangent space or vertex space.

In order to understand how to find a 3x3 matrix which transforms vectors from object space into tangent space, we consider the more intuitive problem of transforming vectors from tangent space into object space. Since the normal vector at a vertex corresponds to (0,0,1), we know that the z-axis of our tangent space always gets mapped to that vertex's normal vector.

A bump map may be applied to a triangle face in any orientation. We want our tangent space to be aligned such that the x-axis corresponds to the s direction in the bump map and the y-axis corresponds to the t direction in the bump map. That is, if P represents a point inside the triangle, we would like to be able to write

P - Pa = (s - sa) * F + (t - ta) * G ,

where F and G are tangent vectors aligned to the texture map and Pa, sa, and ta are the 3D coordinates and texture coordinates at one of the vertices of the triangle. Finding the vectors F and G actually isn't that hard. Suppose we have a triangle whose vertices are given by the points Pa, Pb, and Pc and whose texture coordinates are given by (sa,ta), (sb,tb), and (sc,tc). We'll do everything relative to Pa, so let

P1 = Pb - Pa
and
P2 = Pc - Pa.
Also, let

(s1,t1) = (sb - sa, tb - ta)
and
(s2,t2) = (sc - sa, tc - ta).

Then we need to solve the following equations for F and G.

P1 = s1 * F + t1 * G
P2 = s2 * F + t2 * G

This is a linear system with six unknowns (three each for F and G), and we have six equations (the x, y, and z components of the two equations above). We can write this in matrix form as follows.

[ P1x P1y P1z ]   [ s1 t1 ] [ Fx Fy Fz ]
[             ] = [       ] [          ]
[ P2x P2y P2z ]   [ s2 t2 ] [ Gx Gy Gz ]

Multiplying both sides by the inverse of the (s,t) matrix, we have

[ Fx Fy Fz ]          1        [  t2 -t1 ] [ P1x P1y P1z ]
[          ] = --------------- [         ] [             ] .
[ Gx Gy Gz ]   s1*t2 - s2*t1   [ -s2  s1 ] [ P2x P2y P2z ]

This gives us the (unnormalized) F and G tangent vectors for the triangle Pa-Pb-Pc. To find the tangent vectors for a single *vertex*, we just average together the tangents for all of the triangles which share that vertex (similar to the way vertex normals are commonly calculated), and then normalize them. In the case that neighboring triangles have discontinuous texture mapping, vertices along the border are generally already duplicated since they have different mapping coordinates anyway. We do not average tangents from such triangles together since the result would be bogus.

Now, once we have the normal vector N and the tangent vectors F and G for a vertex, we can transform from tangent space into object space using the matrix

[ Fx Gx Nx ]
[          ]
[ Fy Gy Ny ]   (*)
[          ]
[ Fz Gz Nz ]

To transform in the opposite direction (from object space to tangent space, which is what we want to do to the light direction), we can simply use the inverse of this matrix. Now, it is not necessarily true that the tangent vectors are orthogonal to each other or to the normal vector, so the inverse of this matrix will not generally be its transpose. It's safe to assume, however, that the three vectors will at least be *close* to orthogonal, so using Gram-Schmidt to orthogonalize them shouldn't cause any unacceptable distortions.
Using this process, new (unnormalized) F and G vectors are given by

F' = F - (N dot F) N
G' = G - (N dot G) N - (F' dot G) F'

Normalizing these vectors and storing them as the tangent and binormal for a vertex lets one use the matrix

[ Fx' Fy' Fz' ]
[             ]
[ Gx' Gy' Gz' ]
[             ]
[ Nx  Ny  Nz  ]

to transform the direction to light from object space into tangent space. Taking the dot product of the transformed light direction with a sample from the bump map then produces the correct Lambertian diffuse lighting value.

If you don't want to have an extra array which stores the per-vertex binormal, you can use the cross product N x F' to obtain mG', where m is positive or negative one. You need to store the handedness of the basis since G' obtained from this cross product may point in the wrong direction. The value of m is positive one if the determinant of the matrix in equation (*) above is positive, and negative one if that determinant is negative. I find it convenient to store this sign in the w-coordinate of the tangent vector F' and then use (N x F') * Fw' to obtain G'. This works nicely in vertex programs without having to specify an extra array.

-- Eric Lengyel
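[Editorial note] The Gram-Schmidt step and the handedness trick above can be sketched as follows. `Vec3` and the function names are illustrative, not from the post; the sign test uses (N x F') . G, whose sign matches the determinant of the matrix (*) by a cyclic permutation of the scalar triple product:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static Vec3 scale(const Vec3& v, float s) { return { v.x*s, v.y*s, v.z*s }; }
static Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
static Vec3 normalize(const Vec3& v) { return scale(v, 1.0f / std::sqrt(dot(v, v))); }

// Gram-Schmidt the averaged tangent F against the (unit) normal N, and
// compute the handedness m so the binormal can be rebuilt as (N x F') * m.
void orthogonalizeTangent(const Vec3& N, const Vec3& F, const Vec3& G,
                          Vec3& Fp, float& m)
{
    Fp = normalize(sub(F, scale(N, dot(N, F))));  // F' = F - (N dot F) N
    // (N x F') . G has the same sign as the determinant of (*).
    m = (dot(cross(N, Fp), G) < 0.0f) ? -1.0f : 1.0f;
}

// Rebuild the binormal from N, F', and the stored sign (the w-coordinate trick).
Vec3 rebuildBinormal(const Vec3& N, const Vec3& Fp, float m)
{
    return scale(cross(N, Fp), m);
}
```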
From: Arn <arn@bl...> - 2001-03-06 01:40:17

Jason Zisk wrote:
>
> I almost didn't use the stencil+shadow map method because of that exact
> problem. I did eventually come up with an idea to get around it though.
> [..]
> But you know what? I decided to just slap the stencil shadows on top of the
> shadow maps to see what it looked like and it was fine. I even had a bunch

From the shot you posted I'll agree with you. The stencil+shadow overlap isn't really noticeable at all! Hmm, I'll have to try it now too, thanks!

Arn
From: Evan Hart <ehart@at...> - 2001-03-06 01:39:52

Scott,

Generally the way you deal with the problems I think you are concerned about is by creating a tangent space across the surface. The tangent space allows the normal map to have +Z (or whatever your convention is) point away from the surface. To calculate tangent space for an arbitrarily mapped surface, do the following:

1. Create surface normals obeying any faceting rules.
2. Separate the vertices such that each has unique tex coords and normal
3. Compute the uv gradient vectors for each triangle
4. Sum the gradients at each vertex for every triangle that shares it.
(similar to calculating smooth normals)
5. Possibly orthonormalize the bases (don't change the normal)

Now you have a normal, tangent, and binormal. When you bump map, transform the light vector at each vertex into the space defined by the normal, tangent, and binormal. This vector is then interpolated across the surface for the dp3 operation. You can use a cube map to deal with denormalization.

There are many flavors of this whole process, and I do not profess to be an expert on all of them. One twist is that you use the process above to only create the tangent in the direction of positive U (or V) and then cross it with the normal to get the binormal.

Finally, there is another, uglier way to do this. If your texels are all unique, you can map them all to object space rather than mapping the light to surface space.

Hope this helps.

Evan

-----Original Message-----
From: Scott Bean [mailto:Scott@...]
Sent: Monday, March 05, 2001 7:59 PM
To: gdalgorithms-list@...
Subject: Re: [Algorithms] DotProduct3 bumpmapping question

> yeah, but you have to remember, all these calculations are being done in
> texture space.

See akbar, unless I'm not following right, texture space is WARPED. If the geometry is complex and the UV wrap is warped and reversed and all weird, then there is no real texture space. And if you mean the light vector to be rotated into the current vertex's UV coord texture space, then how do you do that...
For example, would your bump mapping implementation work with a torus object mapped properly and tiled say 8 times with a reversed (flip) wrap addressing mode? See, at each unit the texture space is changed... this is what I'm having problems with...

_______________________________________________
GDAlgorithms-list mailing list
GDAlgorithms-list@...
http://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
From: Scott Bean <Scott@MadFX.com> - 2001-03-06 00:56:55

> yeah, but you have to remember, all these calculations are being done in
> texture space.

See akbar, unless I'm not following right, texture space is WARPED. If the geometry is complex and the UV wrap is warped and reversed and all weird, then there is no real texture space. And if you mean the light vector to be rotated into the current vertex's UV coord texture space, then how do you do that...

For example, would your bump mapping implementation work with a torus object mapped properly and tiled say 8 times with a reversed (flip) wrap addressing mode? See, at each unit the texture space is changed... this is what I'm having problems with...