Re: [Algorithms] VIPM With T&L - what about roam
From: Mark D. <duc...@ll...> - 2000-09-15 16:29:33
Tom Forsyth wrote:
> > From: Mark Duchaineau [mailto:duc...@ll...]
> > [snip]
> >
> > b) I don't see why you want to use per-vert lighting with
> > geometry LOD. In my experience this gives horrible
> > artifacts. [snip]. With a second cube lookup after the first you can
> > get specular for the infinite number of light sources (right now
> > no hardware that I know of allows this second lookup
> > on the results of the first--paletted textures are a
> > total hack as a substitute). You would need some
> > blending to approximate local lighting as you
> > move from one region of geometry to the next,
> > but I think that could work fairly well. I just don't see
> > why the L in T&L is at all useful in the long run. Another
> > nice thing about putting all your lighting data in the
> > textures is that you don't have to send it over the
> > saturated AGP bus.
>
> Ideally, you want a texture-based lighting approach, but with the hard work
> done by the graphics card, not the CPU. I use bumpmapping (of any of the
> three types), for exactly the reasons you state - it pops far less - but
> ideally I'd want to pass the normal vector (or some analogous info) into the
> card and have it compute the necessary UV offsets, etc. So there is still
> some sort of shading info that needs sending to the card, which is all we
> care about for bandwidth calculations.

I was referring to a bump-mapping method I used in a fast software-only
renderer I wrote several years ago; it will map well to graphics hardware in
the near future, but does not today. In this method literally all the
lighting information is static and lives in textures; only purely geometric
information is stored at the vertices. The missing piece of hardware is a
second full-blown texture lookup that takes as input the results of the
first texture lookup(s). No CPU work and no CPU memory used at all.

Check out http://muldoon.cipic.ucdavis.edu/~duchaine/mesh.html for some
images generated this way (all with a rather boring lighting environment of
only 35 light sources, and smooth objects, but with fully resolved actual
normals per pixel, since these were all made with variants on Catmull-Clark
subdivision). The sample code is part of my LibGen open source distribution
at http://www.cognigraph.com/LibGen (see LibGen/Surftools/Mesh/mesh.l).
Sorry, Linux/UNIX-like systems only...and no ROAM source released yet :(.

As for the vertices, ROAM-style meshes in principle require only a bit or so
per triangle to store the equivalent of what index lists store. The decode
logic could be pipelined for "free" on graphics hardware with a tiny bit of
gate logic and a small 1D array of shorts. Without this lovely fantasy
hardware, you still only need 8 bits each in U and V, although the OpenGL
API I use only has a "nice" call for U,V as shorts and xyz as floats, so I
stick with that. This all means I get my verts down to 16 bytes for triangle
bintree progressive arrays. That's a lot of CPU memory and AGP bandwidth
saved, at least on drivers that support that packing format (maybe none do,
if your indications are correct--that will take some testing).

Like I said earlier, it is now known how to convert almost any geometry to
2^n+1 by 2^n+1 patches that connect on their mutual edges. This is great for
the new wavelet compression methods that our group and the Cal Tech/Bell
Labs crowd are working on. This is also great for triangle bintree
hierarchies and for organizing your textures.
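To make the dependent-lookup idea concrete: in a software renderer the two
fetches look roughly like the sketch below. The helper names, cube-map face
conventions, and nearest-texel sampling are illustrative assumptions, not
code from LibGen's mesh.l.

#include <math.h>

typedef struct { float x, y, z; } Vec3;
typedef struct { unsigned char r, g, b; } Rgb;

/* A cube map stored as six SIZE x SIZE face images (+X,-X,+Y,-Y,+Z,-Z). */
#define CUBE_SIZE 64
typedef struct { Rgb face[6][CUBE_SIZE][CUBE_SIZE]; } CubeMap;

/* Nearest-texel cube-map lookup: pick the major axis of the direction,
   project the other two components onto that face, fetch the texel. */
static Rgb cube_lookup(const CubeMap *cm, Vec3 d)
{
    float ax = fabsf(d.x), ay = fabsf(d.y), az = fabsf(d.z);
    float u, v, m;
    int face, iu, iv;

    if (ax >= ay && ax >= az) { face = d.x > 0.0f ? 0 : 1; m = ax; u = d.x > 0.0f ? -d.z : d.z; v = -d.y; }
    else if (ay >= az)        { face = d.y > 0.0f ? 2 : 3; m = ay; u = d.x; v = d.y > 0.0f ? d.z : -d.z; }
    else                      { face = d.z > 0.0f ? 4 : 5; m = az; u = d.z > 0.0f ? d.x : -d.x; v = -d.y; }

    iu = (int)((u / m * 0.5f + 0.5f) * (CUBE_SIZE - 1));
    iv = (int)((v / m * 0.5f + 0.5f) * (CUBE_SIZE - 1));
    return cm->face[face][iv][iu];
}

/* Shade one pixel: the first lookup fetches an object-space normal from a
   normal texture, the second (dependent) lookup uses that normal to index a
   cube map holding the prefiltered, static lighting environment.  No
   per-vertex lighting and no per-light CPU work. */
static Rgb shade_pixel(const Vec3 *normal_tex, int tex_w, int s, int t,
                       const CubeMap *lighting)
{
    Vec3 n = normal_tex[t * tex_w + s];   /* first lookup            */
    return cube_lookup(lighting, n);      /* dependent second lookup */
}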
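The 16-byte packing itself maps onto stock OpenGL 1.1 vertex arrays roughly
as follows; the struct layout, the texture-matrix rescale, and the
single-strip draw call are assumptions for illustration, not the actual
ROAM source.

#include <GL/gl.h>

/* 16-byte vertex: two 16-bit texture coordinates plus a 3-float position. */
typedef struct {
    GLshort u, v;        /*  4 bytes: only 8 bits of each are significant   */
    GLfloat x, y, z;     /* 12 bytes: position                              */
} PackedVert;            /* 16 bytes total, no padding on common compilers  */

static void draw_packed(const PackedVert *verts, GLsizei count)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    /* Both arrays walk the same interleaved buffer with a 16-byte stride. */
    glTexCoordPointer(2, GL_SHORT, sizeof(PackedVert), &verts[0].u);
    glVertexPointer(3, GL_FLOAT, sizeof(PackedVert), &verts[0].x);

    /* Integer texcoords 0..255 are rescaled into [0,1] by the texture matrix. */
    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    glScalef(1.0f / 256.0f, 1.0f / 256.0f, 1.0f);
    glMatrixMode(GL_MODELVIEW);

    /* A bintree progressive array can go out as one long strip. */
    glDrawArrays(GL_TRIANGLE_STRIP, 0, count);

    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}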
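And the "bit or so per triangle" claim can be sketched directly: on a
(2^n+1) x (2^n+1) patch with row-major vertex indices, the base midpoint
index is just the average of the two base-endpoint indices, so one split
flag per implicit bintree node reproduces everything an index list would
hold. The bit packing and emit callback below are hypothetical.

/* One split bit per potential bintree node, packed 8 to a byte. */
static int split_bit(const unsigned char *bits, unsigned node)
{
    return (bits[node >> 3] >> (node & 7)) & 1;
}

/* Recursively emit the leaf triangles of one bintree.  node indexes the
   implicit tree (children of i are 2i and 2i+1); va, v0, v1 are the apex,
   left-base and right-base vertex indices in a row-major (2^n+1)^2 grid,
   so the base midpoint index is simply (v0 + v1) / 2. */
static void emit_bintree(const unsigned char *bits, unsigned node,
                         unsigned va, unsigned v0, unsigned v1,
                         void (*emit_tri)(unsigned, unsigned, unsigned))
{
    if (split_bit(bits, node)) {
        unsigned vc = (v0 + v1) / 2;                       /* base midpoint */
        emit_bintree(bits, 2 * node,     vc, va, v0, emit_tri);
        emit_bintree(bits, 2 * node + 1, vc, v1, va, emit_tri);
    } else {
        emit_tri(va, v0, v1);                              /* leaf triangle */
    }
}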
All the connectivity information is implied, and the split/merge rules are
just fast logic--no tables required, unlike PMs. So I think VI-ROAM within
chunks is very promising, and VD-ROAM at the macro level complements this
well. So as you can see, I'm only a partial convert... ;-)

--Mark D.