RE: [Algorithms] VIPM With T&L - what about roam
From: Tom F. <to...@mu...> - 2000-09-13 19:56:36
> From: Mark Duchaineau [mailto:duc...@ll...]
>
> Tom,
>
> As a ROAM crusader I must respond ;-)...

But of course...

> ROAM works just fine on arbitrary geometry. We mentioned it in the
> ROAM paper (but didn't implement it). Since then I've implemented
> that and it works great using a forest of triangle bintrees.

I was actually going to add an aside that maybe ROAM would work for arbitrary geometry, but then thought "nah, if it did, someone would have tried it by now, so there must be some gotcha I haven't spotted". Interesting to note that there isn't a huge gotcha.

> Cover your
> surface with "patches" that are 2^n + 1 verts on a side,
> where adjacent
> patches match vertices on their mutual edge, and you are set to go.
> The trick is to convert your geometry to "patches" like this. There
> is the MAPS algorithm from SIGGRAPH (Lee et al, 98), plus a new
> shrink wrap technique for really wild geometry like turbulent CFD
> isosurfaces by Bertram et al in Vis 00. Go to
> http://graphics.cs.ucdavis.edu/Publications-2000.html
> and look for "Bicubic Subdivision-Surface Wavelets for Large-Scale
> Isosurface Representation and Visualization."

Cool. Hoppe also has some algos for finding suitable base meshes for reparameterisations - actually, about half his papers use them. So it looks like finding suitable "patches" is not rocket science (or at least, it's well-understood rocket science). People who have read my talk on what I do to get my high-tri models (start with low-tri models, then subdiv surface & displacement map - at http://www.muckyfoot.com/downloads/tom.shtml) will note that since I start with a low-tri mesh and then tessellate (moderately) evenly, that stage is ready-done for me!

> The ROAM dual-queue idea works great on Directed Acyclic
> Graph (DAG) style hierarchies, a special case of which
> is VDPM.
> This was shown in a paper by Bernadini and Schikore,
> "Dynamic viewpoint-dependent level of detail of large triangle meshes",
> presented at The IASTED International Conference on Computer
> Graphics and Imaging, Halifax, Canada, June 1-4, 1998. An online
> preprint is here: http://www.llnl.gov/tid/lof/ (type Schikore as the
> name to search on, then click on the last item that comes up).
> Additionally, the priority deferral and incremental stripping ideas
> work fine for DAGs. The one thing that needs a small re-work is
> the frustum culling using halfplane labels and recursion. DAGs
> don't have the simple nesting relationship that triangle bintrees do.
>
> VIPM I see as okay for large collections of little objects,
> if you ignore
> the fact that you are roughly doubling the transform and download
> work for back-facing sections of the object (VD methods can
> easily eliminate those from ever hitting the graphics hardware).
> If your "little" object isn't, like a ship you decide to walk
> into, then you
> really need to do view-dependent processing on finer-grained units
> than whole objects. Further, you need Yet Another Hierarchy (YAH)
> to do frustum culling, and other macroscopic logic to decide how
> to balance the LODs on all the objects. So the simplicity of the
> VIPM vertex selection for a single object isn't buying you all that
> much in overall system simplicity.

Agreed - for big things you walk into, you need to split them into chunks. But then each chunk is going to be fairly big on screen (a wall, a pillar, a computer console, etc.), so the number of tris in it is going to be high when looking at it, and so the granularity of the work you are doing for VIPM is still small (i.e. a VIPM "work unit" per couple of thousand tris).

However, note that in the case of a thing you walk into, you're going to need some sort of visibility culling, e.g.
portals or something (even boring old frustum culling needs you to split stuff into some sort of chunk that it can test, so that you don't load gazillions of textures), so you're still going to need YAH of one kind or another. Also, lots of these chunks are going to be very similar (we wish we had time to make every bit of a building unique, but ten-year development times are frowned upon :-), and replicating them with VIPM is (a) a doddle and (b) very efficient (you share the vertex and split data, and for some implementations of VIPM you also share large chunks of index data as well). So it's YAH whichever way you do it, really.

There is a good case for trying to merge more distant collections of objects into one (e.g. make a whole room a VIPM chunk, so that when viewing it from the other end of a long corridor it doesn't chew too much time), but I'm sure a fairly simple hierarchy of chunks would handle this just as effectively as ROAM. So you have the room as a chunk which is VIPM'd, and when it gets to a certain number of tris it splits into five different VIPM chunks (that have the same polygon representation at that LOD), and so on, so that when you get close enough for the view-dependent differences to be significant, they are all separate objects as above. This "clumping" needs to be done with some heuristic, and needs to interface with your YAH (visibility culling or otherwise) whether you are ROAMing or VIPMing, so that seems to be fairly equivalent work to me.

Incidentally, "VD methods can easily eliminate those from ever hitting the graphics hardware" - replace "easily" with "at no extra effort" and I'll agree :-) But I think the effort per object (or vertex, or whatever dodgy benchmark we use) is going to be higher for VDPM than for VIPM. But that's the very core of our religions, so we'll leave that for now. :-)

> Correct me if I'm wrong, but in your VIPM code you still need
> to compute
> the actual strips or indexing on the fly each frame?

No.
The whole point of VIPM is that you calculate the strips/lists/whatever offline. You then make a low number (e.g. five) of modifications to that data each frame, according to whether an edge collapses or expands. If an edge doesn't change, then (a) the strip/list information for it doesn't get touched by the CPU, and (b) the edge isn't even considered by the CPU.

Basically, there is a list of collapses/expands (which it is depends which way you traverse it). The object has a "current position" pointer, and each frame the CPU decides how far, and in which direction, it needs to move that pointer. When moving the pointer, the collapse/expand entries simply say which indices need to be changed, and to what value, so moving the "current position" is a simple memory-copying operation on indices. No thinking is required by the CPU, apart from deciding how many tris it wants to draw for the VIPM chunk - just memory bandwidth, really.

If you're using indexed lists, the tris are also stored in collapse order, so collapsed tris just get dropped off the end of the list. If using strips, then tris are collapsed out of the middle of strips just by making them degenerate (which hardware bins very quickly, since it spots that two of the three indices are the same). Vertices are also stored in collapse order, so unused verts just fall off the end of the list (i.e. you just tell the T&L pipeline to T&L fewer verts).

So if the object doesn't change in LoD (which is usually decided by a cheesy "size of bounding sphere over distance from viewer"-type heuristic, to be honest), then the CPU does no work at all - it just points the graphics API at the previous frame's data. If collapses/expands are needed, then the CPU doesn't do any thinking - it just moves (typically) 3-4 indices (WORDs) per collapse/expand around in memory, increments or decrements the tri count by two, increments or decrements the vertex count by one, and points the graphics API at the data.
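To make the index-patching above concrete, here is a minimal sketch of the idea. The record layout and all names are my own invention, not taken from any particular VIPM implementation; real versions would also handle the strip-degenerate case and the vertex buffer.

```cpp
#include <cstdint>
#include <vector>

// One precomputed edge collapse: which slots in the shared index list
// change, and what they change to. Undoing a collapse is the same
// memory copy in reverse, using the "expanded" value.
struct CollapseFixup {
    std::uint32_t slot;      // position in the index list to overwrite
    std::uint16_t collapsed; // index written when collapsing this edge
    std::uint16_t expanded;  // index written when expanding it again
};

struct Collapse {
    std::vector<CollapseFixup> fixups; // typically 3-4 entries
};

struct VipmInstance {
    std::vector<std::uint16_t> indices; // tris stored in collapse order
    std::size_t pos = 0;                // "current position" pointer
    std::size_t triCount;               // tris currently drawn
    std::size_t vertCount;              // verts currently transformed
};

// Move the "current position" pointer to the target LoD. No per-triangle
// thinking: each step is a handful of index writes plus count updates.
void setLod(VipmInstance& o, const std::vector<Collapse>& order,
            std::size_t target) {
    while (o.pos < target) {            // collapse: drop detail
        for (const CollapseFixup& f : order[o.pos].fixups)
            o.indices[f.slot] = f.collapsed;
        ++o.pos; o.triCount -= 2; o.vertCount -= 1;
    }
    while (o.pos > target) {            // expand: restore detail
        --o.pos; o.triCount += 2; o.vertCount += 1;
        for (const CollapseFixup& f : order[o.pos].fixups)
            o.indices[f.slot] = f.expanded;
    }
}
```

When the LoD heuristic asks for the same position as last frame, `setLod` does nothing and the previous frame's index data is reused untouched, which matches the "CPU does no work at all" case above.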
> You can
> "precompute" in a coarse-grained way only if you keep something like
> power-of-two static LODs, in which case VIPM isn't all that different
> from traditional static LODs circa 1976, except that you cut
> the vertex
> storage in half in some cases. If you use skip-strips or
> other techniques
> you end up doing work on a fine-grained level, and at that point
> you might as well put some VD smarts in.

Variants of VIPM recognise that after quite a few collapses you're going to have either a strip with lots of degenerate tris, or a list with bad vertex caching, and so they store several versions (e.g. at power-of-two numbers of tris) as "baselines" that then get mangled by the above, to keep the number of degenerate/badly-cached tris to a sensible limit. But that swapover rarely takes place, and when it does, it's always at the same point in the collapse/expand order, so it doesn't cost anything but the extra storage. So the precomputed versions are just there to improve the hardware's efficiency a bit - they do not alter the fact that (assuming the camera moves slowly enough) you can have just two tris removed/added per frame all the way from ten tris to ten million. The switch from one version to the next power of two is done when the two describe identical geometries, just in a different strip/list ordering.

> All that said, it is certainly worth exploring *somewhat*
> coarser-grained
> view dependence. Fine-grained view-dependent techniques using ANY
> algorithm just can't keep up with graphics hardware (unless
> you move veeeery slowly ;-). Hoppe's 1998 IEEE Vis paper is
> a step in the right direction, but has a lot of problems and
> only works
> with terrain. See:
> http://www.research.microsoft.com/~hoppe/#svdlod
>
> --Mark D.
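As an aside on the baseline "swapover" I describe above: because the switch points are fixed offline, choosing which precomputed version to patch from is just a search over stored switch positions. A hypothetical sketch (all names mine, not from any shipping implementation):

```cpp
#include <cstddef>
#include <vector>

// Each baseline is a re-stripped/re-listed copy of the index data,
// valid from a fixed point in the collapse order onwards. At each
// switch point the two versions describe identical geometry, so the
// swap costs nothing at runtime beyond the extra storage.
struct Baseline {
    std::size_t firstCollapse; // collapse position this version covers from
    // (shared index data for this version would live here)
};

// Pick the baseline to patch from for a target position in the
// collapse/expand order. Baselines are sorted by firstCollapse.
std::size_t pickBaseline(const std::vector<Baseline>& baselines,
                         std::size_t targetCollapse) {
    std::size_t best = 0;
    for (std::size_t i = 0; i < baselines.size(); ++i)
        if (baselines[i].firstCollapse <= targetCollapse)
            best = i;
    return best;
}
```

Since the heuristic rarely crosses a switch point from one frame to the next, this lookup almost always returns the same baseline as last frame, which is why the swapover "rarely takes place".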