Re: [Algorithms] VIPM With T&L - what about roam
From: Mark D. <duc...@ll...> - 2000-09-13 21:41:52
Tom Forsyth wrote:
> > From: Mark Duchaineau [mailto:duc...@ll...]
> > [snip]
> > The ROAM dual-queue idea works great on Directed Acyclic
> > Graph (DAG) style hierarchies, a special case of which is VDPM. [snip]
> >
> > VIPM I see as okay for large collections of little objects, if you
> > ignore the fact that you are roughly doubling the transform and
> > download work for back-facing sections of the object (VD methods can
> > easily eliminate those from ever hitting the graphics hardware).
> > If your "little" object isn't, like a ship you decide to walk into,
> > then you really need to do view-dependent processing on finer-grained
> > units than whole objects. Further, you need Yet Another Hierarchy
> > (YAH) to do frustum culling, and other macroscopic logic to decide
> > how to balance the LODs on all the objects. So the simplicity of the
> > VIPM vertex selection for a single object isn't buying you all that
> > much in overall system simplicity.
>
> Agreed, for big things you walk into, you need to split into chunks. But
> then each chunk is going to be fairly big on screen (a wall, a pillar, a
> computer console, etc), so the number of tris in it is going to be high
> when looking at it, so the granularity of the work you are doing for
> VIPM is still small (i.e. a VIPM "work unit" per couple of thousand
> tris).

We seem to agree--eek! So the questions are: (1) how big are the chunks, (2) how are seams dealt with, and (3) how do you organize VD work on these chunks. VIPM is slick for one subtask: sliding from coarse to fine with essentially no compute or downloads within a chunk. This is really orthogonal to what ROAM is all about, and can be used within a ROAM-like system very nicely. ROAM is a good way to get seamless chunks and do the higher-level VD LOD selections. I can see VIPM ideas working great on the "units" that ROAM is optimizing.
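[For readers unfamiliar with the dual-queue idea mentioned above, here is a toy split-queue refinement sketch. Real ROAM keeps a second, oppositely-ordered merge queue so the previous frame's cut can be coarsened incrementally; the selection logic is symmetric. The halving error model and the names here are illustrative assumptions, not ROAM's actual screen-space error metric.]

```cpp
#include <cassert>
#include <queue>
#include <vector>

// Toy bintree triangle: splitting one active triangle yields two children,
// each assumed (for this sketch only) to carry half the parent's error bound.
struct Tri { double error; int depth; };

// Split-queue refinement: repeatedly split the active triangle with the
// largest error until the triangle budget is reached. The result is the
// "cut" of the hierarchy with the smallest possible worst-case error.
std::vector<Tri> refine(double rootError, int budget) {
    auto worse = [](const Tri& a, const Tri& b) { return a.error < b.error; };
    std::priority_queue<Tri, std::vector<Tri>, decltype(worse)> split(worse);
    split.push({rootError, 0});
    int active = 1;
    while (active < budget) {             // each split adds one triangle
        Tri t = split.top(); split.pop();
        split.push({t.error * 0.5, t.depth + 1});
        split.push({t.error * 0.5, t.depth + 1});
        ++active;
    }
    std::vector<Tri> cut;
    while (!split.empty()) { cut.push_back(split.top()); split.pop(); }
    return cut;
}
```

A merge queue would be keyed the other way (smallest error on top), letting the optimizer pull triangles back out of the cut when the budget shrinks or the view moves, instead of re-refining from the root each frame.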
But I want to see something better than "keep the seams at highest resolution"... which is what Hoppe proposed in Vis 98.

> However, note that in the case of a thing you walk into, you're going to
> need some sort of visibility culling, e.g. portals or something (even
> boring old frustum culling, so you don't load gazillions of textures,
> needs you to split stuff into some sort of chunk that it can test), so
> you're still going to need YAH of one kind or another. Also, since lots
> of these chunks are going to be very similar (we wish we had time to
> make every bit of a building unique, but ten-year development times are
> frowned upon :-), and replicating them with VIPM is (a) a doddle and (b)
> very efficient (you share the vertex and split data, and for some
> implementations of VIPM you also share large chunks of index data as
> well). So it's YAH whichever way you do it really.

YAHp, it's true. But let's keep the number of hierarchies to a minimum to keep our sanity. I think we agree here... And yes, we need realtime general-purpose visibility culling integrated with the VD LOD system, and that is *hard*... Special cases like rooms are solved well for static geometry, but get a little tricky when the geometry is being dynamically approximated by a VD optimizer. More general situations are just not handled in realtime yet.

> There is a good case for trying to merge more distant collections of
> objects into one (e.g. make a whole room a VIPM chunk, so that when
> viewing it from the other end of a long corridor, it doesn't chew too
> much time), but I'm sure a fairly simple hierarchy of chunks would
> handle this just as effectively as ROAM.
> So you have the room as a chunk which is VIPM'd, and when it gets to a
> certain number of tris, it splits into five different VIPM chunks (that
> have the same polygon representation at that LOD), and so on, so that
> when you get close enough for the view-dependent differences to be
> significant, they are all separate objects as above.
>
> This "clumping" needs to be done with some heuristic, and needs to
> interface with your YAH (visibility culling or otherwise) whether you
> are ROAMing or VIPMing, so that seems to be fairly equivalent work to
> me.

How to "clump" well is tricky, or at least I haven't heard a good solution yet. The main argument I have is that it is better to break geometry into smaller clumps (the smaller the better from an accuracy point of view), so that the non-uniform view-dependent scaling of errors gets made more uniform, and thus the VIPM result is locally close to what you would have gotten with a VD method. Also, smaller clumps give better frustum culling, backface culling, and visibility culling... The great challenge is to make the clumps as small as possible while having the optimizer keep up with the graphics hardware.

> Incidentally, "VD methods can easily eliminate those from ever hitting
> the graphics hardware" - replace "easily" with "at no extra effort" and
> I'll agree :-) But I think the effort per object (or vertex or whatever
> dodgy benchmark we use) is going to be higher for VDPM than for VIPM.
> But that's the very core of our religions, so we'll leave that for now.
> :-)

You will have to fill me in on what you are doing at the macro level for me to compare/contrast with ROAM-like approaches. ROAM does the top-level optimization in the "theoretical optimum" time, so it would be surprising if you could beat the accuracy versus compute time tradeoff of ROAM. VIPM sounds great for the "low level" optimization.
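[The "room splits into finer chunks as you approach" scheme Tom describes could be driven by a projected-error test like the following. This is a sketch under my own assumptions -- the `objectError / distance` error model and all the names are illustrative, not anything from Tom's actual implementation.]

```cpp
#include <cassert>
#include <vector>

// A "clump": a coarse merged representation plus optional finer children.
struct Chunk {
    double objectError;          // geometric error of this clump's coarse rep
    std::vector<Chunk> children; // empty for leaf chunks
};

// Walk the clump hierarchy: draw the merged clump if its projected error
// is under threshold tau, otherwise recurse into the finer child chunks.
void selectChunks(const Chunk& c, double distance, double tau,
                  std::vector<const Chunk*>& out) {
    double screenError = c.objectError / distance;  // crude perspective scale
    if (screenError <= tau || c.children.empty())
        out.push_back(&c);       // coarse clump is good enough (or a leaf)
    else
        for (const Chunk& child : c.children)
            selectChunks(child, distance, tau, out);
}
```

From the far end of the corridor the whole room passes the test and is drawn as one VIPM chunk; walk closer and the recursion bottoms out in the separate per-object chunks.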
> > Correct me if I'm wrong, but in your VIPM code you still need to
> > compute the actual strips or indexing on the fly each frame?
>
> No. The whole point of VIPM is that you calculate the
> strips/list/whatever offline. You then make a low number (e.g. five) of
> modifications to that data each frame, according to whether an edge
> collapses or expands. If an edge doesn't change, then (a) the strip/list
> information for it doesn't get touched by the CPU, and (b) the edge
> isn't even considered by the CPU.
>
> Basically, there is a list of collapses/expands (which it is depends
> which way you traverse it). The object has a "current position" pointer.
> And each frame, the CPU decides how far & in which direction it needs to
> move the pointer. When moving the pointer, the collapse/expand entries
> simply say which indices need to be changed, and to what value, so it is
> a simple memory-copying operation on indices to move the "current
> position". No thinking required by the CPU, apart from deciding how many
> tris it wants to draw for the VIPM chunk - just memory bandwidth really.
>
> If you're using indexed lists, the tris are also stored in collapse
> order, so collapsed tris just get dropped off the end of the list. If
> using strips, then tris are collapsed out of the middle of strips just
> by making them degenerate (which hardware bins very quickly, since it
> spots that two of the three indices are the same).

Okay--I hadn't known about the tri index ordering. Very cool! I hope this really works on current drivers/hardware, though... Since you are shifting the index range around, a dumb driver would not understand that it doesn't need to do anything. But this is really slick--it solves the issue of how to smoothly slide between two "chunk" detail levels at essentially no cost. The degenerate strips are an issue--in fact, I can't picture a strip surviving after one of its vertices goes away--don't you have to change the indexing?
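[Tom's precomputed-collapse mechanism -- a list of records, each patching a few index-buffer slots and dropping tris off the end of a collapse-ordered list -- could look roughly like this. The record layout and field names are my own illustrative guesses, not any particular VIPM implementation.]

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// One index-buffer write: slot to patch, value after collapse, value before.
struct IndexPatch { uint32_t slot, newValue, oldValue; };

// One edge collapse, precomputed offline: which slots referenced the
// vanishing vertex (repointed to its collapse target), and how many tris
// fall off the end of the collapse-ordered triangle list.
struct CollapseRecord {
    std::vector<IndexPatch> patches;
    uint32_t trisRemoved;
};

struct VipmChunk {
    std::vector<uint32_t> indices;        // tris stored in collapse order
    std::vector<CollapseRecord> records;  // precomputed offline
    uint32_t numTris = 0;                 // how many tris to draw now
    uint32_t pos = 0;                     // collapses applied so far

    // Moving the "current position" pointer is pure memory writes --
    // no per-frame strip/index recomputation, as Tom says.
    void collapseOne() {
        const CollapseRecord& r = records[pos++];
        for (const IndexPatch& p : r.patches)
            indices[p.slot] = p.newValue;
        numTris -= r.trisRemoved;
    }
    void expandOne() {
        const CollapseRecord& r = records[--pos];
        numTris += r.trisRemoved;
        for (const IndexPatch& p : r.patches)
            indices[p.slot] = p.oldValue; // undo: again just index copies
    }
};
```

Each frame the CPU only decides how far to move `pos`, then applies (or un-applies) that many records; untouched edges cost nothing.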
The only way I see this working is if you don't strip, but just send three indices per tri in your index arrays. Getting the vertex ordering done "right" sounds like an interesting issue for VIPM. I'll be curious to see what ideas come of that.

--Mark D.