Re: [Algorithms] Terrain performance comparisons
From: Jonathan B. <jo...@nu...> - 2003-07-29 19:26:04
My recent email about software complexity issues touches on the subject that Luis brought up about algorithm adoption ostensibly being a social issue. I think there are definite technical issues here as well, so I wanted to bring those up.

Luis Ackerman wrote:
> Again it seems to me that what we have are primarily social limitations,
> not technical ones. For all the "have to implement it myself"ers out
> there (for which game development is as notorious as anything else), the
> simpler solution is better because it's easier to fully understand and
> get right the first time through.

I agree that "have to implement it myself" is a problem. But you'll find that this attitude is more prevalent among hobbyists than among professionals. (It used to be big among professionals too, but as the amount of work needed to complete a game gets bigger, it becomes obvious that you just can't do everything yourself and be budget-prudent.) And I agree, it would be neat if people put their heads together and provided free, high-quality versions of these algorithms. But there are still a couple of problems with all that.

Firstly, it's usually just not the case that you can take a system like that, slap it into an engine with some glue, and expect it to run well. You either need to design the rest of the engine around this major piece, or you need to revamp the piece to fit the engine (or do some combination of both). The more complicated the piece is, the harder this work is.

Secondly, there are all the drawbacks of added system complexity that I mentioned in my last email. For *any* kind of algorithm, not just rendering LOD, you really need to justify all the complexity that the algorithm brings into the picture. That complexity costs real money, and it adds nonnegligibly to your chances of slipping schedule or failing the project entirely. Thus, the benefit must be easy to see, and it must be large.
Otherwise, incorporating such an algorithm is a bad decision, from both an engineering and a business/management standpoint. You can put me in the Tony Hoare camp here: I want an algorithm that is so simple that it is obviously not deficient. Unfortunately, that is impossible with any kind of LOD, because the very concept of LOD is one that inherently causes problems (this might be a good subject for a different thread).

It's important to remember this: ideally, we don't want to LOD at all. Any graphics LOD indicates a mismatch between what is being rendered and what actually exists in the game world. That mismatch *will* show up sometimes in gameplay. If the mismatch causes big enough problems, which it often will regardless of the algorithm in use, we need to deal with those. Again, this complexity is algorithm-independent; it's the minimum we can get away with and still have rendering LOD. Any complexity due to our specific algorithm then gets layered on top of that. I'd like to keep that added layer small.

This hooks into the discussion we were having about ROAM specifics...

> I didn't suggest delaying reprioritization (and the methods for doing
> so, incidentally, can provide guaranteed bounds) - that's a different
> problem. I was observing that ROAM does work in descending-priority
> order, so stopping early and just not finishing the lower-priority
> updates will "do the right thing."

But in a well-optimized algorithm, the per-vertex cost of actually doing a mesh modification is low enough that it's in the same neighborhood as the per-frame cost of prioritizing each live vertex. So reducing only the vertex modification cost is not very helpful unless you also stop prioritization. All of which makes the matching of error bounds worse, which is what my next point is all about.

> It won't guarantee error bounds
> anymore, but that's the tradeoff. This is ok, since guaranteeing bounds
> is also not always the best way to go about it.
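[Editor's note: the early-stop idea under discussion can be sketched roughly as below. This is a minimal illustration, not ROAM itself; the queue contents, the priority metric, and the `apply_split` callback are all placeholders.]

```python
import heapq
import time

def update_mesh(split_queue, apply_split, frame_budget_s):
    """Process pending splits in descending-priority order, stopping early
    when the frame's time budget runs out.  Because the queue is
    priority-ordered, the updates we skip are exactly the lowest-priority
    (smallest-error) ones, so dropping them "does the right thing" --
    at the cost of no longer guaranteeing the error bound."""
    deadline = time.monotonic() + frame_budget_s
    done = 0
    while split_queue and time.monotonic() < deadline:
        # Entries are (-priority, node) so heapq pops the max priority first.
        neg_priority, node = heapq.heappop(split_queue)
        apply_split(node)
        done += 1
    return done  # splits actually performed; the rest wait for next frame
```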
> In general it's better
> to update approximately X% (say 90) of the worst stuff 99% of the time,
> and always be fast (rather than guaranteeing bounds, and being mostly
> fast).

Right, and this is my big problem with CLOD. CLOD algorithms spend all this software complexity -- a finite and highly contended resource, as mentioned earlier -- and what do we get? In such an algorithm's raw form, we get a guarantee that the spatial error for every vertex on the screen is bounded. Okay; it's unclear whether that's worth all the effort, but at least it's something solid that can be said about what these algorithms achieve.

But wait! In order for the system to run smoothly, we need to give up this guarantee. So now what can we say about the algorithm's achievement? It "generally keeps vertices within the error threshold most of the time, as long as you're not stressing the system too hard". Is that all? I can do that with a static mesh algorithm, so why all the fuss? ROAM handles morphing surfaces better than static mesh LOD does, but in those cases I can just use geomipmapping. So I'm just not sure what ROAM buys me for all that effort.

That's all. If someone convinces me that there is a large, clear payoff, I will happily go back to using CLOD algorithms. As people keep optimizing CLOD algorithms, they are becoming more like static mesh algorithms, so maybe the two will meet at some point.

-Jonathan.
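[Editor's note: for comparison, the static-mesh/geomipmapping path mentioned above really is this simple: per patch, project a precomputed geometric error into screen pixels and pick the coarsest level that stays under a pixel threshold. The error values and constants below are made up for illustration.]

```python
import math

def screen_space_error(geometric_error, distance, fov_y_rad, screen_height):
    """Project a patch's worst-case geometric error (world units) into an
    approximate error in pixels at the given viewing distance, assuming a
    symmetric perspective projection."""
    pixels_per_world_unit = screen_height / (2.0 * math.tan(fov_y_rad / 2.0))
    return geometric_error * pixels_per_world_unit / distance

def select_lod(errors_per_level, distance, fov_y_rad, screen_height, max_pixel_error):
    """errors_per_level[i] is the geometric error of LOD level i, increasing
    with i (coarser meshes have larger error).  Return the coarsest level
    whose projected error still stays under the pixel threshold."""
    chosen = 0
    for level, err in enumerate(errors_per_level):
        if screen_space_error(err, distance, fov_y_rad, screen_height) <= max_pixel_error:
            chosen = level
        else:
            break
    return chosen
```

Distant patches land on coarse levels, near patches on fine ones, and the whole decision is a few lines per patch per frame, with no per-vertex queue to maintain.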