Re: [Algorithms] Terrain performance comparisons
From: Lucas A. <ack...@ll...> - 2003-07-29 00:39:19
Jonathan Blow wrote:

>> You're welcome to "avoid frame coherence" as much as you like, and we'll see how much gamers enjoy headache-inducing games. What you probably meant is to forget exploiting coherence.
>
> Hey man, you know what I meant.

Yet it wasn't what you said. Clear communication is paramount in this medium.

>> I know this discussion has taken place at least several times in the past, but you still continue to publicly mischaracterize the ROAM family of algorithms. I maintain that your particular implementation experiences do not have any bearing on what ROAM is or is not capable of, and as such the biggest failings of ROAM remain social in nature, not technical. This is not the proper forum for an ongoing debate, so for the algorithms-list I will just restate the relevant technical points.
>
> Well, I just want to say that I don't seem to have any special interest in dissing ROAM, as you may think. I have used it as an example because it is one of the more well-known terrain systems. I could have just as easily made all the same negative statements about my system I presented at SIGGRAPH 2000 (and in fact I often do) -- but not many people know how that system works, so that wouldn't be as effective.

It's nice of you to say so, since you tend to deride ROAM loudly, publicly, and repeatedly. If there is a deeper issue at hand, as you suggest, I'm all for examining the situation further.

>> I was expecting a reappearance of the "cycle of death" argument, and I just don't buy it. Incremental algorithms do not require that you always process all the possible work in a given frame. As I noted:
>
> Well, the "cycle of death" is an actual physical phenomenon. If "computer science" is actually a science, then this is one of the kinds of basic entities that it deals with, alongside, say, the question of how long it takes to sort a group of objects.
>
> So I think that complex algorithms should be designed around these core observations about what it means to be a certain kind of computer program. I'm not really satisfied with the approach of "design an algorithm ignoring some of these facts, then add workarounds." What you're explaining here seems to me to be a workaround that isn't very robust:

This is an interesting question, but I'll hit all the miscellaneous stuff first.

>>> ROAM can quite easily accommodate per-frame time constraints due to the nature of the dual-queue optimizing process: the work happens in order of most important to least important changes, so stopping the process early is ok (as the linear cost of incremental work brings diminishing returns), and successive frames will pick up where it left off to fill in the details when the view stabilizes again and coherence increases.
>
> I did a lot of this kind of thing in my system, and I never came up with a solution that I thought was good.
>
> Due to the nature of the bucketing schemes that people tend to use, you *cannot* delay reprioritization for very many frames in a row, or else you end up, essentially, having to re-do the whole thing from scratch. When that happens, it's painful.

I didn't suggest delaying reprioritization (and the methods for doing so, incidentally, can provide guaranteed bounds) - that's a different problem. I was observing that ROAM does work in descending-priority order, so stopping early and just not finishing the lower-priority updates will "do the right thing." It won't guarantee error bounds anymore, but that's the tradeoff. This is ok, since guaranteeing bounds is also not always the best way to go about it. In general it's better to update approximately X% (say 90) of the worst stuff 99% of the time, and always be fast (rather than guaranteeing bounds, and being mostly fast). Priority approximation, with a cheap function instead of a guaranteed bound, follows here as well.
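To make the stop-early point concrete, here's roughly the control flow I mean. This is only a sketch, not code from any real ROAM implementation: the Work struct, the queue types, and the Split/Merge calls are placeholders for whatever the real triangle and diamond structures are, and the real machinery (forced splits, re-queueing the results, deferred priority recomputation) is omitted.

#include <queue>
#include <vector>
#include <ctime>

// Sketch of a time-budgeted, priority-ordered update pass in the spirit of
// ROAM's dual queues.  Work.node stands in for a triangle or diamond index;
// Split() and Merge() are assumed to be provided by the terrain system.

struct Work { float priority; int node; };

struct WorstFirst { bool operator()(const Work &a, const Work &b) const { return a.priority < b.priority; } };
struct LeastFirst { bool operator()(const Work &a, const Work &b) const { return a.priority > b.priority; } };

typedef std::priority_queue<Work, std::vector<Work>, WorstFirst> SplitQueue;  // worst error on top
typedef std::priority_queue<Work, std::vector<Work>, LeastFirst> MergeQueue;  // least-needed detail on top

void Split(int node);   // refine: assumed to exist elsewhere
void Merge(int node);   // coarsen: assumed to exist elsewhere

void UpdateMesh(SplitQueue &splits, MergeQueue &merges,
                float targetError, double budgetSeconds)
{
    const clock_t start  = clock();
    const clock_t budget = (clock_t)(budgetSeconds * CLOCKS_PER_SEC);

    // The worst-error work happens first, so if the budget runs out, whatever
    // is left undone is exactly the stuff that mattered least; it just waits
    // in the queues for the next frame.
    while (clock() - start < budget)
    {
        if (!splits.empty() && splits.top().priority > targetError)
        {
            Split(splits.top().node);
            splits.pop();
        }
        else if (!merges.empty() && merges.top().priority < targetError)
        {
            Merge(merges.top().node);
            merges.pop();
        }
        else
        {
            break;  // mesh is already within tolerance for this frame
        }
    }
}

The surrounding ROAM machinery is where the real complexity lives; the only point of the sketch is that the stop-early behavior falls out of the queue ordering for free.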
> Duchaineau's original paper mentioned the Newton-Raphson-esque thing, which is already about guessing how long you can go without re-evaluating vertices. Clearly, if you wait longer than your conservative bound for reevaluation, you're likely to update things too late, i.e. suffer a visual quality hit or a tessellation efficiency hit. A little of that isn't so bad, but how much is okay? And why spend all this effort and CPU time proving bounds on 70% of your vertices, only to violate those bounds with the remaining 30%?
>
> I'm not saying that there's no solution to this within the ROAM framework. I am, though, rejecting your characterization of the matter as easy/simple.

I think we can agree to disagree here. Having already admitted that ROAM is an involved and difficult algorithm to master (with a 'nice' but unforgiving learning curve), our characterization of certain aspects as easy or simple is in light of the fact that they fit well as natural extensions to the algorithm and are not crude hacks or compromises.

>> Should the cost of updating the output mesh exceed that of creating a new one from scratch, this is an easy and correct fallback mode, which may then resume coherent updating as warranted.
>
> Another thing that I reject, not just about ROAM but about many published algorithms, is the way they purport to be easy, but in fact are very complicated due to all the workarounds that need to go in there. If I start off with one moderately complicated thing, then add 23 workarounds each of which is "easy", we very quickly end up with something that is not "easy" at all. So whenever someone says "Oh, just do this one augmentation, it's easy", that is a big warning signal. If, on the other hand, they say "you can do this one augmentation, but then here's a list of 15 foul interactions that might happen with other parts of your game system," then I am more likely to trust them.
>
> Just so I can share the love and prove I'm not just a bitter anti-ROAM curmudgeon -- Perspective Shadow Maps are the same way. Don't use them.

Likewise, the implementation reality of ROAM is that it's a collection of several smaller algorithms that have to work well in concert. The potential for ill-behaved interactions is something that has to be understood, but it can be overcome. A well-designed system with sufficient modularization and separation of concerns can cope well with major augmentations; a poorly engineered system will self-destruct at the mere thought.

This seems to be a common theme: whether it is desirable to have a fundamentally simple and infallible system that is rather limited, or a more general and complex system that can be extended to meet more demanding requirements. To roughly quote Tony Hoare (from I-don't-recall-what-paper), a system may be "so simple as to be obviously not deficient, or so complicated as to have no obvious deficiencies." IIRC, that was about programming languages, but this issue has a similar flavor. It doesn't directly apply, because a simpler programming language can be 'complete', whereas most LOD systems cannot, so we have to decide how to live with that complexity (or do without it, and without the completeness of the solution).

Again, it seems to me that what we have are primarily social limitations, not technical ones. For all the "have to implement it myself"ers out there (for which game development is as notorious as anything else), the simpler solution is better because it's easier to fully understand and get right the first time through. Conversely, for people who want a flexible system and have more demanding requirements, more complex solutions are necessary. Thus, it follows that we shouldn't all be reimplementing the complex solution, but rather refining and improving a publicly available one. I believe this is possible, but it hasn't really happened yet (for ROAM, anyway). The more common high-quality ROAM implementations become, the more feasible this is, and the more interest people will have in using and contributing to one. For example, general engines/SDKs are certainly on the rise these days as a result of economic considerations, but they haven't moved in this direction yet.

>> The degree of view coherence can be measured from the changes of viewpoint, refinement priorities, and hierarchical frustum culling results of successive frames. These factors are useful in deciding when the incremental update cost may exceed the from-scratch construction cost for a frame.
>
> Here's an example of what I am talking about. Your language is extremely hedged in the above paragraph, and rightfully so, because the idea is a bunch of hand-waving. In practice the only robust way of knowing when the update cost exceeds some threshold is to wait for that to happen and then measure that it happened.

I disagree about this being hedged and hand-waving, and will point out that you've just stated what I left implicit. This is unavoidably implementation-specific (as are any 'update cost' references), so the only way to do it is by instrumenting the code and measuring what's really going on. Given some measurements, the correlation between the above factors and incremental update cost can be established, and used thereafter in a predictive/preventative fashion. I don't think this is necessarily what you're objecting to; rather, it's that there isn't a simple predictive formula for everyone to just use.
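Here is roughly the kind of instrumentation and prediction I have in mind. Again, only a sketch: the coherence metric, the weights, and the assumption that incremental cost scales linearly with view change are all invented placeholders that a real implementation would replace with its own measurements.

#include <cmath>

// Sketch only: decide each frame whether to update the mesh incrementally
// or fall back to a from-scratch rebuild, based on measured costs and a
// crude view-coherence metric.

struct CostModel
{
    double incrementalCostPerUnitChange;  // seconds per unit of view change, fitted from past frames
    double lastMeasuredRebuildCost;       // seconds, from the last from-scratch rebuild
};

// Cheap coherence metric: how much did the view change since last frame?
double ViewChange(const float oldPos[3], const float newPos[3],
                  float oldHeading, float newHeading)
{
    double dx = newPos[0] - oldPos[0];
    double dy = newPos[1] - oldPos[1];
    double dz = newPos[2] - oldPos[2];
    double rotation = std::fabs(newHeading - oldHeading);
    return std::sqrt(dx * dx + dy * dy + dz * dz) + 50.0 * rotation;  // 50.0: invented weight
}

bool ShouldRebuildFromScratch(const CostModel &model, double viewChange)
{
    double predictedIncrementalCost = model.incrementalCostPerUnitChange * viewChange;
    return predictedIncrementalCost > model.lastMeasuredRebuildCost;
}

After each frame you fold the measured update time back into incrementalCostPerUnitChange (an exponential moving average is plenty), so the predictor tracks what your code actually does rather than a formula someone else published.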
>> Finally, I'll observe that quickly moving viewpoints actually relieve the need for nontrivial terrain detail. The details obviously become less relevant as the viewpoint moves increasingly fast (that's not to say there shouldn't be some detail, just that the particulars become increasingly irrelevant). This is practically a no-brainer.
>
> I don't think it's a no-brainer at all. I have never seen this successfully done in a game, to the point where I couldn't see the detail snap as I moved. Have you?

There's a silent contradiction here: if it were being done successfully in a game, you wouldn't see it. I do think it's possible in practice, but if the detail is changing to accommodate a fast-moving viewpoint, you're right that you will be able to see it. The point, though, is that it shouldn't matter. If your detail is non-trivial to the point of seriously impacting gameplay, then it should be handled properly. So we should ask: what really IS trivial? Perhaps 'non-trivial' is the sticking point here. Would 'relevant' detail be more appropriate? Suggestions are welcome.

Anyway, I'm curious: how much detail is trivial, and how much isn't? This is going to be context-specific, but I think the vast majority is not relevant. Whether it's literally trivial may depend more on how much you can add without significant rendering cost. Let's say I carve my name on a wall in some game. It's relevant to me, since I know what should be there when I look. There will be scales of stuff I don't care about, though, like the larger and smaller particulars of the wall surface. For most games, it seems the 'relevant' scale is going to be approximately the player character's size, or whatever most affects the players' interactions with the game world.

> I think this is one of those widely-held myths that seems like it ought to be true, but in practice, ends up being not so true at all.
>
> -Jonathan.

So, suppose it isn't true. Shouldn't it be? The faster you go, the less you're really going to see. I've noticed many recent racing games do a boost/blur/perspective-warp for the "you're going stupidly fast" effect. But does anyone think temporal anti-aliasing (blur) is what's missing from terrain algorithms?

I think the speed issue might be better viewed as a prediction problem. You could factor in how much work you'll have to do from the ratio of potential acceleration to current velocity, and the frequency of high accelerations. If that's low, you know right where the view is headed; if it's high, you lose coherence.

Mark and I once had a discussion about whether you could pull off a view-dependent optimizer based on an eye-tracking device, since if you knew exactly where the user was looking, you could avoid wasting lots of work on the parts of the screen that are just peripheral. My only conclusion was that videogames would never adopt it, because bystanders would hate watching someone else play.
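For what it's worth, the simplest form of the speed idea above is just to widen the error tolerance as the view moves faster. A minimal sketch, with entirely invented constants (the thresholds and the 4x cap would have to be tuned per game, or driven by how predictable the motion is):

#include <algorithm>

// Sketch: relax the screen-space error tolerance as view speed rises.
// All numbers are invented; tune per game.
float ErrorToleranceForSpeed(float baseTolerancePixels,
                             float speed,        // current view speed
                             float lowSpeed,     // below this, full detail (e.g. walking pace)
                             float highSpeed)    // above this, minimum detail (e.g. vehicle top speed)
{
    float t = (speed - lowSpeed) / (highSpeed - lowSpeed);
    t = std::max(0.0f, std::min(1.0f, t));
    return baseTolerancePixels * (1.0f + 3.0f * t);  // up to 4x the tolerance at full speed
}

Jonathan's "detail snap" objection applies squarely here: if the tolerance changes abruptly, the pops are visible, so the scaling has to be smooth (and is easier to hide behind whatever blur effect the game already uses).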
Lastly, on the first note (and from the reply to T. Ulrich) -

> But if you put Sleep(rand() % 50) into something like full ROAM, you are really asking for trouble. And in general the speed cap is a whole lot lower, which isn't cool.

This is the cost of scalability. For a data set with contrived limitations (which is virtually all modern games), the constant cost can be higher than the "dumb" do-more-work-but-more-simply approach. Where it isn't speed-capped is when you need it to scale waaaay up, and that's where the "dumb" approach will fail. It may not matter so much now, but it's getting more important every day (in fact, it really mirrors the content-creation scalability crisis).

> So maybe "frame coherence" isn't a precise enough term to differentiate between these two cases. Maybe a different phrase, like "timestep sensitivity" or something. I think it's true to say "a lot of popularly espoused frame coherence optimizations are dangerously timestep-sensitive."
>
> So for something like terrain LOD, I would say, there are known timestep-insensitive ways of doing it, and if you choose something that has a timestep sensitivity, you really ought to have a good reason for that. In something like physics, we just don't know how to solve the problem in a way that isn't timestep-sensitive, and that really kind of sucks.
>
> It's a difficult issue because if you drill down far enough, everything has a timestep sensitivity. Somewhere, though, there's a very wide, very fuzzy line, and everything on one side of the line is stable, and everything on the other side is openly questionable, and everything in the middle is, well, I don't know.

A curious observation. There are popular methods of reconciling these issues, but maybe the situation hasn't received proper study. I know, for example, that the current networked-simulation dogma is to use fixed steps for everything, for practical reasons (like latency). What hasn't been thoroughly established is how the workarounds (fixed steps) for the limitations in the super-zoomed, timestep-sensitive cases will be reflected in the results.
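For concreteness, the fixed-step approach usually takes the familiar accumulator shape below. This is a sketch, not tied to any particular engine; SimulateFixedStep and Render are stand-ins. The simulation always advances in constant increments regardless of frame rate, and rendering interpolates between the last two states.

// Sketch of the standard fixed-timestep loop: simulation results are
// independent of frame rate, which is what makes it attractive for
// networked play.  Function names are placeholders.

void SimulateFixedStep(double dt);   // advance the game state by exactly dt
void Render(double blend);           // draw, interpolating between the last two sim states

void RunFrame(double &accumulator, double frameTime)
{
    const double kStep = 1.0 / 60.0;   // invented rate

    accumulator += frameTime;
    while (accumulator >= kStep)
    {
        SimulateFixedStep(kStep);      // always the same dt, no matter the frame rate
        accumulator -= kStep;
    }
    Render(accumulator / kStep);       // leftover fraction becomes the interpolation factor
}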
-Lucas