Thread: Re: [Algorithms] Terrain performance comparisons (Page 2)
From: Adam P. C. <ac...@st...> - 2003-07-29 23:55:19
On Tue, 29 Jul 2003, Trent Polack wrote:

> When you say CLOD may be "useless" due to the amazing amount of
> triangles a card can push per second, just think about WHERE a CLOD
> engine puts all those triangles.

Ultimately, LOD (even in "dumb" forms) is necessary just to help the asymptotics of terrain rendering. Frustum culling, etc. help get the drawing time chopped down by a constant factor from the n^2 triangles; but even so, as your viewable distance increases, the number of triangles grows very quickly, especially if you have a wide view angle. LOD helps get this down to something more like linear growth with viewable distance -- which is a -huge- difference if your game has 2-5 mile visibility and details (procedural or otherwise) on the order of a few meters.

New hardware will fix constant factors quickly; you'll have to wait much longer to fix n^2 growth. Even better, a nice CLOD algorithm can scale with hardware -- fast users get lots of detail, slow users still get good detail but can trade it for visibility. Not everyone can afford a GeForce FX to play an online MMORPG (heh, they probably spent it on the subscription!)

> That said, sometimes an uber-kickass CLOD algorithm just is NOT needed for
> some games. As Charles mentioned, Asheron's Call 2 could have a much better
> terrain engine, but it's still playable, and pretty.

America's Army doesn't do any LOD at all on the terrain and does extremely well -- but the fog plane is also very close (which apparently gives certain Radeon owners an advantage *evil glare*) and the levels are very small. Even then, my GF4 (admittedly not the demon it once was) bogs down when lots of terrain is on the screen -- it even outweighs the characters! :|

In theory and practice, it's a tradeoff, but not such an obvious one.

Regards,
Adam C.

> ----- Original Message -----
> From: "Lucas Ackerman" <ack...@ll...>
> To: <gda...@li...>
> Sent: Tuesday, July 29, 2003 5:25 PM
> Subject: Re: [Algorithms] Terrain performance comparisons
>
> > Jonathan Blow wrote:
> >
> > > Let's look at this in a slightly different way: most graphics LOD
> > > algorithms are focused on decreasing the number of triangles/sec
> > > needed to display the scene. But triangles/sec is the single most
> > > abundant, uncontended-for resource in all of game development. So
> > > WTF are we putting all this energy there?
> >
> > LOD is not about quantity, it's about quality. Quantity can increase
> > with hardware speed and software efficiency, but quality improves by
> > increasing the intelligence with which you allocate your limited
> > quantity. If triangles/sec were the fundamental measure of game
> > quality, LOD would be pointless.
> >
> > Decreasing the quantity of triangles required to display part of a scene
> > without decreasing the quality is only half of the equation. The other
> > half is to increase the quantity used to display the parts of the scene
> > that will contribute most to the quality.
> >
> > It is also possible, of course, to use the flexibility in quality as a
> > means to put more stuff in a given scene, or to look at a scene in ways
> > you couldn't before.
> >
> > The reason a lot of time and energy goes into LOD is that it's not an
> > easy problem to solve all aspects of in a way that satisfies all people.
> >
> > -Lucas
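As a back-of-envelope illustration of Adam's n^2-versus-linear argument, here is a minimal C++ sketch. All numbers are invented for the example (2 m ground detail, and an idealized LOD that spends a roughly constant triangle budget per distance ring, halving resolution with each doubling of distance):

    #include <cstdio>
    #include <cmath>

    int main() {
        const double detail = 2.0;   // meters between samples at full res
        for (double view = 500.0; view <= 8000.0; view *= 2.0) {
            // Brute force: every cell in a view x view area, ~n^2 growth.
            double cells = (view / detail) * (view / detail);
            double brute = 2.0 * cells;              // 2 triangles per cell

            // Idealized LOD: concentric rings, each twice as far out and
            // sampled half as densely, so each ring costs about the same.
            double rings    = std::log(view / detail) / std::log(2.0);
            double ringCost = 2.0 * 64.0 * 64.0;     // ~constant tris/ring
            double lod      = rings * ringCost;      // ~log/linear growth

            std::printf("view %5.0fm  brute %12.0f tris  lod %9.0f tris\n",
                        view, brute, lod);
        }
        return 0;
    }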
From: Jonathan B. <jo...@nu...> - 2003-07-30 03:17:44
Lucas Ackerman wrote:

> LOD is not about quantity, it's about quality.

Scene quality is not just about number of triangles. As Thatcher mentioned, shaders are very important for the user experience. An LOD algorithm that's truly about quality will care a lot about shaders, and will provide some answer regarding how to combine shaders in the distance (so we can render efficiently), without popping.

Also, such an algorithm will support the use of other quality-enhancing techniques that games have been doing lately, like the use of normal maps to approximate high-res geometry with lower-res meshes. This technique was popularized by Doom 3, but several games are doing it (and the capability is in Granny now, and Casey surely had the goal of making it as good a general-use tool as possible).

So far as I know, current CLOD algorithms don't even consider the shader amalgamation problem. Also, they make the normal map technique largely infeasible. (It's possible, but very painful, and it uses a much larger amount of texture space, and so it's basically a quagmire.) With static meshes, we can use the normal map technique just fine. Also, in recent articles I discussed the use of color blending to morph between static LODs. I certainly didn't invent this idea, but I do like it, since it provides a way to solve the shader popping problem (and it becomes a lot more attractive on DX9+ hardware).

So if we are not content just to render a bunch of Lambert-shaded triangles, CLOD algorithms have some questions they need to answer. I'm not saying there's no answer, but I am saying there's a suspicious lack of CLOD authors asking these questions.

Maybe in the future we'll just use one shader for everything anyway, and have the shader parameters driven by data in a texture map. I fantasize about this. But until (if) that happens, CLOD needs to supply an answer if it wants to be an attractive solution.

About the quantity issue...

> LOD is not about quantity, it's about quality. Quantity can increase
> with hardware speed and software efficiency, but quality improves by
> increasing the intelligence with which you allocate your limited
> quantity. If triangles/sec were the fundamental measure of game
> quality, LOD would be pointless.

I must confess I still don't understand the point you're trying to make here. "Quantity", when it comes to triangle meshes, is all about achieving quality. Right? The whole point of wanting to render 100 million triangles per second is so that the scene looks better than it does at 1 million.

Yes, intelligently allocating those triangles is a good idea. But it's not like we live in a world where the only choices are CLOD or NO LOD. We have VIPM, we have static meshes. We have other stuff like geomipmapping. All of these things do the task of intelligently allocating triangles; they just have differing costs and benefits. When putting together a game, you want the most benefit at the least cost.

Nobody is denying that CLOD algorithms like ROAM do provide benefits. What I am saying is, we really do need to take an honest look at the cost. Suppose a CLOD algorithm would make my graphics code 100 times more complicated, and would provide a 1% increase in scene quality/speed. Is it a good idea for me to adopt that algorithm? No way! Okay, well suppose a CLOD algorithm would make my code 1% more complicated, and would provide a 100x increase in scene quality/speed. Would I adopt that? Hell yes, I would jump at the opportunity.

Well, the Intermediate Value Theorem says that there's somewhere in between those two extremes that's a breakeven point: if some algorithm is K times more complicated, and makes my scene B times better, and K and B balance just so, then it's a wash -- I could do it, or not, it doesn't matter. This breakeven point divides algorithms into two classes. So one important question is: on which side of that point does any particular algorithm lie? And then, once I have used that question to cull a lot of possibilities, I can study the remaining ones and ask: which one is closest to the ideal of infinite simplicity and infinite benefit?

That's what I'm talking about here. I don't deny that CLOD provides benefits. I'm saying don't just look at the benefits; look also at the cost. I am reminded of the Adbusters campaign "Economists Must Learn To Subtract".

None of my comments, by the way, applies to ROAM 2, since I know nothing about ROAM 2. From what Mark has mentioned, it's chunking and it handles arbitrary 3D topologies, which makes it different enough from ROAM 1 that no assumptions really carry across.

Lucas wrote:

> IMHO, the big win for CLOD is scalability

and Adam Paul Coates wrote:

> even better, a nice CLOD algorithm can scale
> with hardware -- fast users get lots of detail, slow users still get good
> detail, but can trade it for visibility.

Yes, CLOD can scale, but it is not necessary for scaling. VIPM scales (you adjust the sliders to a different point!) Static mesh algorithms scale (you just pick higher-res static meshes!)

-Jonathan.
From: Tom F. <tom...@bl...> - 2003-07-30 06:13:04
Slightly off on a tangent.

Shader LOD is less compelling to me than geometry LOD. If you have pretty much any sort of LOD - even static levels - then you're going to be drawing (very) roughly constant-area triangles. So your vertex-processing load per pixel is roughly constant. And obviously your pixel load per pixel is constant (excepting overdraw, which you can't do much about using LOD - you need occlusion algos, which is orthogonal to the problem).

So using geometric LOD has already brought stuff down by an order of magnitude or something. You don't need shader LOD to do that.

If you do use shader LOD, then you can get back even more performance by reducing quality in the distance, but it's not the sort of dramatic speedup you get with geometry LOD. My guess is that you get somewhere from a 50% to a 100% speedup. Now that's still a whole bunch of cool - I know I've sometimes spent days getting an extra 1% out of my engine - but it can be a lot of work all the way down the art pipeline, and that's continual work of "debugging" artists and so on, not just a one-off of writing the code. Unlike geometric LOD, it doesn't make stuff possible that wasn't possible before; it just makes existing stuff faster. Of course if you're smart you can write your engine with shader LOD in mind and it works without too much pain, but retrofitting it to an engine is moderately evil.

I had a point around here somewhere :-) Oh yes: shader LOD is good, but it's not nearly as vital as geometric LOD. It's good to have it, but if you don't and it's hard to retro-fit, don't worry too much about it.

TomF.
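For illustration, a minimal C++ sketch of the distance-driven shader selection being discussed here; the Material layout and the thresholds are hypothetical, and a real engine would tune them per material or derive them from projected screen-space size:

    struct Shader;   // opaque; stands in for whatever your engine uses

    struct Material {
        static const int kNumLods = 3;
        Shader* variants[kNumLods];   // [0] full quality ... [2] cheapest
    };

    Shader* pickShaderLod(const Material& m, float distToCamera) {
        // Hypothetical fixed thresholds, purely for the example.
        if (distToCamera < 50.0f)  return m.variants[0];
        if (distToCamera < 200.0f) return m.variants[1];
        return m.variants[2];
    }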
From: Charles B. <cb...@cb...> - 2003-07-30 06:58:48
I don't think this is right, Tom.

Let's ignore for the moment the pixel fill cost. The main thing you get from shader LOD is reduced CPU work in setting up those shaders. The problem is that if you have a scene full of objects, unless you merge them together or something, you've got quadratic growth of object count in the distance. So, if you try to push your distance out, your CPU time becomes dominated by shader setup for distant objects.

Also, on all these separate objects, geometric LOD isn't helping much; once you get them down to 2 triangles, they're still mighty expensive. At some point, other more difficult techniques have to come into play, like impostors, merging objects, etc. I assume all these objects are dynamic in the sense that they may move around relative to each other, so no pre-process merging can take place.

----------------------------------------------------
Charles Bloom email "cb" http://www.cbloom.com
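A minimal sketch of the standard mitigation for the per-object setup cost Charles describes: sort the frame's draw calls by shader so the expensive CPU setup is paid once per shader rather than once per (increasingly numerous) distant object. All names here are hypothetical, not from any particular API:

    #include <algorithm>
    #include <vector>

    struct DrawCall {
        int   shaderId;   // key for the expensive per-shader CPU setup
        void* mesh;       // opaque payload for the backend
    };

    void bindShader(int shaderId);   // hypothetical backend entry points
    void drawMesh(void* mesh);

    void submitFrame(std::vector<DrawCall>& calls) {
        // Group by shader so setup happens once per group.
        std::sort(calls.begin(), calls.end(),
                  [](const DrawCall& a, const DrawCall& b)
                  { return a.shaderId < b.shaderId; });
        int bound = -1;
        for (const DrawCall& dc : calls) {
            if (dc.shaderId != bound) {
                bindShader(dc.shaderId);
                bound = dc.shaderId;
            }
            drawMesh(dc.mesh);
        }
    }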
From: Jonathan B. <jo...@nu...> - 2003-07-30 07:00:12
Tom Forsyth wrote:

> Shader LOD is less compelling to me than geometry LOD. If you have
> pretty much any sort of LOD - even static levels - then you're going to
> be drawing (very) roughly constant-area triangles. So your
> vertex-processing load per pixel is roughly constant. And obviously your
> pixel load per pixel is constant (excepting overdraw, which you can't do
> much about using LOD - you need occlusion algos, which is orthogonal to
> the problem).

I think we are thinking about shader LOD for different reasons!

It sounds to me like you're saying, the purpose of shader LOD is to make far-away pixels cheaper to draw, and that's not really so important in the grand scheme of things. (Correct me if I am misunderstanding you).

That is not at all why I think shader LOD is important.

The phenomenon I am worried about is that if you don't merge shaders, then there is an artificial limiter on how much you can LOD your geometry, because you can't merge triangles that use different shaders. So what you end up with is, past a certain distance, a lot of small batches of triangles that just won't reduce any further. Your LOD is prevented from operating. And it's really slow to submit all those batches to the hardware, due to all the context switching.

I am not worried about the amount of time it takes to actually render a single pixel (which is why I was dreaming about a world in which we use one shader for everything, and just data-drive its parameters).

-Jonathan.
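One sketch of the kind of merge being asked for here: past a distance threshold, demote every object's material to a single shared "far" material, so neighboring geometry can be combined into one batch and the geometric LOD can keep collapsing across objects. The names and the threshold are made up for the example:

    struct Object {
        float distToCamera;
        int   materialId;
    };

    const int   kSharedFarMaterialId = 0;     // one cheap, shared shader
    const float kFarThreshold        = 300.0f;

    // Up close an object keeps its own shader; beyond the threshold it
    // falls back to the shared material so batches (and LODs) can merge.
    int effectiveMaterial(const Object& o) {
        return (o.distToCamera > kFarThreshold) ? kSharedFarMaterialId
                                                : o.materialId;
    }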
From: Tom F. <tom...@bl...> - 2003-07-30 07:41:23
Ah, gotcha. Yes, you and Charles have a good point: when you simplify shaders you can also merge previously different shaders into one, which decreases the setup time, increases your batching, and enables more LOD than you could have had otherwise.

There are still annoyingly hard limits on how much you can use batching to improve speed - you still have to get those diffuse textures and orientations and bone matrices to the hardware somehow. But at least you can render the whole person with one shader rather than the 20 you would use up-close. Very true. I'll be over here eating my humble pie.

TomF.
From: Loic B. <loi...@wa...> - 2003-07-30 09:58:22
Hi,

I'm not sure I've read the whole thread (it's quite big), but I'd like to know if you people have talked about the memory cost of each of the main techniques (CLOD, ROAM, etc.)? If not, could I get some insights?

For instance, for a heightmap of say 512*512, how much memory will each technique take? I know that algorithms like CLOD are adaptive to the terrain curvature and there may be a lot of settings that tweak the overall size, but I could still get an overall idea, I guess...

Loic
From: Clint B. <cl...@ha...> - 2003-07-30 16:57:50
Loic Baumann wrote:

> For instance, for a heightmap of say 512*512, how much memory will each
> technique take? I know that algorithms like CLOD are adaptive to the
> terrain curvature and there may be a lot of settings that tweak the
> overall size, but I could still get an overall idea, I guess...

Take a look here for an evaluation of some algorithms. The last table shows the memory used for given heightmap dimensions:

http://www.vterrain.org/LOD/memory.html

-c
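As a rough baseline before any per-algorithm overhead (the table linked above has measured numbers), the raw-data arithmetic for the 512*512 case; chunked schemes then add duplicated edge/skirt verts and per-chunk headers, while ROAM-style schemes add per-node bookkeeping, so real footprints run a few times these figures:

    #include <cstdio>

    int main() {
        const long samples = 512L * 512L;
        std::printf("raw 16-bit heights      : %ld KB\n", samples * 2 / 1024);
        std::printf("float positions (x,y,z) : %ld KB\n", samples * 12 / 1024);
        std::printf("float normals   (x,y,z) : %ld KB\n", samples * 12 / 1024);
        return 0;   // prints 512 KB, 3072 KB, 3072 KB
    }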
From: Thatcher U. <tu...@tu...> - 2003-07-30 14:04:06
On Jul 30, 2003 at 01:59 -0500, Jonathan Blow wrote:

> That is not at all why I think shader LOD is important.
>
> The phenomenon I am worried about is that if you don't merge
> shaders, then there is an artificial limiter on how much you can LOD
> your geometry, because you can't merge triangles that use different
> shaders. [...]

There's yet another (related) point I would like to make about shader LOD: for decent fill-rate (and antialiasing), it's essential to have a way to scale your texture accesses & shader computations with pixel size. For example, mip-mapping is a form of shader LOD. But mip-mapping is basically good for diffuse lighting on a flat surface; it doesn't work quite right for normal maps, etc. I think to be compelling, the next frontier of geometric LOD has to confront the shader LOD issue and unify it with geometric LOD.

In my own LOD stuff in the past, I've handled this by using pre-baked mip-mapped textures (w/ surface normals based on the hi-res geometry) and unique texture mapping, which works for diffuse shading with static lights and static geometry, but obviously that's pretty limited.

--
Thatcher Ulrich
http://tulrich.com
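A minimal sketch of the normal-map half of that point: building one mip texel by averaging four parent normals and renormalizing. This is the naive approach; the renormalization step is exactly where the surface's variance gets thrown away, which is part of why normal maps "don't work quite right" under standard mip-mapping:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Average four parent texels of a normal map. Wherever the normals
    // disagree, the average is shorter than unit length; renormalizing
    // keeps lighting sane but discards that roughness information.
    Vec3 mipNormal(const Vec3& a, const Vec3& b, const Vec3& c, const Vec3& d) {
        Vec3 m = { a.x + b.x + c.x + d.x,
                   a.y + b.y + c.y + d.y,
                   a.z + b.z + c.z + d.z };
        float len = std::sqrt(m.x * m.x + m.y * m.y + m.z * m.z);
        if (len > 0.0f) { m.x /= len; m.y /= len; m.z /= len; }
        return m;
    }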
From: Lucas A. <ack...@ll...> - 2003-07-30 22:00:34
Jonathan Blow wrote:

> Lucas Ackerman wrote:
>
> > LOD is not about quantity, it's about quality.
>
> Scene quality is not just about number of triangles.

I'm pretty sure I just said that, but yeah.

> As Thatcher mentioned, shaders are very important for the user
> experience. An LOD algorithm that's truly about quality will care a lot
> about shaders, and will provide some answer regarding how to combine
> shaders in the distance (so we can render efficiently), without popping.

Shaders are another problem, but I agree that it's another requirement dimension that will impact the relative usefulness of different approaches. It's an open problem for future LOD schemes to address. Also touching on this is the lack of base-mesh or arbitrary-topology simplification for extreme views. This also kind of ties in with the observation that triangles aren't a perfect graphics primitive.

> About the quantity issue...
>
> > LOD is not about quantity, it's about quality. Quantity can increase
> > with hardware speed and software efficiency, but quality improves by
> > increasing the intelligence with which you allocate your limited
> > quantity. If triangles/sec were the fundamental measure of game
> > quality, LOD would be pointless.
>
> I must confess I still don't understand the point you're trying to make
> here. "Quantity", when it comes to triangle meshes, is all about
> achieving quality. Right? The whole point of wanting to render 100
> million triangles per second is so that the scene looks better than it
> does at 1 million.

Okay, I'll try again. Let's look at it this way: higher quantity allows higher quality, but you don't get to pick the quantity! It's an independent variable. There are only so many triangles/sec a given hardware configuration can push, period. The quality knob is the one you can turn.

> Yes, intelligently allocating those triangles is a good idea. But it's
> not like we live in a world where the only choices are CLOD or NO LOD.
> We have VIPM, we have static meshes. We have other stuff like
> geomipmapping. All of these things do the task of intelligently
> allocating triangles; they just have differing costs and benefits. When
> putting together a game, you want the most benefit at the least cost.

I'd like to suggest that there are benefit/cost peaks that aren't being explored because they don't fit in the traditional view of what game engines can support.

> None of my comments, by the way, applies to ROAM 2, since I know
> nothing about ROAM 2. From what Mark has mentioned, it's chunking
> and it handles arbitrary 3D topologies, which makes it different enough
> from ROAM 1 that no assumptions really carry across.

Erm, just to be clear, Mark's PVA hierarchies stuff is not part of ROAM 2. It works on a 3D analogy to ROAM, with a tetrahedron-based volume hierarchy and topology-independent VIPM per node. It still falls in the ROAM family of algorithms, as the optimizing principles and such are the same, but the (non-surface-oriented) problem domain necessitates totally different data structures and so forth. FWIW, what we call ROAM implementations apply to surface hierarchies (ROAM 2 included - though it can do other things within chunks). There isn't a real 2.0 paper yet, but Mark's ROAM 2 info page is here:

http://www.cognigraph.com/ROAM_homepage/ROAM2/

> Lucas wrote:
>
> > IMHO, the big win for CLOD is scalability
>
> and Adam Paul Coates wrote:
>
> > even better, a nice CLOD algorithm can scale
> > with hardware -- fast users get lots of detail, slow users still get
> > good detail, but can trade it for visibility.
>
> Yes, CLOD can scale, but it is not necessary for scaling. VIPM
> scales (you adjust the sliders to a different point!) Static mesh
> algorithms scale (you just pick higher-res static meshes!)

Again, I think we really have different requirements in mind. Scaling with hardware is one thing, adapting to arbitrary scales of view is another, and demanding continuity across scales is yet another. Terrain being the (effectively solved) special case doesn't help either, as it becomes the ubiquitous "me too" result.

I do have an example that I can't yet support (since it's not ready for public consumption), which is the visibility algorithm we're working on. It really lives and dies by the connectivity and per-element adaptivity. The result is that it works great for multires geometry, being very adaptive and robust, but it also fundamentally requires it.

-Lucas
From: Andras B. <bn...@ma...> - 2003-07-24 18:25:18
It's very difficult to compare different terrain rendering algorithms objectively. Performance depends on so many things that it is very hard to do fair comparisons. Quality measurements are also troublesome; it's like comparing MP3 with Vorbis... Which one is better? The more people you ask, the more answers you will get.

Every method has its advantages and drawbacks. You have to balance tradeoffs and choose wisely, depending on your requirements.

Well, I know I didn't help much, but I just can't... :(

Bandi

Thursday, July 24, 2003, 1:38:59 PM, De Boer wrote:

> Anyone know of a paper where a number of terrain algorithms are compared
> objectively in terms of speed/quality? Most of the time I see a paper,
> say on an extension to an algorithm, and they state how much extra fps
> the extension got over the old algorithm. Quality isn't usually even
> measured. But are there any papers dedicated just to "comparing" terrain
> algorithms as objectively as possible? Although opinions of "which
> algorithms are best" would be useful, I am really looking for papers
> with objective measurements (opinions seem to vary way too much).
>
> Here is the old original msg from 2000 if you don't remember it:
>
> > Concerning the speed of terrain LOD algorithms:
> >
> > If I made performance claims on my site, people would certainly
> > complain that the results are entirely based on context (type of
> > dataset, amount of camera motion) and that their implementation is
> > really fast if I had only tested it their way :-)
> >
> > In fact, it's even harder than that to make meaningful comparisons,
> > since some algorithms (e.g. LK, TV) enforce a global error metric,
> > while others (ROAM, SM) enforce a polygon count target. The first kind
> > will have framerates that vary widely, while the second will produce
> > relatively stable performance.
> >
> > A lot of the discussion on the Algorithms list has been about how many
> > triangles/second each implementation gets through the rendering
> > pipeline, which seems to me a rather silly metric - it's fps at a
> > level of perceived quality, NOT raw throughput, that is the value of a
> > LOD algorithm.
> >
> > -Ben
> > http://vterrain.org/
>
> -Ryan De Boer
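For reference, the usual conversion behind the "global error metric" schemes Ben distinguishes from budget-driven ones: project a world-space geometric error into pixels and refine wherever it exceeds a tolerance. A minimal sketch, assuming a symmetric perspective projection:

    #include <cmath>

    // World-space geometric error 'delta' viewed at distance 'dist'
    // projects to roughly this many pixels on screen.
    float screenSpaceError(float delta, float dist,
                           float viewportWidthPixels,
                           float horizontalFovRadians)
    {
        float k = viewportWidthPixels /
                  (2.0f * std::tan(horizontalFovRadians * 0.5f));
        return delta * k / dist;
    }

    // Error-driven schemes refine while this exceeds a pixel tolerance,
    // so triangle counts (and framerate) float; budget-driven schemes
    // refine greedily to a triangle target, so error floats instead.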
From: Tom F. <tom...@bl...> - 2003-07-24 20:21:29
There was some talk about this ages ago, during the height of the ROAM vs VIPM wars. I think people decided to adopt the moon surface dataset, and obviously there would have been some sort of standardised fly-through of it, but we couldn't decide on a good measure of quality, since just measuring pixel-invariance doesn't work all that well for most of the algorithms - even if you could find a decent measure (there must be some out there, surely?), there are other considerations like "how much does it pop" and so on.

That said, a common dataset and framework can't hurt - at least then you can run stuff side by side, put the quality settings at what _you_ consider to be about the same, and check performance. It also depends on whether you don't mind using 100% of the CPU just for rendering, or if you need the CPU for other things and would like to throw as much stuff onto the GPU as possible (vterrain.org seems to be pretty cursory about this aspect and the routines that use it, but it's very important for many projects).

If you have any specific questions, I'm sure we'll be happy to help :-)

TomF.
From: Daniel D. <du...@ya...> - 2003-07-24 21:45:45
On Thu, 24 Jul 2003, Tom Forsyth wrote:

> If you have any specific questions, I'm sure we'll be happy to help :-)

I have some questions :)

I took it upon myself to try to analyze the modern contenders for "terrain engine of the year" to see if any of them performed as a panacea in the realms of quality and performance. Having been frustrated by the algorithm-of-the-week syndrome that seems to prevail, I figured I'd wind up needing a couple of easily swappable plugins for our engine (maybe even needing to switch between algorithms depending on the context).

I'm really trying to come up with a general solution for a general game engine (even if that "general" solution is a set of swappable plugins that present an identical, or similar, interface).

The questions:

What are the major contending algorithms for modern systems? The three I've decided to try out are ChunkedLOD, ROAM 2.0 and geomipmapping. Are there any others whose feature sets are drastically different from these three?

If I want to maintain backwards compatibility with older systems, should I use an older algorithm (original ROAM), or do the newer ones still work even if you can't assume there's a fast GPU?

Is it inherently impossible to boil down a set of options for terrain into some kind of common subset? (E.g., all can be heightmap based, all can deal with texture palettes, all have a similar error metric, etc.)

Thanks in advance

--
email: du...@to...  www: http://paradox.tosos.com/~duhprey  icq: 129354442
From: Tom F. <tom...@bl...> - 2003-07-24 22:36:36
That pretty much covers it. My fave is Thatcher's ChunkedLOD because it's so hardware-friendly - pretty much every game I've worked on has been limited by the CPU, not the GPU.

There's a sort-of stripped-down variant of it that uses precomputed index lists to cope with the cracks, called "Simplified Terrain Using Interlocking Tiles" by Greg Snook in Game Programming Gems 2. It's incredibly simple, and therefore very fast indeed. There's no morphing, but because it's so simple you can throw loads of tris through it, which can often compensate. Almost everything is precomputed; you just pick a VB and an IB and throw them at the card.

TomF.
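A minimal sketch of how such an interlocking-tile scheme might pick its precomputed index buffer at render time; the array layout and LOD convention here are hypothetical, not Snook's actual code:

    struct IndexBuffer;   // hypothetical GPU-resident index list

    // Precomputed offline: one index list per (tile LOD, stitch flags)
    // combination, where a flag is set when that neighbor is coarser and
    // the shared edge must drop to the neighbor's spacing to avoid cracks.
    const int kNumLods = 4;
    extern IndexBuffer* g_tileIndices[kNumLods][2][2][2][2];

    // Assumes a larger LOD number means a coarser tile.
    IndexBuffer* pickTileIndices(int lod,
                                 int north, int east, int south, int west)
    {
        return g_tileIndices[lod][north > lod ? 1 : 0]
                                 [east  > lod ? 1 : 0]
                                 [south > lod ? 1 : 0]
                                 [west  > lod ? 1 : 0];
    }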
From: Lucas A. <ack...@ll...> - 2003-07-24 23:27:52
Daniel Duhprey wrote:

> What are the major contending algorithms for modern systems?

Hi Daniel,

I'd nominate: the modern ROAM derivations (2.0 and so forth) for performance, generalization and flexibility; the similar yet oh-so-simple "Visualization of Large Terrains Made Easy" by Lindstrom and Pascucci (and damn it if Valerio beats us at Settlers of Catan again tonight) for the simplicity/performance tradeoff; and VIPM methods (I still hate this name - Mark D. thinks we should call them "progressive vertex array sliding window" methods, since view independence isn't the main point, but VIPM seems to have stuck with this crowd), either as a stand-alone method or for per-chunk GPU optimizations of a CLOD optimizer. There are also lots of domain-specific and cross-domain approaches, like Thatcher Ulrich's ChunkedLOD, which you mentioned.

It really depends on where your interests are, what you consider 'major,' and what kind of 'modern system' you have. I'm obviously in the ROAM camp, and there are necessarily a set of considerations that come with that. Geomipmapping is cute (mipmapping easy, GPU use, yay!), but not so smart (ok, maybe I'm a CLOD junkie), and not a real contender in my book.

> The three I've decided to try out are ChunkedLOD, ROAM 2.0 and
> geomipmapping. Are there any others whose feature sets are drastically
> different from these three?

Almost certainly, but it comes down to what you need to do. Your "general game engine" still needs some fixed scope to start from. Do you:

Need CLOD adaptivity? This might be important for simulations (requiring high-fidelity object-on-terrain placement), or irrelevant if the terrain is just a pretty backdrop. It's also a scalability consideration - is being able to view from 50,000 feet and then zoom in to a nose-on-the-ground viewpoint necessary? Can you rely on always having some particular type of viewpoint (e.g., eye-level and X meters away for an FPS), or is more flexibility required?

Have to work from a classic pre-made heightfield? Or want to generate it procedurally as needed?

Have a fixed time-per-frame budget?

Need strict screen-space error bounds? Some methods can deliver on this, others make greedy approximations, and still some are just wild guesses.

Have CPU utilization limits? Or to put it another way, is your terrain rendering prioritized over AI routines?

Have a GPU to use? Have to use it? Must you prefer it to CPU use?

Require "geomorphing"? This is, IMHO, a case of trying to solve the wrong problem. Still, some people demand it.

Have memory budget or access pattern issues? This is likely to be important on consoles.

Require discontinuous viewpoint motion often (which kills incremental / time-space-coherent algorithms)?

Need to load areas dynamically as the viewpoint travels? This pertains to extreme world size/detail situations, and your platform's storage modes (memory vs disk vs network streamed).

Have certain texturing requirements or limitations? Tiles or splatting or one big texture?

Want to modify the terrain at runtime? We all like to blow stuff up (or even build stuff), but do you NEED to?

Benefit from being able to generalize your LOD to non-terrain features? This could be useless for your app, or a huge win.

The bottom line is that you really need to know your platform, audience, and so forth to pick the right feature set to shoot for. The more of these sorts of questions you can answer for yourself, the better you can evaluate the various options. ROAM derivatives can address just about all of them (possibly simultaneously, if you're really sharp), but (despite deceptive appearances) it is NOT an easy algorithm to master (I've been studying it for years - ROAM 2.0 improves, but it's still a steep learning/performance curve). Others will excel in certain areas and fail elsewhere.

> If I want to maintain backwards compatibility with older systems, should
> I use an older algorithm (original ROAM), or do the newer ones still
> work even if you can't assume there's a fast GPU?

ROAM 2.0 is an all-around improvement over classic ROAM, but some things like chunk output won't benefit an older system (as it's really a GPU utilization technique). Some techniques are GPU-exclusive, some utterly fail in GPU utilization, and some can work well either way. If you're just starting development, will you even want to think about pre-GPU hardware support by the time your system is in a released game? Why not support the GBA as well, then? (Notice: that was a joke. Although we could discuss the merits of terrain engines in portable game systems...)

> Is it inherently impossible to boil down a set of options for terrain
> into some kind of common subset? (E.g., all can be heightmap based, all
> can deal with texture palettes, all have a similar error metric, etc.)

I'm afraid so; not all terrain algorithms are cut from the same cloth (raycasting, anyone? NURBS/curves/subdivision surfaces? TINs?). The most popular subset is easily "heightfield based." Some other subsets might be the ones you described, or portions of the elements I listed above. So maybe the options don't boil down into a single subset. Regardless, the game-terrain flavors you want your renderer to support still might.

-Lucas
From: Trent P. <tr...@po...> - 2003-07-25 00:16:56
I'm in the Chunked-LOD camp myself. It's flexible, easy to implement, and speedy. The only problem I can foresee with the algorithm is that I think using a dynamic dataset (a la TreadMarks) may not be feasible. I haven't played around with it much myself, so there's not much to back that statement up with other than mere speculation. I think it would be possible to use a dynamic dataset, but it would probably require a rather heavily modified implementation of the algorithm. This is, though, as I said, mere speculation.

As for teaching/learning the Chunked-LOD theory and implementation, I tend to use the "stepping stone" method. Basically, I tell people to learn the basics of geomipmapping (the algorithm is pretty basic by nature), code a sample implementation, and just mess around with the code a bit to see the pros and cons of the algorithm. Once someone has done that, learning, and implementing for that matter, Chunked-LOD tends to be a lot easier.

Of course, it's important to note that geomipmapping is simply not a real competitor for "best modern terrain algorithm", simply because it lacks a lot of the optimizations that Chunked-LOD has. I personally consider Chunked-LOD an evolution of geomipmapping. They're very similar in many ways, but the former takes many more steps to increase the overall performance of an implementation.

However, with all that said, Lucas definitely brings up some very valid points. The ROAM 2.0 algorithm, from what I've seen, produces really incredible results. It's flexible, speedy, and less memory-intensive than Chunked-LOD; not to mention the fact that the scalability of an implementation is amazing. In one of my simple implementations, I can zoom from "way out" in 3D space to a "nose-to-the-ground" view without so much as a stutter in performance. Though, on the whole, I think ROAM 2.0 is a much more complex (code-wise) system than Chunked-LOD, and it really requires you to do your homework.

Both of these algorithms produce top-notch results. In the end, however, it really depends on which of the two you feel best suits your game. *shrug*

---
Trent Polack
tr...@po...
www.polycat.net
From: tweety <mi...@bb...> - 2003-07-25 16:22:34
I am currently writing a hunting program as a hobby. As such, there aren't currently any animals, animations, etc., only a terrain. I used a Perlin function to generate it randomly, and it looks nice. Until now, I've been generating a display list and sending it all down the pipeline.

Now I'm working on drawing, each frame, just what's visible (a trapezoid) around the origin (the modelview matrix is the identity matrix). I just use a buffer and fill in the positions of the vertices (the trapezoid), with the height given by my pseudo-random functions. It's faster than it was originally, and it has the added benefit of being able to extend to infinity (within the approximations of float/double...).

What do you think of this, you being more advanced in this field?

----------------------------------
Peace and love,
Tweety
mi...@bb... - twe...@us...
YahooID: tweety_04_01
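A minimal sketch of the approach Tweety describes, assuming a hypothetical deterministic perlinHeight(x, z) function:

    struct Vertex { float x, y, z; };

    float perlinHeight(float x, float z);   // hypothetical noise function

    // Refill an (nx * nz) vertex grid covering just the visible footprint.
    // Because heights come from a deterministic function of world
    // position, the terrain extends indefinitely with no stored heightmap.
    void fillVisiblePatch(Vertex* out, int nx, int nz,
                          float x0, float z0, float step)
    {
        for (int j = 0; j < nz; ++j) {
            for (int i = 0; i < nx; ++i) {
                float wx = x0 + i * step;
                float wz = z0 + j * step;
                Vertex v = { wx, perlinHeight(wx, wz), wz };
                out[j * nx + i] = v;
            }
        }
    }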
From: Pierre T. <p.t...@wa...> - 2003-07-29 13:09:38
> I'm personally in the Chunked-LOD camp myself. It's flexible, easy to
> implement, and speedy. The only problem I can foresee with the algorithm
> is that I think using a dynamic dataset (a la TreadMarks) may not be
> feasible. I haven't played around with it much myself, so there's not
> much to back that statement up with other than mere speculation. I think
> it would be possible to use a dynamic dataset, but it would probably
> require a rather heavily modified implementation of the algorithm. This
> is, though, as I said, mere speculation.

I would be interested to hear more opinions about this (esp. Thatcher's, of course). I need a dynamic terrain for a project. Time is short. I'm a bit familiar with Thatcher's chunked-LOD implementation, but maybe not enough to imagine all the required modifications.

It looks like it's possible, since the code already performs vertex morphing all the time, but maybe I'm missing one or two difficulties.

Pierre
From: Thatcher U. <tu...@tu...> - 2003-07-29 14:35:53
On Jul 29, 2003 at 03:17 -0700, Pierre Terdiman wrote:

> I would be interested to hear more opinions about this (esp. Thatcher's,
> of course). I need a dynamic terrain for a project. Time is short. I'm a
> bit familiar with Thatcher's chunked-LOD implementation, but maybe not
> enough to imagine all the required modifications.
>
> It looks like it's possible, since the code already performs vertex
> morphing all the time, but maybe I'm missing one or two difficulties.

Hm. If you're talking large-scale mod'ing of the terrain, I don't think I'd recommend it. (To do it "right", you would probably need to write changed data back to disk, and as-is the code uses variable-sized chunks all packed together in one file; plus the existing preprocessing is global, i.e. local variances can propagate influence arbitrarily far.)

On the other hand, if you're talking localized craters and stuff like that, one approach would be to modify verts in memory, and remember where your craters are, so that when you re-load a chunk from disk, you can re-apply any vert mods. There are some problems with this as well; e.g. if you drop a bomb onto a flat area, there may not be many verts there, so the crater would look bad.

I suppose you could go for a tessellation that is not adaptive to terrain curvature, something much more like geomips, and then the terrain mods are pretty straightforward. That's probably what I'd recommend. Geomorphing would be good to avoid pops. If you need to draw really big terrains, then a quadtree scheme instead of a flat tiled scheme could help, but if the terrain is not huge, that may not be necessary.

--
Thatcher Ulrich
http://tulrich.com
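A minimal sketch of the "remember where your craters are" idea: record each edit as a delta and re-apply the deltas whenever a chunk's verts are (re)loaded from disk. The bowl shape and all names are made up for the example:

    #include <cmath>
    #include <vector>

    struct Crater { float cx, cz, radius, depth; };
    std::vector<Crater> g_craters;   // persisted alongside the terrain

    // Height offset at (x, z) from all recorded craters: a smooth bowl
    // that is 'depth' deep at the center and fades to zero at the rim.
    float craterOffset(float x, float z) {
        float dy = 0.0f;
        for (const Crater& c : g_craters) {
            float dx = x - c.cx, dz = z - c.cz;
            float d = std::sqrt(dx * dx + dz * dz);
            if (d < c.radius)
                dy -= c.depth * 0.5f *
                      (1.0f + std::cos(3.14159265f * d / c.radius));
        }
        return dy;
    }
    // On chunk (re)load: for each vert, v.y += craterOffset(v.x, v.z).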
From: Jonathan B. <jo...@nu...> - 2003-07-25 09:10:16
|
> ROAM derivatives can address just about all of them (possibly
> simultaneously, if you're really sharp), but (despite deceptive
> appearances) it is NOT an easy algorithm to master (I've been studying
> it for years -- ROAM 2.0 improves, but it's still a steep
> learning/performance curve). Others will excel in certain areas and
> fail elsewhere.

Do you need to have your viewpoint move quickly, while rendering with
nontrivial terrain detail? If so, do not bother with any kind of
frame-coherent CLOD (including any version of ROAM).

In fact, I am still waiting for people to understand that frame coherence
is to be avoided in games whenever possible. I am going to submit a
SIGGRAPH presentation on that (and some other things) for next year.

    -Jonathan.
|
From: Lucas A. <ack...@ll...> - 2003-07-25 21:51:58
|
Jonathan Blow wrote:

>> ROAM derivatives can address just about all of them (possibly
>> simultaneously, if you're really sharp), but (despite deceptive
>> appearances) it is NOT an easy algorithm to master (I've been
>> studying it for years -- ROAM 2.0 improves, but it's still a steep
>> learning/performance curve). Others will excel in certain areas and
>> fail elsewhere.
>
> Do you need to have your viewpoint move quickly, while rendering with
> nontrivial terrain detail? If so, do not bother with any kind of
> frame-coherent CLOD (including any version of ROAM).
>
> In fact, I am still waiting for people to understand that frame
> coherence is to be avoided in games whenever possible. I am going to
> submit a SIGGRAPH presentation on that (and some other things) for
> next year.
>
>     -Jonathan.

You're welcome to "avoid frame coherence" as much as you like, and we'll
see how much gamers enjoy headache-inducing games. What you probably
meant is to forget exploiting coherence.

Frankly Jon, this is ridiculous. We all understand that an incrementally
updating algorithm must make exceptions for incoherent cases, but this is
allowed for in ROAM (in a simple and flexible manner), and dismissing it
out of hand is simply not warranted.

I know this discussion has taken place at least several times in the
past, but you still continue to publicly mischaracterize the ROAM family
of algorithms. I maintain that your particular implementation experiences
do not have any bearing on what ROAM is or is not capable of, and as such
the biggest failings of ROAM remain social in nature, not technical. This
is not the proper forum for an ongoing debate, so for the algorithms-list
I will just restate the relevant technical points.

Despite frequent, rapid, or large discontinuous viewpoint/direction
changes, your system MUST be presenting primarily coherent view sequences
to the user, or else they would be left with a high-speed flickering
slideshow instead of an animating 3d visualization. Whatever coherence
exists for the viewer in a given sequence of frames is also going to
exist in the output mesh (and primarily in the hierarchy, not necessarily
the rendered leaves, though they will tend to update in a spatially
sparse manner).

Should the cost of updating the output mesh exceed that of creating a new
one from scratch, this is an easy and correct fallback mode, which may
then resume coherent updating as warranted. The degree of view coherence
can be measured from the changes of viewpoint, refinement priorities, and
hierarchical frustum culling results of successive frames. These factors
are useful in deciding when the incremental update cost may exceed the
from-scratch construction cost for a frame.

Exploiting frame coherence is arbitrarily more scalable (both in
framerate and output size) than not doing so, despite the issues with
incoherent updates. To double the fps of a non-incremental system (for a
given framerate-independent view motion) doubles the cost of the entire
process (updating the mesh and rendering it), whereas it hardly impacts
the mesh updating cost of an incremental system, since the change in work
per frame would be cut in half. Doubling the output size has similar
implications (besides doubling rendering time): for a non-incremental
system it doubles the large cost of rebuilding the mesh from scratch,
whereas for an incremental system it only doubles the small incremental
change cost.
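
To put toy numbers on the fps half of that (every constant below is
invented; only the shape of the scaling matters):

#include <cstdio>

int main()
{
    const float k_build = 1.0f;        // cost per output triangle, rebuild
    const float k_update = 4.0f;       // cost per *changed* triangle
    const float tris = 100000.0f;      // output mesh size
    const float changes_per_sec = 50000.0f; // fixed by the view motion

    for (int fps = 30; fps <= 120; fps *= 2) {
        float rebuild = k_build * tris * fps;           // grows with fps
        float incremental = k_update * changes_per_sec; // flat in fps
        std::printf("%3d fps: rebuild %.0f, incremental %.0f\n",
                    fps, rebuild, incremental);
    }
    return 0;
}

Going from 30 to 120 fps quadruples the rebuild column while the
incremental column doesn't move, even with a generous per-triangle
penalty for the incremental bookkeeping.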
Finally, I'll observe that quickly moving viewpoints actually relieve the
need for nontrivial terrain detail. The details obviously become less
relevant as the viewpoint moves increasingly fast (that's not to say
there shouldn't be some detail, just that the particulars become
increasingly irrelevant). This is practically a no-brainer.

ROAM can quite easily accommodate per-frame time constraints due to the
nature of the dual-queue optimizing process: the work happens in order of
most important to least important changes, so stopping the process early
is OK (as the linear cost of incremental work brings diminishing
returns), and successive frames will pick up where it left off to fill in
the details when the view stabilizes again and coherence increases.

Our ongoing work indicates that ROAM will continue to cleanly scale into
the hundreds of millions of triangles per second as hardware capabilities
grow, and we're always curious to see how competing methods will fare and
what new approaches will become popular.

Incidentally, my current research is on visibility algorithms for
multires geometry, and despite the currently impressive adaptivity and
robustness of the latest method, my biggest fear remains that its
scalability will be limited without an incremental scheme to exploit
coherence.

-Lucas
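
P.S. Since "stopping the process early is OK" keeps coming up, here it is
in sketch form. I've collapsed the two queues into one and made up the
MeshOp type; the real optimizer interleaves the split and merge queues,
but the budget logic is the same:

#include <chrono>
#include <queue>

// Stand-in for a ROAM split or merge; 'priority' is the screen-space
// benefit of performing it.
struct MeshOp {
    float priority;
    // ...which triangle/diamond to touch would go here
    bool operator<(const MeshOp& o) const { return priority < o.priority; }
};

static void apply(const MeshOp&) { /* split or merge the mesh here */ }

// The essential property: work happens in strictly descending priority
// order, so cutting the loop off at the frame budget just defers the
// least important changes -- the queue persists into the next frame.
void update_mesh(std::priority_queue<MeshOp>& ops, double budget_ms)
{
    using clock = std::chrono::steady_clock;
    const auto start = clock::now();
    while (!ops.empty()) {
        double elapsed_ms = std::chrono::duration<double, std::milli>(
            clock::now() - start).count();
        if (elapsed_ms >= budget_ms)
            break;                  // out of time; resume next frame
        apply(ops.top());
        ops.pop();
    }
}
|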
From: Jonathan B. <jo...@nu...> - 2003-07-26 00:13:33
|
> You're welcome to "avoid frame coherence" as much as you like, and
> we'll see how much gamers enjoy headache-inducing games. What you
> probably meant is to forget exploiting coherence.

Hey man, you know what I meant.

> I know this discussion has taken place at least several times in the
> past, but you still continue to publicly mischaracterize the ROAM
> family of algorithms. I maintain that your particular implementation
> experiences do not have any bearing on what ROAM is or is not capable
> of, and as such the biggest failings of ROAM remain social in nature,
> not technical. This is not the proper forum for an ongoing debate, so
> for the algorithms-list I will just restate the relevant technical
> points.

Well, I just want to say that I don't have any special interest in
dissing ROAM, as you may think. I have used it as an example because it
is one of the more well-known terrain systems. I could just as easily
have made all the same negative statements about the system I presented
at SIGGRAPH 2000 (and in fact I often do) -- but not many people know how
that system works, so it wouldn't be as effective an example.

> I was expecting a reappearance of the "cycle of death" argument, and I
> just don't buy it. Incremental algorithms do not require that you
> always process all the possible work in a given frame. As I noted:

Well, the "cycle of death" is an actual physical phenomenon. If "computer
science" is actually a science, then this is one of the kinds of basic
entities that it deals with, alongside, say, the question of how long it
takes to sort a group of objects.

So I think that complex algorithms should be designed around these core
observations about what it means to be a certain kind of computer
program. I'm not really satisfied with the approach of "design an
algorithm ignoring some of these facts, then add workarounds." What
you're explaining here seems to me to be a workaround that isn't very
robust:

>> ROAM can quite easily accommodate per-frame time constraints due to
>> the nature of the dual-queue optimizing process: the work happens in
>> order of most important to least important changes, so stopping the
>> process early is OK (as the linear cost of incremental work brings
>> diminishing returns), and successive frames will pick up where it
>> left off to fill in the details when the view stabilizes again and
>> coherence increases.

I did a lot of this kind of thing in my system, and I never came up with
a solution that I thought was good. Due to the nature of the bucketing
schemes that people tend to use, you *cannot* delay reprioritization for
very many frames in a row, or else you end up, essentially, having to
re-do the whole thing from scratch. When that happens, it's painful.

Duchaineau's original paper mentioned the Newton-Raphson-esque thing,
which is already about guessing how long you can go without re-evaluating
vertices. Clearly, if you wait longer than your conservative bound for
re-evaluation, you're likely to update things too late, i.e. suffer a
visual quality hit or a tessellation efficiency hit. A little of that
isn't so bad, but how much is okay? And why spend all this effort and CPU
time proving bounds on 70% of your vertices, only to violate those bounds
with the remaining 30%?

I'm not saying that there's no solution to this within the ROAM
framework. I am, though, rejecting your characterization of the matter as
easy/simple.
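
(To be concrete about what I mean, the deferral bookkeeping is
schematically something like the following -- in made-up terms, not
Duchaineau's actual formulation. The entire difficulty hides inside
max_change_per_frame, which is exactly the bound that ends up violated:)

// If a vertex's priority is below the split threshold by some margin,
// and you can bound how fast that priority can change per frame, you
// can skip re-evaluating it for margin/bound frames.
int frames_until_recompute(float priority, float split_threshold,
                           float max_change_per_frame)
{
    float margin = split_threshold - priority;
    if (margin <= 0.0f || max_change_per_frame <= 0.0f)
        return 0;   // already at/over the threshold: re-evaluate now
    return static_cast<int>(margin / max_change_per_frame);
}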
> Should the cost of updating the output mesh exceed that of creating a
> new one from scratch, this is an easy and correct fallback mode, which
> may then resume coherent updating as warranted.

Another thing that I reject, not just about ROAM but about many published
algorithms, is the way they purport to be easy, but in fact are very
complicated due to all the workarounds that need to go in there. If I
start off with one moderately complicated thing, then add 23 workarounds,
each of which is "easy", we very quickly end up with something that is
not "easy" at all. So whenever someone says "oh, just do this one
augmentation, it's easy", that is a big warning signal. If, on the other
hand, they say "you can do this one augmentation, but here's a list of 15
foul interactions that might happen with other parts of your game
system", then I am more likely to trust them.

Just so I can share the love and prove I'm not just a bitter anti-ROAM
curmudgeon -- Perspective Shadow Maps are the same way. Don't use them.

> The degree of view coherence can be measured from the changes of
> viewpoint, refinement priorities, and hierarchical frustum culling
> results of successive frames. These factors are useful in deciding
> when the incremental update cost may exceed the from-scratch
> construction cost for a frame.

Here's an example of what I am talking about. Your language is extremely
hedged in the above paragraph, and rightfully so, because the idea is a
bunch of hand-waving. In practice, the only robust way of knowing when
the update cost exceeds some threshold is to wait for that to happen and
then measure that it happened.

> Exploiting frame coherence is arbitrarily more scalable (both in
> framerate and output size) than not doing so, despite the issues with
> incoherent updates. To double the fps of a non-incremental system (for
> a given framerate-independent view motion) doubles the cost of the
> entire process (updating the mesh and rendering it), whereas it hardly
> impacts the mesh updating cost of an incremental system, since the
> change in work per frame would be cut in half. Doubling the output
> size has similar implications (besides doubling rendering time): for a
> non-incremental system it doubles the large cost of rebuilding the
> mesh from scratch, whereas for an incremental system it only doubles
> the small incremental change cost.

Well, this all depends on the nature of the non-incremental system,
doesn't it? Chunked LOD is basically doing nothing to update the mesh
between frames. 2 times 0 = not very much. [Yeah, it's doing something,
but it's so amortized as to be basically nonexistent; see the earlier
discussion about small-grain versus large-grain coherence.]

I'm not saying *all* non-incremental systems are better than *all*
incremental systems. I am saying that incremental systems have inherent
drawbacks, and so they should be avoided if a comparable non-incremental
approach is available.

> Finally, I'll observe that quickly moving viewpoints actually relieve
> the need for nontrivial terrain detail. The details obviously become
> less relevant as the viewpoint moves increasingly fast (that's not to
> say there shouldn't be some detail, just that the particulars become
> increasingly irrelevant). This is practically a no-brainer.

I don't think it's a no-brainer at all. I have never seen this
successfully done in a game, to the point where I couldn't see the detail
snap as I moved. Have you?
I think this is one of those widely-held myths that seems like it ought
to be true, but in practice ends up being not so true at all.

    -Jonathan.
|
From: Lucas A. <ack...@ll...> - 2003-07-29 00:39:19
|
Jonathan Blow wrote:

>> You're welcome to "avoid frame coherence" as much as you like, and
>> we'll see how much gamers enjoy headache-inducing games. What you
>> probably meant is to forget exploiting coherence.
>
> Hey man, you know what I meant.

Yet it wasn't what you said. Clear communication is paramount in this
medium.

>> I know this discussion has taken place at least several times in the
>> past, but you still continue to publicly mischaracterize the ROAM
>> family of algorithms. I maintain that your particular implementation
>> experiences do not have any bearing on what ROAM is or is not capable
>> of, and as such the biggest failings of ROAM remain social in nature,
>> not technical. This is not the proper forum for an ongoing debate, so
>> for the algorithms-list I will just restate the relevant technical
>> points.
>
> Well, I just want to say that I don't have any special interest in
> dissing ROAM, as you may think. I have used it as an example because
> it is one of the more well-known terrain systems. I could just as
> easily have made all the same negative statements about the system I
> presented at SIGGRAPH 2000 (and in fact I often do) -- but not many
> people know how that system works, so it wouldn't be as effective an
> example.

It's nice of you to say so, since you tend to deride ROAM loudly,
publicly, and repeatedly. If there is a deeper issue at hand, as you
suggest, I'm all for examining the situation further.

>> I was expecting a reappearance of the "cycle of death" argument, and
>> I just don't buy it. Incremental algorithms do not require that you
>> always process all the possible work in a given frame. As I noted:
>
> Well, the "cycle of death" is an actual physical phenomenon. If
> "computer science" is actually a science, then this is one of the
> kinds of basic entities that it deals with, alongside, say, the
> question of how long it takes to sort a group of objects.
>
> So I think that complex algorithms should be designed around these
> core observations about what it means to be a certain kind of computer
> program. I'm not really satisfied with the approach of "design an
> algorithm ignoring some of these facts, then add workarounds." What
> you're explaining here seems to me to be a workaround that isn't very
> robust:

This is an interesting question, but I'll hit all the miscellaneous stuff
first.

>>> ROAM can quite easily accommodate per-frame time constraints due to
>>> the nature of the dual-queue optimizing process: the work happens in
>>> order of most important to least important changes, so stopping the
>>> process early is OK (as the linear cost of incremental work brings
>>> diminishing returns), and successive frames will pick up where it
>>> left off to fill in the details when the view stabilizes again and
>>> coherence increases.
>
> I did a lot of this kind of thing in my system, and I never came up
> with a solution that I thought was good.
>
> Due to the nature of the bucketing schemes that people tend to use,
> you *cannot* delay reprioritization for very many frames in a row, or
> else you end up, essentially, having to re-do the whole thing from
> scratch. When that happens, it's painful.

I didn't suggest delaying reprioritization (and the methods for doing so,
incidentally, can provide guaranteed bounds) -- that's a different
problem. I was observing that ROAM does work in descending-priority
order, so stopping early and just not finishing the lower-priority
updates will "do the right thing."
It won't guarantee error bounds anymore, but that's the tradeoff. This is
OK, since guaranteeing bounds is also not always the best way to go about
it. In general it's better to update approximately X% (say 90) of the
worst stuff 99% of the time, and always be fast (rather than guaranteeing
bounds, and being mostly fast). Priority approximation, with a cheap
function instead of a guaranteed bound, follows here as well.

> Duchaineau's original paper mentioned the Newton-Raphson-esque thing,
> which is already about guessing how long you can go without
> re-evaluating vertices. Clearly, if you wait longer than your
> conservative bound for re-evaluation, you're likely to update things
> too late, i.e. suffer a visual quality hit or a tessellation
> efficiency hit. A little of that isn't so bad, but how much is okay?
> And why spend all this effort and CPU time proving bounds on 70% of
> your vertices, only to violate those bounds with the remaining 30%?
>
> I'm not saying that there's no solution to this within the ROAM
> framework. I am, though, rejecting your characterization of the matter
> as easy/simple.

I think we can agree to disagree here. Having already admitted that ROAM
is an involved and difficult algorithm to master (with a 'nice' but
unforgiving learning curve), our characterization of certain aspects as
easy or simple is in light of the fact that they fit well as natural
extensions to the algorithm and are not crude hacks or compromises.

>> Should the cost of updating the output mesh exceed that of creating a
>> new one from scratch, this is an easy and correct fallback mode,
>> which may then resume coherent updating as warranted.
>
> Another thing that I reject, not just about ROAM but about many
> published algorithms, is the way they purport to be easy, but in fact
> are very complicated due to all the workarounds that need to go in
> there. If I start off with one moderately complicated thing, then add
> 23 workarounds, each of which is "easy", we very quickly end up with
> something that is not "easy" at all. So whenever someone says "oh,
> just do this one augmentation, it's easy", that is a big warning
> signal. If, on the other hand, they say "you can do this one
> augmentation, but here's a list of 15 foul interactions that might
> happen with other parts of your game system", then I am more likely to
> trust them.
>
> Just so I can share the love and prove I'm not just a bitter anti-ROAM
> curmudgeon -- Perspective Shadow Maps are the same way. Don't use
> them.

Likewise, the implementation reality of ROAM is that it's a collection of
several smaller algorithms that have to work well in concert. The
potential for ill-behaving interactions is something that has to be
understood, but it can be overcome. A well-designed system with
sufficient modularization and separation of concerns can cope well with
major augmentations. A poorly engineered system will self-destruct at the
mere thought.

This seems to be a common theme: whether it is desirable to have a
fundamentally simple and infallible system that is rather limited, or a
more general and complex system that can be extended to meet more
demanding requirements. To roughly quote Tony Hoare (from
I-don't-recall-what-paper), a system may be "so simple as to be obviously
not deficient, or so complicated as to have no obvious deficiencies."
IIRC, that was about programming languages, but this issue has a similar
flavor.
It doesn't directly apply, because a simpler programming language can be
'complete', whereas most LOD systems cannot, so we have to decide how to
live with that complexity (or without it, and without the completeness of
the solution).

Again, it seems to me that these are primarily social limitations, not
technical ones. For all the "have to implement it myself"ers out there
(for which game development is as notorious as anything else), the
simpler solution is better because it's easier to fully understand and
get right the first time through. Conversely, for people who want a
flexible system and have more demanding requirements, more complex
solutions are necessary. Thus, it follows that we shouldn't all be
reimplementing the complex solution, but rather refining and improving a
publicly available one. I believe this is possible, but it hasn't really
happened yet (for ROAM, anyway). The more common high-quality ROAM
implementations become, the more feasible this is, and the more interest
people will have in using and contributing to one. For example, general
engines/SDKs are certainly on the rise these days as a result of economic
considerations, but they haven't moved in this direction yet.

>> The degree of view coherence can be measured from the changes of
>> viewpoint, refinement priorities, and hierarchical frustum culling
>> results of successive frames. These factors are useful in deciding
>> when the incremental update cost may exceed the from-scratch
>> construction cost for a frame.
>
> Here's an example of what I am talking about. Your language is
> extremely hedged in the above paragraph, and rightfully so, because
> the idea is a bunch of hand-waving. In practice, the only robust way
> of knowing when the update cost exceeds some threshold is to wait for
> that to happen and then measure that it happened.

I disagree about this being hedged and hand-waving, and will point out
that you've just stated what I left implicit. This is unavoidably
implementation-specific (as are any 'update cost' references), so the
only way to do it is by instrumenting the code and measuring what's
really going on. Given some measurements, the correlation between the
above factors and incremental update cost can be established, and used
thereafter in a predictive/preventative fashion. I don't think this is
necessarily what you're objecting to, but rather that there isn't a
simple predictive formula for everyone to just use.

>> Finally, I'll observe that quickly moving viewpoints actually relieve
>> the need for nontrivial terrain detail. The details obviously become
>> less relevant as the viewpoint moves increasingly fast (that's not to
>> say there shouldn't be some detail, just that the particulars become
>> increasingly irrelevant). This is practically a no-brainer.
>
> I don't think it's a no-brainer at all. I have never seen this
> successfully done in a game, to the point where I couldn't see the
> detail snap as I moved. Have you?

There's a silent contradiction here: if it were being done successfully
in a game, you wouldn't see it. I do think it's possible in practice, but
if the detail is changing to accommodate a fast-moving viewpoint, you're
right that you will be able to see it. The point, though, is that it
shouldn't matter. If your detail is non-trivial to the point of seriously
impacting gameplay, then it should be handled properly. So we should ask:
what really IS trivial? Perhaps 'non-trivial' is the sticking point here.
Would 'relevant' detail be more appropriate? Suggestions are welcome.
Anyway, I'm curious: how much detail is trivial, and how much isn't? This
is going to be context-specific, but I think the vast majority is not
relevant. Whether it's literally trivial might depend more on how much
you can add without significant rendering cost. Let's say I carve my name
on the wall in some game. It's relevant to me, since I know what should
be there when I look. There will be scales of stuff that I don't care
about, though, like the larger and smaller particulars of the wall
surface. For most games, it seems the 'relevant' scale is going to be
approximately the player character size, or whatever most affects the
players' interactions with the game world.

> I think this is one of those widely-held myths that seems like it
> ought to be true, but in practice ends up being not so true at all.
>
>     -Jonathan.

So, suppose it isn't true. Shouldn't it be? The faster you go, the less
you're going to really see. I've noticed many recent racing games do a
boost/blur/perspective-warp for the "you're going stupidly fast" effect.
But does anyone think temporal anti-aliasing (blur) is what's missing
from terrain algorithms?

I think the speed issue might be better viewed as a prediction problem.
Could you factor in how much work you'll have to do from the ratio of
potential acceleration to current velocity, and the frequency of high
accelerations? If it's low, you know right where the view is headed, but
if it's high, you lose coherence.

Mark and I once had a discussion about whether you could pull off a
view-dependent optimizer based on an eye-motion device, since if you knew
exactly what the user was seeing, you could avoid wasting lots of work on
most of the screen that's just peripheral. My only conclusion was that
videogames would never adopt it, because bystanders would hate watching
someone else play.

Lastly, on the first note (and from the reply to T. Ulrich) --

> But if you put Sleep(rand() % 50) into something like full ROAM, you
> are really asking for trouble. And in general the speed cap is a whole
> lot lower, which isn't cool.

This is the cost of scalability. For a data set with contrived
limitations (which is virtually all modern games), the constant cost can
be higher than the "dumb" do-more-work-but-more-simply approach. Where it
isn't speed-capped is when you need it to scale waaaay up, and that's
where the "dumb" approach will fail. It may not matter so much now, but
it's getting more important every day (in fact, it really mirrors the
content-creation scalability crisis).

> So maybe "frame coherence" isn't a precise enough term to
> differentiate between these two cases. Maybe a different phrase, like
> "timestep sensitivity" or something. I think it's true to say "a lot
> of popularly espoused frame coherence optimizations are dangerously
> timestep-sensitive."
>
> So for something like terrain LOD, I would say, there are known
> timestep-insensitive ways of doing it, and if you choose something
> that has a timestep sensitivity, you really ought to have a good
> reason for that. In something like physics, we just don't know how to
> solve the problem in a way that isn't timestep-sensitive, and that
> really kind of sucks.
>
> It's a difficult issue because if you drill down far enough,
> everything has a timestep sensitivity. Somewhere, though, there's a
> very wide, very fuzzy line, and everything on one side of the line is
> stable, and everything on the other side is openly questionable, and
> everything in the middle is, well, I don't know.
A curious observation. There are popular methods of reconciling these
issues, but maybe the situation hasn't received proper study. I know, for
example, that the current networked-simulation dogma is to use fixed
steps for everything, for practical reasons (like latency). What hasn't
been thoroughly established is how the workarounds (fixed-step) for
limitations in the super-zoomed, timestep-sensitive cases will be
reflected in the results.

-Lucas
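
P.S. The prediction idea above, as a sketch. ViewState, the cutoff, and
the squashing function are all invented for illustration; real thresholds
would have to come from the kind of measurement discussed earlier:

#include <cmath>

struct ViewState { float vel[3]; float accel[3]; };

static float len3(const float v[3])
{
    return std::sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
}

// Estimate in [0,1]: near 0, the view is headed where the velocity says,
// so incremental updating should win; near 1, coherence is poor and a
// from-scratch rebuild may be the cheaper fallback.
float incoherence_estimate(const ViewState& v)
{
    float speed = len3(v.vel);
    if (speed < 1e-3f)
        return 0.0f;                      // parked: fully coherent
    float ratio = len3(v.accel) / speed;  // acceleration : velocity
    return ratio / (1.0f + ratio);        // squash into [0,1]
}
|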
From: Jonathan B. <jo...@nu...> - 2003-07-29 19:26:04
|
My recent email about software complexity issues touches on the subject
that Lucas brought up about algorithm adoption ostensibly being a social
issue. I think there are definite technical issues here as well, so I
wanted to bring those up.

Lucas Ackerman wrote:
> Again, it seems to me that these are primarily social limitations, not
> technical ones. For all the "have to implement it myself"ers out there
> (for which game development is as notorious as anything else), the
> simpler solution is better because it's easier to fully understand and
> get right the first time through.

I agree that "have to implement it myself" is a problem. But you'll find
that this attitude is more prevalent among hobbyists than among
professionals. (It used to be big among professionals too, but as the
amount of work needed to complete a game gets bigger, it becomes obvious
that you just can't do everything yourself and be budget-prudent.) And I
agree, it would be neat if people put their heads together and provided
free, high-quality versions of these algorithms. But there are still a
couple of problems with all that.

Firstly, it's usually just not the case that you can take a system like
that, slap it into an engine with some glue, and expect it to run well.
You either need to design the rest of the engine around this major piece,
or you need to revamp the piece to fit the engine (or do some combination
of both). The more complicated the piece is, the harder this work is.

Secondly, there are all the drawbacks of added system complexity that I
mentioned in my last email. For *any* kind of algorithm, not just
rendering LOD, you really need to justify all the complexity that the
algorithm brings into the picture. That complexity costs real money, and
it adds nonnegligibly to your chances of slipping schedule or failing the
project entirely. Thus, the benefit must be easy to see, and it must be
large. Otherwise, incorporating such an algorithm is a bad decision, from
both an engineering and a business/management standpoint.

You can put me in the Tony Hoare camp here: I want an algorithm that is
so simple that it is obviously not deficient. Unfortunately, that is
impossible in any kind of LOD, because the very concept of LOD is one
that inherently causes problems (this might be a good subject for a
different thread).

It's important to remember this: ideally, we don't want to LOD at all.
Any graphics LOD indicates a mismatch between what is being rendered and
what actually exists in the game world. That mismatch *will* show up
sometimes in gameplay. If the mismatch causes big enough problems, which
it often will regardless of the algorithm in use, we need to deal with
those. Again, this complexity is algorithm-independent; it's the minimum
we can get away with and still have rendering LOD. So then, any
complexity due to our specific algorithm gets layered on top of that. I'd
like to keep that added layer small.

This hooks into the discussion we were having about ROAM specifics...

> I didn't suggest delaying reprioritization (and the methods for doing
> so, incidentally, can provide guaranteed bounds) -- that's a different
> problem. I was observing that ROAM does work in descending-priority
> order, so stopping early and just not finishing the lower-priority
> updates will "do the right thing."

But in a well-optimized algorithm, the per-vertex cost of actually doing
a mesh modification is low enough that it's in the same neighborhood as
the total prioritization-per-live-vertex-per-frame cost.
So only reducing the vertex modification cost is not very helpful, unless
you also stop prioritization. All of which makes the matching of error
bounds worse, which is what my next point is all about.

> It won't guarantee error bounds anymore, but that's the tradeoff. This
> is OK, since guaranteeing bounds is also not always the best way to go
> about it. In general it's better to update approximately X% (say 90)
> of the worst stuff 99% of the time, and always be fast (rather than
> guaranteeing bounds, and being mostly fast).

Right, and this is my big problem with CLOD. CLOD algorithms spend all
this software complexity -- a finite and highly contended resource, as
mentioned earlier -- and what do we get? In such an algorithm's raw form,
we get a guarantee that the spatial error for every vertex on the screen
is bounded. Okay; it's unclear whether that's worth all the effort, but
at least it's something solid that can be said about what these
algorithms achieve.

But wait! In order for the system to run smoothly, we need to give up
this guarantee. So now what can we say about the algorithm's achievement?
It "generally makes it so that vertices are within the error threshold
most of the time, as long as you're not stressing the system too hard".
Is that all? I can do that with a static mesh algorithm, so why all the
fuss? ROAM handles morphing surfaces better than static mesh LOD, but in
those cases I can just use geomipmapping.

So I'm just not sure what ROAM buys me for all that effort. That's all.
If someone convinces me that there is a large, clear payoff, I will
happily go back to using CLOD algorithms. Now, as people keep optimizing
CLOD algorithms, they are becoming more like static mesh algorithms, so
maybe the two will meet at some point.

    -Jonathan.
|
From: Thatcher U. <tu...@tu...> - 2003-07-29 05:20:16
|
On Jul 25, 2003 at 02:41 -0700, Lucas Ackerman wrote:
> [...] and as such the biggest failings of ROAM remain social in
> nature, not technical.

Some (hopefully helpful) observations:

1. My guess is that, after static model-swapping, ROAM is probably the
most influential LOD method in the history of game development. As far as
I can tell, "ROAM" is the first word on the average game programmer's
lips when asked about terrain. Total guess on my part; I don't have data
or anything. So don't fret too much over perceived failings.

2. In my opinion, the *real* reasons people don't always use LOD (of
whatever flavor) are based on practical engineering economics, not
anything social or purely technical. The fact is that, except for a few
game genres, like flight sims, there are way more important things on the
agenda than scalable LOD.

Take shading. Maybe 50% of the traffic on this list concerns shading
techniques. And that's because quality of shading has huge leverage over
the player experience, and is still heavily resource-constrained. Whereas
LOD generally isn't making or breaking anybody's game nowadays, due to
faster hardware. So anything (e.g. LOD) that takes developer time and
makes shading more complicated has to have an extra-big payoff, over and
above any intrinsic benefit.

3. IMO, that does not in the slightest take away from the value of
researching things like ROAM. A technique doesn't have to be ubiquitous
in every application to be successful. And if your real goal is to make
ubiquitous algorithms, then you need to embrace honest critiques from
practitioners, or else you'll just be frustrated.

-- Thatcher Ulrich http://tulrich.com
|