Thread: RE: [Algorithms] Scalability costs
From: Tom F. <to...@mu...> - 2000-07-31 11:42:35
|
Well, that's the real trick, isn't it? You certainly need some sort of
order-of-magnitude scalability - existing cards and systems have that large a
variation in performance - so you need to invest the coder development time
anyway. The real trick is getting that scalability without chewing through
artist and animator time. And for most things, the stuff I talked about at
WGDC(*) fits the bill nicely, which is cool. Particle systems are dead easy to
scale as long as you build it into the system at design time (ditto for
explosions, lightning, etc). And landscapes have been talked about endlessly...

Then there's the basic scalability of what texturing and rendering you use -
that's a case of picking sensible fallback rendering styles when your fabulous
eight-texture bumpmapped shadowed thing doesn't run fast enough on a Voodoo1.
Resolution, texture memory, display depth, etc. fallbacks (basically, make
mipmaps, and be ready to ditch any stencil-buffer or destination alpha
effects).

All these things need to be done anyway to cope with current cards, and it's a
good idea to plan ahead for two reasons. First, you can re-use a lot of your
engine (assuming it fits the game type), and second, if you slip by six
months, your game doesn't look obsolete when it's released (which has
certainly happened to some games that slipped).

Tom Forsyth - Muckyfoot bloke.
Whizzing and pasting and pooting through the day.

(*) http://www.muckyfoot.com/downloads/tom.shtml

> -----Original Message-----
> From: Jamie Fowlston [mailto:j.f...@re...]
> Sent: 31 July 2000 12:06
> To: gda...@li...
> Subject: Re: [Algorithms] Scalability costs
>
> It's a nice idea, but is it worth the cost in development time?
> Jamie
>
> Jim Offerman wrote:
>
> > Hail to scalability :-)
> >
> > Jim Offerman
> >
> > Innovade
> > - designing the designer
> >
> > ----- Original Message -----
> > From: "Tom Forsyth" <to...@mu...>
> > To: <gda...@li...>
> > Sent: Monday, July 31, 2000 11:04 AM
> > Subject: RE: [Algorithms] Terrain Organization
> >
> > > Indeed - my goal for graphics systems is that when they are released,
> > > they should be able to bring even the best existing machine down to
> > > sub-1Hz speed if you turn all the detail levels up, and yet have
> > > perfectly acceptable performance on five-year-old machines with the
> > > auto-LoD system on. That way you know (a) you've done a good job of
> > > scaling stuff, (b) you're pretty sure you can re-use the engine (or
> > > something derived from it) for the next project, and (c) people will
> > > still play your game in five years - it won't look sad. Oh yes, and (d)
> > > you might get some good bundling deals with new graphics cards - and
> > > they're always welcome.
> > >
> > > Tom Forsyth - Muckyfoot bloke.
> > > Whizzing and pasting and pooting through the day.
> > >
> > > > -----Original Message-----
> > > > From: Jim Offerman [mailto:j.o...@in...]
> > > > Sent: 31 July 2000 08:09
> > > > To: gda...@li...
> > > > Subject: Re: [Algorithms] Terrain Organization
> > > >
> > > > > As for not everyone having TnL cards, well, I thought that any game
> > > > > which is *now* in development will probably run on fast systems. By
> > > > > fast I mean either TnL, or on CPUs which are fast enough that
> > > > > regular cards don't suffer too much. Anyway, this is up to you, it
> > > > > depends on your budget and a lot of other things.
> > > >
> > > > If you are going to implement some form of CLOD (this is where Tom
> > > > Forsyth would promote using VIPM - which is not a bad idea at all), I
> > > > think you should couple the runtime error calculations to your fps...
> > > > i.e. if your fps drops, crank down all LoD levels in the game until
> > > > the fps reaches an acceptable level. Your game might be spitting out
> > > > 100K tris per second on a souped-up PIII with a GeForce2 GTS and pump
> > > > through a humble 15K on a PII with a TNT PCI.
> > > >
> > > > And, since you have tied your CLOD calculations to the fps, you could
> > > > (theoretically) also truly design your game for the future - i.e. use
> > > > models which by today's standards contain far too many triangles; the
> > > > CLOD will keep performance acceptable on today's high-end hardware
> > > > and increase detail on tomorrow's hardware. Imagine playing Half-Life
> > > > again, finding that the overall detail has increased by a factor of
> > > > ten.
> > > >
> > > > Jim Offerman
> > > >
> > > > Innovade
> > > > - designing the designer
>
> _______________________________________________
> GDAlgorithms-list mailing list
> GDA...@li...
> http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list
|
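[Editor's note: the "build scalability into the particle system at design time" point above can be sketched in a few lines. This is a hypothetical illustration, not code from any engine discussed here - the `DetailSettings` name and the minimum-of-one rule are assumptions: an authored particle budget is multiplied by a global detail fraction chosen by the auto-LoD system.]

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical global detail setting in [0, 1], chosen by the auto-LoD
// system (1 = full authored detail, 0 = minimum detail).
struct DetailSettings {
    float particleScale;
};

// Scale an authored emission count by the current detail level, keeping
// at least one particle so the effect never vanishes entirely.
inline int ParticlesToSpawn(int authoredCount, const DetailSettings& d)
{
    return std::max(1, static_cast<int>(authoredCount * d.particleScale));
}
```

The same multiplier approach applies to the explosion and lightning effects mentioned above: author at maximum detail, scale down at runtime.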
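[Editor's note: coupling the LoD error tolerance to fps, as suggested above, is essentially a feedback loop. A minimal sketch follows - the controller shape, names, and constants are assumptions for illustration: nudge a global LoD bias up when measured fps falls below target, and back down when there is headroom.]

```cpp
#include <algorithm>

// Hypothetical global LoD bias: 0 = full detail, 1 = minimum detail.
// Each frame, the measured fps nudges the bias toward the target rate.
struct LodController {
    float bias      = 0.0f;
    float targetFps = 30.0f;
    float step      = 0.05f;

    void Update(float measuredFps)
    {
        if (measuredFps < targetFps)
            bias = std::min(1.0f, bias + step);   // too slow: shed detail
        else if (measuredFps > targetFps * 1.2f)
            bias = std::max(0.0f, bias - step);   // headroom: restore detail
    }
};
```

Feeding the bias into each model's error tolerance gives the behaviour described: the same content degrades gracefully on a TNT and runs at full detail on a GeForce2.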
From: Tom F. <to...@mu...> - 2000-07-31 13:43:53
|
> From: Jamie Fowlston [mailto:j.f...@re...]
>
> > Jim Offerman wrote:
> >
> > > It's a nice idea, but is it worth the cost in development time?
> >
> > I think that any form of scalability (whether it is achieved through some
> > pre-calculated LoD levels or through advanced techniques, like the
> > aforementioned VIPM, or even just scalability/modularity of your engine
> > itself) is worth the cost in development time, since it potentially
> > increases the replayability of your game and/or the reusability of your
> > engine.
> >
> > So your game can potentially sell more copies over a longer period, since
> > it can still compete with the newest releases (hence you earn more money
> > with it)
>
> Is this true in the PC market? For consoles, you get most sales in the
> early days, very little afterward. Tom's suggestion that you can still be
> looking good in n years' time isn't matched by sales in our circumstances.

There are a few games that have the distinction of earning money well past
the usual sell-by date. True, most of them tend to be the "sim" or "theme"
games, but there is the odd graphics-heavy one (e.g. Half-Life, though it is
helped by having more superb mods than you can empty a full mag at). Having a
good scalable engine can only help...

> > and you can potentially reduce the development cycle of your next game,
> > since you already have a good basis to start from.
>
> Great idea, does it really happen in practice? We seem to end up starting
> from scratch every time. Or at least taking the old stuff, using it as a
> base, and then rewriting most of it. New consoles don't help, of course :)

It does happen when the engine has some nice generalities to it. Even if none
of the actual code gets shared (which is the usual case, admittedly), the
people involved usually just rewrite what they had before, maybe with a few
neat tricks they've learned in the meantime.
You can rewrite something very quickly if you've done it once and solved all
the hard problems. It's that sort of "reuse" I meant, rather than the actual
code itself.

[snip]

> Algorithms that scale well are a good thing... he says, frantically trying
> to convince himself the thread's not OT.... :)

I don't think it's even close to OT - this is a games algorithms list, and
LoD algorithms are amongst the trickiest and most varied algorithms out
there.

> Jamie

Tom Forsyth - Muckyfoot bloke.
Whizzing and pasting and pooting through the day.
|
From: Jamie F. <j.f...@re...> - 2000-07-31 17:39:19
|
> It does happen when the engine has some nice generalities to it. Even if
> none of the actual code gets shared (which is the usual case, admittedly),
> the people involved usually just rewrite what they had before, maybe with
> a few neat tricks they've learned in the meantime. You can rewrite
> something very quickly if you've done it once and solved all the hard
> problems. It's that sort of "reuse" I meant, rather than the actual code
> itself.

Plenty of ideas live past the original code, that's true :)

> [snip]
>
> > Algorithms that scale well are a good thing... he says, frantically
> > trying to convince himself the thread's not OT.... :)
>
> I don't think it's even close to OT - this is a games algorithms list, and
> LoD algorithms are amongst the trickiest and most varied algorithms out
> there.

True, this just seems peripheral to the algorithms themselves :)

Jamie
|
From: Tom F. <to...@mu...> - 2000-07-31 18:12:49
|
> From: Jamie Fowlston [mailto:j.f...@re...]

[snip]

> Not regardless of hardware. For us, drawing several instances of the same
> model saves us DMA bandwidth. CLOD trashes the ability to do that.

VIPM won't. Since you use the same vertex data for all levels of detail,
multiple instances are very cheap. All you change is a bunch of indices. On
two of the future consoles I know well enough(*) to comment on, this will be
nice and fast. You'll need to DMA across the new indices, but that's tiny
amounts of memory. You may even be able to do the VIPM expands/collapses on
the ... you know ... thing that does the transforms - the expand/collapse
routine is absurdly simple. Which will make multiple instances awesomely
cheap!

Oh dear - I'm off again about VIPM. Sorry list :-)

> Polygons are very cheap to draw, so we might as well draw them.... Unless
> anyone else knows better? :)
>
> Maybe when we've had more experience of the machine we'll change our
> minds. :)
>
> Jamie

Tom Forsyth - Muckyfoot bloke.
Whizzing and pasting and pooting through the day.

(*) The one I don't know about, I knew exactly one hard fact about it - its
name. And then they changed it, and I can't remember what to. So I now know
precisely zero about it :-)
|
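[Editor's note: the instancing argument above can be put in back-of-envelope numbers. A hedged sketch, with sizes that are pure assumptions for illustration (32-byte vertices, 16-bit indices): with shared vertex data, each extra instance costs only its index list, whereas a scheme with per-instance vertex data pays the full vertex cost every time.]

```cpp
#include <cassert>
#include <cstddef>

// Assumed sizes, for illustration only.
const std::size_t kVertexBytes = 32; // e.g. position + normal + one UV set
const std::size_t kIndexBytes  = 2;  // 16-bit indices

// VIPM-style: one shared vertex buffer, plus an index list per instance.
std::size_t SharedVertexDmaBytes(std::size_t verts, std::size_t tris,
                                 std::size_t instances)
{
    return verts * kVertexBytes + instances * tris * 3 * kIndexBytes;
}

// Per-instance vertex data: every instance pays for vertices and indices.
std::size_t PerInstanceDmaBytes(std::size_t verts, std::size_t tris,
                                std::size_t instances)
{
    return instances * (verts * kVertexBytes + tris * 3 * kIndexBytes);
}
```

For, say, 1000 vertices, 2000 triangles and four instances, the shared-vertex scheme moves 80,000 bytes against 176,000 for the per-instance scheme, and the gap widens with instance count.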
From: Jamie F. <j.f...@re...> - 2000-07-31 18:32:55
|
Still have to DMA the vertex data, which is larger for VIPMed models (am I
right? I've not implemented VIPM, but think I've understood the general
idea). So the total transfer is larger than it would be for a non-VIPMed
model. We're expecting to be DMA speed constrained rather than draw speed
constrained.

Have you actually implemented VIPM on... that machine? :) Some sort of
algorithms-style section in the support newsgroup would be nice, and I could
stop waffling and talk specifics.... :) Talk about LoD would be useful. I'll
post something....

I know some stuff about other consoles... but AFAIK, I can't say which or
how much. Aren't NDAs fun? :)

Jamie

Tom Forsyth wrote:
> > From: Jamie Fowlston [mailto:j.f...@re...]
>
> [snip]
>
> > Not regardless of hardware. For us, drawing several instances of the
> > same model saves us DMA bandwidth. CLOD trashes the ability to do that.
>
> VIPM won't. Since you use the same vertex data for all levels of detail,
> multiple instances are very cheap. All you change is a bunch of indices.
> On two of the future consoles I know well enough(*) to comment on, this
> will be nice and fast. You'll need to DMA across the new indices, but
> that's tiny amounts of memory. You may even be able to do the VIPM
> expands/collapses on the ... you know ... thing that does the transforms -
> the expand/collapse routine is absurdly simple. Which will make multiple
> instances awesomely cheap!
>
> Oh dear - I'm off again about VIPM. Sorry list :-)
>
> > Polygons are very cheap to draw, so we might as well draw them....
> > Unless anyone else knows better? :)
> >
> > Maybe when we've had more experience of the machine we'll change our
> > minds. :)
> >
> > Jamie
>
> Tom Forsyth - Muckyfoot bloke.
> Whizzing and pasting and pooting through the day.
>
> (*) The one I don't know about, I knew exactly one hard fact about it -
> its name. And then they changed it, and I can't remember what to. So I now
> know precisely zero about it :-)
|
From: Jim O. <j.o...@in...> - 2000-07-31 19:33:09
|
As I understand it, you're not supposed to touch the actual vertex data of a
VIPMed model, and if the model is at full detail you will have exactly the
same data as when you render a 'normal' model. However, in addition you have
some 'control data' to govern the process of adding/removing detail (which,
like Tom mentions, is all done by manipulating the index list). Since you
never send the control data to the rendering API, it won't even know the
difference...

Jim Offerman

Innovade
- designing the designer

----- Original Message -----
From: "Jamie Fowlston" <j.f...@re...>
To: <gda...@li...>
Sent: Monday, July 31, 2000 8:32 PM
Subject: Re: [Algorithms] VIPM on a machine with DMA :)

> Still have to DMA the vertex data, which is larger for VIPMed models (am I
> right? I've not implemented VIPM, but think I've understood the general
> idea). So the total transfer is larger than it would be for a non-VIPMed
> model. We're expecting to be DMA speed constrained rather than draw speed
> constrained.
>
> Have you actually implemented VIPM on... that machine? :) Some sort of
> algorithms-style section in the support newsgroup would be nice, and I
> could stop waffling and talk specifics.... :) Talk about LoD would be
> useful. I'll post something....
>
> I know some stuff about other consoles... but AFAIK, I can't say which or
> how much. Aren't NDAs fun? :)
>
> Jamie
>
> Tom Forsyth wrote:
> > > From: Jamie Fowlston [mailto:j.f...@re...]
> >
> > [snip]
> >
> > > Not regardless of hardware. For us, drawing several instances of the
> > > same model saves us DMA bandwidth. CLOD trashes the ability to do
> > > that.
> >
> > VIPM won't. Since you use the same vertex data for all levels of detail,
> > multiple instances are very cheap. All you change is a bunch of indices.
> > On two of the future consoles I know well enough(*) to comment on, this
> > will be nice and fast. You'll need to DMA across the new indices, but
> > that's tiny amounts of memory. You may even be able to do the VIPM
> > expands/collapses on the ... you know ... thing that does the transforms
> > - the expand/collapse routine is absurdly simple. Which will make
> > multiple instances awesomely cheap!
> >
> > Oh dear - I'm off again about VIPM. Sorry list :-)
> >
> > > Polygons are very cheap to draw, so we might as well draw them....
> > > Unless anyone else knows better? :)
> > >
> > > Maybe when we've had more experience of the machine we'll change our
> > > minds. :)
> > >
> > > Jamie
> >
> > Tom Forsyth - Muckyfoot bloke.
> > Whizzing and pasting and pooting through the day.
> >
> > (*) The one I don't know about, I knew exactly one hard fact about it -
> > its name. And then they changed it, and I can't remember what to. So I
> > now know precisely zero about it :-)
|
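[Editor's note: the index-list manipulation described above can be sketched concretely. This is a simplified illustration of the general VIPM idea, not any particular implementation - the record layout is an assumption: vertex data is shared and untouched, each instance owns only its index list, and one precomputed collapse record remaps indices and drops the triangles that become degenerate. The index list is assumed to have been ordered offline so those triangles sit at the end.]

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// One step of the precomputed collapse sequence (layout hypothetical).
struct CollapseRecord {
    std::uint16_t removedVertex;    // vertex that disappears at this step
    std::uint16_t keptVertex;       // vertex its indices are remapped to
    std::size_t   trianglesRemoved; // triangles that degenerate (usually 2)
};

// Per-instance state: a private index list over the shared vertex buffer.
struct VipmInstance {
    std::vector<std::uint16_t> indices; // 3 per triangle
    std::size_t numTris;
};

// Apply one collapse: remap every index that referenced the removed
// vertex, then drop the now-degenerate triangles, which the offline
// build sorted to the end of the list.
void ApplyCollapse(VipmInstance& m, const CollapseRecord& r)
{
    for (std::uint16_t& idx : m.indices)
        if (idx == r.removedVertex)
            idx = r.keptVertex;
    m.numTris -= r.trianglesRemoved;
    m.indices.resize(m.numTris * 3);
}
```

The reverse step (an expand) re-appends the saved triangles and restores the remapped indices; the routine touches only indices, which is why it is cheap enough to consider running on the transform hardware.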
From: Pierre T. <p.t...@wa...> - 2000-08-04 00:31:20
|
Hi,

Here I go again with a question about the virtual vertices in the modified
Butterfly scheme. Here's an excerpt from the Sig'96 paper:

"Edges which are not on the boundary but which have a vertex which is on the
boundary are subdivided as before while any vertices in the stencil which
would be on the other side of the boundary are replaced with "virtual"
vertices."

Let's examine two cases. In both of them we are in the situation described
above.

1) The boundary vertex has a valence of 4: how many virtual vertices am I
supposed to create? Until now, I've always blindly created enough virtual
vertices to transform the vertex into a regular one (valence = 6), but this
is not explicitly written in Zorin's paper. What if I just use the
extraordinary scheme for valence = 4, without creating virtual vertices?

2) The boundary vertex has a valence of more than 6, say 7: here's the case
I'm having a lot of trouble with. Actually, that's why I'm suddenly writing
this message: I recently found one in a particular mesh, and my code crashed.
Because, of course, I tried to create enough virtual vertices to reach
valence 6, i.e. "-1" vertex. Err... what?! Here, the only correct behaviour
looks like using the standard extraordinary scheme for valence = 7, without
creating virtual vertices. But if this is the case, why shouldn't I just do
the same in 1)...? And if this is not the correct behaviour, what is it? Am I
supposed to create virtual vertices in that case? How many, and how?

Puzzled again.

/* Pierre Terdiman * Home: p.t...@wa... Software engineer * Zappy's Lair: www.codercorner.com */
|
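[Editor's note: for reference, one construction commonly used for the virtual vertices themselves - offered heavily hedged, since the paper leaves it underspecified and this does not answer the valence 4 / valence 7 question above - is to reflect an interior stencil vertex through the boundary vertex, giving a mirrored point on the far side of the boundary.]

```cpp
#include <cassert>

struct Vec3 {
    float x, y, z;
};

// Reflect an interior vertex v through a boundary vertex b: v' = 2b - v.
// This is one common reading of the "virtual vertex" construction, not a
// rule stated explicitly in the paper.
Vec3 VirtualVertex(const Vec3& b, const Vec3& v)
{
    return Vec3{2.0f * b.x - v.x, 2.0f * b.y - v.y, 2.0f * b.z - v.z};
}
```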
From: Paul A. <fr...@fu...> - 2000-08-08 09:03:18
|
Hi all,

I just checked out the image of the day on www.flipcode.com that showed a
landscape using the ROAM algorithm. The landscape does look nice, so I
decided I may as well move my arse and look into landscape rendering, which
I've been promising myself I would do for some time now :-)

Just wondered if there are any pointers to good documentation about ROAM or
other such algorithms for LOD on landscapes... I guess there are plenty, but
are there any that stick out from the rest?

(apologies, I am new to the list :)

Thanks,
Paul.
|