Thread: RE: [Algorithms] VIPM on a machine with DMA :)
From: Tom F. <to...@mu...> - 2000-07-31 19:04:01
> From: Jamie Fowlston [mailto:j.f...@re...]
>
> Still have to DMA the vertex data, which is larger for VIPMed
> models (am I right? I've not implemented VIPM, but think I've
> understood the general idea).

No, the vertex data is precisely as big as your most-detailed model - that
is, the most detailed model you want to draw that frame (or that instance
or whatever). There is zero bloating of vertex data.

The index data is indexed lists, so that's very slightly bloated from
indexed strips in some cases, but probably not so you'd notice - indices
are tiny compared to the vertices anyway. And the collapse/expand info is
moderately big, thinking about it (12 bytes per collapse on average, I
think - I can't remember), so it may not be a win to do the collapse/expand
on the "non-CPU unit" - who can tell? It's pretty quick on both, to be
honest.

> So total transfer is larger than it would be for non-VIPMed
> model. We're expecting to be DMA speed constrained rather than
> draw speed constrained.

Indeed, that's what I would expect too - the DMA seems to be the hardest
thing to get right about the machine. But fortunately there is no bloat.
Go and read Charles Bloom's excellent VIPM tutorial (one day I'll get
around to doing my own - actually, there's not much point - it'll be
identical to Charles'). (http://208.55.130.3/3d/ - sixth link down).

> Have you actually implemented VIPM on... that machine? :) Some sort of
> algorithms-style section in the support newsgroup would be nice, and I
> could stop waffling and talk specifics.... :) Talk about LoD would be
> useful. I'll post something....

No, I haven't actually played with that machine yet, except in my head
from the specs (and I'm not on the newsgroup or anything). Busy enough
playing with the Dreamcast!

I have implemented it on the PC, using similar sorts of semantics -
Optimised VBs. There, DMA speed (well, AGP bus speed) is king as well.

> I know some stuff about other consoles... but AFAIK, I can't
> say which or how much. Aren't NDAs fun? :)

Grrrr. :-)

> Jamie

Tom Forsyth - Muckyfoot bloke.
Whizzing and pasting and pooting through the day.
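For concreteness, here is a rough sketch of what a per-collapse record and the CPU-side index patching can look like for this kind of VIPM. The struct layout, names, and sizes below are illustrative only - the "12 bytes per collapse" above is an average from the discussion, not a fixed format.

// Rough sketch of per-collapse data and CPU-side collapse/expand for VIPM.
// Layout and names are illustrative; only the "roughly 12 bytes per
// collapse" average comes from the discussion above.

#include <cstdint>
#include <vector>

struct EdgeCollapse
{
    uint16_t vertexRemoved;    // highest-numbered live vertex; it merges into...
    uint16_t vertexKeptBy;     // ...this (lower-numbered) vertex
    uint16_t numTrisRemoved;   // triangles that vanish, kept at the tail of the live index range
    uint16_t numIndexPatches;  // surviving indices that must be remapped
    uint32_t firstPatchOffset; // offset into a shared table of index positions
};                             // 12 bytes - in line with the rough figure above

struct VipmMesh
{
    std::vector<uint16_t>     indices;        // full-detail indexed triangle list
    std::vector<EdgeCollapse> collapses;      // ordered most-detailed first
    std::vector<uint32_t>     patchTable;     // index positions touched by each collapse
    uint32_t                  liveIndexCount; // indices submitted at the current LoD
    uint32_t                  nextCollapse;   // collapses currently applied
};

// Apply one collapse: drop the dead triangles off the end of the live range,
// then remap every surviving reference to the removed vertex.
inline void ApplyCollapse(VipmMesh& m)
{
    const EdgeCollapse& c = m.collapses[m.nextCollapse++];
    m.liveIndexCount -= 3 * c.numTrisRemoved;
    for (uint32_t i = 0; i < c.numIndexPatches; ++i)
        m.indices[m.patchTable[c.firstPatchOffset + i]] = c.vertexKeptBy;
}

// Undo it: point the patched slots back at the removed vertex and restore
// the dropped triangles.
inline void UndoCollapse(VipmMesh& m)
{
    const EdgeCollapse& c = m.collapses[--m.nextCollapse];
    for (uint32_t i = 0; i < c.numIndexPatches; ++i)
        m.indices[m.patchTable[c.firstPatchOffset + i]] = c.vertexRemoved;
    m.liveIndexCount += 3 * c.numTrisRemoved;
}

Because each collapse removes the highest-numbered live vertex, the vertex count to DMA is just the original count minus nextCollapse, which is where the "zero bloating of vertex data" property comes from.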
From: Tom F. <to...@mu...> - 2000-08-01 10:07:48
The 12 bytes per vertex may be worth it if it means you can draw many
instances at once without any extra indices being DMA'd in for each one.
Hmmm... actually, that may not be true - it's total memory that is a
problem as well - once DMA bandwidth is below a certain level (which it
probably will be for indices), it ceases to be the bottleneck. You're
right - do the expand/collapses on the CPU.

I'm sure you must be able to get indices fast. Vertex reuse (_after_ the
T&L is done) will be a fairly big timesaver - it certainly is in the PC/DX
world.

Tom Forsyth - Muckyfoot bloke.
Whizzing and pasting and pooting through the day.

> -----Original Message-----
> From: Jamie Fowlston [mailto:j.f...@re...]
> Sent: 01 August 2000 10:39
> To: gda...@li...
> Subject: Re: [Algorithms] VIPM on a machine with DMA :)
>
> Tom Forsyth wrote:
>
> > > From: Jamie Fowlston [mailto:j.f...@re...]
> > >
> > > Still have to DMA the vertex data, which is larger for VIPMed
> > > models (am I right? I've not implemented VIPM, but think I've
> > > understood the general idea).
> >
> > No, the vertex data is precisely as big as your most-detailed model -
> > that is, the most detailed model you want to draw that frame (or that
> > instance or whatever). There is zero bloating of vertex data.
>
> That sounds a lot saner then.
>
> > The index data is indexed lists, so that's very slightly bloated from
> > indexed strips in some cases, but probably not so you'd notice -
> > indices are tiny compared to the vertices anyway.
>
> True. Just using indices in the first place can significantly slow
> things down, though.... Hmmm. Rethink time on aspects of inherited
> graphics engine. Not so sure about that now... depends on vertex
> reuse.... Possibilities there.
>
> > And the collapse/expand info is moderately big, thinking about it
> > (12 bytes per collapse on average, I think - I can't remember), so
> > it may not be a win to do the collapse/expand on the "non-CPU unit" -
> > who can tell? It's pretty quick on both, to be honest.
>
> That's too much data for us if it turns into roughly 12 bytes per
> vertex.
>
> > > So total transfer is larger than it would be for non-VIPMed model.
> > > We're expecting to be DMA speed constrained rather than draw speed
> > > constrained.
> >
> > Indeed, that's what I would expect too - the DMA seems to be the
> > hardest thing to get right about the machine. But fortunately there
> > is no bloat. Go and read Charles Bloom's excellent VIPM tutorial
> > (one day I'll get around to doing my own - actually, there's not
> > much point - it'll be identical to Charles').
> > (http://208.55.130.3/3d/ - sixth link down).
>
> I've been meaning to :) I'm too busy to do the things that would save
> me time.... :)
>
> Jamie
>
> _______________________________________________
> GDAlgorithms-list mailing list
> GDA...@li...
> http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list
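To put toy numbers on the "indices are tiny compared to the vertices" point, a back-of-envelope along these lines helps; all sizes (32-byte vertices, 16-bit indices, instance counts) are assumptions for illustration, not figures for any particular machine.

#include <cstdint>
#include <cstdio>

// Toy per-frame DMA totals for N instances of one VIPM'd model, assuming the
// vertex stream is sent once per instance drawn. Sizes are illustrative only.
int main()
{
    const uint32_t kInstances    = 20;
    const uint32_t kLiveVertices = 1000;  // vertices at the current LoD
    const uint32_t kLiveTris     = 1800;  // triangles at the current LoD
    const uint32_t kVertexSize   = 32;    // position + normal + UV, assumed
    const uint32_t kIndexSize    = 2;     // 16-bit indices

    uint32_t vertexBytes = kInstances * kLiveVertices * kVertexSize;

    // CPU collapses once, every instance shares the same patched index list.
    uint32_t sharedIndexBytes = 3 * kLiveTris * kIndexSize;

    // Worst case: a fresh index list is sent for each instance.
    uint32_t perInstanceIndexBytes = kInstances * 3 * kLiveTris * kIndexSize;

    std::printf("vertices %u B, shared indices %u B, per-instance indices %u B\n",
                vertexBytes, sharedIndexBytes, perInstanceIndexBytes);
    // 640000 B of vertices vs 10800 B (shared) or 216000 B (per instance) of
    // indices - even the worst case is a modest slice of the vertex traffic,
    // which is why index bandwidth tends to stop being the bottleneck.
    return 0;
}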
From: Jamie F. <j.f...@re...> - 2000-08-01 10:33:03
Tom Forsyth wrote:

> The 12 bytes per vertex may be worth it if it means you can draw many
> instances at once without any extra indices being DMA'd in for each
> one. Hmmm... actually, that may not be true - it's total memory that
> is a problem as well - once DMA bandwidth is below a certain level
> (which it probably will be for indices), it ceases to be the
> bottleneck. You're right - do the expand/collapses on the CPU.

There's a whole load of balancing to do, and being in the first wave means
no one has any real idea of where the balance should be struck. Hohum :) I
may actually get round to reading the papers this time :)

> I'm sure you must be able to get indices fast. Vertex reuse (_after_
> the T&L is done) will be a fairly big timesaver - it certainly is in
> the PC/DX world.

We're not sure whether it would be a win or not, as the degree of vertex
reuse is currently unclear. Then we'll see :)

Jamie
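If it helps pin down "the degree of vertex reuse is currently unclear", something as simple as the sketch below gives an upper bound straight from an index list; it is diagnostic code only, not tied to any particular engine.

#include <cstdint>
#include <set>
#include <vector>

// References per unique vertex in an indexed triangle list. A typical closed
// mesh sits around 6 (each vertex shared by roughly 6 triangles); values near
// 1 mean indexing and post-T&L reuse buy almost nothing. Note this is an
// upper bound - what a post-T&L cache actually recovers also depends on index
// ordering and cache size.
double VertexReuseRatio(const std::vector<uint16_t>& indices)
{
    std::set<uint16_t> unique(indices.begin(), indices.end());
    if (unique.empty())
        return 0.0;
    return static_cast<double>(indices.size()) /
           static_cast<double>(unique.size());
}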
From: Jamie F. <j.f...@re...> - 2000-08-01 09:38:38
Tom Forsyth wrote:

> > From: Jamie Fowlston [mailto:j.f...@re...]
> >
> > Still have to DMA the vertex data, which is larger for VIPMed
> > models (am I right? I've not implemented VIPM, but think I've
> > understood the general idea).
>
> No, the vertex data is precisely as big as your most-detailed model -
> that is, the most detailed model you want to draw that frame (or that
> instance or whatever). There is zero bloating of vertex data.

That sounds a lot saner then.

> The index data is indexed lists, so that's very slightly bloated from
> indexed strips in some cases, but probably not so you'd notice -
> indices are tiny compared to the vertices anyway.

True. Just using indices in the first place can significantly slow things
down, though.... Hmmm. Rethink time on aspects of inherited graphics
engine. Not so sure about that now... depends on vertex reuse....
Possibilities there.

> And the collapse/expand info is moderately big, thinking about it (12
> bytes per collapse on average, I think - I can't remember), so it may
> not be a win to do the collapse/expand on the "non-CPU unit" - who can
> tell? It's pretty quick on both, to be honest.

That's too much data for us if it turns into roughly 12 bytes per vertex.

> > So total transfer is larger than it would be for non-VIPMed model.
> > We're expecting to be DMA speed constrained rather than draw speed
> > constrained.
>
> Indeed, that's what I would expect too - the DMA seems to be the
> hardest thing to get right about the machine. But fortunately there is
> no bloat. Go and read Charles Bloom's excellent VIPM tutorial (one day
> I'll get around to doing my own - actually, there's not much point -
> it'll be identical to Charles'). (http://208.55.130.3/3d/ - sixth link
> down).

I've been meaning to :) I'm too busy to do the things that would save me
time.... :)

Jamie