Re: [Algorithms] OpenGL LockArraysEXT ... workaround
From: Maik G. <mai...@gm...> - 2000-08-26 01:46:38
Didn't you say it's off topic to talk about that OpenGL stuff? :) But anyway, I was just asking whether there's a "common way" to get the full performance boost out of the LockArraysEXT optimization. Even the latest NVIDIA drivers (at least for my TNT2) don't optimize anything if your vertex array size is over a certain value, so you have to split your vertex arrays into small pieces (4096 vertices in my case) or you won't get the performance boost of roughly 25%. With a vertex array larger than that, it's kind of useless to use this function on a TNT2. That's the problem. But as some people said, there is no other way to get the full boost than splitting the one huge vertex array into a lot of little ones, so I guess this topic is finished. =)

-----Original Message-----
From: gda...@li... [mailto:gda...@li...] On Behalf Of Brian Sharp
Sent: Friday, August 25, 2000 05:58
To: gda...@li...
Subject: Re: [Algorithms] OpenGL LockArraysEXT ... workaround

You wrote:
> Are you sure 4096 is the maximum size for optimized compiled vertex arrays?
> I just thought it was 1024, since Q3 uses 1K vertex buffers (according to
> Brian Sharp's conference).

My conference? Any information I have on Q3A is pretty much hearsay from Brian Hook. Or maybe you're referring to Brian Hook, not Brian Sharp? We're more different than our last names suggest: I never worked for id. ;-)

At 11:30 AM 8/25/00 +0800, you wrote:
> It depends on the accelerator generally. Remember Brian worked on the 3dfx
> ogl drivers, which may mean his 1k number was based on those, rather than
> other drivers. I think most of the T&L cards are 4/8k.

So first of all, I don't see any defined limit (in the EXT_compiled_vertex_array spec) on the number of vertices you can lock. The original poster should have no problem locking 100,000 vertices. If it's a question of what implementations happen to optimize for, it's kind of lame to write to that, especially because it's totally volatile and will probably change in most drivers before you ship.

The existing 3dfx driver, at least, will optimize whatever you throw at it. Sure, that's transformation happening on the CPU, but it guarantees that you don't end up retransforming verts. The downside is the hypothetical case where someone locks a 30-million-vert array and the driver allocates a ton of memory. It can't be any more than the app has already allocated, but it's nonetheless potential for a lot of memory allocation. So for future work it's being done a different way, but it won't refuse to optimize arrays over some size.

I'm not really sure what the motivation would be for a driver not to optimize arrays over a certain size. I mean, it could just pretransform a set number and treat anything beyond that as regular verts to be transformed. Hmm. Especially with a hardware T&L implementation, where you'd obviously want a tiered caching system in the driver already, capable of handling locked arrays on the host bigger than your hardware cache. I'm confused.

So, what was the original poster asking about? Why couldn't he lock all his verts? A driver crash? A slowdown? A spec misreading?

-Brian

==========
GLSetup, Inc.
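For reference, here is a minimal sketch of the splitting workaround Maik describes above, in C against the EXT_compiled_vertex_array entry points. The Chunk struct, the draw_chunks name, and the assumption that the mesh has been pre-split into self-contained pieces with rebased indices are illustrative inventions, not anything from the thread; the 4096 figure is just the TNT2 limit reported above.

/* Draw a large mesh as a series of small locked arrays so each
 * glLockArraysEXT call stays under the driver's optimization limit.
 * Assumes the mesh was pre-split offline; vertices shared across
 * chunk boundaries must be duplicated into both chunks. */
#include <GL/gl.h>
#include <GL/glext.h>

#define CHUNK_VERTS 4096  /* TNT2 limit reported in this thread */

/* EXT_compiled_vertex_array entry points, assumed to have been
 * fetched with wglGetProcAddress/glXGetProcAddress at startup. */
extern PFNGLLOCKARRAYSEXTPROC   glLockArraysEXT;
extern PFNGLUNLOCKARRAYSEXTPROC glUnlockArraysEXT;

/* One piece of the pre-split mesh: its own vertex block plus an
 * index list rebased to that block (indices in [0, vert_count)). */
typedef struct {
    const GLfloat *verts;      /* xyz triples */
    GLsizei        vert_count; /* <= CHUNK_VERTS */
    const GLuint  *indices;
    GLsizei        index_count;
} Chunk;

static void draw_chunks(const Chunk *chunks, int chunk_count)
{
    int i;
    glEnableClientState(GL_VERTEX_ARRAY);
    for (i = 0; i < chunk_count; ++i) {
        const Chunk *c = &chunks[i];
        glVertexPointer(3, GL_FLOAT, 0, c->verts);
        /* Lock only this chunk's range of the current arrays. */
        glLockArraysEXT(0, c->vert_count);
        glDrawElements(GL_TRIANGLES, c->index_count,
                       GL_UNSIGNED_INT, c->indices);
        glUnlockArraysEXT();
    }
    glDisableClientState(GL_VERTEX_ARRAY);
}

Since the win from compiled vertex arrays comes from not retransforming verts (as Brian notes), each chunk would normally be locked once and drawn for every rendering pass before unlocking, rather than locked and unlocked per pass.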