----- Original Message -----
From: Jeff Russell
Sent: Wednesday, May 28, 2008 8:30 PM
Subject: [Algorithms] why not geometry?

So, say hypothetically that a person wanted to render a tree trunk. The typical approach is to use solid triangles for the trunk and a few major branches, and alpha-tested or blended geometry for the smaller branches and twigs, the goal of course being to reduce the triangle count for good performance. One then ends up rendering, for example, something like 1-2k triangles instead of the 50k or more that might be required to render the 'full' tree. Seems logical.
But my question is - is this still wise? And if so, why?
- A typical vertex shader often compiles to about as many instructions as a fragment shader nowadays (for us, in the 50-100 range). I would expect, then, that from a pure computation standpoint a single vertex would be roughly as cheap as a single pixel on a modern GPU, since vertices and fragments now share ALUs in most cases.
- Geometry in this case saves fill rate. Even if the vertex-to-fragment tradeoff isn't 1:1, drawing your tree branches with geometry avoids the massive overdraw that large alpha-tested triangles can incur. Even proper sorting won't save all those fragments that get discarded by the alpha test.
- Geometry looks better! You get antialiasing, proper depth buffer interaction, parallax, etc., all of which are trickier to attain in full for impostor geometry.
- The advent of the 'geometry shader' makes replicating parts of your data stream more feasible. I could see a case of local geometry instancing where a large branch has the same 'twig' present at several different positions/orientations/scales along its length. This means you store maybe 100 tris in your vertex buffer for a given twig, and the geometry shader replicates it at draw time, using a constant table, into maybe 16 instances or so to fill out your tree. You'd then get a very large number of triangles generated by a single draw call that only passes in a fraction of the data.

I suppose maybe the memory traffic alone could be what hurts vertex processing so much, given there's so much data flying around. Plus I've heard that geometry shaders aren't even very fast yet.

Any thoughts? Has anyone gone down this road - using lots of triangles in place of flat textures?

Jeff Russell
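P.S. The two tradeoffs above (overdraw from alpha-tested cards, and twig replication via a constant table) can be put into rough numbers. Here's a minimal back-of-envelope sketch; every figure in it (card size, overdraw depth, the 100-tri twig, the 16 instances) is a hypothetical illustration, not a measurement:

```python
# Back-of-envelope comparison: alpha-tested impostor cards vs. real geometry.
# All numbers are hypothetical, and the model assumes a unified-shader GPU
# where processing one vertex costs roughly as much as shading one fragment.

def card_fragments(card_pixels, overdraw_layers):
    """Fragments shaded for overlapping alpha-tested cards.

    Every fragment under a card runs the shader before the alpha test can
    discard it, so cost scales with full card area times overdraw depth,
    regardless of how many texels are actually opaque.
    """
    return card_pixels * overdraw_layers

def geometry_cost(triangles, visible_pixels):
    """Vertices processed, plus only the fragments that survive depth testing."""
    return triangles * 3 + visible_pixels

# Four overlapping 128x128 branch cards, ~30% opaque texels each:
card = card_fragments(128 * 128, 4)               # 65536 fragments shaded
# The same twigs as ~2000 real triangles covering that same 30% once:
geo = geometry_cost(2000, int(128 * 128 * 0.30))  # 6000 + 4915 = 10915

# Amplification from the geometry-shader idea: one 100-triangle twig stored
# once, replicated by a 16-entry constant table of transforms.
twig_tris, instances, bytes_per_vert = 100, 16, 32  # 32 B: pos + normal + uv
stored_bytes = twig_tris * 3 * bytes_per_vert       # data actually in the buffer
drawn_tris = twig_tris * instances                  # triangles emitted per branch

print(card, geo)                 # impostor path shades ~6x more in this model
print(drawn_tris, stored_bytes)  # 1600 tris drawn from 9600 bytes of verts
```

Obviously this ignores bandwidth, caching, and triangle setup, but it shows why the fill-rate argument can dominate once the cards get large.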
GDAlgorithms-list mailing list