Pawel Salek wrote:
> On 2003.11.27 06:20, Alexander Stohr wrote:
>> There is a comparison from R. Scheidegger, linked from the
>> "Other Documents" section of the Documents page on the DRI
>> web site, comparing several Radeon-9000-capable drivers,
>> including the DRI drivers. [snip]
> This review is very good, but I would really like to see the numbers -
> and some analysis! - for the recently released game "Savage", which was
> even announced on Slashdot. I tried it on my ATI 8500LE (I know it is
> not a state-of-the-art graphics card any more) and came away with mixed
> feelings. The binary ATI drivers appeared to have problems with
> character rendering (but performance was almost decent), while the DRI
> drivers could render only a single frame every few seconds and had the
> usual problems with S3TC. The program can at least be asked not to
> compress textures - I wish it could detect the missing extension
> automatically, as id Software games do.
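Detecting that automatically is cheap - the check amounts to a
whole-token search of the string returned by
glGetString(GL_EXTENSIONS). A minimal sketch (has_extension is a
hypothetical helper name, and the extension string below is just a
stand-in for what a real driver would report):

```c
#include <string.h>

/* Whole-token search of a space-separated extension string, as an
 * application would run on glGetString(GL_EXTENSIONS). A plain
 * substring search is not enough: one extension name can be a prefix
 * of another, e.g. "GL_EXT_texture" vs. "GL_EXT_texture_object". */
static int has_extension(const char *ext_list, const char *name)
{
    size_t len = strlen(name);
    const char *p = ext_list;

    while ((p = strstr(p, name)) != NULL) {
        /* Match only if bounded by start/end of string or spaces. */
        if ((p == ext_list || p[-1] == ' ') &&
            (p[len] == ' ' || p[len] == '\0'))
            return 1;
        p += len;
    }
    return 0;
}
```

With that, an app can fall back to uncompressed textures on its own
when GL_EXT_texture_compression_s3tc is absent, instead of requiring
the user to disable compression by hand.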
> I contacted S2Games and they said: "The open-source DRI drivers
> don't support the Vertex Buffer Object OpenGL extension [...]. We fall
> back to other methods like VAR and such on cards that don't support
> VBO, but really it comes down to the driver optimizing for high poly
> counts. We use the same code in both Linux and Windows (since it's all
> OpenGL), and in Windows we get fast speeds on both ATI and NVidia. In
> Linux, ATI seems like a non-starter on both driver sets, whereas the
> binary NVidia drivers work like a charm. In games like RTCW, they deal
> in the realm of 2,000-5,000 polygons per frame. In more recent games
> like Savage, we push more like 100,000 polygons per frame, since we're
> doing full outdoor scenes. [...] The VAR that we use is the
> NV_vertex_array_range extension. If the driver doesn't support that, I
> believe we fall back to the CVA extension (compiled vertex array)."
> I can imagine many applications that need to display large numbers of
> polygons: CAD, molecular modelling, and so on. I understand such
> extensions are not patented, are they? Does anybody know why
> GL_EXT_compiled_vertex_array cannot give the same performance as
> GL_ARB_vertex_buffer_object?
GL_EXT_compiled_vertex_array only allows a single set of arrays to be
locked at any given time, thus limiting optimizations. Also, it
doesn't directly express the concept of storing vertex data in the
graphics card's local memory (as GL_ARB_vertex_buffer_object does).
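To make the difference concrete, here is roughly how the two
extensions are used from application code. This is an illustrative
fragment only (no GL context setup, hypothetical vertex data, and
it assumes the ARB-suffixed entry points of the 2003-era extension):

```c
/* --- GL_EXT_compiled_vertex_array: the data stays in app memory.
 * Locking just promises the driver the arrays won't change between
 * Lock and Unlock, so it may copy/convert them once for reuse. */
glVertexPointer(3, GL_FLOAT, 0, verts);   /* verts: app-side float[] */
glLockArraysEXT(0, vertex_count);
glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, indices);
/* ... possibly more passes over the same locked arrays ... */
glUnlockArraysEXT();

/* --- GL_ARB_vertex_buffer_object: the data is handed to the driver,
 * which can place it in the card's local memory or AGP space. */
GLuint buf;
glGenBuffersARB(1, &buf);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, buf);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, vertex_count * 3 * sizeof(float),
                verts, GL_STATIC_DRAW_ARB);
/* With a buffer bound, the pointer argument becomes an offset. */
glVertexPointer(3, GL_FLOAT, 0, (const void *) 0);
glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, indices);
```

Note also that VBOs are named objects, so an app can keep many of
them resident at once, whereas only one set of arrays can be locked
with CVA at any given time.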
> Would it be much work to implement this extension?
I've implemented GL_ARB_vertex_buffer_object in Mesa for the upcoming
5.1 release. It'll be supported in the DRI drivers eventually, but
probably won't be really optimized for a while after that.