From: Corbin S. <mos...@gm...> - 2010-03-28 00:59:29
On Sat, Mar 27, 2010 at 5:34 PM, Luca Barbieri <lu...@lu...> wrote:
> Having drivers capable of doing "send-to-hardware-and-forget-about-it"
> on arbitrary state setting could be a nice thing instead, but
> unfortunately a lot of hardware fundamentally can't do this, since for
> instance:
> 1. Shaders need to be all seen to be linked, possibly modifying the
> shaders themselves (nv30)
> 2. Constants need to be written directly into the fragment program (nv30-nv40)
> 3. Fragment programs depend on the viewport to implement
> fragment.position (r300)
> 4. Fragment programs depend on bound textures to specify normalization
> type and emulate NPOT (r300, r600?, nv30)
> and so on...
> 5. Sometimes sampler state and textures must be seen together since
> the hardware mixes it

To be fair, this is all "old hardware sucks at new APIs." We're stretching a bit with r300 and nv30, hardware that was never really meant for this kind of generalized pluggable pipeline setup. r500 and nv40 are better, but it's not until r600 and nv50 that we're completely free of this old suckage. That's life, unfortunately.

Also, I'm sure there will always be hardware with quirks, regardless of the set of functionality we expose. We're just going to have to aim for the biggest common subsets, plus the least painful way of adding the full pipeline features. Thankfully, Gallium no longer resembles its original target too closely; it has become a reasonable abstraction.

--
When the facts change, I change my mind. What do you do, sir? ~ Keynes

Corbin Simpson <Mos...@gm...>