From: Stephen J B. <sj...@li...> - 2002-09-16 19:44:05
On Mon, 16 Sep 2002, Michael L Brasher wrote:

> i have played around with the vertex programs and fragment programs. the arb
> just released their vertex program extension, but other than that, all i know
> about is nvidia's vertex programs and a fragment program extension that
> they've released that will work on their next-gen nv30 card. now, i've only
> been able to get these to run using nvidia's Cg language, which is both
> windows centric and still very under development.

Cg is certainly still under development (ie it's still pretty flaky) - but
it's not especially Windows-centric. I've been using it successfully for
vertex programming under Linux for about a month now.

> they promise to release an
> opengl fragment program with their official release of Cg, but who knows when
> that will be.

Presumably in sync with the release of the NV30 hardware. The support for
fragment programs (even under DirectX) is pretty limited on present
hardware. Also, remember that Cg is open-sourced.

> if there is some other instance of an opengl fragment program
> out there that does not rely on Cg or windows, i'd very much like to hear
> about it.

Someone told me that they'd had some success running the Cg compiler with
the 'profile' set to the DirectX fragment setup - then taking the resulting
'machine code' and loading that into an OpenGL fragment shader. I've not
tried to replicate that experiment though - so at this point, I regard it
as hearsay evidence.

> the trouble with using a vertex program is that no matter what i do, the
> fragment interpolation is still linear. thus, i'm fundamentally unable to
> represent a higher order polynomial without the use of a fragment program.

Use texture. You can regard a texture map as an arbitrary function lookup
for 1D, 2D or 3D functions. Even reloading the texture on the fly, you'll
surely get faster rendering times than with a software renderer.
Another possibility is to use multipass techniques to evaluate the function
you need. Remember, the graphics pipe implements per-pixel multiplications
and additions when you blend things. Any function you can think of that can
be decomposed into adds and multiplies over the 0..1 range can be
implemented with enough rendering passes (although roundoff error is still
a problem). I just find it very hard to believe that what you need to do is
impossible with standard OpenGL... and even if it takes 50 rendering
passes, it'll still run faster on a GeForce card than under
software-only Mesa.

> the benefit of using mesa is that since it's open source i have much better
> control over exactly what i'm doing and how i implement things. the downside
> is that implementing these things is probably going to require a good deal of
> work on my part.

Well, that - and the fact that as soon as you start using something that's
not OpenGL-like, you can forever wave goodbye to using hardware
acceleration.

----
Steve Baker                       (817)619-2657 (Vox/Vox-Mail)
L3Com/Link Simulation & Training  (817)619-2466 (Fax)
Work:  sj...@li...    http://www.link.com
Home:  sjb...@ai...   http://www.sjbaker.org