From: Michael L B. <mbrasher@MIT.EDU> - 2002-09-16 19:04:28
hi,

> You might be able to do this without inventing new extensions/features.
> The recent OpenGL extensions for vertex and fragment-level programming are
> pretty powerful.

i have played around with the vertex programs and fragment programs. the arb just released their vertex program extension, but other than that, all i know about is nvidia's vertex programs and the fragment program extension they've released for their next-gen nv30 card. so far, i've only been able to get these to run using nvidia's Cg language, which is both windows-centric and still very much under development. they promise to release an opengl fragment program extension with the official release of Cg, but who knows when that will be. if there is some other implementation of an opengl fragment program out there that does not rely on Cg or windows, i'd very much like to hear about it.

> Instead of implementing glFoo3f() (in parallel to glColor, glNormal, etc)
> you could use glVertexAttribNV or a spare set of texture coords with
> glMultiTexCoord().
>
> As for polynomial interpolation, won't you need some extra coefficients of
> some sort?

that's exactly right, i will need extra coefficients - that's why i need some way of passing them to my shader (e.g. glFoo). right now i'm focusing on quadratic interpolation, so in 2D that means i need 6 coefs, and i could very easily use extra tex coords (see the first sketch below). the problem is that as i go to higher order interpolation i'll need more and more coefs - a degree-n polynomial in two variables has (n+1)(n+2)/2 coefficients, so a cubic already needs 10 and a quartic 15 - and i could run out of space fairly quickly. plus, i'd rather create something new so i don't have to worry about conflicts if i decide i need texcoord0 later on. my hope is that more work now will mean less work later.

> Perhaps you could use a 1-D texture to simulate a polynomial function which
> converts linearly-interpolated texture coordinates to a non-linear range.
> The values in the texture could represent your lighting function, or they
> might be used as coordinates to index a subsequent texture (aka "dependent
> texture reads").

if i were to use 1D textures, then for a quadratic interpolation i'd need 6 of them, and i'd additionally need to scale their outputs by the coefs in the shader. if i can't do that, then i'd have to scale the textures beforehand, so i'd basically have to create new textures for each triangle - which is quite slow. in fact, the simple solution i've already implemented is to create a 2D texture by doing the interpolation beforehand, and then letting opengl/mesa linearly map that onto a triangle. the major drawback is that each texture is different, which means i have to generate a new texture for each triangle. this takes approximately 600 times as long to render as gouraud-shaded triangles using nvidia's opengl drivers. the reason i'm trying to change mesa is my hope that changing the interpolation in the rasterizer won't add significant overhead to all the work mesa already does. by my timings, mesa runs around 10 times slower than hardware opengl for linear gouraud shading. even if changing the interpolation doubles or triples that, i'll still be much better off than generating all the textures.
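going back to the texcoord idea for a second, here's the first sketch i promised. the per-triangle interpolant is

    f(x,y) = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2

so the six coefficients pack neatly into two 3-component texcoord sets. this is only a sketch - the function name and the choice of units 1 and 2 are my own placeholders, assuming ARB_multitexture and that those units are otherwise free:

    #include <GL/gl.h>
    #include <GL/glext.h>

    /* pack the six quadratic coefs into texture units 1 and 2,
     * leaving texcoord0 free for ordinary texturing.  the coefs
     * are per-triangle, so issue this once before the triangle's
     * three glVertex calls and all vertices carry the same values. */
    static void set_quadratic_coefs(const GLfloat c[6])
    {
       glMultiTexCoord3fARB(GL_TEXTURE1_ARB, c[0], c[1], c[2]);
       glMultiTexCoord3fARB(GL_TEXTURE2_ARB, c[3], c[4], c[5]);
    }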
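and to pin down the per-fragment math: the evaluation below is exactly what my per-triangle 2D texture precomputes today, and what i'd want a modified rasterizer (or a real fragment program) to do directly per pixel. again just a sketch - the function names and the luminance float texture are my own choices:

    /* evaluate the quadratic interpolant at one point */
    static GLfloat eval_quadratic(const GLfloat c[6], GLfloat x, GLfloat y)
    {
       return c[0] + c[1] * x + c[2] * y
            + c[3] * x * x + c[4] * x * y + c[5] * y * y;
    }

    /* the precompute path: bake the polynomial over the triangle's
     * parameter space into an n x n luminance texture.  this is the
     * slow per-triangle step i'm trying to eliminate. */
    static void bake_quadratic_texture(const GLfloat c[6], int n,
                                       GLfloat *texels)
    {
       int i, j;
       for (j = 0; j < n; j++) {
          for (i = 0; i < n; i++) {
             GLfloat x = (GLfloat) i / (GLfloat) (n - 1);
             GLfloat y = (GLfloat) j / (GLfloat) (n - 1);
             texels[j * n + i] = eval_quadratic(c, x, y);
          }
       }
       glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, n, n, 0,
                    GL_LUMINANCE, GL_FLOAT, texels);
    }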
> If you're just implementing a new shading algorithm, have you considered
> using vertex programs?
>
> Some of these things aren't currently implemented in Mesa, but I think
> you'd be better-off leveraging existing extensions than creating new ones.

the trouble with using a vertex program is that no matter what i do, the fragment interpolation is still linear. thus, i'm fundamentally unable to represent a higher order polynomial without a fragment program. i've been able to write a fragment program that does this using nvidia's nv30 emulator, but until Cg gets finalized or opengl 2.0 comes out, i'm pretty much at the mercy of nvidia to really support fragment programs. believe me, if i could figure out a way to use the new opengl extensions, i would. there's a decent probability that if i implement all this in mesa, i'll end up scrapping it once one of those other solutions gets released. the trouble is that i don't know when any of this will happen... it could be next month or it could be next summer, and i can't really halt my research until then. the benefit of using mesa is that since it's open source, i have much better control over exactly what i'm doing and how i implement things. the downside is that implementing these things is probably going to require a good deal of work on my part. for now, all i really need to change is the rasterizer... i can get away with using GL_TRIANGLES and texcoords to pass the polynomial coefs (the p.s. below sketches what that looks like), though it would be nice to change those eventually.

anyways, thanks for taking the time to consider my situation. you'll probably be hearing from me later, when i have more specific questions to ask :)

-mike
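p.s. for reference, the stopgap drawing path i mentioned (GL_TRIANGLES plus texcoords for the coefs) would look roughly like this - set_quadratic_coefs() is the placeholder from the first sketch, and the tri struct is made up for illustration:

    struct tri {
       GLfloat v0[2], v1[2], v2[2];   /* vertex positions */
       GLfloat coef[6];               /* quadratic coefs */
    };

    static void draw_tris(const struct tri *t, int num_tris)
    {
       int i;
       glBegin(GL_TRIANGLES);
       for (i = 0; i < num_tris; i++) {
          set_quadratic_coefs(t[i].coef);  /* flat across the triangle */
          glVertex2fv(t[i].v0);
          glVertex2fv(t[i].v1);
          glVertex2fv(t[i].v2);
       }
       glEnd();
    }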