From: Brian P. <br...@pr...> - 2000-06-13 00:26:26
ralf willenbacher wrote:
>
> attached are my thoughts on a bumpmapping extension for mesa.
> i would like to hear some opinions :>
> i am thinking about poor loki entertainment when they
> have to port a game that uses the directX bumpmapping function.

I'm not familiar with DirectX's bumpmapping.  Is your extension a copy
of it?  My comments follow.

> i ripped this style from texgen_emboss from nvidia
>
> Name
>     MESA_perpixel_lighting (or MESA_anisotropic_blending)
>
> Extension String
>     GL_MESA_perpixel_lighting (or GL_MESA_anisotropic_blending)
>
> Dependencies
>     ARB_multitexture.
>
> Overview
>     The extension allows per-pixel illumination of textures depending
>     on 4 direction vectors.  It's done with a texture containing height
>     differences of X and Y (U/V) in texture space.  The texture may
>     contain a third component that specifies the reflection of the
>     surface material.
>
> Issues
>
>     why depend on arb_multitexture ?
>     most hardware implements anisotropic lighting in the second texture
>     stage and outputs the luminance to the first.

Huh?

>     it cannot be used as the first texture stage because then someone
>     might want to use the second texture stage for something else,
>     which would require another pass in the driver (is this really that
>     bad ?).  could be used in single texture mode too, for flat
>     triangles with carvings on them.

I don't see any dependency on multitexture for this extension.

>     perpixel or anisotropic ?
>     dunno.. anisotropic will make some people run for dictionaries.

I'm not sure either name is really accurate.

> New Procedures and Functions
>
>     glLightDirectionMESA(GLfloat left, GLfloat right, GLfloat bottom, GLfloat top)
>     sets the direction the light is shining on the surface.
>     is clamped to -1.0/1.0
>     negative numbers mean darken, positive brighten.
>     defaults to 0.0, 0.0, 0.0, 0.0

You aren't defining a light direction so much as four blending weights.
Perhaps:

   glBlendWeightsMESA(GLfloat left, GLfloat right, GLfloat bottom, GLfloat top)

> New Tokens
>
>     GL_DXDYL (or DUDVL ?)
>     defines a texture format to load an image containing delta X (U),
>     delta Y (V) and material luminance (L).
>     to be used in glTexImage2D
>     DX and DY are signed, L is unsigned.
>
>     GL_DXDY (or DUDV)
>     the same without luminance.
>
>     GL_FUNC_PERPIXEL_LIGHTING (or _ANISOTROPIC)
>     to be used in glBlendEquation.  could be a problem because it
>     clashes with glBlendFunc...

I would think that this feature would be implemented in the texture
environment, not in the fragment blending stage.  I.e.,
GL_FUNC_PERPIXEL_LIGHTING would really be a glTexEnv mode.
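To make sure we're talking about the same usage model, here's roughly
what I'd expect an application to write.  This is a hypothetical
sketch: GL_DXDYL is your proposed format token, GL_PERPIXEL_LIGHTING_MESA
is a made-up glTexEnv mode standing in for GL_FUNC_PERPIXEL_LIGHTING,
and glBlendWeightsMESA() is the entry point I suggested above.  None of
these exist yet.

   #include <GL/gl.h>

   /* Hypothetical usage sketch -- all MESA/DXDYL names are proposals. */
   GLuint bumpTex;
   GLbyte bumpmap[128 * 128 * 3];   /* signed dX, signed dY, unsigned L */

   glGenTextures(1, &bumpTex);
   glBindTexture(GL_TEXTURE_2D, bumpTex);
   glTexImage2D(GL_TEXTURE_2D, 0, GL_DXDYL, 128, 128, 0,
                GL_DXDYL, GL_BYTE, bumpmap);

   /* per-pixel lighting as a texture environment mode, not a blend
    * equation */
   glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE,
             GL_PERPIXEL_LIGHTING_MESA);

   /* the four blend weights, recomputed whenever the light moves
    * relative to the surface */
   glBlendWeightsMESA(0.5F, -0.5F, 0.2F, -0.2F);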
>     for hardware that does not have native bumpmapping (oops.. um..
>     perpixel.. anisotropic.. emboss.. nvidia magic) support, an
>     internal texture format might be specified that contains 4 separate
>     luminance maps to be applied separately to the framebuffer / first
>     stage texture.  could be left to the drivers though..
>
>     the following additions to the opengl specification could be wrong
>     because the 1.2 .pdf looks funny in xpdf.  i will stick to the
>     names in braces.
>
> Additions to Chapter 2 of the 1.2 Specification (OpenGL Operation)
>
>     when glTexImage2D is called with either GL_DXDYL or GL_DXDY, the
>     incoming pixels will be converted to byte size.  when luminance is
>     not given (GL_DXDY), a luminance of 255 (full luminance) will be
>     assumed.  DX and DY will be clamped to -127/127.
>     when a border is given, the border color will be taken as DX, DY,
>     L.  alpha will be ignored.

It's against the grain of OpenGL to specify the internal storage format
for something like this.  I'd just say that the image is stored
internally with values in [-1, 1] and indeterminate precision.

> Additions to Chapter 3 of the 1.2 Specification (Rasterization)
>
>     in single texture mode where the blend equation is
>     GL_FUNC_PERPIXEL_LIGHTING, the illumination will be applied to the
>     frame buffer like a GL_LUMINANCE texture.
>
>     the luminance from the GL_FUNC_PERPIXEL_LIGHTING blending equation
>     is calculated like this: the incoming texture is of type GL_DXDYL;
>     left, right, top, bottom are set by glLightDirectionMESA.
>
>     for every pixel
>
>         dX = DX / 127.0
>         dY = DY / 127.0
>         l  = L / 255.0
>
>         luminance = 1.0  ; start with neutral luminance
>
>         if (dX >= 0.0) luminance += left*dX
>         else           luminance += right*(-dX)
>         if (dY >= 0.0) luminance += top*dY
>         else           luminance += bottom*(-dY)
>
>         luminance *= l
>
>     the destination pixel RGBA becomes
>     R*luminance, G*luminance, B*luminance; Alpha is unchanged.
>
>     in the multitexture case the destination will be the first texture
>     stage..

Here's how I would phrase it:

   let (dX, dY, l) = the sampled texture value

   luminance = 1.0
   if (dX >= 0.0) luminance += left * dX
   else           luminance += right * (-dX)
   if (dY >= 0.0) luminance += top * dY
   else           luminance += bottom * (-dY)
   luminance *= l

Let (R, G, B, A) be the texture unit's incoming fragment color.  The
outgoing fragment color (R', G', B', A') is then:

   R' = R * luminance
   G' = G * luminance
   B' = B * luminance
   A' = A

That said, however, I don't see how this extension is really going to
work.  I think I can see this working on a planar polygon, with the
four blend weights getting recalculated depending on the surface
orientation.  But how is it supposed to work for a non-planar surface?
The luminance calculation has to be dependent on the polygon's
orientation with respect to the light position.  Either the DXDY data
or the blend weights would have to be modified per polygon.  Or maybe
determine which parts of the texture get mapped to which triangles and
modify the DX/DY values accordingly (sounds hard).

Finally, is it your expectation that this feature can be implemented in
terms of current hardware?  Or is it supposed to be a software
solution?  I don't think either is practical.
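For what it's worth, the per-fragment arithmetic a software path would
have to run looks roughly like this.  A minimal, untested sketch of the
math as I phrased it above, assuming a GL_DXDYL texel of signed-byte
dX/dY plus an unsigned-byte L; clamping the result to [0, 1] is my
assumption, since the draft doesn't say.

   #include <GL/gl.h>

   /* Rough sketch of the proposed per-pixel lighting math.
    * Not a real Mesa code path -- just the arithmetic per fragment. */
   static void
   perpixel_light(GLbyte texDX, GLbyte texDY, GLubyte texL,
                  GLfloat left, GLfloat right,
                  GLfloat bottom, GLfloat top,
                  GLubyte rgba[4])
   {
      GLfloat dX = (GLfloat) texDX / 127.0F;
      GLfloat dY = (GLfloat) texDY / 127.0F;
      GLfloat l  = (GLfloat) texL  / 255.0F;
      GLfloat lum = 1.0F;

      if (dX >= 0.0F) lum += left * dX;
      else            lum += right * -dX;
      if (dY >= 0.0F) lum += top * dY;
      else            lum += bottom * -dY;

      lum *= l;
      if (lum < 0.0F) lum = 0.0F;   /* clamp assumed; draft is silent */
      if (lum > 1.0F) lum = 1.0F;

      /* scale the incoming fragment color; alpha is unchanged */
      rgba[0] = (GLubyte) (rgba[0] * lum + 0.5F);
      rgba[1] = (GLubyte) (rgba[1] * lum + 0.5F);
      rgba[2] = (GLubyte) (rgba[2] * lum + 0.5F);
   }

-Brian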