Thread: [Algorithms] Scaling Models
From: John W. <jo...@de...> - 2000-07-27 16:06:52
If you set a scaling matrix as part of the transformation pipeline, what happens to the normals?

I currently use D3D\OpenGL to do the transformation but use my own lighting code. If I set a scaling factor for each of the 3 axes, the angles of most of the faces in the model will change (unless the scaling factor is the same for all three axes). An extreme example is XSc = ZSc = 1.0f; YSc = 0.0f: all normals will either become (0,0,0) (for triangles with a normal perpendicular to the Y axis) or (0,1,0) for all other normals.

Rebuilding the normal list is not an option, as the models are shared multiple times with different scalings.

Has anybody else encountered this problem? And how does D3D\OpenGL cater for this?

----
John White
Senior Software Engineer
Deep Red Games Ltd.
jo...@de...
From: Jamie F. <j.f...@re...> - 2000-07-27 16:13:51
Off the top of my head: to get the normal back, you need to perform the inverse scale on it, and renormalise. Pass on the D3D / OpenGL gubbins, never used either :)

Jamie

John White wrote:
> If you set a scaling matrix as part of the transformation pipeline what
> happens to the normals?
> [...]
> Anybody else encountered this problem? And how does D3D\OpenGL cater for
> this?
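A minimal sketch of the inverse-scale-and-renormalise step Jamie describes, for a pure (possibly non-uniform) scale diag(sx, sy, sz); the Vec3 type and function name are just for illustration, and the scale factors are assumed to be non-zero:

    #include <math.h>

    typedef struct { float x, y, z; } Vec3;

    /* transpose(inverse(diag(sx,sy,sz))) = diag(1/sx, 1/sy, 1/sz),
       so transform the normal by the reciprocal scales, then renormalise. */
    Vec3 scale_normal(Vec3 n, float sx, float sy, float sz)
    {
        Vec3 r = { n.x / sx, n.y / sy, n.z / sz };
        float len = sqrtf(r.x * r.x + r.y * r.y + r.z * r.z);
        if (len > 0.0f) {            /* guard against a degenerate result */
            r.x /= len; r.y /= len; r.z /= len;
        }
        return r;
    }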
From: <ro...@do...> - 2000-07-29 06:04:05
Jamie Fowlston wrote:
> Off the top of my head: to get the normal back, you need to perform the
> inverse scale on it, and renormalise.

The top of your head is correct.
From: Tom H. <to...@3d...> - 2000-07-27 20:15:06
At 05:06 PM 7/27/2000 +0100, you wrote:
> If you set a scaling matrix as part of the transformation pipeline what
> happens to the normals?
> [...]
> Anybody else encountered this problem? And how does D3D\OpenGL cater for
> this?

In GL the normals are scaled and rotated by the modelview matrix (homogeneous matrix stuff ... translations are ignored). GL allows you to normalize the transformed normals with:

    glEnable(GL_NORMALIZE);

This does come with an added per-normal cost, but on T&L cards like the GeForce the operation is done by the hardware, so you don't lose the ability to use hardware transforms. I don't know if there is a similar operation under DX, but it seems likely.

Something just dawned on me, though: you use the API to do the transforms, but you do your own lighting ... If you're querying the API for the results of the transformations, you're really going to ruin performance on hardware T&L cards. If you don't need the transformed coordinates to do your lighting, then you probably don't need to send normals to the API at all. They're only used for lighting and some of the texture coordinate generation modes (spherical, I think).

Specific OpenGL and Direct3D issues should be taken to their respective mailing lists.

Tom
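A short fixed-function sketch of the GL_NORMALIZE route Tom describes; draw_scaled_instance() and drawModel() are hypothetical names (drawModel() stands in for whatever submits the shared mesh and its original normals), and the per-instance scale factors are just example parameters:

    #include <GL/gl.h>

    void drawModel(void);   /* hypothetical: submits the shared vertices and normals */

    void draw_scaled_instance(float sx, float sy, float sz)
    {
        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();
        glScalef(sx, sy, sz);        /* per-instance, possibly non-uniform scale */
        glEnable(GL_NORMALIZE);      /* GL renormalises each transformed normal */
        drawModel();
        glDisable(GL_NORMALIZE);
        glPopMatrix();
    }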
From: <ro...@do...> - 2000-07-29 06:06:30
Tom Hubina wrote:
> In GL the normals are scaled and rotated by the modelview matrix
> (homogeneous matrix stuff ... translations are ignored). GL allows you to
> normalize the transformed normals with:

It is a very good question what the API does when the modelview matrix contains scaling, uniform or non-uniform. If it applies the modelview matrix (except the translation) to normals when the modelview matrix contains non-uniform scaling, it is incorrect. Does it detect when the linear part of the modelview matrix is non-orthogonal, and do the right thing? That would be very expensive. Or it could just do the transpose(inverse(L)) routine to transform normals in all cases? That would be very wasteful in the majority of cases where the app uses only orthogonal linear parts of the model and view transformations.
From: Tom H. <to...@3d...> - 2000-07-29 21:26:10
At 04:57 AM 7/29/2000 +0000, you wrote:
> It is a very good question what the API does when the modelview matrix
> contains scaling, uniform or non-uniform. [...]

The transpose(inverse) for normals slipped my mind. Oops. OpenGL does do the transpose(inverse), of course.

The transpose(inverse) operation only needs to be performed once, when the matrix is loaded, and only if the API has state currently enabled that uses normals (directional lighting or spherical environment mapping). If these states are enabled after the fact, the matrix can be created when the state changes. In the case where it does need to do the computation, whether or not it checks for orthogonality depends on the expense of the check, I guess. The driver writers are usually pretty good at doing minimal amounts of work to speed things up as best they can.

Tom
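A sketch of the lazy scheme Tom outlines, with the caveat that this is only a guess at driver behaviour; MatrixState, load_modelview(), normal_matrix() and compute_inverse_transpose_3x3() are all hypothetical names, not anything from a real driver:

    #include <string.h>

    void compute_inverse_transpose_3x3(const float m[16], float out[9]);  /* hypothetical helper */

    typedef struct {
        float modelview[16];
        float normal3x3[9];      /* transpose(inverse(upper 3x3 of modelview)) */
        int   normalDirty;       /* modelview changed since normal3x3 was built */
        int   needsNormals;      /* lighting / sphere-map texgen currently enabled */
    } MatrixState;

    void load_modelview(MatrixState *s, const float m[16])
    {
        memcpy(s->modelview, m, sizeof(s->modelview));
        s->normalDirty = 1;      /* defer the inverse-transpose until it is needed */
    }

    /* Called from the equivalent of glEnable(GL_LIGHTING) or sphere-map texgen. */
    void enable_normal_consumer(MatrixState *s)
    {
        s->needsNormals = 1;
    }

    /* Only called by the transform pipeline when needsNormals is set, so the
       rebuild happens at most once per matrix load. */
    const float *normal_matrix(MatrixState *s)
    {
        if (s->normalDirty) {
            compute_inverse_transpose_3x3(s->modelview, s->normal3x3);
            s->normalDirty = 0;
        }
        return s->normal3x3;
    }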
From: <Lea...@en...> - 2000-07-30 23:01:01
> The transpose(inverse) for normals slipped my mind. Oops. OpenGL does do
> the transpose(inverse) of course.

As you're probably aware, OpenGL only takes the transpose of the 3x3 rotation matrix contained within the 4x4 transformation matrix if the transformation calls were only rotations and translations; scales, shears and the like cause the matrix inverse to be calculated and then transposed.

> The transpose(inverse) operation only needs to be performed once when the
> matrix is loaded, and only if the API has state currently enabled that uses
> normals (directional lighting or spherical environment mapping). [...]

This is where I am going to advocate using the OpenGL API to do the transformation work... there are a lot of coders who do the glLoadMatrix() thing -- in which case it is hard for an API to know whether the matrix is a pure rotation/translation or whether it needs the full inverse transpose. The way I am pretty positive the driver writers do things (not 100% sure, as they won't tell me... :) is something like:

    flag_t rigidMatrix;   /* true while the matrix is rotation/translation only */
    flag_t isInverted;

    glLoadIdentity():
        rigidMatrix = true;
        isInverted = false;
        set values;
    glRotate():
        set values;
        isInverted = false;
    glScale():
        rigidMatrix = false;
        isInverted = false;
        set values;

When it comes to transformation, it's a matter of:

    if (!rigidMatrix)
        if (!isInverted)
            invert matrix;
            isInverted = true;
    transform pipeline;

The actual pipeline could optimise that a lot, and bypass any routines to ensure maximum performance... but that's just me trying to demonstrate things clearly... :)

Has anybody actually noticed performance increases from the use of glLoadMatrix()? The benefits of having your own matrices are numerous, such as for grabbing view frustum parameters, being able to transform BBoxes easily, etc... but I am thinking that a _lot_ of objects would be faster to get across to the HW if you could just send the info with normal OpenGL calls.

Leathal.
From: Davide P. <da...@pr...> - 2000-07-28 07:22:58
> If you set a scaling matrix as part of the transformation pipeline what
> happens to the normals?
> [...]
> Anybody else encountered this problem? And how does D3D\OpenGL cater for
> this?

There is a very nice doc on the OpenGL site called "Avoiding 19 Common OpenGL Pitfalls" (oglpitfall.pdf), written by Mark J. Kilgard, that covers this topic.

Davide Pirola
www.prograph.it
www.protonic.net
From: <ro...@do...> - 2000-07-29 10:35:14
John White wrote:
> If you set a scaling matrix as part of the transformation pipeline what
> happens to the normals?

The transformations we are talking about are all invertible affine mappings of 3-space. (We are not considering projection mappings in this discussion.) Affine mappings are mappings that can be realized as a linear transformation (leaving some point fixed, which we call the origin) followed by a translation. The linear part is represented by a system of 3 equations in the 3 coordinate variables, i.e. by a 3x3 matrix, in a coordinate system whose origin is at the fixed point. The translation is represented by a vector. Computer graphicists have formed the unfortunate (I think) habit of combining the 3x3 matrix and the 3 components of the translation vector into a 4x4 matrix, containing a row or column (0,0,0,1) which carries no useful information.

The affine mappings of the transformation pipeline are applied to geometric models consisting of sets of points. In graphics we use mostly models that are made of planar polygons defined by special points called vertices. One of the nice properties of affine transformations is that they map planes to planes, and so planar polygons to planar polygons, which is one of the reasons we focus on affine transformations. Now, one of the two important defining parameters of any plane is its normal vector. The general question at the root of your specific question about scale mappings is: given the matrix of the affine mapping that is applied to the vertices of a domain polygon, how can I get the normal vector of the transformed polygon from the normal vector of the domain polygon? I will give the answer to the general question and then see how it applies in the absence of change of scale as well as in the presence of change of scale.

First, it should be clear that the translation part of the affine mapping, i.e. the last row or column of the 4x4 matrix, is irrelevant to mapping the normals. This is because the very concept of "vector" involves independence of position: you can translate a vector anywhere in the space, keeping it always parallel to its original direction and of constant length, and it is considered the same vector. At every point, a plane has the same normal vector as at every other point. Translation has no effect on normal vectors, because translation maps a plane to a parallel plane, which has the same normal vector.

Thus when considering how to find the normal vector of the transformed plane from the normal vector of the original plane, we only have to be concerned with the linear part of the affine transformation, the part that is represented by the upper left 3x3 matrix; call it L. Let us make no assumptions about L except that it is invertible (as are all the transformations of the graphics pipeline, up to projection). Let P be a particular plane through the organ and let n be a unit normal vector to P. Then it can be shown that the unit normal vector to the transformed plane LP must be the normalization of transpose(inverse(L))n (where I am using the OperatorOnLeft convention). You can find a demonstration of this fact in FvDFH Sec 5.7, so I won't repeat it here.

So in general, for ANY affine transformation with L as the 3x3 matrix of its linear part, to get the unit normal of the transform of a plane, you multiply the unit normal of the plane by transpose(inverse(L)) and then renormalize. For some applications, such as back face culling, it is not important that the normal have unit length, so you can leave off the renormalization. I think that most APIs give wrong lighting if you supply them with non-unit normal vectors.

Note first that if L is invertible, then so is transpose(L), and inverse(transpose(L)) = transpose(inverse(L)), which you can show by elementary linear algebra.

Now suppose that L is a rotation (or more generally an orthogonal transformation). Then, as we all know, inverse(L) = transpose(L), so transpose(inverse(L)) = transpose(transpose(L)) = L. That is, for rotations, or for affine transformations consisting of a rotation followed by a translation (i.e. most of the affine transformations we use in graphics), the correct operator to use on normal vectors is just L. And indeed, if L is orthogonal then it can be shown (again elementary linear algebra) that it preserves vector lengths, so there is no need to renormalize Ln. If L is an orthogonal transformation that is not a rotation, i.e. if L is a rotation combined with a reflection in a plane, then you have to watch out, because Ln might point to the "wrong side" of the image plane.

Now suppose that L is a uniform scaling by scale factor s. This means that in any coordinate system, L is diagonal with every diagonal element = s and the off-diagonal elements all 0; in other words L = sI where I is the identity, transpose(L) = sI = L and inverse(L) = (1/s)I. (Check it out.) So transpose(inverse(L)) = (1/s)I. When you multiply a vector n by this you get (1/s)n, which is really easy to renormalize: just multiply it by s, or better yet, recognize from the start that the transformation has no effect whatever on the unit normal of any plane; the unit normals are all unchanged. I'm not sure what any particular API does in this case, but this tells you how to do the math.

Now suppose that L is a non-uniform scaling. Then there is some coordinate system with respect to which L has three diagonal elements sx, sy, sz and the off-diagonal elements are all zero. Call this matrix diag(sx, sy, sz). Notice that unlike the orthogonal transformations and the uniform scaling, a non-uniform scaling DOES NOT PRESERVE ANGLES, so it does not preserve perpendicularity of a line to a plane. Further, inverse(L) = diag(1/sx, 1/sy, 1/sz), transpose(L) = L and transpose(inverse(L)) = inverse(L). So the correct way to transform the normal vector is to multiply it by the matrix inverse(L) = diag(1/sx, 1/sy, 1/sz) and renormalize. Notice that this vector diag(1/sx, 1/sy, 1/sz) n = (nx/sx, ny/sy, nz/sz) is no longer parallel to n, so there is no shortcut: you actually have to do the normalization if you want accurate unit normal vectors to the transformed surface. Again, I am not sure what any particular API does here, but I have just given you the true math.

Note in particular that for the non-uniform scaling it is NOT CORRECT to apply the matrix L to the normals and then renormalize (as I think someone else suggests in this thread). You will get a unit vector that way, but it will no longer be perpendicular to the surface, so it won't even be useful for back face culling, let alone accurate lighting.

I could extend this to more general transformations, say adding shear, but I won't because it starts to get messy. In any case, the true, correct, accurate formula is to multiply the normals by the matrix transpose(inverse(L)) and renormalize.

For a general linear transformation composed of some of these elementary transformations (rotations, uniform or non-uniform scalings, shears, etc.), you use the general identities that transpose(AB) = transpose(B) transpose(A) and inverse(AB) = inverse(B) inverse(A), so that transpose(inverse(AB)) = transpose(inverse(B) inverse(A)) = transpose(inverse(A)) transpose(inverse(B)). So for a general concatenation of transformations ABC...Z, the correct way to transform the normals is to multiply them by transpose(inverse(A))...transpose(inverse(Z)) and renormalize. The above discussion helps you simplify the product for special cases.
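To make the general transpose(inverse(L)) recipe above concrete, here is a small C sketch that builds the normal matrix for an arbitrary invertible, row-major 3x3 L via its cofactors and uses it to transform and renormalise a normal. It is purely illustrative; the names normal_matrix_3x3() and transform_normal() and the Vec3 type do not come from any particular API:

    #include <math.h>

    typedef struct { float x, y, z; } Vec3;

    /* Build N = transpose(inverse(L)) for an invertible, row-major 3x3 L.
       Since inverse(L) = transpose(cofactor(L)) / det(L), we have
       N = cofactor(L) / det(L). Returns 0 if L is not invertible. */
    int normal_matrix_3x3(const float L[3][3], float N[3][3])
    {
        float c[3][3];
        float det, inv;
        int i, j;

        c[0][0] =  (L[1][1]*L[2][2] - L[1][2]*L[2][1]);
        c[0][1] = -(L[1][0]*L[2][2] - L[1][2]*L[2][0]);
        c[0][2] =  (L[1][0]*L[2][1] - L[1][1]*L[2][0]);
        c[1][0] = -(L[0][1]*L[2][2] - L[0][2]*L[2][1]);
        c[1][1] =  (L[0][0]*L[2][2] - L[0][2]*L[2][0]);
        c[1][2] = -(L[0][0]*L[2][1] - L[0][1]*L[2][0]);
        c[2][0] =  (L[0][1]*L[1][2] - L[0][2]*L[1][1]);
        c[2][1] = -(L[0][0]*L[1][2] - L[0][2]*L[1][0]);
        c[2][2] =  (L[0][0]*L[1][1] - L[0][1]*L[1][0]);

        /* cofactor expansion of det(L) along the first row */
        det = L[0][0]*c[0][0] + L[0][1]*c[0][1] + L[0][2]*c[0][2];
        if (det == 0.0f)
            return 0;

        inv = 1.0f / det;
        for (i = 0; i < 3; i++)
            for (j = 0; j < 3; j++)
                N[i][j] = c[i][j] * inv;
        return 1;
    }

    /* Transform a normal by N and renormalise, as described in the post above. */
    Vec3 transform_normal(const float N[3][3], Vec3 n)
    {
        Vec3 r;
        float len;
        r.x = N[0][0]*n.x + N[0][1]*n.y + N[0][2]*n.z;
        r.y = N[1][0]*n.x + N[1][1]*n.y + N[1][2]*n.z;
        r.z = N[2][0]*n.x + N[2][1]*n.y + N[2][2]*n.z;
        len = sqrtf(r.x*r.x + r.y*r.y + r.z*r.z);
        if (len > 0.0f) {
            r.x /= len; r.y /= len; r.z /= len;
        }
        return r;
    }

For a pure rotation this reduces to multiplying by L itself, and for a pure scale diag(sx, sy, sz) it reduces to dividing the components by the scale factors, matching the special cases worked out in the post.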
From: <ro...@do...> - 2000-07-29 14:21:06
I wrote: >.... Let P be a particular plane through the organ and ... A frightening demonstration of the folly of relying on spell checkers, rather than careful proof reading, in the wee small hours. |