Re: [Algorithms] Scaling Models
From: <ro...@do...> - 2000-07-29 10:35:14
John White wrote:
> If you set a scaling matrix as part of the transformation pipeline what
> happens to the normals?

The transformations we are talking about are all invertible affine mappings of 3-space. (We are not considering projection mappings in this discussion.) Affine mappings are mappings that can be realized as a linear transformation (leaving some point fixed, which we call the origin) followed by a translation. The linear part is represented by a system of 3 equations in the 3 coordinate variables, i.e. by a 3x3 matrix, in a coordinate system whose origin is at the fixed point. The translation is represented by a vector. Computer graphicists have formed the unfortunate (I think) habit of combining the 3x3 matrix and the 3 components of the translation vector into a 4x4 matrix, containing a row or column (0,0,0,1) which carries no useful information.

The affine mappings of the transformation pipeline are applied to geometric models consisting of sets of points. In graphics we use mostly models that are made of planar polygons defined by special points called vertices. One of the nice properties of affine transformations is that they map planes to planes, hence planar polygons to planar polygons, which is one of the reasons we focus on affine transformations. Now one of the two important defining parameters of any plane is its normal vector. The general question at the root of your specific question about scale mappings is: given the matrix of the affine mapping that is applied to the vertices of a domain polygon, how can I get the normal vector of the transformed polygon from the normal vector of the domain polygon?

I will give the answer to the general question and then see how it applies in the absence of change of scale as well as in the presence of change of scale. First, it should be clear that the translation part of the affine mapping, i.e. the last row or column of the 4x4 matrix, is irrelevant to mapping the normals.
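To make the decomposition concrete, here is a minimal pure-Python sketch (the names L, t, and affine are mine, not from any particular API): the 3x3 linear part L and the translation vector t act on a point p as L*p + t.

```python
def mat_vec(M, v):
    """Multiply a 3x3 matrix (list of rows) by a 3-vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def affine(L, t, p):
    """Apply the affine mapping p -> L*p + t."""
    Lp = mat_vec(L, p)
    return [Lp[i] + t[i] for i in range(3)]

# Example: uniform scale by 2 followed by a translation along x.
L = [[2, 0, 0], [0, 2, 0], [0, 0, 2]]
t = [1, 0, 0]
print(affine(L, t, [1, 1, 1]))  # -> [3, 2, 2]
```

The (0,0,0,1) row or column of the 4x4 form is exactly what lets a single matrix product carry both L and t; the sketch just keeps them separate.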
This is because the very concept of "vector" involves independence of position: you can translate a vector anywhere in the space, keeping it always parallel to its original direction and of constant length, and it is considered the same vector. At every point, a plane has the same normal vector as at every other point. Translation has no effect on normal vectors, because translation maps a plane to a parallel plane, which has the same normal vector. Thus when considering how to find the normal vector of the transformed plane from the normal vector of the original plane, we only have to be concerned with the linear part of the affine transformation, the part that is represented by the upper left 3x3 matrix; call it L.

Let us make no assumptions about L except that it is invertible (as are all the transformations of the graphics pipeline, up to projection). Let P be a particular plane through the origin and let n be a unit normal vector to P. Then it can be shown that the unit normal vector to the transformed plane LP must be the normalization of transpose(inverse(L))n (where I am using the OperatorOnLeft convention). You can find a demonstration of this fact in FvDFH Sec. 5.7, so I won't repeat it here.

So in general, for ANY affine transformation with L as the 3x3 matrix of its linear part, to get the unit normal of the transform of a plane, you multiply the unit normal of the plane by transpose(inverse(L)) and then renormalize. For some applications, such as back face culling, it is not important that the normal have unit length, so you can leave off the renormalization. I think that most APIs give wrong lighting if you supply them with non-unit normal vectors.

Note first that if L is invertible, then so is transpose(L), and inverse(transpose(L)) = transpose(inverse(L)), which you can show by elementary linear algebra. Now suppose that L is a rotation (or more generally an orthogonal transformation).
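The general rule can be sketched in a few lines of pure Python (helper names like transform_normal are my own; a real engine would use its math library): build transpose(inverse(L)), apply it to n, and renormalize.

```python
import math

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def inverse3(M):
    """Invert a 3x3 matrix: adjugate (transpose of the cofactor matrix)
    divided by the determinant."""
    d = det3(M)
    # Cyclic-index trick: cof[i][j] is the signed (i,j) cofactor.
    cof = [[M[(i + 1) % 3][(j + 1) % 3] * M[(i + 2) % 3][(j + 2) % 3]
          - M[(i + 1) % 3][(j + 2) % 3] * M[(i + 2) % 3][(j + 1) % 3]
            for j in range(3)] for i in range(3)]
    return [[cof[j][i] / d for j in range(3)] for i in range(3)]

def transpose3(M):
    return [[M[j][i] for j in range(3)] for i in range(3)]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def normalize(v):
    m = math.sqrt(sum(x * x for x in v))
    return [x / m for x in v]

def transform_normal(L, n):
    """The general rule: map n through transpose(inverse(L)), renormalize."""
    return normalize(mat_vec(transpose3(inverse3(L)), n))

# Non-uniform scale of x by 2: the transformed normal is NOT parallel to n.
L = [[2.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
n = normalize([1.0, 1.0, 0.0])
print(transform_normal(L, n))  # approx [0.447, 0.894, 0.0]
```

For a rotation R, transform_normal(R, n) reduces to R n, which is the shortcut discussed next.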
Then, as we all know, inverse(L) = transpose(L), so transpose(inverse(L)) = transpose(transpose(L)) = L. That is, for rotations, or for affine transformations consisting of a rotation followed by a translation (i.e., most of the affine transformations we use in graphics), the correct operator to use on normal vectors is just L. And indeed if L is orthogonal then it can be shown (again, elementary linear algebra) that it preserves vector lengths, so there is no need to renormalize Ln. If L is an orthogonal transformation that is not a rotation, i.e. if L is a rotation combined with a reflection in a plane, then you have to watch out, because Ln might point to the "wrong side" of the image plane.

Now suppose that L is a uniform scaling by scale factor s. This means that in any coordinate system, L is diagonal with every diagonal element = s and the off-diagonal elements all 0; in other words L = sI, where I is the identity, transpose(L) = sI = L, and inverse(L) = (1/s)I. (Check it out.) So transpose(inverse(L)) = (1/s)I. When you multiply a vector n by this you get (1/s)n, which is really easy to renormalize: just multiply it by s. Or better yet, recognize from the start that the transformation has no effect whatever on the unit normal of any plane; the unit normals are all unchanged. I'm not sure what any particular API does in this case, but this tells you how to do the math.

Now suppose that L is a non-uniform scaling. Then there is some coordinate system with respect to which L has three diagonal elements sx, sy, sz and the off-diagonal elements are all zero. Call this matrix diag(sx, sy, sz). Notice that unlike the orthogonal transformations and the uniform scalings, a non-uniform scaling DOES NOT PRESERVE ANGLES, so it does not preserve perpendicularity of a line to a plane. Further, inverse(L) = diag(1/sx, 1/sy, 1/sz), transpose(L) = L, and transpose(inverse(L)) = inverse(L).
So the correct way to transform the normal vector is to multiply it by the matrix inverse(L) = diag(1/sx, 1/sy, 1/sz) and renormalize. Notice that this vector diag(1/sx, 1/sy, 1/sz)n = (nx/sx, ny/sy, nz/sz) is no longer parallel to n, so there is no shortcut; you actually have to do the normalization if you want accurate unit normal vectors to the transformed surface. Again, I am not sure what any particular API does here, but I have just given you the true math.

Note in particular that for a non-uniform scaling it is NOT CORRECT to apply the matrix L to the normals and then renormalize (as I think someone else suggests in this thread). You will get a unit vector that way, but it will no longer be perpendicular to the surface, so it won't even be useful for back face culling, let alone accurate lighting.

I could extend this to more general transformations, say adding shear, but I won't because it starts to get messy. In any case the true, correct, accurate formula to apply is to multiply the normals by the matrix transpose(inverse(L)) and renormalize. For a general linear transformation composed of some of these elementary transformations (rotations, uniform or non-uniform scalings, shears, etc.), you use the general identities transpose(AB) = transpose(B)transpose(A) and inverse(AB) = inverse(B)inverse(A), so that transpose(inverse(AB)) = transpose(inverse(B)inverse(A)) = transpose(inverse(A))transpose(inverse(B)). So for a general concatenation of transformations ABC...Z the correct way to transform the normals is to multiply them by transpose(inverse(A))...transpose(inverse(Z)) and renormalize. The above discussion helps you simplify the product for special cases.
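To make the warning concrete, here is a small pure-Python check (the variable names are mine, chosen for illustration): a tangent vector lying in the plane is transformed by L itself, and only the normal transformed by diag(1/sx, 1/sy, 1/sz) stays perpendicular to it.

```python
import math

def normalize(v):
    m = math.sqrt(sum(x * x for x in v))
    return [x / m for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

sx, sy, sz = 1.0, 1.0, 2.0          # non-uniform scale along z
n = normalize([0.0, 1.0, 1.0])      # unit normal of the original plane
t = [0.0, 1.0, -1.0]                # tangent vector in the plane: dot(n, t) == 0

Lt = [sx * t[0], sy * t[1], sz * t[2]]               # tangent after scaling by L
good = normalize([n[0] / sx, n[1] / sy, n[2] / sz])  # inverse-transpose rule
bad = normalize([sx * n[0], sy * n[1], sz * n[2]])   # WRONG: L applied to n

print(dot(good, Lt))  # approx 0.0 -- still perpendicular to the surface
print(dot(bad, Lt))   # approx -1.34 -- not a normal of the scaled plane
```

The "bad" vector is unit length, which is exactly why the mistake is easy to miss: everything looks normalized, but the direction is wrong.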