On Thu, Jun 9, 2011 at 1:24 PM, Derakon <derakon@gmail.com> wrote:
I have a program that displays 3D arrays of pixel data that have been
transformed (by XYZ translation, rotation about the Z axis, and
uniform scaling in X and Y -- so five parameters). When possible (i.e.
when the Z offset is 0), I use OpenGL to show the transformation, since
this is fast. However, when there is a Z transform, I manually construct a
transformation matrix with Numpy, invert it, and use it to map display
coordinates into the data to determine what needs to be shown. That
is, I have a bunch of XY coordinates, one for each pixel I want to
display (e.g. from [0, 0] to [512, 512]); I reverse-transform them,
which gives me a location in the dataset, and that gets me the value to
display.
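
In case it helps, here's roughly what the Numpy side is doing -- a
simplified sketch rather than my actual code; the names and the
512x512 / center-at-(256, 256) values are just for illustration:

import numpy as np

def make_transform(dx, dy, angle_deg, scale, cx, cy):
    """3x3 homogeneous 2D transform: rotate and scale about the image
    center (cx, cy), then translate by (dx, dy)."""
    t = np.radians(angle_deg)
    c, s = np.cos(t) * scale, np.sin(t) * scale
    rotate_scale = np.array([[c, -s, 0.],
                             [s,  c, 0.],
                             [0., 0., 1.]])
    to_origin = np.array([[1., 0., -cx], [0., 1., -cy], [0., 0., 1.]])
    from_origin = np.array([[1., 0., cx], [0., 1., cy], [0., 0., 1.]])
    translate = np.array([[1., 0., dx], [0., 1., dy], [0., 0., 1.]])
    # With column vectors, the rightmost matrix is applied to the points first.
    return np.dot(translate, np.dot(from_origin, np.dot(rotate_scale, to_origin)))

# Reverse-map every display pixel into dataset coordinates, then sample.
forward = make_transform(dx=20, dy=0, angle_deg=45, scale=1.0, cx=256, cy=256)
inverse = np.linalg.inv(forward)
ys, xs = np.mgrid[0:512, 0:512]
pixels = np.vstack([xs.ravel(), ys.ravel(), np.ones(xs.size)])  # 3 x N homogeneous
data_coords = np.dot(inverse, pixels)  # dataset location for each display pixel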

There's a problem here: the two approaches generate different results.
For example, here's my flat 512x512 test image (disregarding the Z
dimension), untransformed (the grey lines are for reference and are
not part of the image):
http://derakon.dyndns.org/~chriswei/temp2/1.png

Here's the image rotated 45 degrees and translated 20 pixels in X, in OpenGL:
http://derakon.dyndns.org/~chriswei/temp2/2.png

(Ignore the grey triangles; they're just a display artifact)

And here's that same image with the transformation applied via Numpy:
http://derakon.dyndns.org/~chriswei/temp2/3.png

The last example is the behavior I actually want -- rotation should be
about the center of the image, and applied before translation occurs.
However, that's not what I'm getting from my OpenGL transformation
code. I've put up a paste comparing the two approaches and what I get
when I print their transformation matrices, here:
http://pastebin.com/j31iLGbT

Any ideas what I'm doing wrong here? I note that if I swap the order
of the glRotated and glTranslated calls, then I get OpenGL matrices
that look more like what I'm getting from Numpy, but of course the
rotation is no longer about the center of the image, so the results
are even more off.
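
For what it's worth, the ordering difference is easy to reproduce
outside OpenGL. OpenGL post-multiplies the current matrix, so calling
glTranslated and then glRotated leaves T.R on the modelview stack,
which rotates vertices first and then adds the offset; reversing the
calls leaves R.T, which rotates the translation offset along with
everything else. A standalone comparison (illustrative values, not the
code from the paste):

import numpy as np

def rot_z(angle_deg):
    t = np.radians(angle_deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])

def trans(dx, dy):
    return np.array([[1., 0., dx], [0., 1., dy], [0., 0., 1.]])

# glTranslated(20, 0, 0) followed by glRotated(45, 0, 0, 1) leaves T.R on the
# modelview matrix: vertices are rotated first, then shifted by (20, 0).
print(np.dot(trans(20, 0), rot_z(45)))

# Issuing the calls in the other order leaves R.T: the offset is applied
# first and then rotated with everything else.
print(np.dot(rot_z(45), trans(20, 0)))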

Switch lines 11 and 12 (in the pastebin).