basic help needed for new programmer

HJL
2007-10-13
  • HJL
    2007-10-13

    Hi

    I'm neither a programmer nor a mathematician (although I do have a science background) and I need basic help with 3D geometry. I've read all of the excellent Martin Baker site but still find the maths a bit much.

    In my spare time I program in C++ to write simple games for my smartphone. I decided to try and do something with wireframe graphics. I am struggling to control the rotation of objects in the way that I want.

    The space I am modelling uses 3D Cartesian coordinates and vectors, with the positive directions being x=right, y=forward, z=up. Each entity in space is a list of triangles, and each triangle has an origin and two vectors (variables named “edge1” and “edge2”) used to find the vertices; from there I draw lines between the three vertices. The starting point, the origin, is the variable "position", which is a 3D coordinate.

    Each entity also has an "angle", made up of x,y and z parts. All the vectors (from origin to vertex) within the entity are rotated according to this angle. I modified some code I found on the web to do this:

    //START OF CODE

    double sx=sin(angle.x), sy=sin(angle.y), sz=sin(angle.z);
    double cx=cos(angle.x), cy=cos(angle.y), cz=cos(angle.z);

    //create a 3x3 matrix “mat” with 1’s down the diagonal and the rest 0’s
    identity();

    //note there is a fourth column in the matrix which stores the “origin”

    //rotate each triangle using rotate formula matrix (all triangles rotated to same angle)
    //to convert the end of each edge into a vertex position
    mat[0][0]=cy*cz+sy*sx*sz;      mat[1][0]=-cy*sz+cz*sy*sx;     mat[2][0]=cx*sy;
    mat[0][1]=cx*sz;               mat[1][1]=cx*cz;               mat[2][1]=-sx;
    mat[0][2]=-cz*sy+cy*sx*sz;     mat[1][2]=sy*sz+cy*cz*sx;      mat[2][2]=cy*cx;

    for (int loop=0; loop<num_triangs; loop++)
    {
        //enter the triangle position into the matrix
        //(for current test-run all triangles have same position)
        triang = &triang_list[loop];
        mat[3][0]=triang->position.x;
        mat[3][1]=triang->position.y;
        mat[3][2]=triang->position.z;
       
        //instead of looping through 3 vertices
        //just reiterate each corner, starting with the origin
        triang->origin.x = mat[3][0];//(as if vector is all zeroes)
        triang->origin.y = mat[3][1];
        triang->origin.z = mat[3][2];
       
        //now vertex1
        triang->vertex1.x = triang->edge1.x * mat[0][0] + triang->edge1.y * mat[1][0] + triang->edge1.z * mat[2][0] + mat[3][0];
        triang->vertex1.y = triang->edge1.x * mat[0][1] + triang->edge1.y * mat[1][1] + triang->edge1.z * mat[2][1] + mat[3][1];
        triang->vertex1.z = triang->edge1.x * mat[0][2] + triang->edge1.y * mat[1][2] + triang->edge1.z * mat[2][2] + mat[3][2];

        //now vertex2
        triang->vertex2.x = triang->edge2.x * mat[0][0] + triang->edge2.y * mat[1][0] + triang->edge2.z * mat[2][0] + mat[3][0];
        triang->vertex2.y = triang->edge2.x * mat[0][1] + triang->edge2.y * mat[1][1] + triang->edge2.z * mat[2][1] + mat[3][1];
        triang->vertex2.z = triang->edge2.x * mat[0][2] + triang->edge2.y * mat[1][2] + triang->edge2.z * mat[2][2] + mat[3][2];

        //and flag the triangle as having been rotated,
        triang->rotated = true;
        //to allow the main routine to call getX,Y and Z
    }

    //END OF CODE

    So far, so good? Actually no, because the x, y and z angles behave rather strangely. I want to be able to rotate objects using pitch, yaw and roll, and hoped that x, y and z would correspond to these, but they only work at the base position of angle (0,0,0) [equivalent to orientation along unit vector (0,1,0)]. They appear to behave as follows:

    Angle.z is consistent whatever the heading. It always acts as a yaw, i.e. the orientation swings left and right relative to the present orientation.

    Angle.x rotates about the x axis at the start (0,0,0) in a pitching motion. However if I yaw 90 deg to the left or right, angle.x becomes a roll. So it is still rotating around the x axis but I want it to now rotate around the y axis. Even more confusing, if I first pitch down (to the “south pole”) using angle.y and then change angle.x, this x rotation becomes a yaw about the y axis.

    Angle.y does something similar to angle.x. It starts as a roll about the y axis. If I dip down to the south pole it becomes a yaw, rotating about the y axis still. If I start from (0,0,0), yaw to the right, then angle.y continues to move around the y axis, now acting as a pitch.

    In between the 90 deg directions, various intermediate things occur.

    So, can anyone help?? As I said I’ve tried to understand the problem by reading on the web but I am stuck. My thoughts are that maybe I shouldn’t be using this angle system in the first place because it is so dependent on where your object is oriented and how it got there. (Something to do with Euler angles?) But I don’t know what to use instead of the code above to rotate the vectors.

    Maybe there is a simple way of converting pitch, yaw and roll controls into the angles that I am using?

    Maybe, when obeying key commands, I need to convert the current orientation into a vector, rotate about that vector according to the instruction, then convert back into the angle format? But I don’t know how to do the rotation about the vector /or/ the conversion back to the angle system.

    I would be very appreciative of any advice.

    Thanks,
    Lewie

     
    • Martin Baker
      2007-10-13

      Hi Lewie,

      Looks like you are nearly there. I'm not sure exactly what the problem is, but a couple of things occurred to me on first reading your message:
      > each triangle has an origin and two vectors (variables named “edge1” and “edge2”)

      This seems to imply that you are rotating each triangle about its own axes and then adjusting the origin? I guess it could be done that way but, if I have understood correctly, I think you may be making things unnecessarily hard for yourself?

      I think what is usually done is this: the vertices of each triangle are stored in the coordinate system of the object you are transforming. If we assume you are modelling, say, an aircraft, then choose a coordinate system: x along the fuselage, y along the wing, or whatever. Then encode all the vertices for that object in that coordinate system directly. To rotate the object, multiply each vertex (vector) by the rotation matrix to give the transformed vertex (vector). This rotates the whole object around its origin. You can then offset the aircraft to where you want it on the screen by adding a fixed vector to each vertex (or, equivalently, by using a 4x4 matrix as you describe).
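
      Just to make that concrete, here is a rough sketch in C++ (the Vec3 type and the function names are only for illustration, and I'm assuming the column-vector convention world = R * local + position):

          struct Vec3 { double x, y, z; };

          // take one vertex stored in the object's own coordinate system,
          // rotate it by the object's rotation matrix, then offset it
          Vec3 transformVertex(const double R[3][3], const Vec3& local, const Vec3& position)
          {
              Vec3 world;
              world.x = R[0][0]*local.x + R[0][1]*local.y + R[0][2]*local.z + position.x;
              world.y = R[1][0]*local.x + R[1][1]*local.y + R[1][2]*local.z + position.y;
              world.z = R[2][0]*local.x + R[2][1]*local.y + R[2][2]*local.z + position.z;
              return world;
          }

          // one rotation matrix and one offset place the whole object
          void transformObject(const double R[3][3], const Vec3& position,
                               const Vec3* localVerts, Vec3* worldVerts, int count)
          {
              for (int i = 0; i < count; ++i)
                  worldVerts[i] = transformVertex(R, localVerts[i], position);
          }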

      I strongly agree with you that it is best to avoid Euler angles if possible. Sometimes there are angles implied in the situation itself; for example, in the case of an aircraft, the control surfaces imply rotation around certain axes. In this case I think you are doing the right thing in converting the angle information to matrix form as soon as possible. The thing to keep in mind is that, when combining rotations, order is important: if you want yaw, then roll, then pitch you will need a different matrix than if you want yaw, then pitch, then roll, and these angles are relative to the aircraft, not the ground.
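
      For example, a small sketch (the names and angle values are made up for the example) just to show that the two orders give different matrices:

          #include <cmath>
          #include <cstdio>

          // out = a * b for 3x3 matrices
          void mul3(const double a[3][3], const double b[3][3], double out[3][3])
          {
              for (int r = 0; r < 3; ++r)
                  for (int c = 0; c < 3; ++c)
                      out[r][c] = a[r][0]*b[0][c] + a[r][1]*b[1][c] + a[r][2]*b[2][c];
          }

          int main()
          {
              double c = cos(1.0), s = sin(1.0);
              // yaw: rotation about the z (up) axis; pitch: rotation about the x (right) axis
              double yaw[3][3]   = {{ c,-s, 0},{ s, c, 0},{ 0, 0, 1}};
              double pitch[3][3] = {{ 1, 0, 0},{ 0, c,-s},{ 0, s, c}};

              double yawThenPitch[3][3], pitchThenYaw[3][3];
              mul3(pitch, yaw, yawThenPitch);   // yaw applied first (rightmost factor)
              mul3(yaw, pitch, pitchThenYaw);   // pitch applied first
              printf("%f  %f\n", yawThenPitch[1][0], pitchThenYaw[1][0]);   // the two differ
              return 0;
          }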

      Rather than go on and write a long message about all the things that could be wrong, perhaps I should check with you to see if I am on the right track?

      Martin

       
      • HJL
        2007-10-14

        Thanks Martin,

        I am one step ahead and have ditched the Euler angles entirely (you have confirmed that is what I was using and that they were not really up to the job). I've switched over to trying to use matrices only for the rotations and using the functions you provide to do the necessary conversions and calculations. I can rotate the object nicely in front of a fixed camera, and am now doing the code to move the camera independently. So, a lot of progress on that front.

        Regarding the encoding of the object, the way I have done it makes it easy for me to "join the dots" once I've found where the points are in space. I will try to simplify/speed up the coding of the objects once I've got the rotation code sorted out.

        I had hoped to use quaternions as they seemed interesting, but I couldn't quite understand them, particularly their relationship with axis-angle notation. I have some queries which I hope are straightforward for you to answer, even if the questions are a bit long:

        My understanding is that axis-angle is encoded as an axis, which is a 3D vector (x,y,z) describing the way that the object is pointed, and an angle, which is a rotation about the axis. If it was an aeroplane, the axis would run along the middle of the fuselage and the plane would be banking according to the angle. Am I right so far?

        Query 1:
        On the page on converting axis-angle to quaternions, you state that
        qx = ax * sin(angle/2)
        qy = ay * sin(angle/2)
        qz = az * sin(angle/2)
        qw = cos(angle/2)
        This would mean that if angle=0, all of qx, qy and qz are also zero (because sin(0)=0). This doesn’t make sense to me because it means that if angle=0 (which I take to mean that the aircraft is not banking) then the axis is of no consequence as the resulting quaternion is always the same!
        For example, an aircraft flying to the east (positively along the x axis) without banking would have an axis of (1,0,0) and angle 0. The resulting quaternion (qw,qx,qy,qz) would be (1,0,0,0).
        Another aircraft orientated at 90 degrees to the first, still flying level but now heading north (along the y axis) would have axis (0,1,0) and angle (0). But the resulting quaternion is exactly the same: (1,0,0,0). So a lot of information has been lost.
        Converting the quaternion (1,0,0,0) back to axis-angle gives problems because it is at a singularity, so we have to set the axis to an arbitrary value and thus lose the information about the axis that we put in before the conversion!

        What is the solution to this in programming? Do we have to avoid allowing angle to ever reach zero? Or if it does reach zero do we have to store axis information somewhere else in order to put it back later?

        Query (2)
        On the page about axis-angle notation http://www.euclideanspace.com/maths/geometry/rotations/axisAngle/index.htm
        you give examples of possible orientations, with diagrams of an aeroplane and the corresponding axis-angle notation. The first one is a plane travelling to the right, with angle=0 and axis=1,0,0. So far so clear.

        The next diagram is a plane travelling either towards or away from the viewer, I’m not sure which. The information given is angle=90, axis=0,1,0. How is this so? I would have thought that the angle was 0, with axis 0,1,0 ?! Ninety degrees to what? If it is 90 degrees to the original position, about the vertical axis, then why has the axis also changed? There is redundant information here, in that a change in orientation in a single dimension has changed 2 of the variables.

        Your next diagram is of a plane pointing left, and now the axis has not changed from the last one (still 0,1,0) but the angle is 180. The movement between the 1st and 2nd diagrams is the same as between the 2nd and 3rd diagrams, yet the adjustments to the axis-angle are completely different! In one 90 degree change in heading, the angle changed 90 degrees and the axis changed also. In the second identical change in heading, the angle changed +90 but the axis did not change. Please explain this to me.

        Query (3)
        Moving to the next row of aeroplane diagrams, it gets even more confusing. The first in the line is a plane pointing straight up with an angle of 90 and an axis of (0,0,1). I understand this one. The second plane in the row has rotated about its long axis by 90 degrees, but has apparently achieved only a 30 degree change in angle and a radical change of axis. The axis is now (sqrt(1/3), sqrt(1/3), sqrt(1/3)), which I thought would mean that it is pointing somewhere off diagonally with respect to all axes?! I thought that this plane's orientation should be angle=180 axis=(0,0,1). Please can you explain where the figures come from?

        Thanks for your time and attention,

        Lewie

         
    • Martin Baker
      2007-10-14

      Lewie,

      > My understanding is that axis-angle is encoded as an axis, which is a 3D
      > vector (x,y,z) describing the way that the object is pointed, and an angle,
      > which is a rotation about the axis. If it was an aeroplane, the axis would
      > run along the middle of the fuselage and the plane would be banking
      > according to the angle. Am I right so far?

      Yes quite right.

      > Query 1:
      Yes, you are right. I think the thing to keep in mind is that the quaternion defines a 3D rotation only; it's not trying to store any other information. This is actually an advantage: a rotation of 0 deg around the x-axis represents the same rotation as 0 deg around the y-axis, so it's good that they are represented by the same quaternion. In the case of axis-angle the axis is redundant when the angle is zero, so we don't have to avoid it; we can just set the axis to an arbitrary value.
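
      In code that might look something like this (a sketch only; the struct names are made up for the example, and the axis is assumed to be already normalised):

          #include <cmath>

          struct Quat      { double w, x, y, z; };
          struct AxisAngle { double ax, ay, az, angle; };

          // axis-angle to quaternion, as on the conversion page
          Quat axisAngleToQuat(const AxisAngle& aa)
          {
              double s = sin(aa.angle / 2);
              Quat q = { cos(aa.angle / 2), aa.ax * s, aa.ay * s, aa.az * s };
              return q;
          }

          // quaternion back to axis-angle; when the angle is (near) zero the
          // axis is undefined, so we just pick an arbitrary unit axis
          AxisAngle quatToAxisAngle(const Quat& q)
          {
              AxisAngle aa;
              aa.angle = 2 * acos(q.w);
              double s = sqrt(1 - q.w * q.w);          // = |sin(angle/2)|
              if (s < 1e-6) { aa.ax = 1; aa.ay = 0; aa.az = 0; }
              else          { aa.ax = q.x / s; aa.ay = q.y / s; aa.az = q.z / s; }
              return aa;
          }
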
      If you need to know the direction of the fuselage, take the vector representing its direction before it has been rotated, say,
      v = 0,0,1
      (this will depend on the coordinate system and the initial orientation of the aircraft)
      Its direction after the rotation is:
      v2 = q*v*q'
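
      A sketch of that in code (the Quat and Vec3 types are just for this example; q is assumed to be a unit quaternion and q' its conjugate):

          struct Quat { double w, x, y, z; };
          struct Vec3 { double x, y, z; };

          // standard quaternion (Hamilton) product a*b
          Quat mul(const Quat& a, const Quat& b)
          {
              Quat r;
              r.w = a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z;
              r.x = a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y;
              r.y = a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x;
              r.z = a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w;
              return r;
          }

          // v2 = q * v * q'
          Vec3 rotate(const Quat& q, const Vec3& v)
          {
              Quat p  = { 0, v.x, v.y, v.z };          // the vector as a pure quaternion
              Quat qc = { q.w, -q.x, -q.y, -q.z };     // the conjugate q'
              Quat r  = mul(mul(q, p), qc);
              Vec3 v2 = { r.x, r.y, r.z };
              return v2;
          }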

      > Query (2)
      > The information given is angle=90, axis=0,1,0. How is this so?

      Again axis-angle is only intended to represent a 3D rotation. So this represents a ninety degree rotation around the y-axis. This rotation only defines the position of the fuselage relative to its position before rotation.

      > Your next diagram is of a plane pointing left, and now the axis has not
      > changed from the last one (still 0,1,0) but the angle is 180. The
      > movement between the 1st and 2nd diagrams is the same as between
      > the 2nd and 3rd diagrams,

      The diagrams are supposed to represent the rotation relative to the aircraft's initial position (when angle = zero), not relative to the position shown next to it. I should have made that clearer on the page.

      > Query (3)

      Same again, each picture is relative to its position before it has been rotated.

      I would welcome any ideas to make the web pages clearer. I'll put a better explanation on the diagrams of aeroplane orientation; let me know if there is anything else that I can do?
      Also I'm sure lots of people who are new to the subject will have similar questions to these, so I would like to put a link to this thread, if that's all right with you?

      Martin

       
      • HJL
        2007-10-14

        By all means, you can let anyone else in on this discussion if you wish.

        I understand your reply to query(1) - thanks.

        I think I get the answer to my other queries... It would seem that the values describing a rotation make more sense if you know where you started from, because the end result of a rotation is always relative to some previous orientation. Any given orientation has an infinite number of possible rotations that led to that state.

        Regarding the programming, what I have now is an object in space that I can rotate any way I like using the controls. Unfortunately, when I try to tilt the camera it all gets a bit twisted, for reasons I have not yet figured out. But... it's Sunday evening, it's time to switch off... I'll get back to this one next week.

        Lewie

         
    • HJL
      2007-10-24

      Dear Martin,

      I'm a bit stuck again. The problem is now with moving the "camera", or rather, figuring out what the world "looks like" with a new camera orientation. If this topic has already been covered, I apologise for asking again and perhaps you could point me to the right thread...

      The entities in space have their rotation from their base position encoded in a matrix, so when I want to display an entity I just work out where its various vertices have ended up and plot them. I have tried to use the same class of object for the camera as well, whereby the orientation of the camera is also coded in a matrix. The controls of the camera are equivalent to pitch, roll and yaw. I know that the code for controlling the movements is okay because when applied to the object being looked at, rather than the camera, the object behaves as expected.

      To plot the object's position by moving the camera, I tried to use the following steps:
      (1) Work out where the object's vertices are now on 3D cartesian axes.
      (2) By subtracting the camera's position, place those points on axes where the camera is at (0,0,0).
      (3) Now express the direction to the points as a vector (which is = to the coordinates on the camera-centred axes).
      (4) Rotate the vectors according to the matrix which stores the camera's current orientation (equivalent to its rotation away from the original position).

      At first I thought the only bug would be that the camera would move in the opposite direction from expected. Instead it is a bit strange because as the camera goes through "rolling" (about the Y-axis, the front-to-back axis), the object sort of spirals around as if it is being twisted about its long axis. At 180 degrees the object is rotated to where one would expect to see it, but at 90 degrees it is both 90 degrees to the camera about the Y-axis and also 90 degrees about its own Y-axis. (Hope you can visualise this).

      Obviously I am going through the wrong series of steps or the wrong type of conversions to code the camera's rotation. I think I am in some way trying to make a matrix encode both a rotation and an orientation at the same time. But I can't figure out what I should be doing. Can you give me any hints or tips about how this should be done?

      Thanks for your advice,
      Lewie

       
      • nhughes
        2007-10-25

        Can you outline to me what you are trying to do?  I have been pointing sensors (like cameras) at objects in space from spacecraft for 25 years and can probably help you with your problem.

        Do you have an x-y-z position of the camera and an x-y-z location of the object you're trying to point at, presumably with the camera roll position such that "up" for the camera is in the right place?  Are you trying to get the quaternion that describes this?

        This is a well practiced operation.  Let me know if this is what you're trying to do, or if you're trying to do something else and I'll probably be able to give you the algorithm.

        Noel Hughes
        nhughes1ster@gmail.com

         
        • HJL
          2007-10-27

          Better... I followed Martin's advice about inverting the matrix and this got rid of the strange corkscrewing effect.

          Not quite there... After the camera has "rolled", the up-down and left-right swinging (pitch and yaw) are supposed to remain the same /from the user's point of view/. Instead, they continue to change the orientation about the same axes as before, thus the up-down swing will always be in terms of the "outside world" and not the viewer. (Except at the neutral position where the viewer is in line with the outside world.)

          The other problem is that if I point the camera slightly away from the object by yawing, so that the object is off-centre, and then roll the camera, one would expect the object to travel in a wide circle relative to the viewer's centre of vision. Instead the object just turns around in the same point on the screen.

          In answer to Noel, it sounds like you might be able to help here. What I am trying to do is write a little wireframe sim in C++ to be run on a PDA/smartphone. I've done games before but only 2D classic arcade things and this is my first foray into 3D.

          I'm using matrices not quaternions.

          The wireframe objects are represented by a number of points, each of which has a position in 3D cartesian coordinates. (x=left-right, y=forward-back, z=up-down). The camera also has a position in the 3D space. The camera's controls are to correspond to pitch, roll and yaw. To change the camera's orientation I put values in variables ax, ay or az (depending on which control is touched). Then I create a matrix from these as follows:

              double sx=sin(ax), sy=sin(ay), sz=sin(az);
              double cx=cos(ax), cy=cos(ay), cz=cos(az);

              mat[0][0]=cy*cz+sy*sx*sz;  mat[1][0]=-cy*sz+cz*sy*sx; mat[2][0]=cx*sy;
              mat[0][1]=cx*sz;           mat[1][1]=cx*cz;           mat[2][1]=-sx;
              mat[0][2]=-cz*sy+cy*sx*sz; mat[1][2]=sy*sz+cy*cz*sx;  mat[2][2]=cy*cx;

          I then multiply this matrix by the current orientation to produce a new orientation.

          I think what I might be doing wrong is confusing orientations with rotations?

          Also I think that to solve my first problem above, I need to set up this matrix in a different way so that it takes into account the current camera orientation because at the moment it is stuck to the axes of the "outside world". Perhaps the matrix should actually be a lot simpler to calculate than the algorithm above...?

          L

           
          • nhughes
            2007-10-28

            I strongly recommend switching to quaternions.  Using quaternions you can perform cross and dot products and immediately generate the required quaternions for camera pointing, etc. 

            To create the quaternion to point the camera with the camera "up" vector pointed up:

            given:

            Vc = position vector of the camera
            Vp = position vector of the target to be viewed

            Assuming the boresight of the camera is its x axis

            calculate the camera to target vector and normalize it

            Vcp = (Vp - Vc)/|(Vp - Vc)|

            calculate camera pointing vector by doing the cross product of [1 0 0] and Vcp to get the rotation axis, dot product to get angle

            Vpp = [1 0 0] X Vcp/|[1 0 0] X Vcp|  (normalized)

            angle = A = acos([1 0 0] dot Vcp)

            quaternion that points the camera at the target

            Qp = Vpp sin(A/2) , cos(A/2)  (this is using the scalar last convention)

            Getting the roll rotation to put the camera with "up" in the right place requires a little more quaternion math that I can give you if you want to follow up on this.
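
            Written out as code, those steps look roughly like this (the Vec3 helpers are only for the sketch, quaternion stored scalar-last as above; the degenerate case where the boresight already points at the target would need a guard):

                #include <cmath>

                struct Vec3 { double x, y, z; };

                Vec3   sub(Vec3 a, Vec3 b)   { Vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
                double dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
                Vec3   cross(Vec3 a, Vec3 b) { Vec3 r = { a.y*b.z-a.z*b.y, a.z*b.x-a.x*b.z, a.x*b.y-a.y*b.x }; return r; }
                Vec3   unit(Vec3 a)          { double m = sqrt(dot(a, a)); Vec3 r = { a.x/m, a.y/m, a.z/m }; return r; }

                // quaternion stored scalar-last: q[0]=x, q[1]=y, q[2]=z, q[3]=w
                // rotates the camera boresight [1 0 0] onto the camera-to-target direction
                void pointingQuat(Vec3 Vc, Vec3 Vp, double q[4])
                {
                    Vec3 boresight = { 1, 0, 0 };
                    Vec3 Vcp  = unit(sub(Vp, Vc));            // camera-to-target, normalised
                    Vec3 Vpp  = unit(cross(boresight, Vcp));  // rotation axis
                    double A  = acos(dot(boresight, Vcp));    // rotation angle
                    q[0] = Vpp.x * sin(A / 2);
                    q[1] = Vpp.y * sin(A / 2);
                    q[2] = Vpp.z * sin(A / 2);
                    q[3] = cos(A / 2);
                }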

            My advice is to try to visualize an orientation and how to generate it using fundamental rotations, represent each of those rotations as I did, above, and stack up, or successively multiply quaternions, to get the orientation you need.  Simply trying to find the right equation or expression or matrix for what you're looking for is very dangerous.  The equation you find may work in one or two or a hundred test cases but not in the hundred and first.  You run the same risk with any method, but generating orientations from fundamental rotations that you understand is much more robust.

            Rotations and orientations are the same thing so it is easy to get confused.  An orientation is the rotation that takes your rotated object from having its attached coordinate frame aligned with the reference frame to its final orientation.

            I also have some files on basic quaternion operations that I can email you if you would like to have them.

            Good luck.
            Noel

             
          • nhughes
            2007-10-28

            I looked again at your email and have some thoughts about your issue of yawing and then rolling.  If, after the yaw, a rolling motion leaves the object in the same location in the camera field of view, you are rotating about the camera-to-object vector, not around the camera boresight.  If you were using quaternions, the initial yaw motion would be accomplished by multiplying the initial, object-pointing quaternion by the quaternion [0 0 sin(yaw angle/2) cos(yaw angle/2)].  To accomplish the roll, multiply this quaternion by [sin(roll angle/2) 0 0 cos(roll angle/2)].  This second multiplication rotates the camera about its roll axis (I'm assuming the x axis is the roll/boresight vector).
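
            In code, that sequence might look roughly like this (scalar-last storage again; qmul is the standard quaternion product, and which side you multiply on depends on your conventions, so treat this as a sketch):

                #include <cmath>

                // quaternion stored scalar-last: q[0]=x, q[1]=y, q[2]=z, q[3]=w
                void qmul(const double a[4], const double b[4], double out[4])
                {
                    out[0] = a[3]*b[0] + a[0]*b[3] + a[1]*b[2] - a[2]*b[1];
                    out[1] = a[3]*b[1] - a[0]*b[2] + a[1]*b[3] + a[2]*b[0];
                    out[2] = a[3]*b[2] + a[0]*b[1] - a[1]*b[0] + a[2]*b[3];
                    out[3] = a[3]*b[3] - a[0]*b[0] - a[1]*b[1] - a[2]*b[2];
                }

                // start from the object-pointing quaternion, apply a yaw, then a roll
                void yawThenRoll(const double qPoint[4], double yaw, double roll, double qOut[4])
                {
                    double qYaw[4]  = { 0, 0, sin(yaw / 2),  cos(yaw / 2)  };   // about z
                    double qRoll[4] = { sin(roll / 2), 0, 0, cos(roll / 2) };   // about x (boresight)
                    double tmp[4];
                    qmul(qPoint, qYaw, tmp);
                    qmul(tmp, qRoll, qOut);
                }
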
            Noel

             
    • Martin Baker
      2007-10-24

      Lewie,

      One thing that occurred to me is this: as you say, the objects appear to move in the opposite direction to the camera, but it's more than just that, everything is reversed! So to move the camera you need to use the inverse of the matrix you would use to move the objects. To invert a 3x3 rotation matrix, swap the columns and rows (in other words, reflect the elements of the matrix about the leading diagonal).
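
      As a sketch (assuming the column-vector convention, where the camera's matrix takes camera coordinates to world coordinates):

          struct Vec3 { double x, y, z; };

          // invert a pure rotation matrix by reflecting it about the leading diagonal
          void invertRotation(const double m[3][3], double out[3][3])
          {
              for (int r = 0; r < 3; ++r)
                  for (int c = 0; c < 3; ++c)
                      out[r][c] = m[c][r];
          }

          // world point -> camera coordinates: subtract the camera position,
          // then rotate by the INVERSE of the camera's orientation matrix
          Vec3 worldToCamera(const double camRot[3][3], Vec3 camPos, Vec3 p)
          {
              double inv[3][3];
              invertRotation(camRot, inv);
              Vec3 d = { p.x - camPos.x, p.y - camPos.y, p.z - camPos.z };
              Vec3 v;
              v.x = inv[0][0]*d.x + inv[0][1]*d.y + inv[0][2]*d.z;
              v.y = inv[1][0]*d.x + inv[1][1]*d.y + inv[1][2]*d.z;
              v.z = inv[2][0]*d.x + inv[2][1]*d.y + inv[2][2]*d.z;
              return v;
          }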

      Martin

       
    • Martin Baker
      2007-10-28

      Noel,

      Can I put a question in here?

      I agree with you about the benefits of quaternions but I would be interested in your thoughts on some practical issues:
      1) People writing games often need to mix translations with rotations (rotating about a point other than the origin, move objects, pan across a scene, etc.).
      2) People writing games often need to render the scene using a projective transformation (or using OpenGL or DirectX to do it - which use matrices).
      3) If calculating the physics we may need to calculate the inertia tensor.
      Do you have any suggestions about the best way to do these things?

      Also, I would be interested to see more details on the downsides of matrices: they contain more numbers, which is a disadvantage, and rounding errors can make them de-orthogonalised, which is hard to correct, but as I understand it they don't have singularities or problems like that?

      Martin

       
      • nhughes
        2007-10-28

        Thanks for the reply.

        In answer to your points:

        1) Mixing translations with rotations is pretty much transparent to what orientation system (quaternions or matrices) you're using.  (except for Euler angles, which are a ploy of the devil!)  All the operations you can do with matrices you can also perform with quaternions.

        2) I don't know anything about OpenGL or DirectX.

        3) The inertia matrix is an inherent property of a body, like mass, density, etc.  The definition of the inertia matrix is Iij = integral of XiXjdm over the volume of the body.  (Xi and Xj are the coordinates of the dm, a differential of mass, so Xi and Xj = either the x, y, or z component)

        Like quaternions, direction cosine matrices don't have singularities, as Euler angles do.  Quaternions are easier to work with for a number of reasons.  As you said, there are fewer numbers; as I mentioned in an earlier email, vector functions, the cross and dot product most prominently, lend themselves to quaternion generation and preserve the intuitive "feel" of rotations.  This last aspect is more my own view than one that can be explicitly proven.  A quaternion can be easily normalized and orthogonality is never an issue, since a quaternion only describes a single vector, not three, and an angle.

        The bottom line, if matrices are needed, is that one can perform all calculations in quaternions, taking advantage of their inherent benefits, and then calculate the matrix equivalent to the quaternion when a matrix is needed (for interfacing, etc).
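
        For reference, that last step might look something like this (scalar-last storage, column-vector convention assumed):

            #include <cmath>

            // quaternion stored scalar-last: q[0]=x, q[1]=y, q[2]=z, q[3]=w
            void normalizeQuat(double q[4])
            {
                double m = sqrt(q[0]*q[0] + q[1]*q[1] + q[2]*q[2] + q[3]*q[3]);
                for (int i = 0; i < 4; ++i) q[i] /= m;
            }

            // standard unit-quaternion to rotation-matrix conversion
            void quatToMatrix(const double q[4], double m[3][3])
            {
                double x = q[0], y = q[1], z = q[2], w = q[3];
                m[0][0] = 1 - 2*(y*y + z*z);  m[0][1] = 2*(x*y - w*z);      m[0][2] = 2*(x*z + w*y);
                m[1][0] = 2*(x*y + w*z);      m[1][1] = 1 - 2*(x*x + z*z);  m[1][2] = 2*(y*z - w*x);
                m[2][0] = 2*(x*z - w*y);      m[2][1] = 2*(y*z + w*x);      m[2][2] = 1 - 2*(x*x + y*y);
            }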

        Noel

         
    • Martin Baker
      2007-10-29

      Noel,

      Thanks very much for your reply; the topic of which rotation algebra to use often comes up and it is good to get your perspective.

      When I think of combining rotations and translations I often assume that we would want to combine them into one multiplication operation (using 4x4 matrix or dual quaternion algebra) but as you say this may not be the best approach. With object oriented programming we can easily make an object from a quaternion and a translation vector which has benefits. I guess it depends on each individual application to work out how often they would have to convert to matrices to decide on the best approach.

      Just for my own curiosity (this is now going away from the original topic, so feel free to ignore it), I was wondering how you work out how to rotate a spacecraft. I am assuming:
      * there are pairs of gas nosiles about various axes?
      * the mass distribution of the spacecraft may not be symmetrical?
      So do you work out the rotation you want using quaternions, then choose what angular acceleration and deceleration you want, then convert this to torque using the inertia matrix?

      Martin

       
      • nhughes
        2007-10-30

        I'm not familiar with nosiles. (nozzles?)

        The extremely simplified process of controlling a spacecraft is:

        1) determine the current attitude/angular velocity using some combination of star trackers, gyroscopes, Earth sensors, Sun sensors, magnetometers, etc.

        2) find the error between the current attitude/vel and the commanded attitude/vel (where you are vs where you want to be)

        3) determine the angular acceleration vector, a, required to move the attitude toward the commanded attitude/vel

        4) using the equation T = Ia
            where T is torque, I is the inertia matrix and a is the angular acceleration

        5) then calculate what commands to send to the torque effectors, thrusters, reaction wheels, control moment gyros, magnetic torquers, etc. to generate the required torque.

        6) start over at 1)

        Do this for the lifetime of the spacecraft.  This system is not dependent on the inertia matrix being symmetrical; it is very rare for a spacecraft to have a symmetric inertia matrix.  Real spacecraft don't look like they do in Star Wars.
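
        Step 4 above is just a matrix-vector product; as a sketch:

            struct Vec3 { double x, y, z; };

            // T = I * a : torque needed to produce the angular acceleration a
            Vec3 requiredTorque(const double I[3][3], Vec3 a)
            {
                Vec3 T;
                T.x = I[0][0]*a.x + I[0][1]*a.y + I[0][2]*a.z;
                T.y = I[1][0]*a.x + I[1][1]*a.y + I[1][2]*a.z;
                T.z = I[2][0]*a.x + I[2][1]*a.y + I[2][2]*a.z;
                return T;
            }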

        As far as combining rotations and position into one "object" there is no reason to do this other than computer science aspects.  You can create any kind of software edifice to effect this object construct but it doesn't reflect any physical requirement or entity.

        As far as whether you use quaternions goes, the above process is independent of what attitude description you use.

        Noel

         
    • Martin Baker
      2007-10-30

      Noel,

      Thanks very much - very interesting.

      > As far as combining rotations and position into one "object" there
      > is no reason to do this other than computer science aspects.  You
      > can create any kind of software edifice to effect this object
      > construct but it doesn't reflect any physical requirement or entity.

      Ideally "objects" should reflect physical reality, with the aim of allowing us to scale up to model complex objects more easily, for example humanoids or other jointed systems (one of many examples). Virtually all the programs, software libraries and rendering interfaces that I have seen do this by modelling the world as a 'scene graph' or tree structure, where each node may be a transform (combined rotation and translation - usually a 4x4 matrix). Of course, just because everyone else does it does not make it right, but I would argue that there are good reasons for doing it and it does reflect a physical 'reality'.

      The benefit of the scene graph is that we can rotate one particular joint by changing one transform, and the positions of all the points on the arm can be calculated just by traversing the tree. I agree we could hold all the rotations and translations separately, but we would have to map between them for rotations of joints not at the origin. For computer animations we may need to do these calculations for every frame, traversing down the tree from the camera and then up the tree to the object being rendered. Also, when doing the physics, we need to translate backward and forward between local coordinates and the inertial frame.
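
      A bare-bones sketch of the kind of node I mean (one rotation plus one offset per joint; world positions come from composing these down the tree):

          #include <vector>

          struct Vec3 { double x, y, z; };

          struct Node
          {
              double rot[3][3];            // rotation relative to the parent node
              Vec3 offset;                 // translation relative to the parent node
              std::vector<Node*> children; // e.g. upper arm -> forearm -> hand
          };

          // p expressed in this node's frame -> the parent's frame: p' = R*p + offset
          Vec3 toParent(const Node& n, Vec3 p)
          {
              Vec3 r;
              r.x = n.rot[0][0]*p.x + n.rot[0][1]*p.y + n.rot[0][2]*p.z + n.offset.x;
              r.y = n.rot[1][0]*p.x + n.rot[1][1]*p.y + n.rot[1][2]*p.z + n.offset.y;
              r.z = n.rot[2][0]*p.x + n.rot[2][1]*p.y + n.rot[2][2]*p.z + n.offset.z;
              return r;
          }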

      Do you need to do these things in space terms, for example the 'Canadarm' on the Space Shuttle and the ISS?

      Martin