
Tips on using the library with DirectX

2009-01-17
2013-06-03
  • Alexandru Simion

    Hi,

    First of all, I'm no expert when it comes to coordinate systems and conversions between OpenGL and DirectX, so I can't say I really understand the math behind all this. When you don't know exactly what you're doing, it comes down to trial and error. And with models, skeletons and animations, you can come very close to the correct result without being correct. Then you try another model or another animation and it doesn't work! It makes you wonder if it's the model, the import library, or your code. And it is very hard to compare the results.

    Anyway, to get to the point, I want to post a few tips about how I finally got it working (fingers crossed).
    I hope it will help people trying to use the library with a DirectX based engine.

    --------------------

    1. First of all, make sure your math library is working right!

    Be ready to test against code from other math libraries, especially for matrices, quaternions, and the functions that convert and create them. With all the indices in a matrix, a simple copy/paste mistake can cost many hours of pain!

    2. Consider whether your matrices contain scale (as a 4x4 can). Extracting the rotation matrix or converting to a quaternion might not work if the matrix is scaled.
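    A minimal sketch of dealing with this, assuming a simplified stand-in matrix type (not the Assimp or D3DX classes): the scale along each axis is the length of the corresponding basis vector, and dividing it out leaves a pure rotation that is safe to convert to a quaternion.

```cpp
#include <cassert>
#include <cmath>

// Simplified stand-in for a 4x4 matrix, m[column][row]; this layout is an
// assumption for illustration only, not the Assimp or D3DX matrix class.
struct Mat4s { float m[4][4]; };

// The scale along one axis is the length of that basis column.
static float AxisScale(const Mat4s& mat, int col) {
    return std::sqrt(mat.m[col][0] * mat.m[col][0] +
                     mat.m[col][1] * mat.m[col][1] +
                     mat.m[col][2] * mat.m[col][2]);
}

// Normalize the three basis columns so the upper 3x3 becomes a pure
// rotation, safe to convert to a quaternion afterwards.
static void RemoveScale(Mat4s& mat) {
    for (int col = 0; col < 3; ++col) {
        float s = AxisScale(mat, col);
        if (s > 0.0f)
            for (int row = 0; row < 3; ++row)
                mat.m[col][row] /= s;
    }
}
```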

    3. Watch out for the coordinate system!

    DirectX is left-handed, while OpenGL is right-handed. I won't go into details here, because as I said I'm no expert and, as far as I've seen, these things can be a little relative...

    Anyway, use the aiProcess_ConvertToLeftHanded flag when you import the scene. I think it works OK, at least with the .X models I'm using.

    4. Consider the way matrices are stored in memory!

    DirectX is row major, while OpenGL is column major. This means that in DirectX the first 4 floats of a matrix are the top row and the bottom row holds the translation. In OpenGL, the first 4 floats are the left column and the right column holds the translation. To convert between the two representations you have to transpose the matrix.

    Even though I used the aiProcess_ConvertToLeftHanded flag, I still had to transpose the matrices from the nodes' default transformations, their inverse absolute bind poses, and the rotations in animations.

    For the rotations from animations, I converted the quaternion to a 3x3 matrix, transposed the matrix, cast it to my matrix class, and used it to build my own quaternion, which I then store in my own animation file. Note that the library will not convert a transposed matrix to the correct quaternion rotation. This could probably have been done more easily with more math knowledge. Still, it worked for me.
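    The transpose itself is trivial; here is a sketch with a flat float[16] layout (an illustrative assumption, not the aiMatrix4x4 class itself, which as far as I know also offers its own Transpose() member):

```cpp
#include <cassert>

// Transpose a 4x4 stored as a flat float[16]; this converts between the
// column-major and row-major conventions. The flat layout (index =
// row * 4 + col) is an assumption for illustration.
static void Transpose16(const float in[16], float out[16]) {
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[c * 4 + r] = in[r * 4 + c];
}
```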

    5. Consider the order of matrix multiplication.

    This is related to the point above: transposing matrices to change their layout also reverses the order in which they must be multiplied.
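    The reason these two points are related is the identity (A·B)ᵀ = Bᵀ·Aᵀ: once you transpose your matrices to change layout, every multiplication order must be reversed too. A small self-contained check, with plain float[16] stand-in matrices (purely illustrative):

```cpp
#include <cassert>
#include <cmath>

// 4x4 multiply on flat float[16] matrices: out = a * b.
static void Mul16(const float a[16], const float b[16], float out[16]) {
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c) {
            float sum = 0.0f;
            for (int k = 0; k < 4; ++k)
                sum += a[r * 4 + k] * b[k * 4 + c];
            out[r * 4 + c] = sum;
        }
}

// Transpose: out = in^T.
static void Tr16(const float in[16], float out[16]) {
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[c * 4 + r] = in[r * 4 + c];
}
```

Multiplying two matrices and transposing the result gives the same thing as transposing each and multiplying in the opposite order, which is exactly why a layout change forces an order change.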

    6. Know the theory behind models, skeletons, skinning, and animations.

    It is very helpful to know what to do with all these matrices, but there isn't much good documentation available on the net about this subject. And it's not an easy subject: a different coordinate system and a different order of matrix multiplication only add to the complexity of the problem.

    I only want to point out that a node (bone) has a current transformation that can be stored as a position, a quaternion (rotation) and a scale; it can use a 3x3 matrix instead of the quaternion; or it can keep a whole 4x4 matrix that includes all three components. This current transformation is updated at runtime by animation (or by the game). And it is considered "local" because it's relative to the node's parent!

    When importing a skeleton, the node's current transformation is set to an initial value (stored in the file), which can also be remembered in another transformation used to reset the skeleton to the initial pose.

    What I found out the hard way is that this initial pose, stored in the nodes, can be very different from the real bind pose (the pose used when the mesh was skinned and the vertex weights were calculated).

    If you need skinned meshes, you will need this information about the bind pose. The importer library offers it as the bone offset transformation. You can store it in the node too, though the library covers a more general case by having it in each skinned mesh. This offset transformation is the inverse of the absolute transformation in the bind pose. It is usually stored as a 4x4 matrix, and its purpose is to bring vertices from mesh space (usually model space) into bone space (i.e., moving the bone from its bind pose to the origin). From there the vertices are brought back into model space (or even world space) by the node's absolute transformation.

    The node can also keep its absolute current transformation, cached for easy and fast use. When you change the (local) current transformation of a node (from animation), you can set a dirty flag, and when you need its absolute transformation, you compute it using its parent's absolute transformation, and so on. In a simple case you only need it when you render, but you might also want to test collisions, ray hits on bones, etc.

    Here are the formulas that worked fine for me (with respect to the order of matrix multiplication):

    node.transform_absolute = node.transform_current_local * node.parent.GetTransformAbsolute()
    transform_render = node.transform_bind_absolute_inverse * node.GetTransformAbsolute()

    This is a little different from how it's done in Assimp viewer. I'm still not sure why, but theoretically and practically this works. Here is how a vertex is transformed at render time:

    transformed_vertex = mesh_vertex * transform_bind_absolute_inverse * transform_current_local * parent.transform_absolute
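    The formulas above can be sketched as a tiny node hierarchy. Everything here (the flat row-major matrices, the row-vector convention where the child's local transform multiplies on the left of the parent's absolute one) is a simplified assumption mirroring the post, not actual Assimp or engine code:

```cpp
#include <cassert>
#include <cstring>

// Flat row-major 4x4 multiply: out = a * b. Illustrative stand-in types.
static void Mul16(const float a[16], const float b[16], float out[16]) {
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c) {
            float sum = 0.0f;
            for (int k = 0; k < 4; ++k)
                sum += a[r * 4 + k] * b[k * 4 + c];
            out[r * 4 + c] = sum;
        }
}

static void Identity(float m[16]) {
    for (int i = 0; i < 16; ++i) m[i] = (i % 5 == 0) ? 1.0f : 0.0f;
}

static void Translation(float m[16], float x, float y, float z) {
    Identity(m);
    m[12] = x; m[13] = y; m[14] = z; // bottom row, row-vector convention
}

struct Node {
    float local[16];        // current local transform (updated by animation)
    float bindInverse[16];  // inverse absolute bind pose (the bone offset)
    const Node* parent;

    // node.transform_absolute = node.transform_current_local * parent absolute
    void Absolute(float out[16]) const {
        if (!parent) { std::memcpy(out, local, sizeof(float) * 16); return; }
        float p[16];
        parent->Absolute(p);
        Mul16(local, p, out);
    }

    // transform_render = transform_bind_absolute_inverse * absolute
    void RenderTransform(float out[16]) const {
        float absolute[16];
        Absolute(absolute);
        Mul16(bindInverse, absolute, out);
    }
};
```

With identity bind matrices the render transform equals the absolute transform, which makes a handy sanity check for the chain.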

    7. Use other model viewers

    Try your models with other viewers and see if the mesh looks the same, has the same orientation, etc.
    Pay attention to their coordinate systems, and don't rule out the possibility that a viewer displays the model incorrectly, or that the mesh was badly exported in the first place.

    ---------------

    Now, when I look back to when I started to add model support to my engine, I realize there were a lot of technical details I didn't know, and they made me write and rewrite the code and waste a lot of time.

    As things stand now, I've managed to see Mr. Tiny.x from DirectX walking, and the Dwarf.x from Irrlicht also plays his animation. And the thing I'm most proud of is that I got the model of Kelly from the "So Blonde" adventure game playing walk and idle animations. And the game is full of such nice .x models to test on. It was a little tricky, because they keep the skinned mesh with the skeleton in one file and the animations in separate ones.

    Next I have to deal with blended animation support, though not the full extent of the possibilities - quite a complex thing too. And then I have to decide on the final form of my model file and class format.

    ---------------

    Sorry for the long post, but maybe someone will find it useful and skip some painful debugging.
    If anyone needs more info, or has spotted some technical mistakes in my post, please let me know.

    Many thanks to the guys working on this wonderful library!
    Alex

     
    • Grégory Jaegy

      Grégory Jaegy - 2009-01-19

      Shouldn't the "aiProcess_ConvertToLeftHanded" flag handle everything automatically?

      It doesn't sound very consistent to me to have to manually convert some matrices/quaternions in addition to using this flag.

      Or is there a good reason for that?

      Please take this as constructive criticism. The library is just great ;)

       
    • Grégory Jaegy

      Grégory Jaegy - 2009-01-19

      In addition to my post and Alex's (very helpful) one, this is what I need to do so that static meshes (with no animation) look right in my D3D-based engine. Please note that my math library uses the same layout as the DirectX one.
      Also, I use the aiProcess_ConvertToLeftHanded flag.

      - I need to swap the vertices' y and z components:

          pDataPos[j].Set(pMesh->mVertices[j].x, pMesh->mVertices[j].z, pMesh->mVertices[j].y);
          if (pMesh->mNormals)
              pDataNor[j].Set(pMesh->mNormals[j].x, pMesh->mNormals[j].z, pMesh->mNormals[j].y);

      - I need to first transpose all transformation matrices and then multiply them by a Y<=>Z swap matrix before decomposing them:

          _pNode->mTransformation.Transpose();

          imSMatrix4D oTransf;
          memcpy(&oTransf.m_11, &_pNode->mTransformation.a1, 16*sizeof(imFloat));

          imSMatrix4D oMatCoordinates;
          oMatCoordinates.SetZero();
          oMatCoordinates.m_11 = 1.0f;
          oMatCoordinates.m_32 = 1.0f;
          oMatCoordinates.m_23 = 1.0f;

          oTransf = oTransf * oMatCoordinates;

          imSVector3D vPosition;
          imSQuaternion qRotation;
          oTransf.ExtractTranslation(vPosition);
          qRotation.FromRotationMatrix(oTransf);

          pNewAtomic->SetPosition(vPosition);
          pNewAtomic->SetOrientation(qRotation);

      Voilà, I hope this can also help someone save some time!

      Honestly, I am a bit surprised I have to swap the y and z components, as I would expect the library to do it for me when using the "aiProcess_ConvertToLeftHanded" flag. Maybe I am doing something wrong?

       
      • Grégory Jaegy

        Grégory Jaegy - 2009-01-19

        Forget my last post.

        The issue I had is due to the fact that the handedness change seems to be encoded into the root node transformation.

        It seems that sometimes the basic decomposition of that transformation matrix into translation/rotation/scale, as I do it (I don't store the full matrix; I reconstruct it at runtime from pos/rot/scale), doesn't preserve all the information.

        What I mean is that when one decomposes this transformation into translation/rotation/scale and then reconstructs the matrix from these, the result differs from the original matrix, because the handedness change stored in the matrix isn't reflected in translation/rotation/scale.

        For instance, consider the following matrix:
        1 0 0  0
        0 1 0  0
        0 0 -1 0
        0 0 0  1

        It is impossible to represent this with a standard translation/rotation/scale...

        Not sure how to handle this!

         
    • Alexandru Simion

      I had a similar problem with the root node - I ended up resetting it to identity :)
      But then, when I saw the ConvertToLeftHanded flag and used it, the library did whatever was needed and it worked without further changes on my part, except the transpose operations mentioned in my initial post.

      I was still unsure, because all the models I had were facing the Z- axis and I am used to models pointing toward Z+ for easy orientation.
      In fact, I just tested the DX tiger model from their mesh tutorial. It is indeed pointing toward Z-.

      In my model format,
      I save the nodes' transforms as 3 vec3s: pos, rot (Euler angles), scale.
      The inverse absolute bind matrix I save as a 4x4 matrix. I could decompose it too, but I noticed some small float errors, so I saved it whole.

      Then when I load the model,
      I keep the current (local) transform as pos, rot(mat3x3), scale and the absolute transform as mat4x4 (easier for composing matrices for render).

      I also have a function to set the transform from 3 vec3s (pos, rot, scale), since this is more human-friendly and the entities will probably keep it this way.

      Anyway, if you're using the DX coordinate system (Z+ into the screen, Y+ up, X to the right) and the ConvertToLeftHanded flag when opening the scene, it should work. Or at least it seems to work for me. Try to draw the axes and make sure it's not the library.

      Alex

       
    • Grégory Jaegy

      Grégory Jaegy - 2009-01-20

      OK, I finally managed to find out what is happening ;) As I guess this could help some other users of this great library, I think it is worth explaining what my issue was and how I solved it.

      First of all, this issue only appears when saving node transformations as separate position/rotation/scale components (not when saving the full matrix).

      One has to take extreme care when decomposing Assimp matrices into those components, because in some cases the resulting matrix cannot be decomposed into those 3 components alone.

      Some math background: rotation matrices are special orthogonal matrices with determinant +1. Conversely, an orthogonal 3x3 matrix with determinant +1 is a rotation matrix, and thus can be converted into a rotation (a quaternion in my case).

      However, when using Assimp with the "aiProcess_ConvertToLeftHanded" flag and certain models (for instance, a Collada file where the up axis is Y - example at http://g.jaegy.free.fr/_divers/modelY.dae ), Assimp encodes several coordinate system changes into the root node matrix (the Collada Y_up -> Assimp Z_up transformation and the Assimp Z_up -> Direct3D Y_up transformation).

      This sometimes results in matrices with a -1 determinant, which means the matrix contains not only a rotation but also a reflection. For instance, importing the sample model above leads to the following root node matrix:

      1 0 0  0
      0 1 0  0
      0 0 -1 0
      0 0 0  1

      As one can see, this matrix cannot be decomposed into pos/rot/scale without losing the "-" sign on m_33.

      The solution to this problem is to add an additional "flip" boolean to the components, so we now have pos/rot/scale/flip.

      Below is the source code I use to decompose and recompose the matrix. When the determinant is -1, multiplying the matrix by "-Identity" lets one extract a rotation. This of course has to be taken into account when recomposing the matrix!

      Please also note that this kind of decomposition doesn't handle non-uniform scaling (one has to use a full polar decomposition to solve that).

          void Decompose(imSVector3D& _vPosition, imSQuaternion& _qRotation, imSVector3D& _vScale, imBool& _bFlip)
          {
              // compute matrix determinant
              imFloat fDet = Determinant();
              _bFlip = (fDet < 0.0f);

              // extract
              ExtractTranslation(_vPosition);
              ExtractScale(_vScale);
              if (fDet < 0.0f)
              {
                  imSMatrix4D matIDNeg, matResult;
                  matIDNeg.SetIdentity();
                  matIDNeg.m_11 = matIDNeg.m_22 = matIDNeg.m_33 = -1.0f;
                  Multiply(*this, matIDNeg, matResult);
                  _qRotation.FromRotationMatrix(matResult);
              }
              else
                  _qRotation.FromRotationMatrix(*this);
          }

          void Recompose(const imSVector3DTemplate<T>& _vPosition,
                                  const imSQuaternionTemplate<T>& _qRotation,
                                  const imSVector3DTemplate<T>& _vScale, imBool _bFlip)
          {
              // scale first (negated on all three axes when the flip flag is set)
              imSMatrix4D oMatTemp0, oMatTemp1, oMatTemp2;
              oMatTemp0.SetScale(_vScale);
              if (_bFlip)
              {
                  oMatTemp0.m_11 = -oMatTemp0.m_11;
                  oMatTemp0.m_22 = -oMatTemp0.m_22;
                  oMatTemp0.m_33 = -oMatTemp0.m_33;
              }

              // then rotation
              _qRotation.ToRotationMatrix(oMatTemp1);
              imSMatrix4D::Multiply(oMatTemp0, oMatTemp1, oMatTemp2);

              // finally position
              oMatTemp0.SetTranslation(_vPosition.x, _vPosition.y, _vPosition.z);
              imSMatrix4D::Multiply(oMatTemp2, oMatTemp0, *this);
          }

      Voilà, I hope this will help some people ;)

       
    • Alexandru Simion

      Hmm, this is kind of tricky...
      I hope there's another way than storing a flip flag in my model format.
      Anyway, thanks for the code!

      Maybe the developers of the library can look into this and see if having such a matrix with a reflection in the root bone (after using the ConvertToLeftHanded flag) is what they intended - please!

      -------------

      A friend of mine explained a few tricks to me about converting from the Collada format (exported from Max with the z and y axes swapped).
      He uses a matrix to swap the axes, but the trick was to also use the inverse of that matrix (which is the same swap matrix).
      And he transforms every node as follows:

      dx_mat = inv(swap_mat) * max_mat * swap_mat

      When you multiply down the hierarchy (to compute absolute matrices at runtime) you get:

      abs_dx_mat = ( inv(swap_mat) * local_max_mat * swap_mat ) * parent_abs_dx_mat
      abs_dx_mat = ( inv(swap_mat) * local_max_mat * swap_mat ) * ( inv(swap_mat) * parent_abs_max_mat * swap_mat )
      abs_dx_mat = inv(swap_mat) * local_max_mat * parent_abs_max_mat * swap_mat

      (the swap_mat * inv(swap_mat) pair in the middle cancels out)

      The last trick was to swap the z and y components of all vertices, so that inv(swap_mat) would get them ready for the old Max matrix.

      Of course, a tall model in Max, which had a big z component, will still be tall in DX, with a big y component, as it should be.
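      That change-of-basis trick can be sketched as follows (column-vector convention, flat 4x4 stand-in matrices, purely illustrative; note that the Y<=>Z swap matrix is its own inverse, which is what makes the middle terms cancel):

```cpp
#include <cassert>

// Flat 4x4 multiply, out = a * b, column-vector convention (p' = M * p);
// index = row * 4 + col. Illustrative stand-in, not the real engine types.
static void Mul16(const float a[16], const float b[16], float out[16]) {
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c) {
            float sum = 0.0f;
            for (int k = 0; k < 4; ++k)
                sum += a[r * 4 + k] * b[k * 4 + c];
            out[r * 4 + c] = sum;
        }
}

// The Y<=>Z swap matrix. It is its own inverse: swap * swap == identity.
static void SwapYZ(float m[16]) {
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0]  = 1.0f;  // X stays X
    m[6]  = 1.0f;  // new Y = old Z
    m[9]  = 1.0f;  // new Z = old Y
    m[15] = 1.0f;
}

// dx_mat = inv(swap_mat) * max_mat * swap_mat, with inv(swap_mat) == swap_mat
static void ConvertNode(const float maxMat[16], float dxMat[16]) {
    float s[16], tmp[16];
    SwapYZ(s);
    Mul16(s, maxMat, tmp);
    Mul16(tmp, s, dxMat);
}
```

Conjugating a translation along Max's Z axis this way yields a translation along DX's Y axis, which is the "tall stays tall" behavior described above.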

      All this looks like valid math to me.
      Maybe this is what the ConvertToLeftHanded flag does - I don't know.

      I'll ask my friend whether he stores the rotation as angles (or a quaternion) in the file.
      I hope the reflection in the matrix you're talking about doesn't come from this sort of formula, and I hope it can be avoided by the library.

       
      • Grégory Jaegy

        Grégory Jaegy - 2009-01-21

        The matrix with the negative determinant is the result of multiplying two matrices:

        - the first one transforms the Collada file's Y_up coordinate system into Assimp's internal coordinate system (OGL-like)
        - the second one transforms from Assimp's internal coordinate system into the Direct3D coordinate system

        This result is stored as the root node matrix. And no, there is no way to decompose this kind of matrix without using an additional flag.

        But this is not required if you don't split the matrix into pos/rot/scale and simply store the plain matrix.

         
    • Alexandru Simion

      Yes, I see. But that's probably why my friend uses the inverse of this transform and adjusts the vertices too, as I said. And this is done on every node, not just the root. It doesn't seem right to have something special on the root anyway.

      If you consider this special transform matrix as composed of the two 4x4 matrices (collada_to_assimp and assimp_to_dx) and use its inverse as in my previous post, maybe a well-formed matrix will result.

      My friend decomposes his matrix and stores it as a quaternion without problems. I guess this means the rotation matrix is valid, with no reflection in it.

      This is still a subject open to the developers of Assimp, if they want to step in with some advice.

       
    • Alexander Gessler

      Hi,

      sorry for my late reply to this thread.
      The whole topic of coordinate systems has always been Thomas' job, so I'll leave the further details to him; he knows better. From my point of view, Grégory is right with his solution for matrices with a reflection component, but it's a general problem that should be handled (or better: avoided!) by Assimp, not by the user. Collada is, afaik, the only source of those matrices at the moment.

      @ConvertToLeftHanded
      I think the user needs much finer control over ConvertToLeftHanded. My current idea is to split it up into 3 single steps:

      1) transform model space from RH to LH.
         Alexandru's idea to make the transformation matrix configurable (https://sourceforge.net/forum/forum.php?thread_id=2874710&forum_id=817653) is quite good, so this step would then be completely flexible.

      2) flip the texture coordinate y component
         There are different conventions for where the origin of the UV coordinates lies, so this should be freely configurable, too.

      3) flip the face winding order
         The same. Not everyone uses CCW backface culling with DirectX, and not every GL user uses CW.

      For backward compatibility, ConvertToLeftHanded would map to all three steps, with a default GL-to-DX matrix set for the first.
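      From the user's side, the split could look roughly like this (a sketch only - the three flag names reflect the proposal and are assumptions here, so check your Assimp version's postprocess.h before relying on them):

```cpp
#include <assimp/Importer.hpp>
#include <assimp/postprocess.h>
#include <assimp/scene.h>

// Spelling out the proposed three steps individually instead of the
// combined ConvertToLeftHanded flag. The flag names are assumptions based
// on the proposal above; verify them against your postprocess.h.
const aiScene* LoadForD3D(Assimp::Importer& importer, const char* path)
{
    return importer.ReadFile(path,
        aiProcess_MakeLeftHanded        // 1) RH -> LH model space
      | aiProcess_FlipUVs               // 2) flip the texture V coordinate
      | aiProcess_FlipWindingOrder);    // 3) flip the face winding order
}
```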

      What do you think?

      Bye,
      Alex

       
      • Grégory Jaegy

        Grégory Jaegy - 2009-01-21

        Sure I'm right ;) Kidding ;)

        Anyway, personally I don't need more control over ConvertToLeftHanded, simply because it is very easy to flip the texture coordinates again when required, or to flip the winding order.

        But more control is always better (as long as the API/usage doesn't get too complex, which often happens when adding more options).

         
    • Alexandru Simion

      Offering more control in this case is a good thing.
      Maybe Grégory and I now know how to handle this, but a new user would be very happy to get it working on the first try.
      Thanks for the support!

       
    • Thomas Ziegenhagen

      OK, a response is overdue here, I guess. I'm sorry for responding so late. I don't have internet access at home at the moment, so I have to visit friends every time.

      There are several issues described here, each independent of the others. I'll try to clear things up a bit.

      a) Matrix order.

      There are two methods of storing matrices in memory: row major and column major. Imagine a matrix given by three base vectors U, V and W and a translation vector T. A column-major matrix would then look like this:

      Ux Vx Wx Tx
      Uy Vy Wy Ty
      Uz Vz Wz Tz
      0  0  0  1

      Assimp's matrix classes use this layout, and OpenGL applications usually use it, too. But the important part is that this is *not* obligatory for OpenGL; it's the programmer's choice. I, for example, use DirectX and column-major matrices.

      The order of multiplication for column-major matrices is MatResult = MatLast * MatMiddle * MatFirst. That means transformations are applied back to front.

      A row major matrix built from the vectors named above would look like this:

      Ux Uy Uz 0
      Vx Vy Vz 0
      Wx Wy Wz 0
      Tx Ty Tz 1

      The order of multiplication for row-major matrices is MatResult = MatFirst * MatMiddle * MatLast. That means transformations are applied front to back, as you'd expect if you're used to the reading conventions of European languages.

      The important fact to take home is: don't rely on the matrix layout being what you expect. Row major vs. column major is *not* a question of DirectX vs. OpenGL; it's a design decision. Matrices provided by Assimp are always column major, even when the ConvertToLH step is applied to the scene. If you want row-major matrices, for example to use them with the D3DXMatrix methods, you have to transpose the aiMatrices first.
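      The two printed layouts are exact transposes of each other, which is easy to check mechanically; a sketch with flat float[16] storage, index = printed row * 4 + printed column (purely illustrative):

```cpp
#include <cassert>

// Fill a flat 4x4 (index = printed row * 4 + printed column) from base
// vectors U, V, W and translation T in the column-major form shown above:
// T ends up in the rightmost column.
static void FromBasisColumnMajor(float m[16],
                                 const float u[3], const float v[3],
                                 const float w[3], const float t[3]) {
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    for (int r = 0; r < 3; ++r) {
        m[r * 4 + 0] = u[r];
        m[r * 4 + 1] = v[r];
        m[r * 4 + 2] = w[r];
        m[r * 4 + 3] = t[r];
    }
    m[15] = 1.0f;
}

// Transposing converts to the row-major form: T moves to the bottom row.
static void Transpose(const float in[16], float out[16]) {
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[c * 4 + r] = in[r * 4 + c];
}
```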

      b) Coordinate spaces.

      This is a difficult topic. There are multiple coordinate systems, all of them falling into two classes: lefthanded spaces and righthanded spaces. Examples:

      X to the right, Y up, Z into the screen: left-handed coordinate system, used for example by DirectX.
      X to the right, Y up, Z out of the screen: right-handed space, used for example by OpenGL.
      X to the right, Y into the screen, Z up: right-handed space, used for example by 3D Studio Max. The present Assimp default, but please read on.

      There are more options, of course, but these are the common ones I know of. Each right-handed space can be converted to every other right-handed space by a simple rotation; the same holds for left-handed spaces. The coordinate space you use in your application is pretty much a design choice. You're *not* restricted to what the rendering API of your choice uses. In most cases people will still use the coordinate space given by the rendering API, but that's for ease of development, not forced.

      Up to now, Assimp used the 3D Studio Max coordinate system internally. I'm not an OpenGL guy, and I was under the impression that this is also the OpenGL standard. But according to numerous pieces of feedback, it isn't. Therefore Alex and I decided to change the internal coordinate system of Assimp to the default OpenGL space, with X to the right, Y up and Z pointing out of the screen. We will make the necessary changes during the next week, I think.

      c) Conversion between lefthanded space and righthanded space.

      Every conversion back and forth between right-handed and left-handed space will *always* contain a reflection. This is unavoidable; it's the principle of the conversion. But we can choose where to apply this reflection. At the moment, the conversion matrix is applied to the root node's transformation matrix. Therefore the root node's transformation contains a reflection and is not decomposable into quaternions or Euler angles. Note again, just to make sure the point comes across: this is the same for both the old 3DSMax-like space and the new OpenGL default space Assimp is using - in both cases, conversion to left-handed space involves a reflection, a negative scaling of one axis.
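      The reflection is easy to detect in code: the upper 3x3 of such a conversion matrix has determinant -1, while a pure rotation always has determinant +1. A tiny stand-alone check (illustrative only):

```cpp
#include <cassert>

// Determinant of the upper 3x3 of a flat 4x4 (index = row * 4 + col).
// A value of -1 means the matrix contains a reflection and cannot be
// decomposed into a quaternion or Euler angles alone.
static float Det3(const float m[16]) {
    return m[0] * (m[5] * m[10] - m[6] * m[9])
         - m[1] * (m[4] * m[10] - m[6] * m[8])
         + m[2] * (m[4] * m[9]  - m[5] * m[8]);
}
```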

      After some discussion with Alex, we decided to rewrite the ConvertToLH step again to keep the node transformations free from reflections. Instead we're going to mirror the meshes, flip the triangle ordering so that clockwise stays clockwise, and adapt the rotations accordingly. At the moment I don't know how to handle animations, but there has to be some way to do it - others have done it before.

      d) What is "up" for certain models/scenes/files

      Another difficult topic. For some file formats (like .X or Collada) the coordinate space is clearly defined, or at least given by a pseudo-standard. Other file formats never bothered to specify a coordinate system at all, or use multiple options depending on the exporter or whatever... it's difficult to tell from the inside. The only thing we can do on the Assimp side is to watch a million example files closely, look for hints on what's meant to be "up" in them, and then adapt the importers one by one.

      The same goes for face ordering: in most cases it's determined by the model or the modelling package; in some cases the file format specifies an ordering. The only thing we can do on our side is to pass the face ordering through as unchanged as possible. All other issues are best taken up with your graphics artist.

      e) Texture coordinates.

      Up to now, I was under the impression that DirectX defines UV coordinates of (0, 0) to be the upper left corner of the texture, while OpenGL defines (0, 0) to be the lower left corner. Therefore I implemented the texture coordinate Y flip in the ConvertToLH step - it doesn't exactly belong there, but it's needed in any case. Up to now, OpenGL formats like Collada look fine with this inversion, and OpenGL users I know of reported that various files looked fine after importing. So I'm now confused about who's right here. I need your feedback: where exactly is the origin of the texture coordinates located in the image? Which file formats did you try, and what did you experience with each of them? We need more information on this issue to determine the next steps.

      I hope that cleared things up a bit. Read you again soon, when I've rewritten the handedness conversion.

      Bye, Thomas

       
      • Grégory Jaegy

        Grégory Jaegy - 2009-02-02

        Hi Thomas,

        I think the solution you are going to apply to avoid the reflection at the root node level is a good one.

        Concerning the texture coordinates, using "1.0 - fV" for the vertical texture coordinate has worked for me so far (using assimp & D3D, tested with .3ds & collada).

        Cheers,
        Gregory

         
      • Mark Sibly

        Mark Sibly - 2009-02-02

        Hi,

        Regarding texcoords: I was always under the impression that there was no real difference - 'up' depends on which vertices the texcoords are attached to, and 'smaller' values of U/V simply correspond to 'smaller' VRAM addresses - e.g. when you upload a texture, you're uploading the V=0 row first up to the V=1 row last.

        Also: my current 3D engine uses OpenGL with a custom projection matrix, so the coordinate system is the same as D3D's. Models appear the same in both assimp_view and my engine, so I think this also supports the idea that there's no difference.

        I think what's happening a lot is that several importers (e.g. my b3d importer) are mirroring V to compensate for the fact that ConvertToLeftHanded does too!

        This is all confused somewhat by the fact that Windows BMPs are frequently stored upside down...

        Another idea for this coordinate system stuff: how about adding a 'CoordSys' member to the scene class, along with some enums like COORDSYS_3DS, COORDSYS_D3D and COORDSYS_GL?

        This way, loaders won't have to do anything fancy; they can just produce scenes in their 'natural' coordinate system.

        Then ConvertToLeftHanded is replaced with ConvertToCoordSys, and a new property is added to the importer, à la the delete points/lines stuff.

        A bit of extra work, but not too much, and it localizes all the nasty hacky stuff in ConvertToCoordSys.

        Also, I've got a pretty robust Direct3D 3DS loader (with anim axis/angle stuff) if you want something for reference.

        Bye!
        Mark

         
    • Alexander Gessler

      Hi,

      >> Another idea for this coord system stuff: How about a 'CoordSys' member is added to the scene class along with some enums like COORDSYS_3DS, COORDSYS_D3D and COORDSYS_GL?

      Indeed, that would be a possible solution for another part of the problem: as soon as we have Thomas' handedness conversion code (which, as I hear, seems to be turning out quite difficult and lengthy due to the complex scene data structure), all loaders with animation support will need nearly the same stuff again to convert to the internal format (for the others it should be no problem to convert the vertices manually).

      But I think that won't be a problem as soon as there is proper code for the GL-RH to DX-LH case. Everything else is just the same with a few modifications. It could be put in a nice utility class which the loaders can call.

      Using the flags you mentioned, however, would allow for some optimization if the source and destination formats match... but it could cause other problems, because some post-processing steps *expect* data in a specific handedness. Life is not so easy... but we'll find a way out of the jungle, step by step :-) Let's start with the first step, then discuss further changes.

      Bye,
      Alex

       
    • Alexandru Simion

      Thank you Thomas for the clear explanations.

      I'm happy you're planning to avoid the reflections when converting between left-handed and right-handed space. Adjusting the meshes and such is the best approach. I was also planning to have this in my model compiler tool - it uses Assimp to import assets, and it can combine and adjust the content based on a script. That's great, because I had some models with animations in separate files and I had to put them together into my model.

      And by the way, I found a few bones containing reflections in some .x models used in a game. I don't know if they appeared when imported with Assimp, but I suspect they were stored like that in the original model format.

       
