I am using Assimp and it is working great; however, some exporters do not offer the option of converting from Z-up to Y-up, so I am trying to implement this in my loader.
What I am not sure about, though, is exactly what the MakeLeftHanded flag causes Assimp to do to the model/scene.
Does it change the geometry? Or just the transforms?
If it alters the vertices, does it also alter things like the skinning transforms? What about the animations?
I can see that my model is 'mirrored' in the YX plane when importing, but I want to know exactly what is happening instead of relying on what I can observe, since I remember that swapping axes is painful enough at the best of times without an unexpected transform lurking in there!
Thanks for any insights,
SJ
If you just need a rotation, then only apply that rotation :) ConvertToLeftHanded converts from a right-handed coordinate system to a left-handed one. Right-handed is mostly used in the OpenGL world and by most file formats; left-handed is used in the DirectX world and by a few file formats. To convert scenes from one to the other, ConvertToLeftHanded does the following:
- mirror all meshes
- mirror the node translations to fit
- mirror all skinning offset matrices
- mirror all animation channels affecting the node translations
- optionally, via a separate step: reverse the face winding order
- optionally, via a separate step: flip the texture coordinates upside down
The latter is there to convert between OpenGL and DirectX texture coordinate conventions: in OpenGL (0,0) is the lower left corner, in DirectX it's the upper left. The face winding reversal is optional, too, because there is no single "right" winding order; it depends on your expectations and on the modelling package. But mirroring a mesh reverses its face winding, so we added the step to keep the intended order intact.
Does this answer your questions? Because none of this has anything to do with Z-up to Y-up, which is a simple rotation.
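For readers who want the mirroring steps spelled out, here is a rough sketch in Python. This is illustrative only, not Assimp's actual implementation; all names are made up:

```python
# Sketch of a right-handed -> left-handed conversion by mirroring along Z.
# Illustrative only -- Assimp's internals differ in detail.

def mirror_mesh_z(vertices):
    # Negate the Z component of every position (normals/tangents likewise).
    return [(x, y, -z) for (x, y, z) in vertices]

def mirror_transform(m):
    # Conjugate a row-major 4x4 matrix by S = diag(1, 1, -1, 1): every entry
    # with exactly one index equal to 2 flips sign, including the Z translation.
    # This is what mirroring node transforms and bone offset matrices amounts to.
    return [[m[i][j] * (-1.0 if (i == 2) != (j == 2) else 1.0)
             for j in range(4)] for i in range(4)]

def reverse_winding(faces):
    # Mirroring flips face orientation, so reverse each index list
    # to restore the intended winding order.
    return [list(reversed(f)) for f in faces]

def flip_uv_v(uvs):
    # OpenGL puts (0,0) at the lower left, DirectX at the upper left.
    return [(u, 1.0 - v) for (u, v) in uvs]

print(mirror_mesh_z([(1.0, 2.0, 3.0)]))   # [(1.0, 2.0, -3.0)]
print(reverse_winding([[0, 1, 2]]))       # [[2, 1, 0]]
print(flip_uv_v([(0.25, 0.0)]))           # [(0.25, 1.0)]
```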
Thank you for your detailed answer. That is much clearer now (especially how the coordinate system conversion is orthogonal to the Z/Y up conversion).
I am caught on one point of the Z/Y conversion, however. When I rotate my model, the skinned and non-skinned animations work fine, but the lights do not (since they are not parented).
If I instead rotate each of the model's child nodes and rotate the animation key-frames, the non-skinned animations work, and so do the lights, but the skinned animations are deformed. (Which I presume is because I do not transform the bind pose or skinning transforms, but then supply a bunch of rotated transforms as the pose!)
What I do now is rotate the model, ensure that all lights are non-parented, and rotate only the animation keys for channels that control lights, which is working OK; but I am wondering: is there a 'correct' or 'accepted' way to do the Z/Y conversion when dealing with animations?
(Yesterday I read a post by someone demonstrating code for transforming animation key-frames based on the actual node transforms, which sounds interesting; I'm quite sure they were using Assimp as well, but I did not favourite it so cannot find it again!)
(I also did not realise DX had a different texture coordinate system to OpenGL; that is interesting, thanks! All this time I thought it was just 3DSMax being a pain! :))
I'm not really sure what you want to achieve by rotating a skinned mesh - you could just rotate the parent in your engine, or you could rotate the root node and all animation channels affecting it. It depends on what engine you use - I probably don't need to remind you that the Assimp data structures are not intended to be used in live game scenes, so this is actually a problem outside the scope of Assimp, in my opinion.
Either way, if you rotate the model, all child nodes and all resources tied to nodes should end up rotated, too. That applies to the mesh, to all bone offset matrices (as they're local to the mesh's coordinate system), and to all lights. As you probably know, Assimp's light structures are also instanced in the local coordinate system of a node: a light's position and (where present) direction are not global but relative to the node that refers to it. And because each Assimp animation channel affects a node's transformation, lights should just move along nicely when animated, and should also be rotated when you apply the Z-to-Y rotation at the root node.
To sum this up: the "correct" way to rotate a model and all its attachments is to apply a simple rotation to the root node. The Collada loader does this, and up to now nobody has complained :-)
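To make the "simple rotation" concrete, here is a sketch of the Z-up-to-Y-up fix applied at the root node (plain Python for illustration; this is not actual Assimp code, and the helper names are invented):

```python
# Sketch: convert a Z-up scene to Y-up with a single rotation at the root.
# A -90 degree rotation about X maps +Z to +Y (and +Y to -Z).

ROT_X_NEG90 = [
    [1.0,  0.0, 0.0, 0.0],
    [0.0,  0.0, 1.0, 0.0],
    [0.0, -1.0, 0.0, 0.0],
    [0.0,  0.0, 0.0, 1.0],
]

def mat_mul(a, b):
    # Multiply two row-major 4x4 matrices.
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rotate_root(root_transform):
    # Pre-multiply the root node's transform; children, bones, lights and
    # animation channels inherit the rotation through the hierarchy.
    return mat_mul(ROT_X_NEG90, root_transform)

# The old up axis (0, 0, 1) ends up at (0, 1, 0):
identity = [[float(i == j) for j in range(4)] for i in range(4)]
root = rotate_root(identity)
up = [sum(root[i][k] * v for k, v in enumerate([0.0, 0.0, 1.0, 0.0]))
      for i in range(4)]
print(up)  # [0.0, 1.0, 0.0, 0.0]
```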
I know this is outside the scope of Assimp, but given the enormous amount of experience here with transferring data between all sorts of different configurations, I thought someone would be able to tell me the 'accepted' way - which you have, so thanks again! :)
I need a special case for the lights due to how I resolve the transform of each light (because lights aren't parented to the equivalent of the root node in my app).
I know it's beyond the scope of Assimp, so this is not a request, but out of curiosity: has anyone considered incorporating the Z/Y swap as a post-process step? (Many exporters, like OpenCOLLADA, don't offer it as an option.) It seems to be in the same category as converting the coordinate system, and having it in the importer would provide some consistency in how it is done (and stop questions like this ;))
I don't think this is a good idea - there's so much confusion around handedness and coordinate systems in general that we should not add fuel to the fire by giving a simple root node rotation a dedicated post-processing step.
Bye, Alex
Anonymous - 2012-04-10
A possible use case would be where the user wants to define a custom transformation (e.g. rotate the entire scene/model 90 degrees about the Z axis) that gets applied to the scene root and picked up by the pre-transform-vertices post-process step.
Actually this is a use case for me, and while it's doable in my situation (C#/.NET), it's a bit tricky, so having a configuration option would be nice.
Seeing that such a transformation would be orthogonal to all post-processing steps… what keeps you from just doing it? Just apply that rotation and be done. Am I missing something here?
Anonymous - 2012-04-10
Sorry for the confusion. I'm talking in the context of a P/Invoke C# wrapper for Assimp, where you're marshalling the scene data from unmanaged to managed memory - something you don't want to do often, and which can be a bit difficult going the other way (at least in C#… in a C++/CLI wrapper it would be very easy).
Basically, the way I have things set up, the initial import is an "all or nothing" import where all post-process steps are done right then and not afterwards. Baking in a custom transformation like this one means you have to write some very unsafe code: read the root node's matrix byte by byte, do the appropriate multiplication, then write the data back byte by byte. Then run the scene through the step before finally marshalling everything into managed memory.
It's trivial, as you say, if you use Assimp directly from C/C++, but that's not the case for me. It's not a huge deal really, but it would make things safer in my context, which is always a nice goal when working with native code from C#.
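For what it's worth, the byte-level baking described above boils down to something like this. It is sketched here in Python with `struct` purely for illustration (the real wrapper is C#), and the row-major, little-endian 32-bit-float layout is an assumption that has to match the native matrix struct:

```python
import struct

# Sketch: read a row-major 4x4 float matrix from a raw byte buffer,
# pre-multiply a Z-up -> Y-up rotation, and write the bytes back.
# Illustrative only; assumes little-endian 32-bit floats.

ROT_X_NEG90 = [1.0, 0.0, 0.0, 0.0,
               0.0, 0.0, 1.0, 0.0,
               0.0, -1.0, 0.0, 0.0,
               0.0, 0.0, 0.0, 1.0]

def mat_mul(a, b):
    # Multiply two flat row-major 4x4 matrices.
    return [sum(a[4 * i + k] * b[4 * k + j] for k in range(4))
            for i in range(4) for j in range(4)]

def bake_rotation(buf):
    # Unpack 16 floats, pre-multiply the rotation, repack.
    m = struct.unpack('<16f', buf)
    return struct.pack('<16f', *mat_mul(ROT_X_NEG90, m))

identity = struct.pack('<16f', *(1.0 if i == j else 0.0
                                 for i in range(4) for j in range(4)))
baked = struct.unpack('<16f', bake_rotation(identity))
print(baked[6], baked[9])  # 1.0 -1.0
```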
I see. I didn't think about managed languages, sorry. In this case it might actually be easier to apply the rotation up front.
One question: how do you use the imported data? I was under the impression that you extract the data you need into your own structures and then dispose of the Importer and the scene. If I'm right, such a rotation would be trivial to apply when converting to your own data.
Anonymous - 2012-04-10
It's all about baking/pre-transforming the geometry. E.g. in the case of the original poster: someone loading a static mesh from a modeller that uses a different 'up' axis may prefer to bake in a rotation to correct that, so the correction doesn't have to be re-computed when gathering parent-child transforms in the hierarchy (if they use a hierarchy at all). Basically: do it once, during content processing, and never have to worry about it afterwards. This can of course already be done, but it requires multiple steps rather than, say, a single one.
What I'm thinking of is something like the configuration for normal generation… you're able to set a smoothing angle so the step can be done during the initial import, rather than having to import the scene, set a value on it, and then send it back through the normal post-processing step.