Hey all,
first post (w00t)… I have looked through the forums and the documentation and I can't find an answer to this question: how do I pre-cache animations from Assimp bone keyframes?
I can't seem to get my head around this problem, and it's really eating me up. What I do is:
1.) I read the bone's offset matrix, multiply it by all previous transforms in the Assimp scene structure, and store the result as a matrix.
2.) I read all the animations. For every keyframe I look up the corresponding transform, multiply it by all previous transforms, and store that in my own BoneKeyframe class.
3.) When my game updates, the current BonePose for that time is looked up and put into a matrix array which gets sent to the GPU for skinning.
When I copy the relative transforms, everything looks ok in bind pose. But when I start running the animation, things get ugly because the transforms are local, not relative to each other. When I copy the absolute transforms I get "rubbish" data…
Help?
Emiel
So, after rethinking what I actually want from assimp, I can distill my question to: how do I get the local transformation matrices for every bone for every frame?
I wrote a voluminous post on that topic in this thread: https://sourceforge.net/projects/assimp/forums/forum/817654/topic/3880745
In short:
The offset matrices are mesh-to-bone matrices. They are already global in relation to the mesh; you don't need to concatenate them with the parent bone's offset matrices.
Think of the animation as a twofold task: First you animate the nodes in the scene graph, then you calculate the bone matrices from the current state of the scene graph. Both parts are indeed separable - the movement of the nodes could also come from a ragdoll physics simulation, for example.
First step: animate the scene graph. Iterate over all animation channels. Find the affected node by aiNodeAnim::mNodeName. Find the suitable anim keys for the current time. Calculate a transform from those keys. This transformation *replaces* the node's transformation. That means that the transformation is local in relation to its parent node.
You'd now see a moving scene graph. Nodes move around, and meshes, cameras or lights attached to those nodes also move.
The second step, performed each frame, is to calculate the bone matrices from the current state of the nodes. For each bone, find the corresponding node by aiBone::mName. Calculate the bone matrix by concatenating aiBone::mOffsetMatrix, then the corresponding node's aiNode::mTransformation, then the parent node's mTransformation, and so on until you're back at the same level of the hierarchy as the mesh itself. Use these bone matrices to transform vertices in the vertex shader.
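The second step might look roughly like this. Toy types stand in for aiNode/aiBone; row-major storage with column-vector convention (v' = M*v) is my assumption here, so flip the multiplication order if your math library uses the opposite convention:

```cpp
#include <cassert>

// Minimal 4x4 matrix: row-major storage, column-vector convention (v' = M*v).
struct Mat4 { double m[16]; };

Mat4 Identity() { Mat4 r{}; for (int i = 0; i < 4; ++i) r.m[i * 5] = 1.0; return r; }
Mat4 Translate(double x, double y, double z) {
    Mat4 r = Identity(); r.m[3] = x; r.m[7] = y; r.m[11] = z; return r;
}
Mat4 Mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            for (int k = 0; k < 4; ++k)
                r.m[row * 4 + col] += a.m[row * 4 + k] * b.m[k * 4 + col];
    return r;
}

// Toy stand-in for aiNode: a local transform plus a parent pointer.
struct Node { Mat4 local; Node* parent; };

// Concatenate the offset matrix with the node's transform and all parent
// transforms up to the root:  bone = root * ... * parent * node * offset
Mat4 ComputeBoneMatrix(const Node* node, const Mat4& offset) {
    Mat4 global = node->local;
    for (const Node* p = node->parent; p; p = p->parent)
        global = Mul(p->local, global);
    return Mul(global, offset);
}
```

With pure translations the chain is easy to check by hand: a node moved by (0,0,1) under a parent moved by (0,1,0), with an offset translating by (1,0,0), yields a bone matrix translating by (1,1,1).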
Yeah, I read that thread/post as well. However, it's not very useful to me since I do not want to use Assimp in anything except my asset loading library. The rendering library should only use mathematical representations that are "compiled" during the asset loading process. So right now, I try this:
this imports the bone (it copies the Assimp structure to my own, essentially…)
then I import animations like this:
and that's that for the content processing. After that I call the following function to get a matrix list to upload to my shader:
this does *not* work. However, when I use this function for frame 0:
I do get the correct "bind" pose.
Hope this clarifies my problem…
Regards,
Emiel
Oops, forgot:
this function fills my model's bone transforms:
The principles explained in this and the other threads are not limited to Assimp data structures; they are basic implementation guides. Please understand that I don't have the time to personally debug your code. Some general advice:
- Check your matrix multiplication order. The bone animation system is usually the first place where one's math code is put to a rigorous test; elsewhere, playing with signs and ordering usually suffices to get things working somehow.
- Check your quaternion-matrix conversion back and forth.
- Debug things separately. First try to visualize the bone structure itself and verify that its movement looks fine before using the pose to transform vertices.
Good luck.
Ok, checked that. Bones work. When I load them all with identity matrices and rotate a random bone in the hierarchy, the mesh responds appropriately. But… when I load them using Assimp, I still get junk when I copy in the relative bone transforms with the same function. So I tried this:
which, I would expect, gives me the relative transforms which I can use to hierarchically transform all the bones in the skeleton while maintaining the ability to transform the bind pose. But, this does not work. Am I missing something here?
Emiel
OK… never mind… stupid of me. The above uses the relative transforms all right, but it doesn't store the world matrix for the children of the node being transformed to relative space (i.e. it only works once…). Figured it out though… On to animations!
hello. I'm going to need to do the same thing as you. Could you show me where your mistake was, please?
This is the code I now use for setting relative transforms, hope it helps:
Regards,
Emiel
The comments are a bit wrong though… Util::GetDerivedTransform does *not* include the node's local transform. I'll keep you updated on how the animation part is going…
Emiel
Thanks. I'll look at your code once I've managed to access the texCoords and indices of the mesh (I can access the vertex positions, normals… but not that data). Right now I'm stuck on that and don't know what to do :/ I'm impatient to get where you are.
Good luck with animations!
I used this one to figure out how texcoords etc. work:
http://wasaproject.net16.net/?p=175
Ok, I got the bones in relative space. Bind pose is working and so are the bone animations. One more hurdle and I'm all done. I've got this model, and all bones are parented to an animation node called Bip01. After this comes Bip01_Spine etc. So apparently Bip01 manages some rotational animation properties; for example, when my model plays its "die" animation it rolls onto its side (some other minute changes in the model are also managed by this node, but this is by far the most obvious error). I was wondering if anyone knew a smart way of compiling this animation node's transforms into the child hierarchy.
Emiel
Where did you get the model you're playing with? Is it a free one? I can't find a model with bones on the net, and I really need one now that I no longer have trouble retrieving the data from a model and rendering it.
I know this is old, but…
You can find models for testing at: http://www.collada.org/owl/
One that seems to be a reference is Seymour (check the public folders; there should be one named Seymour_anim2_triangulate.dae along with a texture), since it has multiple materials (which translate into multiple meshes), a texture, and a rigged animation. (If you have problems rendering this mesh, you should try the multi-textured cube first.)
I'm currently setting vertex weights correctly, but I still see overshooting in the animation (the arm passes through the torso in the first second or so). I'm probably getting the matrix multiplication order backwards, or not using inverses when I should.
If you can, please correct me:
To get a node's global transformation:
// ----------------------------------------------------------------
// Calculates the global transformation matrix for the given internal node
void Animator::CalculateBoneToWorldTransform(cBone* child) {
    child->GlobalTransform = child->LocalTransform;
    cBone* parent = child->Parent;
    while (parent) {
        // climb up through the parents, concatenating all the matrices
        // to get the object-to-world (here: bone-to-world) transform
        child->GlobalTransform *= parent->LocalTransform;
        parent = parent->Parent; // move on to the next parent
    }
}
To get the actual Transformation matrix:
mat = Bones->Offset * Bones->GlobalTransform;
Please remember that I'm using OpenGL (so, right-handed coordinates). The base transformation looks OK.
Ok! Finally made it!
As it turns out, the "overshooting", when played frame by frame, was actually inverse motion.
Just replaced
mat = Bones->Offset * Bones->GlobalTransform;
with
mat = affineInverse(Bones->Offset * Bones->GlobalTransform);
and I was good to go!
Will look into this in more detail later (this still feels like a workaround).
Nothing like a good night's sleep to clear the ideas.
Thanks everyone for the help.
Now it is my turn to get this working, and I am failing. Given bones b1, b2, etc. (defined by aiNodeAnim), where b2 is the parent of b1, I understand that the resulting matrix to be used for the mesh animation transformation of b1 is:
offs * b1 * b2 * …
Where "offs" is the offset matrix for b1 defined in aiBone. Is this correctly understood?
Do you have to follow the multiplication chain b1 * b2 * … all the way up, as defined by the aiNode hierarchy?
This question really depends on your matrix layout. But if you're talking about aiMatrix4x4 (which is column major), the order of sequence you stated is correct.
You don't need to go all the way up to the root node, but it usually doesn't hurt. The actual end node of the up transition is hard to tell, though, because it depends on the scene hierarchy in the modelling application and what the exporter and the file format decide. In my experience there are usually two scenarios:
Skeleton as subnode. Looks like this:
Root
..MeshNode
….SkeletonRoot
……SkeletonSubNode1
……SkeletonSubNode2
and so on. In this case, the root node is clear - it's the mesh's node.
And there's the other scenario: Skeleton as sibling node. Looks like this:
Root
..MeshNode
..SkeletonRoot
….SkeletonSubNode1
….SkeletonSubNode2
In this case I usually add the SkeletonRoot transformation to the transformation chain, and then the inverse of the MeshNode as well. But in my experience this usually stems from 3DSMax, and the skeleton root is usually the same as the mesh node, so you can leave this step out.
And there's a strange version of the second scenario:
Root
..DummyNode
….MeshNode
..SkeletonRoot
….SkeletonSubNode1
because, for some unfathomable reason, some exporters insist on inserting empty nodes there. In this case, you're fucked. I once added a filter to find and remove these dummy nodes, but I don't remember if I implemented it in an importer or in a post-processing step. And there were complaints about empty nodes vanishing, so I'm pretty sure I made this filter optional at some point in the past.
This question really depends on your matrix layout. But if you're talking about aiMatrix4x4 (which is column major), the order of sequence you stated is correct.
Thank you!
And there's the other scenario: Skeleton as sibling node. Looks like this:
Root
..MeshNode
..SkeletonRoot
….SkeletonSubNode1
….SkeletonSubNode2
That seems to correspond to my case. I export from Blender to Collada format and get the following structure (with animation data for the bones, not the bind pose). In this example animation I moved only Bone2, 3 units in 'z'; Bone1 is a parent of Bone2:
Scene - rot(x, -pi/2)
..Armature - identity, gives same as Scene as result
….Bone1 - rot(x,pi/2), reverse rotation gives identity matrix back again as result
……Bone2 - translate(y,3), corresponds to 3 units movement, with no accumulated rotation.
..Mesh - identity, gives same rotation as Scene as result
The result is that animated Bone2 has a translation of 3 units in 'y' and no rotation. On the one hand I moved the bone in 'z', but on the other hand the bone's local coordinate system has 'y' up. The mesh offset matrix for Bone1 (Offs1) is rot(-pi/2), and for Bone2 (Offs2) it is a rotation with an additional translation of -3 units in 'y', giving:
1 0 0 0
0 0 1 -3
0 -1 0 0
0 0 0 1
Now the question is, given these matrices, how do I create the animation transformation matrix to be used for Bone2 on the mesh? Offs2*Bone2 gives:
1 0 0 0
0 0 1 -3
0 -1 0 -3
0 0 0 1
This is clearly wrong, there should be 3 units in 'z'. You mentioned that you used the inverse of the mesh. If I do inverse(mesh)*Offs2*Bone2 I get:
1 0 0 0
0 1 0 3
0 0 1 -3
0 0 0 1
Still wrong. If I do Offs2*inverse(mesh)*Bone2 I get the identity matrix. Interesting.
I think the matrix I eventually want should be:
1 0 0 0
0 1 0 0
0 0 1 3
0 0 0 1
The rotations make it harder to analyze what is right and what is wrong.
I don't really get your numbers, so I can't comment on that. But your example matrix multiplication sequence does not match the rule you stated yourself in your initial post. If I get the multiplication order right (column major means "back to front"), it should be
Offs2 * Node1 * Node2
Apart from that, I can't tell what's wrong here. Verify that, if the skeleton is in bind pose, each bone matrix is the identity. Also verify that you export the scene in bind pose. For a proper animation system this doesn't change a thing, but it makes your initial task of figuring out the details easier.
I also suggest examining this problem on paper. Remember that the offset matrix is the inverse "bone to mesh in bind pose" matrix. So it actually transforms from mesh space to bone space. If you concatenate the bone nodes from sub to parent, you get the "bone to mesh in current pose" matrix. If the current pose is the bind pose, the two should cancel each other out, resulting in the aforementioned identity matrix.
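That bind-pose sanity check is easy to put into a tiny test. A translation-only toy example with my own minimal matrix type (not Assimp's): if the offset matrix is the inverse of the accumulated node chain in bind pose, their product must come out as the identity.

```cpp
#include <cassert>
#include <cmath>

// Minimal row-major 4x4 with column-vector convention (v' = M*v).
struct Mat4 { double m[16]; };
Mat4 Identity() { Mat4 r{}; for (int i = 0; i < 4; ++i) r.m[i * 5] = 1.0; return r; }
Mat4 Translate(double x, double y, double z) {
    Mat4 r = Identity(); r.m[3] = x; r.m[7] = y; r.m[11] = z; return r;
}
Mat4 Mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            for (int k = 0; k < 4; ++k)
                r.m[row * 4 + col] += a.m[row * 4 + k] * b.m[k * 4 + col];
    return r;
}
bool IsIdentity(const Mat4& a) {
    for (int i = 0; i < 16; ++i)
        if (std::fabs(a.m[i] - (i % 5 == 0 ? 1.0 : 0.0)) > 1e-9) return false;
    return true;
}

// Bind pose: two nodes translated by (0,2,0) and (0,0,3). The accumulated
// chain is a translation by (0,2,3), and the offset matrix ("mesh space to
// bone space in bind pose") is its inverse, a translation by (0,-2,-3).
bool BindPoseCheck() {
    Mat4 chain = Mul(Translate(0, 2, 0), Translate(0, 0, 3));
    Mat4 offset = Translate(0, -2, -3);
    return IsIdentity(Mul(chain, offset)); // bone matrix in bind pose
}
```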
Oh, and… maybe it's already too late, but just to be sure we're using the same words:
bone matrix: the resulting transformation matrix you'd upload to the vertex shader. It transforms from mesh-space bind pose to mesh-space current pose.
offset matrix: the inverse matrix, transforming from mesh space to bone space in bind pose.
node matrix: a node's transformation matrix in relation to its parent node.
So you'd need to concatenate a node's transformation matrix with the transform of its parent, and so on down to the skeleton root. This is your "bone to mesh matrix in current pose". Put the offset matrix in front of this and you'd get "difference between bind pose and current pose in mesh space". And THIS is what you actually want, you'd use it to displace the vertices.
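"Using it to displace the vertices" then means a weighted sum per vertex, which a skinning shader performs per bone influence. A CPU-side sketch with toy types of my own (four influences per vertex is a common choice, not something the thread prescribes):

```cpp
#include <cassert>

struct Mat4 { double m[16]; }; // row-major, column-vector convention (v' = M*v)
struct Vec4 { double x, y, z, w; };

Mat4 Identity() { Mat4 r{}; for (int i = 0; i < 4; ++i) r.m[i * 5] = 1.0; return r; }
Mat4 Translate(double x, double y, double z) {
    Mat4 r = Identity(); r.m[3] = x; r.m[7] = y; r.m[11] = z; return r;
}
Vec4 MulVec(const Mat4& a, const Vec4& v) {
    return { a.m[0]*v.x + a.m[1]*v.y + a.m[2]*v.z  + a.m[3]*v.w,
             a.m[4]*v.x + a.m[5]*v.y + a.m[6]*v.z  + a.m[7]*v.w,
             a.m[8]*v.x + a.m[9]*v.y + a.m[10]*v.z + a.m[11]*v.w,
             a.m[12]*v.x + a.m[13]*v.y + a.m[14]*v.z + a.m[15]*v.w };
}

// Deform one bind-pose vertex by up to four bone matrices, weighted.
// The weights are assumed to sum to 1.
Vec4 SkinVertex(const Vec4& bindPos, const Mat4 bones[],
                const int index[4], const double weight[4]) {
    Vec4 out{0, 0, 0, 0};
    for (int i = 0; i < 4; ++i) {
        Vec4 p = MulVec(bones[index[i]], bindPos);
        out.x += weight[i] * p.x;
        out.y += weight[i] * p.y;
        out.z += weight[i] * p.z;
        out.w += weight[i] * p.w;
    }
    return out;
}
```

In bind pose every bone matrix is the identity (see above), so SkinVertex returns the vertex unchanged; that is a useful first test before animating anything.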
You can simplify the animation problem by avoiding skinned meshes and simply using node animations without the bones. In the end, a node animation is just that: it describes the change of a single node's transformation matrix over time. If you just stick a little example mesh to each of your skeleton nodes (or simply draw the skeleton using some debug lines), you can make the animation system work without the hassle of a skinning shader. If this works, then add the mesh deformation and the offset matrices and all the rest to the equation.
Hope that makes sense. Good luck.
Thanks for your help, I got it working now! Not only that, the skinning is working too, implemented in the shader.
But your example matrix multiplication sequence does not match the rule you stated yourself in your initial post. If I get the multiplication order right (column major means "back to front"), it should be
Offs2 * Node1 * Node2
Confirmed. You also mentioned something additional: that you sometimes have to use the inverse mesh transformation matrix. That, and the lines above, turned out to be the key.
Also verify that you export the scene in bind pose.
I am a beginner with Blender, and this was another fact that had me going crazy.
I also suggest to examine this problem on paper.
I downloaded Octave, a most helpful tool that has excellent support for matrix manipulations. Using that, it was easier to test and verify assumptions.
The example below now moves a bone from position 3 to 4, with a mesh transformation matrix (from the node tree)
1 0 0 0
0 0 1 0
0 -1 0 0
0 0 0 1
and an offset matrix of
1 0 0 0
0 1 0 -3
0 0 1 0
0 0 0 1
computing offs*inverse(mesh) gives:
1 0 0 0
0 1 0 -3
0 0 1 0
0 0 0 1
This is perfect, as it will exactly cancel a bone (giving the identity matrix) that is only moved 3 units. The open issue here is that I am using the offset matrix, not the inverse offset.
Multiplying with the animation bone
1 0 0 0
0 1 0 4
0 0 1 0
0 0 0 1
gives just the bone matrix I want (to be sent to the shader):
1 0 0 0
0 1 0 1
0 0 1 0
0 0 0 1
I am writing a detailed blog about this, which will hopefully help others.
If I get the multiplication order right (column major means "back to front")…
I think the concepts column-major and row-major are about the order in which the 16 numbers are stored in memory. If so, they only need to be taken into account when converting from one format to the other, and the matrix multiplication order stays the same.
I now know where my need for the inverse mesh matrix comes from. I had all vertices pre-multiplied with the mesh matrix, to be able to draw the model in bind pose. But the computation scene * armature * b1 * b2 * offs computes a new mesh transformation matrix, replacing the bind-pose matrix. That means I had to undo the pre-multiplication of the bind-pose mesh transformation.
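That reasoning can be checked with translation-only toy matrices (my own minimal types, not anyone's production code): if the baked vertex is mesh * v, then applying newChain * inverse(mesh) to it gives the same result as applying newChain to the original vertex.

```cpp
#include <cassert>

struct Mat4 { double m[16]; }; // row-major, column-vector convention (v' = M*v)
struct Vec4 { double x, y, z, w; };

Mat4 Identity() { Mat4 r{}; for (int i = 0; i < 4; ++i) r.m[i * 5] = 1.0; return r; }
Mat4 Translate(double x, double y, double z) {
    Mat4 r = Identity(); r.m[3] = x; r.m[7] = y; r.m[11] = z; return r;
}
Mat4 Mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            for (int k = 0; k < 4; ++k)
                r.m[row * 4 + col] += a.m[row * 4 + k] * b.m[k * 4 + col];
    return r;
}
Vec4 MulVec(const Mat4& a, const Vec4& v) {
    return { a.m[0]*v.x + a.m[1]*v.y + a.m[2]*v.z  + a.m[3]*v.w,
             a.m[4]*v.x + a.m[5]*v.y + a.m[6]*v.z  + a.m[7]*v.w,
             a.m[8]*v.x + a.m[9]*v.y + a.m[10]*v.z + a.m[11]*v.w,
             a.m[12]*v.x + a.m[13]*v.y + a.m[14]*v.z + a.m[15]*v.w };
}

// Vertices were baked with the bind-pose mesh matrix at load time; the
// animated chain replaces that matrix, so the baked transform must be
// undone first by multiplying in the inverse mesh matrix.
Vec4 AnimateBakedVertex(const Vec4& baked, const Mat4& newChain,
                        const Mat4& inverseMesh) {
    return MulVec(Mul(newChain, inverseMesh), baked);
}
```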