Dave McClurg wrote:
> I'm not clear on how to implement "vertex blending" in ssg such that
> vertices are not connected to one single bone, but to *all* bones at once.
> What data type and dimension is this weight array you mention and where does
> it refer ?
Well, the hardware out there only allows two matrices to be applied to each
vertex. That's OK for the elbows, knees, wrists and ankles - but the character
animation gurus tell me that (for example) a vertex somewhere between shoulder,
neck and lower back would have to be affected by at least three matrices - and
they can come up with some pathological cases where many more are needed... facial
animation certainly comes to mind here. Bear in mind that the 'bones' we are
talking about here are not necessarily anatomical bones - we'd have eyebrow
bones, bones in the hair and bones in loose clothing.
In the limit, I think what they'd really like (as you say) is to have ALL
matrices that make up the character simultaneously apply to every vertex in
the character somehow. I greatly dislike that idea from a data locality/culling/etc
standpoint...everything is wrong about it from a rendering perspective.
If we didn't have to consider hardware issues, we would have an array of pointers
to matrices in the leaf nodes - with a matching number of per-vertex 'weight'
tables. Each vertex in one of these special leaf nodes would have to be transformed
by each of the matrices in turn and then each result weighted and added. (Or whatever
black magic we decide is trendy.)
This is pretty costly - so we'd probably want to test if one or other matrix
has a zero weight and skip the extra matrix multiplies if we can for things
like vertices that are in the middle of a bone.
Clearly, the more simultaneous matrices you want to allow, the more 'weight'
arrays there are, the more searching for non-zero weights - and the more
matrix * vertex multiplications you have to do.
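That transform-weight-and-add loop, with the zero-weight skip, might look
something like this minimal sketch (Vec3/Mat4/blendVertex are made-up names
for illustration; real SSG code would likely use the sg library's
sgVec3/sgMat4 types instead):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>

// Hypothetical types -- stand-ins for SSG's own vector/matrix classes.
struct Vec3 { float x, y, z; };

struct Mat4 {
  float m[3][4];  // rows of an affine transform: [ linear part | translation ]
  Vec3 xform(const Vec3& v) const {
    return { m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3],
             m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3],
             m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3] };
  }
};

// Transform the vertex by each bone matrix in turn, weight each result
// and add -- skipping any matrix whose weight is zero, which is the
// common case for vertices in the middle of a bone.
Vec3 blendVertex(const Vec3& v, const Mat4* bones,
                 const float* weights, std::size_t nbones) {
  Vec3 out = { 0.0f, 0.0f, 0.0f };
  for (std::size_t i = 0; i < nbones; ++i) {
    if (weights[i] == 0.0f) continue;   // skip the extra matrix multiply
    Vec3 t = bones[i].xform(v);
    out.x += weights[i] * t.x;
    out.y += weights[i] * t.y;
    out.z += weights[i] * t.z;
  }
  return out;
}
```

With n bones whose weights are mostly zero for any given vertex, the skip
keeps the per-vertex cost close to the one- or two-matrix case.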
I haven't studied how the hardware handles this in the case of GeForce cards.
What's even *MORE* complex is the new 'vertex pipeline' stuff that nVidia have
come out with (only under Windoze so far) - I imagine that allows yet more
complex matrix/vertex interactions...but I haven't even begun to look into what
it does for you in practice.
Some other issues are:
* Do you bother to recompute the vertex normals? Strictly, you should.
But the costs are insane! Will anyone ever notice?
* Do you do anything to colours and texture coordinates as the vertex
is blended?
* Can this work with VIPM detail reduction, etc?
> > To have the character 'walk' or 'shoot' (and also to shoot whilst
> > walking),
> > the game still has to figure out how to move the bones - and for that you
> > could use pre-canned animations, motion capture data, inverse kinematics,
> > neural networks, etc, etc....however, that doesn't concern the graphics
> > side of things - we just have to worry about how to generate the
> > right skin
> > vertices from the modelled figure and the bone positions supplied by the
> > application.
> yes. but the data pipeline is of utmost importance. I'm working on some
> rigid bone animation using TM_ANIMATION key-framing from 3dsmax. Even if
> ssg supported vertex blending, i'm not sure how I would get that information
> out of 3dsmax. it's probably there but i'm not seeing it right now. can
> anyone help?
Well, one way is to paint each vertex so that its colour tells you how much
of each matrix to apply. So for the 'human arm' case, the 'hand' bone could be
blue, the forearm green and the upper arm red. You'd then blend the colours
so that the vertices at the elbow are orange, yellow and lime - and the wrist
vertices are cyan. Your loader then converts those colours into the
three 'weight' arrays - and the vertices are forced back to white or something.
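That loader conversion could be sketched like this (a hypothetical helper,
not real SSG API; the red/green/blue-to-bone mapping is just the arm example
above):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical loader step: a painted vertex colour (r,g,b) encodes the
// blend weights for three bones (red = upper arm, green = forearm,
// blue = hand).  Normalise so the weights sum to 1; the vertex colour
// itself can then be forced back to white for rendering.
struct Weights3 { float w[3]; };

Weights3 colourToWeights(float r, float g, float b) {
  float sum = r + g + b;
  Weights3 out = { { 0.0f, 0.0f, 0.0f } };
  if (sum > 0.0f) {               // avoid dividing by zero on black vertices
    out.w[0] = r / sum;
    out.w[1] = g / sum;
    out.w[2] = b / sum;
  }
  return out;
}
```

An 'orange' elbow vertex (1, 0.5, 0) comes out weighted two-thirds toward
the upper arm and one-third toward the forearm, which is the intended effect.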
Another way would simply be to run a tool that assigns weights according to
distance from a hand-modelled "bone" polygon. Vertices all along the forearm
would be an inch or so away from the central bone - but the ones around the
elbow would be further away, because you'd end the bone a little before where
the anatomical bone ends.
I don't know details of which would be best - but that's not a PLIB issue - it's
for your tools.
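A weight-assignment tool along those lines might look something like this
sketch (all names hypothetical; a simple inverse-distance falloff stands in
for whatever rule the real tool would use, and line segments stand in for the
hand-modelled bone polygons):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct V3 { float x, y, z; };          // hypothetical point type

struct Bone { V3 start, end; };        // a bone modelled as a line segment

// Distance from point p to the segment [a,b].
float pointSegmentDist(const V3& p, const V3& a, const V3& b) {
  V3 ab = { b.x - a.x, b.y - a.y, b.z - a.z };
  V3 ap = { p.x - a.x, p.y - a.y, p.z - a.z };
  float len2 = ab.x*ab.x + ab.y*ab.y + ab.z*ab.z;
  float t = (len2 > 0.0f) ? (ap.x*ab.x + ap.y*ab.y + ap.z*ab.z) / len2 : 0.0f;
  if (t < 0.0f) t = 0.0f;
  if (t > 1.0f) t = 1.0f;               // clamp to the segment's endpoints
  V3 q = { a.x + t*ab.x, a.y + t*ab.y, a.z + t*ab.z };
  float dx = p.x - q.x, dy = p.y - q.y, dz = p.z - q.z;
  return std::sqrt(dx*dx + dy*dy + dz*dz);
}

// One weight per bone, inversely proportional to the vertex's distance
// from that bone, normalised so the weights sum to 1.
std::vector<float> weightsForVertex(const V3& v, const std::vector<Bone>& bones) {
  std::vector<float> w(bones.size());
  float sum = 0.0f;
  for (std::size_t i = 0; i < bones.size(); ++i) {
    float d = pointSegmentDist(v, bones[i].start, bones[i].end);
    w[i] = 1.0f / (d + 1e-6f);          // closer bone -> bigger weight
    sum += w[i];
  }
  for (std::size_t i = 0; i < w.size(); ++i) w[i] /= sum;
  return w;
}
```

Vertices along the middle of the forearm end up dominated by the forearm
bone, while a vertex near the elbow - further from both shortened bones -
picks up a more even share of each.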
One of the incentives to work on PrettyPoly is that once we have it working, it'll
be running SSG internally and we'll be able to add widgets to directly edit vertex
weights *AND* you'll be able to see the effect of that directly and immediately.
> if anyone is wondering why i don't just use frame-by-frame mesh animations
> instead of skeletal animation it is because i simply don't have the RAM to
> store it on a game console.
Yes - certainly.
...and there are LOTS of other reasons...you can invent animation on-the-fly
with inverse kinematics - you can have smoother motion - you get to use the
same animation data for multiple human characters - even if they are quite
different in size. Getting this kind of animation directly from human
performers with body suits, etc is also a lot easier.
Steve Baker
HomeEmail: <sjbaker1@...>
HomePage : http://web2.airmail.net/sjbaker1
Projects : http://plib.sourceforge.net