Hi. Just had a quick query about OpenSteer.
How applicable is the implementation of OpenSteer to a 3d environment? What differences are there?
Ideally, I would like to write a small piece of software that:
i) loads a number of 3d geometric characters (meshes modelled in a 3d package, e.g. 3ds Max, Maya, Lightwave) into a 'world' with a terrain (another mesh), and navigates the characters autonomously around it, maintaining foot contact with the terrain, detecting collisions and allowing for other behaviours.
The movement/locomotion of the characters would be simulated using animation clips for each character, previously animated with a keyframe approach.
Can anybody suggest where and how this problem differs from that considered by Opensteer?
Sorry if this sounds naive, but I am genuinely interested in implementing this, but am a little lost as to where to start. Any help would be great.
Thanks
Best Wishes
Richard Cannock
Richard Cannock asks: "How applicable is the implementation of OpenSteer to a 3d environment?"
It should be directly applicable. OpenSteer is based on 3D geometry, using 3d vectors and 3x3 rotation matrices. While most of the current demos constrain the vehicles to the XZ plane, this is not a limitation of OpenSteer as can be seen in the Boids demo.
Perhaps you are asking about integration with 3d articulated characters. This is certainly possible, but animating those characters is beyond the scope of OpenSteer. A likely scenario would be to integrate steering behaviors into an existing engine for animating articulated characters to create autonomous articulated characters. See an old example of that in the "Stuart/Bird/Fish" real-time PS2 demo in this video (http://www.red3d.com/cwr/temp/BirdFish_320x240.mov -- That used to be on the site of the R&D group where I work (http://www.research.scea.com/) but seems to have disappeared. I'll try to fix that.)
As to terrain following, that too has been done in combination with steering behaviors as in the PigeonPark PS2 demo (http://www.red3d.com/cwr/papers/2000/pip.html). On my short list of future projects for OpenSteer is a terrain following demo, but no promises about when that will be ready.
So the short answer is that there are no technical reasons why OpenSteer could not address the needs of your project, but there may be quite a lot of new code you would have to write to integrate OpenSteer to the rest of your software.
Thanks for the reply. From reading your paper and a related tutorial paper (from SIGGRAPH 99), I had thought that OpenSteer would be applicable to 3d applications.
>> Perhaps you are asking about integration with 3d articulated characters.
Yes, this is exactly what I was referring to. I realise that the animation of the characters themselves is outside the realm of OpenSteer, but I had envisaged having a number of pre-defined animation cycles (walk, run, fight) that could be blended and scaled (based on speed?), and that could be invoked at each point based on the behaviour indicated by OpenSteer.
>> As to terrain following, that too has been done in combination with steering behaviors as in the PigeonPark PS2 demo.
Could you expand a little? When you refer to terrain following, are you talking about waypoint following/path following, or ensuring that your characters are in contact with the terrain, e.g. positioned correctly on the Y axis?
Other than that, the main problem I have is that the 3d visualisation engine I plan to use is Lightwave 3d, and its plugin architecture is based around C, not C++.
Regards
Richard
Richard Cannock: "...I had envisaged having a number of pre-defined animation cycles (walk, run, fight) that could be blended, scaled (based on speed?), that could be called at each point, based on the behaviour indicated by openSteer..."
As you probably know, this general approach is widely used in games, and systems like Massive (http://www.massivesoftware.com/) use it for crowd scenes in movies. Despite making a passing reference to it in the 1999 paper, I do not have any personal experience using it in conjunction with steering behaviors. One possible approach is to steer an invisible "vehicle" near the animated character's position. At the end of each animated clip, a new clip would be selected based on the relative offset of the vehicle: if it is a bit ahead, use the "walk" animation; if it is far ahead, use "run"; if behind, use "slow down to stop"; if it's to either side, use the appropriate turn animation; and so on.
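A minimal sketch of that clip-selection idea. All thresholds and clip names here are made up for illustration; a real system would tune them per character and per clip set:

```cpp
#include <cmath>
#include <string>

// Select the next animation clip from the steered vehicle's offset,
// expressed in the character's local space (x = lateral, z = forward).
// Thresholds are arbitrary illustrative values.
std::string selectClip(float localX, float localZ)
{
    const float runRange = 1.0f;          // beyond this, the vehicle is "far ahead"
    if (localZ < -0.2f)           return "slow-to-stop";   // vehicle is behind
    if (std::fabs(localX) > 0.5f) return localX > 0.0f ? "turn-right" : "turn-left";
    if (localZ > runRange)        return "run";
    return "walk";
}
```

The character plays the chosen clip to completion, then re-samples the vehicle's offset and chooses again.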
The Stuart/Bird/Fish demo I mentioned yesterday took a very simple approach. We used a single walk cycle animation and then just transformed the walking character to be in the local space of the steered vehicle. The vehicle was given a speed to match the walk cycle and a minimum turning radius was enforced so the straight walk looked OK despite the curved path. Nothing was done to fix "foot skate" -- but since the character was walking on water (!) we could get away with a lot.
Richard Cannock: "...When you are referring to terrain following, are you talking about waypoint following/path following or ensuring that your characters are in contact with the terrain e.g. positioned correctly in the Y axis?..."
The latter. If you have direct access to the terrain model, this can be easily accomplished with a kinematic constraint: look up the terrain elevation for the character's horizontal position, then set the character's vertical (Y) position to that elevation. Except for extremely steep terrain, it is sufficient to steer in 3d then constrain the elevation after each update. Note that humans, animals and other "legged" systems tend to keep upright on inclined terrain, while wheeled vehicles tend to reorient so that their local Up axis is perpendicular to the terrain surface. There is already a "hook" in OpenSteer called SimpleVehicle::regenerateLocalSpace which is used now to provide banking for the Boids and could be used to implement various flavors of terrain following.
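The kinematic constraint described above fits in a few lines. The heightfield lookup here is a toy stand-in; a real implementation would interpolate between samples of the terrain mesh at (x, z):

```cpp
// Toy elevation lookup: a gentle slope along x. A real version would
// sample and interpolate the terrain mesh.
float terrainHeight(float x, float z)
{
    return 0.1f * x;
}

struct Position { float x, y, z; };

// Apply once after each steering update: steer fully in 3d, then
// clamp the character's vertical position to the terrain elevation.
void constrainToTerrain(Position& p)
{
    p.y = terrainHeight(p.x, p.z);
}
```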
Richard Cannock: "...Other than that, the main problem I have is that the 3d visualisation engine that I plan to use is Lightwave 3d, and it's plugin architecture is based around C, and not C++..."
I've been largely ignoring this issue, but if anyone has words of wisdom I'd be glad to hear them. The OpenSteer user can always write their own interface between C and C++ modules. In the end steering is expressed as a 3d steering force vector, or a transformation matrix for the vehicle. At worst you need to copy between 3 and 16 floats from one data structure to another. But it would be nice if OpenSteer had a better story to tell C programmers on this point.
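One conventional way to bridge this gap is a small `extern "C"` wrapper that exposes the C++ object behind an opaque pointer. The `Vehicle` class below is a trivial stand-in, not the real OpenSteer `SimpleVehicle`; the wrapper function names are likewise hypothetical:

```cpp
#include <cstring>

// Placeholder for a SimpleVehicle-like C++ class.
class Vehicle {
public:
    void update(float dt) { pos[0] += dt; }   // trivial stand-in behavior
    float pos[3] {0.0f, 0.0f, 0.0f};
};

// C-callable interface: opaque pointer in, plain floats out.
extern "C" {
    void* vehicle_create()                  { return new Vehicle; }
    void  vehicle_destroy(void* v)          { delete static_cast<Vehicle*>(v); }
    void  vehicle_update(void* v, float dt) { static_cast<Vehicle*>(v)->update(dt); }

    // Copy the 3-float position into a caller-supplied C array.
    void  vehicle_position(void* v, float out[3])
    {
        std::memcpy(out, static_cast<Vehicle*>(v)->pos, 3 * sizeof(float));
    }
}
```

A C plugin (such as one for Lightwave) would only see the `vehicle_*` functions and the opaque `void*`, never the C++ class.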
"I've been largely ignoring this issue, but if anyone has words of wisdom I'd be glad to hear them. The OpenSteer user can always write their own interface between C and C++ modules. In the end steering is expressed as a 3d steering force vector, or a transformation matrix for the vehicle. At worst you need to copy between 3 and 16 floats from one data structure to another. But it would be nice if OpenSteer had a better story to tell C programmers on this point."
Perhaps just store an OpenGL-compatible 4x4 matrix in the LocalSpace structure instead of single vectors. A method to hand out a pointer to this structure would then be needed, and the pointer could be passed directly to a glMultMatrix call.
It __might__ then be easier to write a C adapter function to interface with Lightwave... I don't know anything about Lightwave and its API, though...
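A sketch of that suggestion, assuming a local space holding side/up/forward basis vectors plus a position (the field names are illustrative, not OpenSteer's actual API). OpenGL expects column-major order, with the translation in elements 12-14:

```cpp
// Illustrative local-space layout: three basis vectors and a position.
struct LocalSpace {
    float side[3], up[3], forward[3], position[3];
};

// Pack into an OpenGL-style column-major 4x4 matrix, suitable for
// handing straight to glMultMatrixf.
void toGLMatrix(const LocalSpace& ls, float m[16])
{
    const float* cols[4] = { ls.side, ls.up, ls.forward, ls.position };
    for (int c = 0; c < 4; ++c) {
        for (int r = 0; r < 3; ++r)
            m[c * 4 + r] = cols[c][r];
        m[c * 4 + 3] = (c == 3) ? 1.0f : 0.0f;   // bottom row: 0 0 0 1
    }
}
```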
Cheers,
Bjoern
>> One possible approach is to steer an invisible "vehicle" near the animated character's position, at the end of each animated clip, a new clip would be selected based on the relative offset of the vehicle: if it is a bit ahead use the "walk" animation, if it is far ahead use "run", if behind use "slow down to stop", if its to either side use the appropriate turn animation, and so on.
Not quite sure why this level of "indirection", i.e. an offset between the character and the vehicle, is required. Why can't the character be in the exact position of the vehicle?
The only reason I can see is to prevent you needing to turn on the spot (i.e. jumps in the animation). Could you explain a bit?
>> The vehicle was given a speed to match the walk cycle and a minimum turning radius was enforced so the straight walk looked OK despite the curved path.
Is this minimum turning radius related to the above point, i.e. turning on the spot? That is, are both points about making the walk look realistic, rather than "stop", "turn on the spot", "go", etc.?
>> the character was walking on water (!) we could get away with a lot.
Yes, I imagine that helps a bit!
>> Nothing was done to fix "foot skate" --
As I understand it, foot skate is to do with interpolation between keyframes for the feet position. Surely therefore it is a question of the original walk cycle having the problem?
>> The OpenSteer user can always write their own interface between C and C++ modules.
Well, I suspect that this is the approach I will use, but until I have wrapped my head around the exact nature of the problem, and some of the related issues, I am not going to start design/implementation.
Finally, I have a question. There are repeated references to local space in the various bits of literature. A lot of operations seem to be based around transforming things into "local space". Not quite sure I understand this concept.
I'm sorry if my questions seem elementary. They are! It's been 10 years since I did any mathematics at a formal level, and so some of the physics and maths are a bit fuzzy!
Thanks
Richard
Me: "One approach is to steer an invisible vehicle near the animated character's position..."
Richard: "Not quite sure why this level of indirection...is required..."
There are two main ways to use pre-recorded locomotion animation clips with a procedurally animated character. (This comes up in both interactive games and behavioral animation (crowd scenes) for films.) A walk-cycle animation can include the global translation of the body, or not. If you motion capture someone walking across the floor you get the first kind. If you motion capture someone walking on a treadmill you get the second kind.
In the Stuart/Bird/Fish demo I mentioned, we used a walk cycle in the "treadmill frame of reference" then just placed that in the local space (see below!) of the invisible vehicle controlled by steering behaviors. This is straightforward to implement, but has potentially serious artifacts like "foot skate" since nothing prevents the character's weight-bearing foot from moving relative to the ground. (This can be fixed as a post-process step using inverse kinematics.)
On the other hand, using animation clips from the "global frame of reference" which include the character's motion, there is no foot skate. (If the foot stayed "planted" on the motion capture stage, it will stay planted in the virtual world.) The downside of this approach is that the character's motion is confined to a small set of fixed clips (move ahead 1.2 meters, turn to the right with radius 0.8, etc.) so the character's path resembles the snap-together track for toy trains. It was for this case that I suggested the offset invisible vehicle: approximating the unrestricted path of the vehicle with the restricted motion of the animated character.
Richard: "Finally, I have a question. There are repeated references to local space in the various bits of literature. A lot of operations seem to be based around transforming things into "local space". Not quite sure I understand this concept."
Yes, that terminology is a bit of a Craig-ism and dates back to my 1975 SB thesis. Many people talk about "transformation matrices" instead; it is an object-versus-operation distinction. A transformation matrix can specify the geometrical relationship between (say) an airplane and the world. Multiplying a local vector by the matrix transforms it into global space (globalize); multiplying a global vector by the inverse (*) matrix transforms it into local space (localize). Local space refers to the perspective of someone inside the airplane, like the pilot. Distances and angles measured from the plane are relative to its orientation, as are its steering mechanisms (like flaps and rudders): hence an obstacle to the left in the plane's local space implies steering to the plane's right, etc.
(*) geek aside: OK, for a pure rotation matrix the inverse is just its transpose
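A minimal 2d (XZ-plane) illustration of globalize/localize, using a single heading angle in place of a full 3x3 rotation matrix (this is a toy reduction of the idea, not OpenSteer's actual representation):

```cpp
#include <cmath>

// A vehicle's frame: heading angle theta (radians) and position (px, pz).
struct Frame { float theta, px, pz; };

// local -> global: rotate by theta, then translate.
void globalize(const Frame& f, float lx, float lz, float& gx, float& gz)
{
    gx = std::cos(f.theta) * lx - std::sin(f.theta) * lz + f.px;
    gz = std::sin(f.theta) * lx + std::cos(f.theta) * lz + f.pz;
}

// global -> local: undo the translation, then apply the inverse rotation
// (for a pure rotation, the inverse is the transpose, i.e. rotate by -theta).
void localize(const Frame& f, float gx, float gz, float& lx, float& lz)
{
    const float dx = gx - f.px, dz = gz - f.pz;
    lx =  std::cos(f.theta) * dx + std::sin(f.theta) * dz;
    lz = -std::sin(f.theta) * dx + std::cos(f.theta) * dz;
}
```

Localizing an obstacle's global position, for instance, immediately tells the vehicle whether the obstacle is to its left or right.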
Ok, so I understand the two different animation cycles you can record, essentially 'on the spot' or not, but:
i) with the globally translated version, surely the feet only stay rooted to the floor if your virtual terrain matches the motion capture terrain? Otherwise if the terrain elevation in the virtual world is different, then surely you are back to square one?
ii) why does using the global reference approach limit your number of clips? Surely you can just repeat the motion again, or blend? I am not sure why the character's movement is restricted, and hence why you can't just use the character as the vehicle.
iii) could you expand on minimum turning radius?
iv) I understand (I think!) the local vs global space issue, but why not work in global space? Surely it is unambiguous?
v) Finally, you mentioned that IK can be used to fix foot skating as a post process. Are there any references you know for this.
Thanks again for your patience and help. In addition to picking your brains, I am just about to order copies of AI Programming Wisdom and Game Programming Gems, and have started to re-read Computer Graphics by Hearn, so hopefully my brain will start working again (assuming it ever did) soon!
Richard
Richard:
i) with the globally translated version, surely the feet only stay rooted to the floor if your virtual terrain matches the motion capture terrain? Otherwise if the terrain elevation in the virtual world is different, then surely you are back to square one?
For "gently rolling terrain" where the character is basically walking (as opposed to rock climbing, or leaping over gullies), it is usually sufficient to take a normal walk cycle and displace it vertically to match the elevation of the terrain at its location. Of course a human's posture is somewhat different when walking on the level versus up/down a hill, but it is often "visually acceptable" to use a walk cycle from level terrain, vertically displaced to the terrain. It's not perfect, but neither are the alternatives, like full physically based locomotion with the character actively controlling its joints to remain upright and balanced, like a robot.
Richard:
ii) why does using the global reference approach limit your number of clips. Surely you can just repeat the motion again, or blend? I am not sure why the characters movement is restricted, and hence why you can't just use the character as the vehicle.
You know, this topic is *really* not related to steering behaviors, so you probably want to pursue it elsewhere. Yes, you can blend, although that introduces artifacts of its own. The advantage of motion capture is that it looks realistic; the disadvantage is that it is hard to control.
Richard:
iii) could you expand on minimum turning radius?
In the SimpleVehicle used in OpenSteer's demos, two parameters characterize the vehicle's style of motion: its maximum speed and the maximum (magnitude of) acceleration that can be applied. Taken together these imply the vehicle's minimum turning radius at top speed.
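Since a steady turn at speed v under centripetal acceleration a has radius r = v²/a, the implied minimum turning radius at top speed can be computed directly. This is standard physics applied to those two parameters, not a quote of a specific OpenSteer function:

```cpp
// Tightest steady turn at top speed: a = v^2 / r, so r = v^2 / a.
// maxForce / mass gives the maximum acceleration (unit mass by default).
float minTurningRadius(float maxSpeed, float maxForce, float mass = 1.0f)
{
    const float maxAccel = maxForce / mass;
    return (maxSpeed * maxSpeed) / maxAccel;
}
```

For example, a vehicle with a top speed of 10 m/s and a maximum acceleration of 5 m/s² cannot hold a turn tighter than 20 m at full speed; slowing down tightens the achievable turn.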
Richard:
iv) I understand (I think!) the local vs global space issue, but why not work in Global space. Surely it is unambigos?
Ambiguity is not the issue; I'd say it's more an issue of an appropriate and convenient representation for the problem at hand. Of course either can be used, because they can be easily converted back and forth. But consider how intuitive it is to drive a car from inside versus how hard it can be to drive a radio-controlled car from the global frame of reference (e.g. it is heading towards you, so its left is opposite your left).
Richard:
v) Finally, you mentioned that IK can be used to fix foot skating as a post process. Are there any references you know for this.
Here are a couple. From the CiteSeer pages for these papers, follow links to related work that sounds relevant to your needs:
Footskate Cleanup for Motion Capture Editing
http://www.cs.wisc.edu/graphics/Gallery/Kovar/Cleanup/
http://citeseer.nj.nec.com/kovar02footskate.html
Motion Path Editing
http://citeseer.nj.nec.com/gleicher01motion.html
Sorry to be a pain, but could you have a quick look at my last queries? Then I'll leave you alone for a bit ;-)