
VirtualRobot

Nikolaus Vahrenkamp, Valerij Wittenbeck

Once a robot model and/or scene description has been created (see Build a robot model), the data can be loaded and accessed via the VirtualRobot library. The C++ API offers convenient access, which is described in the following sections. Further information can be obtained by having a look at the source code of the examples that are provided with the library.

Preliminaries

We start with a brief introduction, showing how some internals are organized.

Shared Pointers

Simox uses boost's shared pointers (also called smart pointers) in order to avoid memory leaks. If you are not familiar with shared pointers, have a look at the official boost reference page: http://boost.org/libs/smart_ptr/smart_ptr.htm. To improve the readability of the source code, the following commonly used typedef is used for defining pointers:

typedef boost::shared_ptr<class> classPtr;

Hence, the pointer to an instance of a VirtualRobot::Robot object can be written as VirtualRobot::RobotPtr (instead of the long version boost::shared_ptr<VirtualRobot::Robot>). Please note that you never need to delete such a pointer, since the smart pointer's reference counting system does this for you.
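
For instance, loading a robot (shown in detail below) yields a RobotPtr that cleans up after itself:

VirtualRobot::RobotPtr robot = VirtualRobot::RobotIO::loadRobot(filename);
// no manual delete: the robot is destroyed automatically when the
// last RobotPtr referencing it goes out of scope (or is reset)
robot.reset();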

Matrices and Poses

Internally, poses (which consist of a position and an orientation) are handled as homogeneous 4x4 matrices. Therefore, the Eigen::Matrix4f type is used. Further information can be found at http://eigen.tuxfamily.org.
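
For example, a pose can be assembled from a rotation and a translation with plain Eigen calls (a minimal sketch; note that Simox specifies translations in millimeters):

#include <cmath>
#include <Eigen/Geometry>

// a pose translated 100 mm along x and rotated 90 degrees about z
Eigen::Matrix4f pose = Eigen::Matrix4f::Identity();
pose.block<3,3>(0,0) = Eigen::AngleAxisf(0.5f * static_cast<float>(M_PI), Eigen::Vector3f::UnitZ()).toRotationMatrix();
pose.block<3,1>(0,3) = Eigen::Vector3f(100.0f, 0.0f, 0.0f); // translation in mm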

Namespaces

In Simox three namespaces are defined:

  • VirtualRobot
  • Saba
  • GraspStudio

VirtualRobot::RuntimeEnvironment

Several paths to data files are defined during the compilation process (e.g. the main data directory SimoxDir/VirtualRobot/data, containing robot models, obstacles and precomputed grasp sets). Furthermore, it is possible to add custom paths to the list of data paths in order to avoid absolute path names during execution. A data path can be set with

VirtualRobot::RuntimeEnvironment::addDataPath("/path/to/add");

The list of all paths can be retrieved with

std::vector<std::string> allDataPaths = VirtualRobot::RuntimeEnvironment::getDataPaths();

In order to specify search paths from the command line, the built-in command line parsing of Simox can be used. When starting your application with

./yourBinary --data-path "my/path/to/data"

the specified path will automatically be added to the list of search paths, provided you process the command line as follows:

int main(int argc, char *argv[])
{
    VirtualRobot::RuntimeEnvironment::processCommandLine(argc,argv);
    // ...
}

If you want to pass custom flags to your program, you can use the RuntimeEnvironment methods as well. Have a look at the following example:

int main(int argc, char *argv[])
{
    VirtualRobot::RuntimeEnvironment::considerKey("robot");
    VirtualRobot::RuntimeEnvironment::processCommandLine(argc,argv);
    VirtualRobot::RuntimeEnvironment::print();
    std::string filename(VR_BASE_DIR "/data/robots/ArmarIII/ArmarIII.xml");
    if (VirtualRobot::RuntimeEnvironment::hasValue("robot"))
    {
        std::string robFile = VirtualRobot::RuntimeEnvironment::getValue("robot");
        // check if the path is absolute or if it can be resolved via the registered data paths
        if (VirtualRobot::RuntimeEnvironment::getDataFileAbsolute(robFile))
            filename = robFile;
    }
    std::cout << "Using robot file " << filename << std::endl;
    //...
}

VirtualRobot::SceneObject

All objects in VirtualRobot that may have a visualization (e.g. Robots, RobotNodes, Obstacles) are derived from the class SceneObject, which allows a hierarchical scene representation. SceneObjects may have children, which are automatically updated/positioned when the parent is moved.
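
A minimal sketch of this mechanism (assuming your Simox version provides SceneObject::attachChild; parentObject, childObject and newPose are placeholder names):

parentObject->attachChild(childObject); // childObject now follows parentObject
parentObject->setGlobalPose(newPose);   // childObject's global pose is updated automatically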

Accessing a Robot

A robot can be loaded as follows:

VirtualRobot::RobotPtr robot;
try
{
    robot = VirtualRobot::RobotIO::loadRobot(filename);
} catch (VirtualRobot::VirtualRobotException &e)
{
    std::cout << "Error: " << e.what() << std::endl;
}

RobotNodes: Accessing joints

A RobotNode can be retrieved with

VirtualRobot::RobotNodePtr rn = robot->getRobotNode("name of robot node");

The joint value can be set with

rn->setJointValue(myJointValueRadian);

The joint value can be retrieved with

float v = rn->getJointValue();

The pose of the RobotNode's coordinate system can be retrieved with

Eigen::Matrix4f p = rn->getGlobalPose();

RobotConfig: Configurations of the robot

A RobotConfig represents a name/value map of joint values.

// Consider all RobotNodes
RobotConfigPtr c = robot->getConfig();
c->setConfig("joint 1", 0.5f);
robot->setConfig(c);

RobotNodeSets

A set of RobotNodes defined in the robot's XML file can be accessed with

VirtualRobot::RobotNodeSetPtr rns = robot->getRobotNodeSet("name of robot node set");
std::vector<float> jointValues = rns->getJointValues();
rns->setJointValues(jointValues);

When a kinematic chain is defined via a RobotNodeSet, its TCP node can be retrieved:

VirtualRobot::RobotNodePtr tcp = rns->getTCP();
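
Combining these calls yields simple forward kinematics (a sketch; the joint values are placeholders):

std::vector<float> q(rns->getSize(), 0.1f); // hypothetical joint values (radians)
rns->setJointValues(q);
Eigen::Matrix4f tcpPose = tcp->getGlobalPose(); // resulting TCP pose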

Coordinate Transformations

Usually, poses of objects and RobotNodes are given in the global coordinate system, but they can be transformed to the local coordinate systems of other objects and vice versa. E.g., an object's position can be transformed to the hand coordinate system:

Eigen::Matrix4f localObjectPose = robotNodeHand->toLocalCoordinateSystem(poseObjectGlobal);

In the following example, the visually localized pose of a hand, given in the camera coordinate system located in the left eye, is transformed to the global coordinate system:

Eigen::Matrix4f globalHandPose = robotNodeEyeLeft->toGlobalCoordinateSystem(localHandPose);

End Effectors

An end effector, defined in the XML file of a robot, can be accessed as follows:

VirtualRobot::EndEffectorPtr eef = robot->getEndEffector("name of eef");

This eef can be closed, while self-collisions are considered:

std::vector<VirtualRobot::EndEffector::ContactInfo> contacts = eef->closeActors();

While grasping, one or multiple obstacles can be considered:

std::vector<VirtualRobot::EndEffector::ContactInfo> contacts = eef->closeActors(obstacle);
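
A typical open/close cycle can be sketched as follows (assuming the EndEffector::openActors method of your Simox version):

eef->openActors(); // move all actors back to their opening position
std::vector<VirtualRobot::EndEffector::ContactInfo> contacts = eef->closeActors(obstacle);
std::cout << "Number of contacts: " << contacts.size() << std::endl;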

A model of iCub's left hand. The model after executing the ''close'' command: the fingers are closed until self contact is determined.
The object is considered for collisions and all contacts with the object are visualized.

Obstacles and Grasps

Simple obstacles can be created as follows:

VirtualRobot::ObstaclePtr box = VirtualRobot::Obstacle::createBox(30.0f,30.0f,30.0f); // in mm

They can be positioned via homogeneous matrices:

box->setGlobalPose(matrix4x4);
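
For instance (a minimal sketch with arbitrary translation values):

Eigen::Matrix4f pose = Eigen::Matrix4f::Identity();
pose.block<3,1>(0,3) = Eigen::Vector3f(200.0f, 0.0f, 500.0f); // translation in mm
box->setGlobalPose(pose);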

Obstacles that can be grasped are called ManipulationObjects; they offer the (optional) possibility to define grasps for a specific robot/end effector.

VirtualRobot::ManipulationObjectPtr object = VirtualRobot::ObjectIO::loadManipulationObject("filename.xml");

The corresponding grasps can be retrieved with:

VirtualRobot::GraspSetPtr grasps = object->getGraspSet("myRobot", "myEEF");

A grasp can be accessed as follows:

VirtualRobot::GraspPtr g = grasps->getGrasp(0);
VirtualRobot::GraspPtr g2 = grasps->getGrasp("name of grasp");

A grasp basically defines an object-related pose of the end effector, where the pose of the end effector is given in its TCP coordinate system. To get the resulting pose for the TCP when the object is located at pose mObject, you can use the following method:

Eigen::Matrix4f mTCP = g->getTcpPoseGlobal(mObject);

If you want to compute the pose that an object has to be set to so that the grasp g can be applied, you need to pass a robot, which is used to retrieve the current configuration:

Eigen::Matrix4f mObject = g->getTargetPoseGlobal(robot);
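
Putting both directions together (a usage sketch, built only from the calls shown above):

// place the object at a known pose, then ask where the TCP has to go for grasp g
object->setGlobalPose(mObject);
Eigen::Matrix4f tcpTarget = g->getTcpPoseGlobal(object->getGlobalPose());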

Scenes are a collection of robots, obstacles, and ManipulationObjects. A scene can be loaded and accessed in the following way:

VirtualRobot::ScenePtr scene = VirtualRobot::SceneIO::loadScene("scenefile.xml");

// access robots
std::vector<VirtualRobot::RobotPtr> robots = scene->getRobots();
VirtualRobot::RobotPtr robot = scene->getRobot("name of robot");

// get all configurations for a specific robot
std::vector<VirtualRobot::RobotConfigPtr> configs = scene->getRobotConfigs(robot);

// get all Obstacles/ManipulationObjects
std::vector<VirtualRobot::ManipulationObjectPtr> manipObjects = scene->getManipulationObjects();
std::vector<VirtualRobot::ObstaclePtr> obstacles = scene->getObstacles();

// get sets of SceneObjects (== obstacles or manipulationobjects)
std::vector<VirtualRobot::SceneObjectSetPtr> objectSets = scene->getSceneObjectSets();

A wok with several grasps defined for ARMAR-III's right hand. A scene with two robots and objects.

Visualization

Depending on the cmake setup, different visualization engines can be used. Currently Coin3D is fully supported. Additionally, an exemplary implementation for OpenSceneGraph is provided, but not all visualization features are supported right now.

To use Coin3D's Open Inventor API, we recommend a setup together with Qt and SoQt (a Coin3D library that allows Open Inventor scenes to be rendered easily to a Qt widget).
For this example, we assume that you have created a Qt UI class (e.g. with QtDesigner) that holds an empty Qt::Frame named frameViewer.

The following code examples are taken from the VirtualRobot example RobotViewer.

  • The following variables are needed to set up your environment. Coin3D's SoSeparators are used to hold the Open Inventor data, and the SoQtExaminerViewer class from the SoQt library is used to embed a Qt widget for rendering.
    SoSeparator *sceneSep;
    SoSeparator *robotSep;
    boost::shared_ptr<VirtualRobot::CoinVisualization> visualization;
    SoQtExaminerViewer *viewer;
  • Initialization
    sceneSep = new SoSeparator;
    sceneSep->ref();
    robotSep = new SoSeparator;
    sceneSep->addChild(robotSep);
    viewer = new SoQtExaminerViewer(UI.frameViewer,"",TRUE,SoQtExaminerViewer::BUILD_POPUP);
    // the following setting is needed if you want to visualize highlighted parts of the robot
    viewer->setGLRenderAction(new SoLineHighlightRenderAction);


    // some optional parameters
    viewer->setBackgroundColor(SbColor(1.0f, 1.0f, 1.0f));
    viewer->setAccumulationBuffer(true);
    viewer->setAntialiasing(true, 4);
    viewer->setTransparencyType(SoGLRenderAction::SORTED_OBJECT_BLEND);
    viewer->setFeedbackVisibility(true);
    viewer->setSceneGraph(sceneSep);
    viewer->viewAll();
  • A loaded robot can be added to the visualization as follows:
    SceneObject::VisualizationType visuMode = SceneObject::Full;
    visualization = robot->getVisualization<CoinVisualization>(visuMode);
    SoNode* visualisationNode = visualization->getCoinVisualization();
    robotSep->addChild(visualisationNode);

OpenSceneGraph

A robot and objects can be visualized in a Qt window using the VirtualRobot::osgViewerWidget class. This widget supports OSG rendering and can be embedded in a Qt::Frame (you can set up your window with QtDesigner and add an empty frame that is accessed from the code).

The following code examples are taken from the VirtualRobot example RobotViewerOSG.

  • For setting up your environment, use the following variables:
    osg::Group* osgRoot;     
    osg::Group* osgRobot;
    VirtualRobot::osgViewerWidget* osgWidget;
    boost::shared_ptr<VirtualRobot::OSGVisualization> visualization;
  • Initialization
    osgRoot = new osg::Group();
    osgRobot = new osg::Group();
    osgRoot->addChild(osgRobot);
    osgWidget = new VirtualRobot::osgViewerWidget(osgRoot,UI.frameViewer);
    osgWidget->show();
  • A loaded robot can be added to the visualization as follows:
    SceneObject::VisualizationType visuMode = SceneObject::Full;
    visualization = robot->getVisualization<OSGVisualization>(visuMode);
    osg::Node* visualisationNode = visualization->getOSGVisualization();
    osgRobot->addChild(visualisationNode);

Collision Detection

Collision Detection is a crucial part of any sampling-based motion planning algorithm. Hence, Simox takes special care of efficiently handling collision and distance queries.

Collision Engine

Simox allows the collision detection engine to be exchanged. By default, a slightly updated version of the efficient and reliable library PQP is used. The sources are shipped with Simox and the compilation process is handled by cmake, so there is no need to install PQP on your own. An advantage of PQP is that no prerequisites have to be met by the model representation as long as it exists as a set of triangles; PQP handles unordered sets of triangles with high efficiency.

If you intend to use a different collision engine, have a look at the sources in VirtualRobot/CollisionDetection.

Collision Checker

In Simox a global collision checker singleton exists that can be accessed as follows:

VirtualRobot::CollisionCheckerPtr collisionChecker = VirtualRobot::CollisionChecker::getGlobalCollisionChecker();

If you do not plan to implement multi-threaded collision queries, this collision checker object is all you need.
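If you do need per-thread collision checking, a sketch could look as follows (assuming your Simox version offers Robot::clone with a dedicated collision checker; treat this as an assumption, not a fixed API):

// each thread needs its own collision checker and its own copies of the models
VirtualRobot::CollisionCheckerPtr threadChecker(new VirtualRobot::CollisionChecker());
VirtualRobot::RobotPtr robotClone = robot->clone("robot-thread-1", threadChecker);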

Collision Models

All models (objects, obstacles, robots, RobotNodes, etc.) in Simox have two 3D representations: one for visualization (high resolution) and one for collision detection purposes (low resolution). You can visualize the models as well as the internal triangulated data structure by passing the corresponding VirtualRobot::SceneObject::VisualizationType flag (Full, Collision, CollisionData) to the visualization routines. The collision models are usually defined in the robot's or object's XML description file.
You can access them with:

VirtualRobot::CollisionModelPtr colModel = object->getCollisionModel();
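
For example, to render the low resolution collision model instead of the full model, pass the Collision flag to the visualization call shown in the Visualization section:

visualization = robot->getVisualization<CoinVisualization>(SceneObject::Collision);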

Collision and Distance Queries

The collision checker can be asked if two models are in collision or not:

bool inCollision = collisionChecker->checkCollision(colModel1,colModel2);

Additionally, the shortest distance between the two surfaces can be queried. The following code fragment shows how to get the shortest distance and the corresponding points on the object's surface:

Eigen::Vector3f surface1,surface2;
float dist = collisionChecker->calculateDistance(colModel1,colModel2,surface1,surface2);

Instead of using the collision models, SceneObjects (and all derived objects like Obstacles and RobotNodes) can be directly passed to the collision checker:

VirtualRobot::ObstaclePtr obstacle = VirtualRobot::Obstacle::createBox(100.0f,100.0f,100.0f);
obstacle->setGlobalPose(Eigen::Matrix4f::Identity());
VirtualRobot::RobotNodePtr robotNode = robot->getRobotNode("Joint1");
bool inCollision = collisionChecker->checkCollision(robotNode,obstacle);

When operating with sets of objects (like kinematic chains, or a bunch of obstacles) you can use SceneObjectSets and RobotNodeSets as well:

VirtualRobot::SceneObjectSetPtr obstacles = scene->getSceneObjectSet("All Obstacles");
VirtualRobot::RobotNodeSetPtr kinChain = robot->getRobotNodeSet("Left Arm");
float minDist = collisionChecker->calculateDistance(kinChain,obstacles);

Complex Collision Setups

Complex collision queries can be handled with the VirtualRobot::CDManager class. Here, multiple pairs of objects can be specified, all of which are checked for validity. This can be useful when multiple parts of a robot have to be considered, e.g. for motion planning.

In the following example, an obstacle and a robot node are selected for collision detection. Additionally, any self-collisions between the kinematic structures of the arm and the torso and between the arm and the head are considered.

VirtualRobot::CDManagerPtr cdm(new VirtualRobot::CDManager());
cdm->addCollisionPair(obstacle,robotNode);
cdm->addCollisionPair(rnsArm,rnsTorso);
cdm->addCollisionPair(rnsArm,rnsHead);
bool inCollision = cdm->isInCollision();

Workspace Analysis: Reachability and Manipulability

A representation of the spatial reachability of a robot's manipulator can be precomputed in order to efficiently search for reachable grasps/poses during online processing. To this end, a 6D voxel grid is filled, either with binary values (reachable/not reachable) or with quality information that indicates how well a pose can be reached. E.g., the manipulability can be used as such a quality measure.

Currently, two derivations of the workspace analysis base class WorkspaceRepresentation exist:

  • Reachability: This implementation can be used to encode reachable 6D poses with a simple quality measure which is related to the volume in joint space that maps to a voxel.
    • Creation
      Assuming a robot and a RobotNodeSet rns exist, an empty instance of the reachability can be created as follows:
        ReachabilityPtr reachSpace(new Reachability(robot));
        reachSpace->initialize(rns,discrTr,discrRo,minB,maxB,staticModel,dynamicModel,baseNode,tcpNode);
        reachSpace->print();
    • Filling with data and IO
      The reachability representation is initialized with two discretization parameters: discrTr, the voxel size of the translational dimensions, and discrRo, the discretization of the three rotational dimensions. Furthermore, the bounding box and, in case self-collisions should be considered, a setup for collision detection are passed.

      The reachability data can be filled by adding randomly generated robot poses (e.g. one million poses):

        reachSpace->addRandomTCPPoses(1000000);
        reachSpace->print();
        reachSpace->save("myReachFile.bin");
        reachSpace.reset(new Reachability(robot));
        reachSpace->load("myReachFile.bin");
        reachSpace->print();
    • Querying
      The reachability data can be queried extremely fast. Here is an example:
        // get all grasps that are reachable
        ManipulationObjectPtr object; // an object
        // load object ....
        GraspSetPtr reachableGrasps = reachSpace->getReachableGrasps(allGrasps, object); 
    • Demo
      A reference implementation of reachability analysis can be found in VirtualRobot's example directory. The `ReachabilityDemo` tool can be used to build and visualize reachability data for custom robots and manipulators. For example, the tool can show myRobot.xml with a custom reachability file myReachFile.bin, specifying the main TCP direction for visualization purposes:
        ReachabilityDemo --robot myRobot.xml --reachability myReachFile.bin --visualizationTCPAxis (1,0,0)
  • Manipulability: This class encapsulates an enhanced workspace representation that uses a basic manipulability measure for determining the quality of robot poses. Custom implementations of quality measures can be used as well.
    • Here is a short example:
        ManipulabilityPtr manipulability(new Manipulability(robot));
        PoseQualityManipulabilityPtr measure(new PoseQualityManipulability(rns)); // basic manipulability measure
        measure->penalizeJointLimits(true,50000); // penalize poses near joint limits
        manipulability->setManipulabilityMeasure(measure);
        float minB[6];
        float maxB[6];
        float maxManip;
        // automatically determine parameters
        manipulability->checkForParameters(rns,2000,minB,maxB,maxManip,baseNode,tcpNode);
        manipulability->initialize(rns,discrTr,discrRo,minB,maxB,staticModel,dynamicModel,baseNode,tcpNode);
        manipulability->setMaxManipulability(maxManip);
        manipulability->print();  
    • Data and hole filling
      Add some random configurations and smooth the result:

        manipulability->addRandomTCPPoses(steps);
        manipulability->smooth(2);
        manipulability->print();  
  • Reachability Maps are useful for determining promising robot base positions for grasping. Based on a reachability distribution of the manipulator, the reachability map can be computed as a discretized 2D distribution of potential robot positions for reaching a grasping pose. The reachability map data structure can be built as follows:
    // get bounding box extents of the manipulator's reachability
    Eigen::Vector3f minBB,maxBB;
    reachSpace->getWorkspaceExtends(minBB,maxBB);
    // build reachability grid in x/y plane with identical discretization
    WorkspaceGridPtr reachGrid(new WorkspaceGrid(minBB(0),maxBB(0),minBB(1),maxBB(1),reachSpace->getDiscretizeParameterTranslation()));

    // the target object 
    ManipulationObjectPtr graspObject = ObjectIO::loadManipulationObject("myObject.xml");
    // and a specific grasp
    GraspPtr g = graspObject->getGrasp("myGrasp1");

    // position the reachability grid
    Eigen::Matrix4f gp = graspObject->getGlobalPose();
    reachGrid->setGridPosition(gp(0,3),gp(1,3));

    // fill the grid according to one specific grasp
    reachGrid->fillGridData(reachSpace, graspObject, g, robot->getRootNode() );


    // or: fill the grid with all grasps
    EndEffectorPtr eef = robot->getEndEffector("Right Hand");
    GraspSetPtr grasps = graspObject->getGraspSet(eef);
    for (int i=0;i<(int)grasps->getSize();i++)
    {
        g = grasps->getGrasp(i);
        reachGrid->fillGridData(reachSpace, graspObject, g, robot->getRootNode() );
    }

    // sample a base pose with minimum quality of 1
    // the result is stored in x,y together with the corresponding grasp g
    float x,y;
    bool ok = reachGrid->getRandomPos(1, x, y, g);
    • The `ReachabilityMap` tool can be used to examine the behavior of this approach.

The reachability distribution of iCub's kinematic chain covering 3 hip and 7 arm joints.
The `ReachabilityDemo` tool allows creating and visualizing reachability data.
The `ReachabilityMap` tool is used to visualize the reachability map for a specific grasp.
The `ReachabilityMap` tool is used to visualize the reachability map for a set of grasps.

Jacobians and IK solving

Simox provides a generic IK solver which is based on the Jacobian Pseudoinverse approach. The IK solver is instantiated with a kinematic chain and can be queried for 3D or 6D poses. The GenericIKDemo and JacobiDemo examples provide exemplary implementations.
Here is a short example:

RobotNodeSetPtr kc = robot->getRobotNodeSet("Left Arm");
// several Jacobi inversions algorithms are available
// here, the SVD damped approach is used
GenericIKSolverPtr ikSolver(new GenericIKSolver(kc, JacobiProvider::eSVDDamped));

// solve a query (only position)
Eigen::Matrix4f pose = kc->getTCP()->getGlobalPose();
pose(0,3) += 10.0f; // move 10 mm
bool ok = ikSolver->solve(pose,IKSolver::Position);

// solve a query (position and orientation)
// setup step size and max gradient descent loops
// lower step size -> better convergence but slower
// max loops: only needed in special cases; internally the iterative process is
// canceled when no progress has been made in the last step
ikSolver->setupJacobian(0.3f, 30); 
ok = ikSolver->solve(pose,IKSolver::All);

// Search solution to one of the grasps of the ManipulationObject
// current pose of object is considered
ok = ikSolver->solve(object, IKSolver::All);

// Search solution for one specific grasp applied to the object
GraspPtr g = object->getGraspSet("LeftHand")->getGrasp("Grasp0");
ok = ikSolver->solve(object, g, IKSolver::All);

// By default, the current configuration of the robot is used as start pose.
// If that fails (e.g. for large distances), the IKSolver can generate random seed poses
// for the iterative gradient descent approach (50 in the following example).
ok = ikSolver->solve(pose, IKSolver::Position, 50);

Another example, showing how to access the Jacobians:

// Setup
RobotNodeSetPtr rns = robot->getRobotNodeSet("LeftArm");
DifferentialIKPtr j(new DifferentialIK(rns));

// setup a tcp with goal
RobotNodePtr tcp = rns->getTCP();
j->setGoal(targetPose,tcp,IKSolver::Position);

// get the Jacobian matrix for the tcp setup
Eigen::MatrixXf jac = j->getJacobianMatrix();

// get the pseudoinverse
Eigen::MatrixXf jacI = j->getPseudoInverseJacobianMatrix(tcp, IKSolver::All);

The GenericIKDemo tool demonstrates IK solving.

Hierarchical IK solving

Complex robot systems usually have to meet multiple objectives during IK solving, such as stability, feet postures, or end effector positions. Simox provides methods for hierarchical IK solving, where a stack of tasks is processed by solving each task in the null space of the preceding tasks. To this end, JacobiProviders are implemented for several objectives, such as pose reaching and stability. The framework is extendable and allows implementing custom JacobiProviders in order to combine them for hierarchical IK solving.

HierarchicalIKPtr hik(new HierarchicalIK(rnsJointsAll));
std::vector<HierarchicalIK::JacobiDefinition> jacobies;

// first: right foot pose
DifferentialIKPtr ikLeft2Right(new DifferentialIK(rnsJointsAll));
Eigen::Matrix4f trafoLeft2Right = feetPosture->getTransformationLeftToRightFoot();
Eigen::Matrix4f goalRight = feetPosture->getLeftTCP()->getGlobalPose() * trafoLeft2Right;
ikLeft2Right->setGoal(goalRight,feetPosture->getRightTCP());
HierarchicalIK::JacobiDefinition jd;
jd.jacProvider = ikLeft2Right;
jacobies.push_back(jd);

// second: COM
CoMIKPtr comIK(new CoMIK(rnsJointsAll,rnsModelsAll));
supportPolygon->updateSupportPolygon(10.0f);
Eigen::Vector2f c = MathTools::getConvexHullCenter(supportPolygon->getSupportPolygon2D());
comIK->setGoal(c); 
HierarchicalIK::JacobiDefinition jd2;
jd2.jacProvider = comIK;
jacobies.push_back(jd2);

// third: EEF pose
Eigen::Matrix4f goal = tcpGoal;
DifferentialIKPtr ikTCP(new DifferentialIK(rnsJointsAll));
ikTCP->setGoal(goal,tcp);
HierarchicalIK::JacobiDefinition jd3;
jd3.jacProvider = ikTCP;
jacobies.push_back(jd3);

//compute step
Eigen::VectorXf delta = hik->computeStep(jacobies,stepsize);
Eigen::VectorXf jv(delta.rows());
rnsJointsAll->getJointValues(jv);
jv += delta;
rnsJointsAll->setJointValues(jv);

The hierarchical IK demo.

Stability computations

The StabilityDemo shows an exemplary usage of stability related methods.

The center of mass of a robot (or a RobotNodeSet) can be queried as follows:

// get com of robot
Eigen::Vector3f com = robot->getCoMGlobal();

The support polygon is defined by the convex hull of all contact points with the floor plane. The robot is in a statically stable pose if the projection of its center of mass (COM) onto the floor lies within the support polygon.
The support polygon can be computed by detecting points of the feet which are near the floor, projecting them onto the floor plane, and computing their convex hull.

// Support polygon
MathTools::Plane floor = MathTools::getFloorPlane();
std::vector< CollisionModelPtr > colModels = robot->getCollisionModels();
CollisionCheckerPtr colChecker = CollisionChecker::getGlobalCollisionChecker();
std::vector< CollisionModelPtr >::iterator i = colModels.begin();
std::vector< MathTools::ContactPoint > points;
for (;i!=colModels.end();i++)
    colChecker->getContacts(floor, *i, points, 5.0f);
std::vector< Eigen::Vector2f > points2D;
for (size_t u=0;u<points.size();u++)
{
    Eigen::Vector2f pt2d = 
        MathTools::projectPointToPlane2D(points[u].p,floor);
    points2D.push_back(pt2d);
}
MathTools::ConvexHull2DPtr cv = MathTools::createConvexHull2D(points2D);
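
Given the convex hull cv, a static stability test can be sketched as follows (assuming MathTools offers a point-in-convex-hull query such as isInside; treat the exact name as an assumption):

// project the COM onto the floor plane and test it against the support polygon
Eigen::Vector3f com = robot->getCoMGlobal();
Eigen::Vector2f com2D = MathTools::projectPointToPlane2D(com, floor);
bool stable = MathTools::isInside(com2D, cv); // assumed API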

The CoMIK solver allows a 2D workspace target on the floor to be defined:

// comIK
CoMIK comIK(rnsJoints,rnsBodies);
comIK.setGoal(CoMTarget);
comIK.solveIK(0.3f,0,20);

The StabilityDemo tool visualizes the support polygon and the COM projection.


Related

Wiki: RobotModels