ROS interface

Diego Centelles Beltran

There are several worlds that may be launched with:

        $ roslaunch cgripper_gazebo <world>

WORLDS

  • cgripper_kinect.launch: The custom arm model and the marker over a table, where the arm has a kinect camera.

  • cgripper_kinect_calibration.launch: The custom arm model and a chessboard for calibration, where the arm has a kinect camera.

Topics published by kinect:

    **/cgripper/kinect/depth/image_raw**
    **/cgripper/kinect/depth/points**
    **/cgripper/kinect/rgb/camera_info**
    **/cgripper/kinect/rgb/image_raw**
    /cgripper/kinect/rgb/image_raw/compressed
    /cgripper/kinect/rgb/image_raw/compressed/parameter_descriptions
    /cgripper/kinect/rgb/image_raw/compressed/parameter_updates
    /cgripper/kinect/rgb/image_raw/compressedDepth
    /cgripper/kinect/rgb/image_raw/compressedDepth/parameter_descriptions
    /cgripper/kinect/rgb/image_raw/compressedDepth/parameter_updates
    /cgripper/kinect/rgb/image_raw/theora
    /cgripper/kinect/rgb/image_raw/theora/parameter_descriptions
    /cgripper/kinect/rgb/image_raw/theora/parameter_updates
    /cgripper_kinect/depth/camera_info
    /cgripper_kinect/parameter_descriptions
    /cgripper_kinect/parameter_updates

  • cgripper_stereo.launch: The custom arm model and the marker over a table, where the arm has a parstereo camera.

  • cgripper_stereo_calibration.launch: The custom arm model and a chessboard for calibration, where the arm has a parstereo camera.

Topics published by stereo:

    **/cgripper_stereo/left/camera_info**
    **/cgripper_stereo/left/image_raw**
    /cgripper_stereo/left/image_raw/compressed
    /cgripper_stereo/left/image_raw/compressed/parameter_descriptions
    /cgripper_stereo/left/image_raw/compressed/parameter_updates
    /cgripper_stereo/left/image_raw/compressedDepth
    /cgripper_stereo/left/image_raw/compressedDepth/parameter_descriptions
    /cgripper_stereo/left/image_raw/compressedDepth/parameter_updates
    /cgripper_stereo/left/image_raw/theora
    /cgripper_stereo/left/image_raw/theora/parameter_descriptions
    /cgripper_stereo/left/image_raw/theora/parameter_updates
    /cgripper_stereo/left/parameter_descriptions
    /cgripper_stereo/left/parameter_updates
    **/cgripper_stereo/right/camera_info**
    **/cgripper_stereo/right/image_raw**
    /cgripper_stereo/right/image_raw/compressed
    /cgripper_stereo/right/image_raw/compressed/parameter_descriptions
    /cgripper_stereo/right/image_raw/compressed/parameter_updates
    /cgripper_stereo/right/image_raw/compressedDepth
    /cgripper_stereo/right/image_raw/compressedDepth/parameter_descriptions
    /cgripper_stereo/right/image_raw/compressedDepth/parameter_updates
    /cgripper_stereo/right/image_raw/theora
    /cgripper_stereo/right/image_raw/theora/parameter_descriptions
    /cgripper_stereo/right/image_raw/theora/parameter_updates
    /cgripper_stereo/right/parameter_descriptions
    /cgripper_stereo/right/parameter_updates
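With the rectified left/right pairs published above, depth can be recovered from disparity as Z = f·B/d. A minimal sketch (the focal length and baseline values here are illustrative; the real ones come from the camera SDF and calibration):

```python
import numpy as np

# Illustrative intrinsics -- NOT the values shipped in this package.
f = 525.0   # focal length [px]
B = 0.07    # stereo baseline [m]

def disparity_to_depth(disparity):
    """Depth from disparity for a rectified parallel stereo pair: Z = f*B/d.
    Pixels with zero disparity are mapped to infinity (no match)."""
    d = np.asarray(disparity, dtype=float)
    Z = np.full_like(d, np.inf)
    valid = d > 0
    Z[valid] = f * B / d[valid]
    return Z

# A 35 px disparity at these intrinsics corresponds to about 1 m.
depth = disparity_to_depth(np.array([35.0, 70.0, 0.0]))
```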

In order to view the image from a topic:

    $ rosrun image_view image_view image:=<TOPIC>

E.g.:

    $ rosrun image_view image_view image:=/cgripper_stereo/left/image_raw

The models of the kinect and the stereo camera are the same.


CAMERA CONFIGURATION

The camera configuration (focal length, baseline, frame size, depth range, etc.) can be checked and modified in the corresponding SDF file:

  • src/model/cgripper_kinect/model.sdf
  • src/model/cgripper_stereo/model.sdf
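The element names below follow the standard SDF camera sensor schema; the values are illustrative, not necessarily the ones shipped in these model files:

```xml
<!-- Illustrative SDF camera sensor block (values are examples only). -->
<sensor name="camera" type="depth">
  <update_rate>30</update_rate>
  <camera>
    <horizontal_fov>1.047</horizontal_fov> <!-- determines the focal length -->
    <image>
      <width>640</width>
      <height>480</height>
      <format>R8G8B8</format>
    </image>
    <clip>
      <near>0.05</near> <!-- minimum sensed depth [m] -->
      <far>8.0</far>    <!-- maximum sensed depth [m] -->
    </clip>
  </camera>
</sensor>
```

For the stereo model, the baseline is set by the relative pose between the left and right camera sensors rather than by a single tag.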

CAMERA CALIBRATION

In order to calibrate the cameras, the chessboard has to be moved to the desired position. To do so, launch the Gazebo client (the Gazebo graphical interface):

    $ gzclient


Calibration may be done either for the kinect camera or for the stereo pair, e.g. with the ROS camera_calibration package.

The camera calibration will be needed for the pose estimation of the marker:
http://www.irisa.fr/lagadic/visp/documentation/visp-2.10.0/tutorial-pose-estimation.html
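Pose estimation inverts the pinhole projection, which is why it needs the intrinsics that calibration recovers. A minimal sketch of the forward projection (numpy; the values in K are illustrative, not this camera's calibration):

```python
import numpy as np

# Illustrative intrinsic matrix K recovered by calibration:
# fx, fy = focal lengths [px]; cx, cy = principal point [px].
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])

def project(K, P):
    """Project a 3D point P = (X, Y, Z) in the camera frame to pixel (u, v)."""
    p = K @ P
    return p[:2] / p[2]

u, v = project(K, np.array([0.1, -0.05, 1.0]))
```

Pose estimation solves the inverse problem: given several (u, v) observations of known marker points, recover the camera-to-marker transform.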

NEW PACKAGES

PLANAR CONTROLLER

  • Gripper motion by pressing W/S, A/D and the left/right arrow keys:
    $ rosrun gripper_controller gpycontroller.py

POINT CLOUD (FOR NOW, ONLY FOR KINECT)

    $ rosrun gripper_pcl kinectpcl
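The point cloud comes from back-projecting the kinect depth image through the pinhole model. A minimal sketch of that step (numpy; the intrinsics here are illustrative, the real ones are published on /cgripper/kinect/rgb/camera_info):

```python
import numpy as np

# Illustrative pinhole intrinsics (see camera_info for the real values).
fx = fy = 525.0
cx, cy = 319.5, 239.5

def depth_to_points(depth):
    """Back-project a depth image (meters) into an Nx3 point cloud in the
    camera frame: X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    Z = depth
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return np.dstack((X, Y, Z)).reshape(-1, 3)

# Synthetic depth image: a flat wall 2 m in front of the camera.
cloud = depth_to_points(np.full((480, 640), 2.0))
```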


TRACKER

This node tracks the black dots and uses the point cloud to show the 3D position of each dot relative to the camera frame.

    $ rosrun gripper_tracking tracker
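The 2D part of this tracking can be sketched as thresholding dark pixels and taking their centroid; the node then looks the centroid up in the point cloud to get the 3D position. This is a simplified single-dot illustration with an assumed threshold, not the node's actual implementation:

```python
import numpy as np

def track_dark_dot(gray, thresh=50):
    """Return the (row, col) centroid of pixels darker than `thresh`,
    i.e. the image-plane position of one black dot; None if no dot."""
    ys, xs = np.nonzero(gray < thresh)
    if ys.size == 0:
        return None
    return ys.mean(), xs.mean()

# Synthetic frame: white background with one dark 3x3 dot at rows 10..12,
# cols 20..22, so the centroid is (11.0, 21.0).
img = np.full((32, 32), 255, dtype=np.uint8)
img[10:13, 20:23] = 0
centroid = track_dark_dot(img)
```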


VIDEO DEMONSTRATION

Video not available

EXPERIMENTAL: MARKER TRACKING AND POSE CALCULATION WITHOUT POINTCLOUD

In addition to tracking the black dots, this node shows the 3D point of each one using the point cloud. The pose of the marker (box) is calculated and the marker frame is displayed (at the moment, this node does not work properly and has some bugs):

    $ rosrun gripper_marker_tracking markerTracking_kinect
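One simple way to build a marker frame from three of the tracked 3D dots is Gram-Schmidt orthonormalization; this is a generic sketch, not necessarily what the node implements:

```python
import numpy as np

def marker_frame(p0, px, py):
    """Rotation matrix of a marker frame from three 3D dot positions:
    p0 = frame origin, px = a dot along the marker x axis,
    py = a dot along the marker y axis (orthonormalized against x)."""
    x = px - p0
    x = x / np.linalg.norm(x)
    y = py - p0
    y = y - x * (x @ y)       # remove the component along x
    y = y / np.linalg.norm(y)
    z = np.cross(x, y)        # right-handed frame
    return np.column_stack((x, y, z))

# Dots aligned with the camera axes 1 m away -> identity rotation.
R = marker_frame(np.array([0.0, 0.0, 1.0]),
                 np.array([0.1, 0.0, 1.0]),
                 np.array([0.0, 0.1, 1.0]))
```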


ROS GRAPH

(ROS computation graph diagram)

VISUAL SERVOING NODES

IBVS

This node starts the visual servoing loop once the 4 dots are found. The PLANAR CONTROLLER can be used to move the camera until the 4 dots are in view, which starts the VS loop.

    $ rosrun gripper_ibvs ibvs

Thanks to the point cloud published by the kinect, we can get a good approximation of the Z value of each dot at each iteration of the VS loop.
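The core of the loop is the classic IBVS control law v = -λ L⁺ (s - s*), with the kinect depth Z plugged into each point's interaction matrix. A textbook sketch (numpy; the gain and feature values are illustrative, not the node's parameters):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of a point feature (x, y) in normalized image
    coordinates at depth Z, relating feature velocity to camera twist."""
    return np.array([
        [-1/Z,    0, x/Z,     x*y, -(1 + x*x),  y],
        [   0, -1/Z, y/Z, 1 + y*y,       -x*y, -x],
    ])

def ibvs_velocity(feats, desired, depths, lam=0.5):
    """IBVS control law: camera twist v = -lam * L^+ (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(feats, depths)])
    e = (np.asarray(feats) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ e

# Four dots slightly off their desired square, all at Z ~= 1 m.
desired = [(0.1, 0.1), (-0.1, 0.1), (0.1, -0.1), (-0.1, -0.1)]
feats = [(x + 0.02, y + 0.01) for x, y in desired]
v = ibvs_velocity(feats, desired, [1.0] * 4)   # 6-vector (vx..wz)
```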

Desired positions of the features (black dots):


IBVS view:

IBVS view

At the moment, no node uses the velocity published by the control law (topic: IBVS/ControlLaw/velocity) to move the robot. A node that uses MoveIt must be implemented.

IBVS video demo:

Video not available

