Navigation Architecture

Sven Köhler
Attachments
Navigation1.jpg (188493 bytes)

I used this diagram in the recent Skype conference with Brian and Roger. It attempts to show the major classes used for navigation and the interfaces between them.

The classes in ellipses are the main classes used for navigation. The classes in rectangles are data classes. The direction of the arrows indicates a call of a method in the indicated interface.

Pilot

Pilots are central to the architecture. They have two-way communication with the motors. The TachoMotor interface is used to control motors with tachometers (i.e. the NXT motors) and to read data from the tachometers. Pilots implement the MotorListener interface to be told when motor movements start and end. By using the TachoMotor interface, pilots treat tachometer data differently from other sensor data. This is because tachometers are tightly coupled with the NXT motors, and tachometer data is treated as control data rather than sensor data. (See Sebastian Thrun's book, Probabilistic Robotics, for a definition of these terms and the arguments for treating odometry data as control data.) It would, however, be possible to implement a pilot that controlled motors with the DCMotor interface and either did not use tachometer data or treated it as sensor data.
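
For illustration, here is a rough sketch of a pilot registering for motor events and reading tacho counts as control data. The TachoMotor and MotorListener names come from the description above, but the registration and callback method names (addListener, movementStarted, movementStopped) and the package locations are assumptions and may not match the actual API.

  // Illustrative sketch only: the listener registration and callback names are
  // assumptions; check the actual TachoMotor/MotorListener API before using.
  import lejos.robotics.TachoMotor;

  class SketchPilot implements MotorListener {
      private final TachoMotor left, right;

      SketchPilot(TachoMotor left, TachoMotor right) {
          this.left = left;
          this.right = right;
          left.addListener(this);   // be told when each motor starts and stops
          right.addListener(this);
      }

      public void movementStarted(TachoMotor motor) {
          // A move has begun; remember the starting tacho counts (control data).
      }

      public void movementStopped(TachoMotor motor) {
          // Use the tacho counts to work out how far the robot has moved,
          // relative to the start of the move.
          int travelled = (left.getTachoCount() + right.getTachoCount()) / 2;
      }
  }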

Pilots implement one of the MoveController interfaces to move the robot. We have two main implementations: DifferentialPilot, which controls vehicles that use differential steering, and SteeringPilot, which controls vehicles that use car [[steering]]. DifferentialPilot implements the ArcRotateMoveController interface, as it can do both arc and rotate (on-the-spot) moves. SteeringPilot implements the ArcMoveController interface, as car steering does not support rotating on the spot. We also have an experimental FeedbackDifferentialPilot; see the section below on pilot sensor feedback.
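
As an example, this is roughly what driving with a DifferentialPilot looks like through the MoveController-style methods. The wheel dimensions are placeholders, and the exact package names and constructor signature may differ between leJOS releases.

  import lejos.nxt.Motor;
  import lejos.robotics.navigation.DifferentialPilot;

  public class PilotDemo {
      public static void main(String[] args) {
          // Wheel diameter and track width in the same units used by travel();
          // the values here are placeholders for a typical NXT vehicle.
          DifferentialPilot pilot = new DifferentialPilot(5.6, 11.2, Motor.A, Motor.C);

          pilot.travel(50);    // straight move, relative to the start of the move
          pilot.rotate(90);    // rotate on the spot (ArcRotateMoveController)
          pilot.arc(30, 180);  // half a circle of radius 30 (ArcMoveController)
      }
  }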

Pilots call the MoveListener interface to send events to other classes when a move starts and stops. This is mainly designed for pose providers to use, but other classes can implement the interface.

Pilots typically implement the MoveProvider interface. This includes the getMovement() method, which pose providers can use to provide real-time pose updates during a pilot move.
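
A minimal sketch of a move listener follows; a pose provider would update its pose estimate in these callbacks instead of printing. The class and method names (Move, MoveListener, MoveProvider, getDistanceTraveled, getAngleTurned) reflect my understanding of the current API and should be checked against the source.

  import lejos.robotics.navigation.Move;
  import lejos.robotics.navigation.MoveListener;
  import lejos.robotics.navigation.MoveProvider;

  // A listener that just reports moves; a pose provider would update its
  // x, y and heading estimate here instead.
  class MoveLogger implements MoveListener {
      public void moveStarted(Move move, MoveProvider pilot) {
          System.out.println("Move started");
      }

      public void moveStopped(Move move, MoveProvider pilot) {
          System.out.println("Moved " + move.getDistanceTraveled()
                  + ", turned " + move.getAngleTurned());
      }
  }

The listener would be registered with something like pilot.addMoveListener(new MoveLogger()); during a move, getMovement() on the pilot returns the progress so far.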

Pilots have no knowledge of their position and heading, i.e. no knowledge of their pose, apart from the pose relative to the starting pose while they are executing a move. They do not use maps and do not use any external co-ordinate system, just co-ordinates relative to the start of a move.

Obstacle Detection

The architecture proposes an obstacle detection class and associated interfaces so that pilots can be informed of obstacles while a move is in progress. Such obstacle detection needs to be tightly coupled with a pilot.

The obstacle detection architecture should be general enough that it allows programmers to quickly and easily create vehicles that can operate in a number of different theaters. These theaters often have opposing goals, such as how a vehicle reacts to objects in ordinary traffic compared to a NASCAR race or a demolition derby. If driving on a highway in a family car, the vehicle will attempt to avoid all objects, and perhaps avoid some objects (people) more than others (cars, a curb). If the vehicle is competing in a demolition derby, it might want to make contact with objects. Our architecture should be flexible enough to allow these interactions to be determined by the programmer. [[User:Bbagnall|Bbagnall]] 15:26, 16 March 2010 (UTC)
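
Since the obstacle detection class is only proposed at this point, the sketch below is purely hypothetical; the names ObstacleListener and obstacleDetected are invented for illustration and do not exist in any released API.

  // Hypothetical sketch of a proposed obstacle detection callback; none of
  // these names exist in the released leJOS API.
  interface ObstacleListener {
      // Called by an obstacle detector while a pilot move is in progress.
      // How the listener reacts (stop, swerve, ignore, or deliberately hit
      // the obstacle) is left entirely to the programmer, as argued above.
      void obstacleDetected(float range, float bearing);
  }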

Pose Providers

A pose provider provides the pose of the robot in external absolute co-ordinates through the PoseProvider interface. It typically provides it to pose controllers and user applications. The PoseProvider interface is not used by pilots, as they do not know the robot's position. It is not designed to provide feedback to pilots while they are moving and will typically be too slow to do this. Pose providers can calculate the pose using robot moves (their control data) and sensor measurements. If they use move data, they should implement the MoveListener interface. We currently have two implementations of pose providers: DeadReckoningPoseProvider, which uses Move data but no sensor measurements, and MCLPoseProvider, which implements a Monte Carlo Localization particle filter.
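
Here is a rough sketch of wiring a pose provider to a pilot as described above. DeadReckoningPoseProvider is the class named in the text, but its package, its constructor, and whether it registers itself as a MoveListener or needs pilot.addMoveListener(...) are assumptions.

  import lejos.nxt.Motor;
  import lejos.robotics.localization.PoseProvider;
  import lejos.robotics.navigation.DifferentialPilot;
  import lejos.robotics.navigation.Pose;

  public class PoseDemo {
      public static void main(String[] args) {
          DifferentialPilot pilot = new DifferentialPilot(5.6, 11.2, Motor.A, Motor.C);

          // Assumption: the provider registers itself with the pilot as a
          // MoveListener when it is constructed.
          PoseProvider poseProvider = new DeadReckoningPoseProvider(pilot);

          pilot.travel(50);
          pilot.rotate(90);

          // Absolute pose in external co-ordinates, built from the reported moves.
          Pose pose = poseProvider.getPose();
          System.out.println("x=" + pose.getX() + " y=" + pose.getY()
                  + " heading=" + pose.getHeading());
      }
  }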

Pilot Sensor Feedback

Brent's current implementation of FeedbackDifferentialPilot uses pose providers to provide feedback during a move, and also uses the legacy Pilot interface rather than the newer MoveController interfaces.

I am proposing a new way for it to do sensor feedback. We will define a general method to allow sensor data providers to be added to classes such as pilots and pose controllers. Then, instead of using pose data as its sensor feedback, the pilot will be able to use any appropriate form of sensor measurement. This could be direction data from a DirectionFinder (such as a compass), angular velocity from a GyroSensor, or acceleration from a TiltSensor.
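
Because this is only a proposal, the following is a hypothetical sketch of what such a feedback hook could look like; addHeadingProvider and the surrounding class shape are invented for illustration. DirectionFinder and getDegreesCartesian() are existing names, but should still be checked against the current API.

  import lejos.robotics.DirectionFinder;

  // Hypothetical sketch of the proposed feedback mechanism; addHeadingProvider
  // does not exist in the released API.
  class FeedbackDifferentialPilot /* would extend DifferentialPilot */ {
      private DirectionFinder compass;

      // Accept any DirectionFinder (e.g. a compass sensor) and use its readings,
      // rather than pose data, to correct the heading during a move.
      public void addHeadingProvider(DirectionFinder finder) {
          this.compass = finder;
      }

      void correctHeading(float targetHeading) {
          float error = targetHeading - compass.getDegreesCartesian();
          // ... adjust the motor speeds to drive the error towards zero ...
      }
  }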

Pose Controllers

Pose controllers provide some of the functionality provided by navigators in the old architecture, but without duplicating pilot methods. The main method they implement is goTo(Point). Unlike pilots, they know the position of the robot in external co-ordinates. They use a pilot to move the vehicle and a pose provider to find its position in external co-ordinates.

The current implementation is ArcPoseController. This knows how to control vehicles that can either rotate on the spot or drive arcs of a circle. If ArcPoseController is passed a pilot that supports rotating on the spot, it moves to the target by rotating to face it and then travelling to it in a straight line. If rotating on the spot is not supported, a single arc is calculated that moves the vehicle to the target point.
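
Continuing the earlier pilot and pose provider sketches, tying the three classes together might look like this; the ArcPoseController constructor shown here is an assumption, and goTo(Point) is the method named above.

  import lejos.geom.Point;

  // Sketch only: the constructor arguments are assumptions.
  ArcPoseController controller = new ArcPoseController(pilot, poseProvider);

  // The controller reads the current pose from the pose provider, then uses
  // the pilot to rotate (or arc) towards the target and drive to it.
  controller.goTo(new Point(100, 50));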

Note that pose controllers just do simple goTo operations. They do not find paths and do not typically have access to a map.

Path Finders

Path finders implement navigation strategies. They typically have access to maps and plot courses that meet constraints, such as staying on roads, avoiding obstacles and following walls. The current implementation is MapPathFinder.

The route returned by a path finder is an ordered collection of way points. A pose controller can be used to follow the route.
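
Here is a sketch of following such a route, continuing the pose controller example above; findRoute(...), the WayPoint type and the goTo(...) overload used here are assumptions based on this description and may not match the actual path finder API.

  import java.util.Collection;

  // Sketch only: findRoute(...), WayPoint and the goTo(...) overload used here
  // are assumptions based on the description above.
  Collection<WayPoint> route = pathFinder.findRoute(poseProvider.getPose(), target);

  // A pose controller follows the route by visiting each way point in order.
  for (WayPoint wp : route) {
      controller.goTo(wp);
  }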


Related

NXT Wiki: Home