I used this diagram in the recent Skype conference with Brian and Roger. It attempts to show the major classes used for navigation and the interfaces between them.
The classes in ellipses are the main classes used for navigation. The classes in rectangles are data classes. The direction of the arrows indicates a call of a method in the indicated interface.
Pilots are central to the architecture. They have two-way communication with Motor. The TachoMotor interface is used to control motors with tachometers (i.e. the NXT motors) and to read data from the tachometers. Pilots implement the MotorListener interface to be informed when motor movements start and end. By using the TachoMotor interface, pilots treat tachometer data differently from other sensor data. This is because tachometers are tightly coupled with the NXT motors, and tachometer readings are treated as control data rather than sensor data. (See Sebastian Thrun's book, Probabilistic Robotics, for a definition of these terms and the arguments for treating odometry data as control data.) It would, however, be possible to implement a pilot that controlled motors through the DCMotor interface and either did not use tachometer data or treated it as sensor data.
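As a concrete illustration of how tachometer counts become control (odometry) data, the sketch below converts a tacho count in degrees into distance travelled. The class name and constructor here are illustrative stand-ins, not the actual leJOS API.

```java
// Simplified sketch: converting tachometer counts (control data) into
// distance travelled. Names are illustrative, not the real leJOS API.
class TachoOdometry {
    private final double wheelDiameter; // same units as the result, e.g. cm

    TachoOdometry(double wheelDiameter) {
        this.wheelDiameter = wheelDiameter;
    }

    // One full wheel revolution (360 tacho degrees) covers pi * diameter.
    double distanceFor(int tachoDegrees) {
        return Math.PI * wheelDiameter * tachoDegrees / 360.0;
    }
}
```

A pilot using tachometer feedback would apply this conversion, per wheel, to decide when a commanded move is complete.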
Pilots implement one of the MoveController interfaces to move the robot. We have two main implementations: DifferentialPilot, which controls vehicles that use differential steering, and SteeringPilot, which controls vehicles that use car steering. DifferentialPilot implements the ArcRotateMoveController interface, as it can do both arc and rotate (on-the-spot) moves. SteeringPilot implements the ArcMoveController interface, as car steering does not support rotating on the spot. We also have an experimental FeedbackDifferentialPilot - see the section below on pilot sensor feedback.
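The relationship between the pilot interfaces can be sketched as a small Java hierarchy. This is a simplified outline of the structure described above, with method sets trimmed and bodies stubbed out; it is not the full leJOS declarations.

```java
// Simplified outline of the pilot interface hierarchy. Method sets are
// trimmed to the moves discussed above; not the full leJOS declarations.
interface MoveController {
    void travel(double distance);           // straight-line move
}

interface ArcMoveController extends MoveController {
    void arc(double radius, double angle);  // arc move
}

interface ArcRotateMoveController extends ArcMoveController {
    void rotate(double angle);              // turn on the spot
}

// Differential steering supports both arcs and on-the-spot rotates.
class DifferentialPilot implements ArcRotateMoveController {
    public void travel(double distance) { /* drive both motors equally */ }
    public void arc(double radius, double angle) { /* unequal speeds */ }
    public void rotate(double angle) { /* motors in opposite directions */ }
}

// Car steering cannot rotate on the spot, so it stops at ArcMoveController.
class SteeringPilot implements ArcMoveController {
    public void travel(double distance) { /* steer straight, then drive */ }
    public void arc(double radius, double angle) { /* set steering, drive */ }
}
```

Code that only needs arcs can accept an ArcMoveController and work with either pilot; code that rotates on the spot must require an ArcRotateMoveController.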
Pilots call the MoveListener interface to send events to other classes when a move starts and stops. This is mainly designed for pose providers, but other classes can implement the interface.
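The event mechanism can be sketched as follows. The interfaces here are cut-down stand-ins for the leJOS MoveListener mechanism; the real interfaces pass a richer Move event object, for which a String stands in below.

```java
import java.util.ArrayList;
import java.util.List;

// Cut-down sketch of the move-event mechanism. The real leJOS interfaces
// pass a Move object describing the move; a String stands in for it here.
interface MoveListener {
    void moveStarted(String move);
    void moveStopped(String move);
}

class ListenerDemoPilot {
    private final List<MoveListener> listeners = new ArrayList<>();

    void addMoveListener(MoveListener l) { listeners.add(l); }

    void travel(double distance) {
        String move = "travel " + distance;
        for (MoveListener l : listeners) l.moveStarted(move);
        // ... drive the motors until the move completes ...
        for (MoveListener l : listeners) l.moveStopped(move);
    }
}

// A pose provider would implement MoveListener to track moves; this
// illustrative listener just records the events it receives.
class RecordingListener implements MoveListener {
    final List<String> events = new ArrayList<>();
    public void moveStarted(String move) { events.add("started: " + move); }
    public void moveStopped(String move) { events.add("stopped: " + move); }
}
```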
Pilots have no knowledge of their position and heading, i.e. no knowledge of their pose, apart from the pose relative to the starting pose while they are executing a move. They do not use maps or any external co-ordinate system, only co-ordinates relative to the start of the current move.
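A pose provider is what turns these relative moves into an absolute pose. The sketch below shows the dead-reckoning arithmetic for straight travels and on-the-spot rotates; it is illustrative only, not the leJOS pose-provider implementation.

```java
// Illustrative dead reckoning: accumulating relative moves (which is all
// a pilot reports) into an absolute pose. Not the leJOS implementation.
class DeadReckoner {
    double x, y;      // position in world co-ordinates
    double heading;   // degrees, 0 = along the x axis

    // A straight travel moves the robot along its current heading.
    void travelled(double distance) {
        double h = Math.toRadians(heading);
        x += distance * Math.cos(h);
        y += distance * Math.sin(h);
    }

    // An on-the-spot rotate changes only the heading.
    void rotated(double angle) {
        heading = (heading + angle) % 360.0;
    }
}
```

In the architecture, this accumulation happens in the pose provider (driven by MoveListener events), which is why pilots themselves can stay pose-free.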
The architecture proposes an obstacle detection class and associated interfaces so that pilots can be informed of obstacles while a move is in progress. Such obstacle detection needs to be tightly coupled with a pilot.
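One way to realise this coupling is a callback interface through which the detector interrupts the current move. Since this part of the architecture is only proposed, all names below are hypothetical.

```java
// Hypothetical sketch of the proposed obstacle-detection coupling: the
// detector calls back into the pilot, which aborts the current move.
interface ObstacleListener {
    void obstacleDetected(double range);  // distance to the obstacle
}

class GuardedPilot implements ObstacleListener {
    private boolean moving;

    void travel(double distance) { moving = true; /* start motors */ }

    boolean isMoving() { return moving; }

    // Called by a tightly coupled detector while a move is in progress.
    public void obstacleDetected(double range) {
        if (moving) {
            moving = false;  // a real pilot would also stop the motors
        }
    }
}
```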