This is an iteration of the proposed navigation architecture.
(Diagram: Nav 3.jpg, the proposed navigation architecture)
Note: Currently this lists a few of the methods for each class. More convenience methods will be added in the actual implementations.
At the application level, the user can use as many of these objects as they want. Applications can provide GUI displays showing all relevant data from our package: Move lines, poses, speeds, accuracy, map data, etc. The simplest application will probably use a PoseController to instruct a robot to go to a destination point.
The PathPlanner is a consultant. It accepts Map data in its constructor, and it also needs to know the turn radius of the vehicle. To get a path from it, either the user application or the PoseController asks for a new path by calling getPath(MoveController vehicle). The PoseController contains an instance of MoveController, so it can pass this itself. The dashed line in the diagram indicates an application passing this object directly.
We probably need a Path object consisting of ordered Moves and Poses/Points. The PathPlanner.getPath() method returns this data.
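A minimal sketch of such a Path object, assuming a hypothetical Leg pairing each Move with its target point (none of these names or signatures are final; the String stand-in for a Move is purely illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a Path is an ordered sequence of legs, each pairing
// a move with the pose/point it should end at.
class Path {
    static class Leg {
        final String move;   // placeholder for a real Move object (e.g. "TRAVEL 50")
        final double x, y;   // target point of this leg
        Leg(String move, double x, double y) { this.move = move; this.x = x; this.y = y; }
    }

    private final List<Leg> legs = new ArrayList<>();

    void addLeg(String move, double x, double y) { legs.add(new Leg(move, x, y)); }

    // Legs are consumed in order, matching the "ordered Moves" idea above.
    Leg nextLeg() { return legs.isEmpty() ? null : legs.remove(0); }
    boolean isEmpty() { return legs.isEmpty(); }
    int size() { return legs.size(); }
}
```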
The PoseController class is the workhorse. The user application will make most of its calls to this class in order to make a vehicle move. It is given a destination target via goTo(destination). The PoseController is aware of the current Pose of the vehicle via the PoseProvider. It asks the PathPlanner for a path to the destination, then feeds Moves to the MoveController in order to reach it.
The MoveController is given a Move to complete from the PoseController. It is kept up to date about its current progress via the MoveProvider. It attempts virtual line following by comparing the Move given to it with the Move being provided by the MoveProvider. It implements a PID or similar algorithm to generate Motor commands to do virtual line following. It informs all MoveListeners when moves start or end.
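The comparison step can be sketched as a simple proportional controller; a full PID would add integral and derivative terms. Everything here (class name, gains, reducing the two Moves to a commanded vs. estimated heading) is an illustrative assumption, not final API:

```java
// Sketch of virtual line following: compare the heading demanded by the
// commanded Move with the heading estimated by the MoveProvider, and turn
// the heading error into a differential motor command.
class LineFollowCorrector {
    private final double kp;          // proportional gain (tuning constant)
    private final double baseSpeed;   // nominal forward speed for both motors

    LineFollowCorrector(double kp, double baseSpeed) {
        this.kp = kp;
        this.baseSpeed = baseSpeed;
    }

    // Returns {leftSpeed, rightSpeed} for target and actual headings in degrees.
    double[] correct(double targetHeading, double actualHeading) {
        double error = targetHeading - actualHeading;
        // Normalize to [-180, 180) so the robot turns the short way round.
        while (error >= 180) error -= 360;
        while (error < -180) error += 360;
        double turn = kp * error;
        return new double[] { baseSpeed - turn, baseSpeed + turn };
    }
}
```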
When the obstacle detector detects an obstacle, it can handle the obstacle in a number of different ways, depending on what type of ObstacleDetector it is.
Reactionary: It could try to avoid the obstacle by going around it and rejoining the Move it was on. This requires no calculations or interaction with the (slow) PathPlanner, so it is relatively quick and wouldn't interrupt movement.
Deep thinking: It informs the PoseController, which:
1. halts the MoveController
2. informs the PathPlanner that an object was detected at x, y coordinates
3. PathPlanner updates its Map data with the new coordinates
4. PoseController then discards the current Path waypoints ("flushes their queues")
5. It then asks the PathPlanner for a new path, which will go around the object that was added to the map data.
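The "deep thinking" sequence could be wired together roughly as follows. Every interface here is a minimal stand-in for the corresponding class in the diagram, and the String waypoints are placeholders; nothing about these signatures is final:

```java
import java.util.LinkedList;
import java.util.Queue;

// Stand-in interfaces mirroring the steps above.
interface Vehicle { void halt(); }                    // stands in for MoveController

interface Planner {                                   // stands in for PathPlanner
    void obstacleAt(double x, double y);              // planner updates its map data
    Queue<String> getPath();                          // plan a route around the obstacle
}

class ReplanningPoseController {
    private final Vehicle vehicle;
    private final Planner planner;
    private Queue<String> waypoints = new LinkedList<>();

    ReplanningPoseController(Vehicle vehicle, Planner planner) {
        this.vehicle = vehicle;
        this.planner = planner;
    }

    // Called by the ObstacleDetector when avoidance needs real planning.
    void obstacleDetected(double x, double y) {
        vehicle.halt();                 // 1. halt the MoveController
        planner.obstacleAt(x, y);       // 2-3. planner updates its map data
        waypoints.clear();              // 4. flush the stale Path waypoints
        waypoints = planner.getPath();  // 5. ask for a fresh path around the obstacle
    }

    Queue<String> currentWaypoints() { return waypoints; }
}
```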
PoseProviders such as GPS don't need any information from anything; they are "black box" data providers and work independently. A DeadreckoningPoseProvider receives Moves from the MoveController via the MoveListener interface. A PoseProvider can also be more complex, such as a localization Bayes filter, e.g. an MCL particle filter.
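The dead-reckoning case can be sketched like this: each completed move, reduced to a travelled distance and a heading change, updates an (x, y, heading) pose. This is an illustrative simplification (real Moves include arcs, and the callback signature is assumed):

```java
// Sketch of dead reckoning: update an (x, y, heading) pose from completed
// moves reported through a MoveListener-style callback.
class DeadReckoner {
    double x, y;      // position in map units
    double heading;   // degrees, 0 = along the +x axis

    // Called when the MoveController reports a move has ended; a pure
    // rotation has distance 0, a straight travel has turn 0.
    void moveStopped(double distance, double turn) {
        heading += turn;
        double rad = Math.toRadians(heading);
        x += distance * Math.cos(rad);
        y += distance * Math.sin(rad);
    }
}
```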
The MoveProvider works exactly like Lawrie's PilotStateEstimator, so it is a filter. But what does it produce to represent the state? Move seems like the most likely candidate, as it can represent a line.
The pilot state estimator holds the current state of a move (position, velocity, angular velocity, acceleration, etc.) in terms that are independent of the design of the robot. It is told about moves and velocities by the pilot and reads sensor data from any number of sensor providers. It will typically be a Bayes filter, such as a Kalman filter. The pilot consults it to get the current state of the move.
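As a toy illustration of the Kalman-filter idea, here is a one-dimensional estimator fusing the pilot's commanded motion with a sensor reading. The single scalar state and all noise values are made-up for illustration; a real estimator would track the full move state:

```java
// Toy 1-D Kalman filter for a single state variable (e.g. heading),
// showing how a state estimator blends commanded motion with sensor data.
class ScalarKalman {
    double estimate;   // current state estimate
    double variance;   // uncertainty of the estimate

    ScalarKalman(double initial, double variance) {
        this.estimate = initial;
        this.variance = variance;
    }

    // Predict: apply the commanded change, growing the uncertainty.
    void predict(double commandedDelta, double processNoise) {
        estimate += commandedDelta;
        variance += processNoise;
    }

    // Update: blend in a measurement, weighted by its noise; a noisy
    // sensor (large measurementNoise) moves the estimate less.
    void update(double measurement, double measurementNoise) {
        double gain = variance / (variance + measurementNoise);
        estimate += gain * (measurement - estimate);
        variance *= (1 - gain);
    }
}
```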
This is the most common and basic usage of these objects, ignoring default constructors that would make this code smaller.
// Construct a map of the environment
map = new LineMap(...);

// Construct an ObstacleDetector that uses an ultrasonic sensor
obstacleDetector = new RangeObstacleDetector(sonic, THRESHOLD);

// Construct a MoveProvider for the pilot, using a Kalman filter with feedback from a gyro sensor
moveProvider = new KalmanMoveProvider();
moveProvider.addSensorProvider(Measurement.ANGULAR_VELOCITY, gyro);

// Construct an MCL PoseProvider
poseProvider = new MCLPoseProvider(map);
poseProvider.addSensorProvider(Measurement.RANGE_READINGS, rangeScanner);

// Construct the MoveController
vehicle = new MoveController(Motor.A, Motor.C, obstacleDetector, moveProvider, ...);

// Construct a path planner that uses the map
planner = new MapPathPlanner(map);

// Construct the PoseController
poseController = new PoseController(vehicle, poseProvider, planner);

// Go to the destination
poseController.goTo(destination);
In this example, a GUI allows the user to view a map, see the current position of the robot, and click on a target destination. They can then see the proposed path and even alter the path before engaging the vehicle to follow that path.
// Construct a map of the environment
map = new LineMap(...);

// Construct a path planner that uses the map
planner = new MapPathPlanner(map);

// Construct a MoveController with the proper turn radius
mc = new MoveController(Motor.A, Motor.C);

// Retrieve the Move and Pose data via a Path object
Path path = planner.getPath(mc, destination);

// Now the program displays the path on screen with its custom GUI code.
...

// If the user approves of this path, it is passed to the PoseController, which carries out the moves.
poseController.travel(path);