(AKA Localizers) The different methods that can be used to compute a Pose (position and heading) allow us to determine the requirements of our Navigation API. We'd like our package to support a robot that uses, say, three or four different pose provider techniques. Say I have a robot that uses tacho dead reckoning, Monte Carlo localization, inertial dead reckoning, and let's even throw GPS in the mix. Given how Thrun describes the probability model, by feeding all four of these into a filter we should be able to achieve x, y coordinates and a heading superior to any single pose provider technique, and with a lower error. So it's beneficial for us to have numerous pose provider techniques at our disposal.
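A minimal sketch of why fusing providers lowers error, assuming independent estimates with known variances. The class and method names are illustrative, not proposed API; inverse-variance weighting is the simplest case of the probabilistic combination Thrun describes.

```java
// Illustrative sketch (not proposed API): fusing independent 1-D pose
// estimates, e.g. x coordinates from several providers, by inverse-variance
// weighting. The fused variance is never worse than the best single input.
class PoseFusion {

    // Returns {fused estimate, fused variance}.
    static double[] fuse(double[] estimates, double[] variances) {
        double weightSum = 0, weighted = 0;
        for (int i = 0; i < estimates.length; i++) {
            double w = 1.0 / variances[i]; // lower-variance providers weigh more
            weightSum += w;
            weighted += w * estimates[i];
        }
        return new double[] { weighted / weightSum, 1.0 / weightSum };
    }

    public static void main(String[] args) {
        // Tacho odometry says x = 100 cm (variance 25); GPS says x = 110 cm (variance 100).
        double[] fused = fuse(new double[] { 100, 110 }, new double[] { 25, 100 });
        // The fused estimate leans toward the odometry, and its variance beats both inputs.
        System.out.println(fused[0] + " cm, variance " + fused[1]);
    }
}
```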
These examples will help determine whether we want Move-centered feedback control or want to remove Move entirely and go for a "breadcrumb" waypoint-following style of control.
Several methods need to stop and examine the environment before proceeding, and therefore can't provide readings while on the move: the camera localizer, the MCL localizer, and the light sensor localizer all take a long time to obtain a reading. We wouldn't want the user to keep polling these localizers expecting instant results. The Localizer interface should therefore report roughly how long a reading takes, such as via Localizer.waitTime().
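A sketch of what that could look like; the names (Localizer, waitTime) are taken from the discussion above and are not an existing leJOS API, and the timing values are illustrative guesses.

```java
// Sketch of the proposed interface; names and timings are assumptions.
interface Localizer {
    /** Rough time, in milliseconds, a single reading takes to obtain. */
    int waitTime();
}

// A tacho-based provider reads continuously, so a reading is effectively instant.
class TachoLocalizer implements Localizer {
    public int waitTime() { return 0; }
}

// An MCL scan must stop the robot and sweep the sensor, taking several seconds.
class MCLLocalizer implements Localizer {
    public int waitTime() { return 8000; }
}
```

A caller could then decide whether a reading is affordable mid-move, e.g. `if (loc.waitTime() < 100) ...`, rather than polling blindly and blocking.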
Mapping is also an important consideration. With waypoints, they need to be close together to define a movement path; they can't be spaced long distances apart if the path curves.
Currently implemented with DeadReckonerPoseProvider. It uses generic MoveProviders, which include the different Pilots, but this will probably become a distinct TachoMoveProvider.
Also implemented as lejos.robotics.inertialProposal.TachoPoseProvider by Brent.
A dead reckoner is able to determine both position and direction.
Currently implemented by Lawrie. MCL is able to determine both position and direction.
Currently able to get GPS coordinates in leJOS, but not yet implemented as a GPSPoseProvider.
A GPS unit is only able to determine position, not direction. While you are standing in one spot, the satellites can only pinpoint your location, not which way you are facing.
Currently implemented as GyroPoseProvider by Brent.
Currently implemented as InertialPoseProvider by Brent.
Setup: The vehicle is a radio-controlled car. The radio controls have Lego motors hooked up to the control sticks. The vehicle has a blue dot on the front and a red dot on the back. A camera is mounted over a table and uses color detection to identify position and heading.
Make a class for controlling the joysticks called RCMoveController that implements ArcMoveController. This class could be used to control the thousands of typical R/C style vehicles out there.
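A rough sketch of the core mapping such a class would need, from an arc-style command (speed, turn radius) to stick deflections. The stick range (-100..100), the minimum turn radius, and the steering law are all assumptions for illustration, not leJOS API.

```java
// Hypothetical sketch: translating ArcMoveController-style commands into
// R/C stick positions. All constants and the steering law are assumptions.
class RCSticks {
    static final double MIN_RADIUS = 30.0; // cm, assumed tightest turn the car can make

    // Returns {throttle, steering}, each in -100..100.
    // Positive radius turns one way, negative the other; infinity means straight.
    static double[] sticksFor(double speed, double maxSpeed, double turnRadius) {
        double throttle = 100 * speed / maxSpeed;
        double steering = 100 * MIN_RADIUS / turnRadius; // infinite radius yields 0
        steering = Math.max(-100, Math.min(100, steering));
        return new double[] { throttle, steering };
    }

    public static void main(String[] args) {
        // Half speed, gentle 60 cm right arc: half throttle, half steering.
        double[] s = sticksFor(10, 20, 60);
        System.out.println(s[0] + " / " + s[1]);
    }
}
```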
With this setup, the vehicle has no tachometers for move control feedback. The info extracted from the camera is coordinates and heading, i.e. PoseReporter data. So how do we get a MoveReporter from this?
This would also work equally well with a GPSPoseReporter, converting data into MoveReporter data.
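One way to bridge the gap: differencing two successive pose reports yields exactly the distance/angle data a MoveReporter would supply. The class and method names below are illustrative, not proposed API.

```java
// Sketch: deriving Move-style data (distance travelled, heading change) from
// two successive pose reports, whether from the camera or from GPS.
class PoseDelta {

    // Poses are (x, y) in cm and heading in degrees.
    // Returns {distance travelled, heading change normalized to (-180, 180]}.
    static double[] toMove(double x1, double y1, double h1,
                           double x2, double y2, double h2) {
        double distance = Math.hypot(x2 - x1, y2 - y1);
        double turn = h2 - h1;
        while (turn > 180) turn -= 360;   // wrap, e.g. 350 deg to 10 deg is +20, not -340
        while (turn <= -180) turn += 360;
        return new double[] { distance, turn };
    }
}
```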
A robot in a rectangular area with 4 walls, where the maximum length of each wall is about 180 cm (less than ultrasonic sensor range). It has a sensor array consisting of two ultrasonic sensors: one points at the east wall and one at the south wall. The array sits on a motor that constantly keeps the sensors pointing in the same direction, using a compass to find North. So by knowing the distance from each wall, it immediately pinpoints x, y within some error range.
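The geometry is simple enough to sketch. The pen size and the coordinate convention (origin at the south-west corner, x increasing east, y increasing north) are assumptions for illustration.

```java
// Sketch of the two-wall idea: with the array held facing east and south,
// the two wall distances give x and y directly. Dimensions are assumed.
class TwoWallLocalizer {
    static final double WIDTH = 180.0; // cm, east-west extent of the pen

    // eastDist: reading from the sensor facing the east wall.
    // southDist: reading from the sensor facing the south wall.
    static double[] locate(double eastDist, double southDist) {
        double x = WIDTH - eastDist; // distance from the west wall
        double y = southDist;        // distance from the south wall
        return new double[] { x, y };
    }
}
```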
Claude Baumann's beacon setup:
We could create a CameraLocalizer using an NXTCam from Mindsensors. The user would hang different colored spheres around the environment at known coordinates: red, blue, green, yellow. The robot can identify which sphere it is looking at by color, and so look up the position of that sphere. It can also tell the distance to the sphere based on its size in the camera image (the NXTCam reports a bounding box). The reason for using a colored sphere and not a circle or square is that a sphere is the only shape with the same profile when viewed from any angle. The robot could then use the distances to known sphere positions and trigonometry to calculate its x, y, z position, exactly like GPS but at a smaller scale. A sensor rotator could be useful for this. I'm not sure how well it could pinpoint the sphere angle and distance without centering the sphere in the camera frame, so it probably needs up-and-down movement too in order to target the sphere in the center.
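The two pieces of math involved can be sketched as follows. The sphere diameter, the camera constant, the two-beacon layout, and the pinhole range model are all assumptions for illustration; a real setup would calibrate the constant and use a third sphere to resolve the position ambiguity.

```java
// Sketch: estimating range from the NXTCam bounding box (apparent size falls
// off as 1/distance under a pinhole model), then trilaterating x, y from two
// spheres at known positions. All constants are assumed, not measured.
class SphereLocalizer {
    static final double SPHERE_DIAMETER = 6.0; // cm, assumed ball size
    static final double FOCAL = 200.0;         // camera constant in pixels, assumed calibrated

    // Pinhole model: distance = focal * realSize / pixelSize.
    static double range(double bboxWidthPx) {
        return FOCAL * SPHERE_DIAMETER / bboxWidthPx;
    }

    // 2-D trilateration with sphere 1 at the origin and sphere 2 at (d, 0).
    // Returns the solution with y >= 0; a third sphere would resolve the sign.
    static double[] trilaterate(double d, double r1, double r2) {
        double x = (r1 * r1 - r2 * r2 + d * d) / (2 * d);
        double y = Math.sqrt(Math.max(0, r1 * r1 - x * x));
        return new double[] { x, y };
    }
}
```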
This is a slight alteration on the "colored balls" method of localization that is more electronics oriented. In this scenario, the camera points at the ceiling and the user places semi-permanent colored circles on the ceiling, but the principle remains roughly the same. Alternatively, the stationary recharge station/home base/beacon could project multicolored lights onto the ceiling instead. The projector is essentially a single device, like the planetarium here [http://the-gadgeteer.com/2006/08/24/sega_toys_homestar_planetarium/]
This method is similar to traditional navigation using stars and constellations; this electronic device would essentially project a "constellation" on the ceiling. The "planetarium" would of course be very basic, perhaps using infrared instead of visible light. It could differentiate the "stars" by rapidly turning them on and off in sync with the robot's receiver, or perhaps by projecting them at different color wavelengths of light.