LAGR

Learning Applied to Ground Robotics

Kurt Konolige
Motilal Agrawal
Bob Bolles
Brian Gerkey
Marty Fischler
Cregg Cowan


LAGR is a DARPA project to develop autonomous off-road navigation using techniques based on real-time vision and learning.  Phase I has just finished (June 2006), and Phase II begins in August 2006.

One of the distinguishing characteristics of LAGR is the evaluation of performance by an independent group.  Every month, each team sends a flash disk with its current code to be evaluated.  The code is run on a robot at a remote location, without direct involvement of the team.  The goal is given as a GPS coordinate some 50 to 200 m from the robot's start position, and the robot must reach the goal autonomously.  Learning can take place during a run (e.g., creating and using an exploration map, and determining good and bad areas), from run to run (re-using maps and visual associations), and sometimes offline, when the teams are provided with log data of similar areas.

In the last two tests, 12 and 13, SRI finished first among the eight teams.  The robot's average speed exceeded 1.1 m/s during these runs, which is remarkable given that its maximum speed is capped at 1.3 m/s.

Outdoor navigation in unstructured environments is a difficult problem.  In contrast to systems such as the Grand Challenge vehicles, the LAGR robots do not use laser rangefinders.  The main sensors are two stereo video cameras on the front of the robot.  There is also a GPS system with about 3 m deviation under good conditions, an inertial measurement unit (IMU), and wheel encoders.


One of the most challenging parts of the project is building good maps for navigation.  We have developed several robust techniques that consistently produce high-quality maps; the key components, fast stereo analysis and visual odometry, are described in the sections below.
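To make the map-building step concrete, here is a minimal sketch of a log-odds occupancy grid fed by stereo-classified 3D points.  It is not the project's actual code: the cell size, map extent, and log-odds increments are invented for illustration.

    import numpy as np

    # Minimal log-odds occupancy grid (illustrative parameters,
    # not the LAGR code).
    CELL = 0.2                    # cell size in meters (assumed)
    SIZE = 500                    # 500 x 500 cells = 100 m x 100 m
    LO_OCC, LO_FREE = 0.9, -0.4   # log-odds increments (assumed)

    grid = np.zeros((SIZE, SIZE))  # 0 = unknown

    def to_cell(x, y):
        # World coordinates (m) to grid indices, origin at map center.
        return int(x / CELL) + SIZE // 2, int(y / CELL) + SIZE // 2

    def update(obstacle_pts, ground_pts):
        # Fuse one stereo frame's classified 3D points into the grid.
        for pts, inc in ((obstacle_pts, LO_OCC), (ground_pts, LO_FREE)):
            for x, y, _z in pts:
                i, j = to_cell(x, y)
                if 0 <= i < SIZE and 0 <= j < SIZE:
                    grid[i, j] += inc

    def obstacles():
        # Cells whose accumulated evidence crosses a confidence threshold.
        return grid > 2.0

Log-odds accumulation lets conflicting observations of the same cell cancel gracefully, which matters when per-frame obstacle classification is noisy.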

Performance

The robot has been tested over many hundreds of hours, in different types of terrain.  There are still many types of obstacles that cannot be handled, such as water, deep ditches, and wire fences.  Still, our algorithms do a good job in many situations.  Click on the image at the right for a video of the robot going to a goal about 200 m from its start point [warning - this is a 65 MB file].

Stereo Analysis

Fast, precise stereo algorithms are the key to environment reconstruction and mapping (see the SVS web pages).  Disparity for a 512x384 stereo pair can be computed in 40 ms; robust ground-plane fitting and obstacle detection take another 30 ms.
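For readers who want to experiment, this pipeline can be approximated with off-the-shelf tools.  The sketch below uses OpenCV block matching for disparity and a RANSAC plane fit to separate ground from off-plane (obstacle) points.  It is a stand-in for the SVS engine the project actually uses, and the disparity range, block size, and inlier tolerance are all assumptions.

    import numpy as np
    import cv2

    def stereo_ground_obstacles(left, right, Q, n_iter=200, tol=0.05):
        # left, right: rectified 8-bit grayscale images (e.g., 512x384).
        # Q: 4x4 reprojection matrix from stereo calibration.
        bm = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        disp = bm.compute(left, right).astype(np.float32) / 16.0
        pts = cv2.reprojectImageTo3D(disp, Q)[disp > 0]  # valid 3D points

        # RANSAC plane fit: sample 3 points, keep the plane with the
        # most inliers within tol meters.
        best = np.zeros(len(pts), dtype=bool)
        for _ in range(n_iter):
            s = pts[np.random.choice(len(pts), 3, replace=False)]
            normal = np.cross(s[1] - s[0], s[2] - s[0])
            norm = np.linalg.norm(normal)
            if norm < 1e-6:
                continue           # degenerate (collinear) sample
            normal /= norm
            d = -normal.dot(s[0])
            inliers = np.abs(pts.dot(normal) + d) < tol
            if inliers.sum() > best.sum():
                best = inliers
        return pts[best], pts[~best]  # ground points, off-plane points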

The image at the right shows the robot's-eye view of a typical run.  Click on the image for a movie of the run [warning - 60 MB file].

The image at the far right shows the interpretation based on stereo analysis: purple marks obstacles, green marks the ground plane.  We also find likely paths, overlaid in yellow.  Click on the image for a movie [warning - 81 MB file].

Visual Odometry

GPS, IMU, and wheel odometry can be combined to estimate the motion of the robot, but errors in the local estimate can be significant.  When the robot's wheels slip, for example, it becomes difficult to estimate forward progress.  GPS localization is usually good to within 3 m, but becomes uncertain under tree canopies or near buildings.
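To make this failure mode concrete, here is a minimal 2D dead-reckoning sketch that blends wheel/IMU increments with occasional GPS fixes.  The blend gain and the structure are invented for illustration; this is not the filter used on the robot.

    import numpy as np

    def fuse_pose(pose, d_dist, d_theta, gps_xy=None, gps_gain=0.1):
        # pose: (x, y, theta); d_dist from wheel encoders, d_theta
        # from IMU yaw.  gps_gain is an illustrative blend factor.
        x, y, th = pose
        th += d_theta
        x += d_dist * np.cos(th)
        y += d_dist * np.sin(th)
        if gps_xy is not None:  # nudge toward the (noisy) global fix
            x += gps_gain * (gps_xy[0] - x)
            y += gps_gain * (gps_xy[1] - y)
        return (x, y, th)

A wheel-slip spike corrupts d_dist, and therefore the pose, immediately; with ~3 m GPS noise the correction can only pull the estimate back slowly.  That gap is what visual odometry fills.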

For fine motion estimation and environment reconstruction, we rely on fast, robust visual odometry (VO).  By comparing hundreds of features in successive images, we can determine the robot's motion very precisely.  Integrating IMU and GPS measurements makes the process more robust and removes global drift.
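A bare-bones, monocular version of frame-to-frame VO can be written with OpenCV features.  The real system matches hundreds of stereo features (which also gives metric scale) and folds in IMU/GPS; in this sketch the detector choice and RANSAC parameters are assumptions.

    import numpy as np
    import cv2

    orb = cv2.ORB_create(nfeatures=500)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def frame_motion(img0, img1, K):
        # Estimate rotation R and unit-scale translation t between two
        # consecutive grayscale frames; K is the 3x3 camera matrix.
        k0, d0 = orb.detectAndCompute(img0, None)
        k1, d1 = orb.detectAndCompute(img1, None)
        matches = matcher.match(d0, d1)
        p0 = np.float32([k0[m.queryIdx].pt for m in matches])
        p1 = np.float32([k1[m.trainIdx].pt for m in matches])
        # RANSAC essential-matrix fit rejects bad matches, then the
        # relative pose is recovered by decomposing E.
        E, mask = cv2.findEssentialMat(p0, p1, K, cv2.RANSAC, 0.999, 1.0)
        _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=mask)
        return R, t  # t is direction only; stereo/IMU supply the scale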

The yellow line in the image at the left shows the path obtained by integrating GPS, IMU, and wheel sensors.  It drifts considerably under the tree canopy, and at the end of the run (on the right) the wheels slip as the robot pushes against an obstacle, sending the estimate badly off course.

The green line is the VO-estimated path.  Locally it is much smoother, and is unaffected by wheel slip.

Click on the image to see an animation of the map reconstruction [warning - 81 MB file].


Publications

Konolige, K. et al.
Outdoor Mapping and Navigation using Stereo Vision
Proceedings of the International Symposium on Experimental Robotics, Brazil (2006)

Agrawal, M. and Konolige, K.
Real-time Localization in Outdoor Environments using Stereo Vision and Inexpensive GPS
Proceedings of the International Conference on Pattern Recognition (2006)

Agrawal, M.
A Lie Algebraic Approach for Consistent Pose Registration for General Euclidean Motion
Proceedings of the International Conference on Intelligent Robots and Systems (IROS) (2006)