Autonomous Vehicle Readings

I have a few new graduate students this semester, and to get them up to speed working on the car (and robotics more generally) I’m giving them weekly readings of papers, books, and code. If you want to follow along, check out the list below. And if you have any suggestions or comments, please share them with me!

Week 3

Theme: Reference frames and a bit of geometry. This week we’re going to look at conventions for reference frames in robotics and Apollo, and then do just enough differential geometry to understand the reference frames we use for planning and control on the car. We’ll start with two REPs (“ROS Enhancement Proposals”):

Once you have the general ideas down, look at:

Lastly, we have one paper to get through:

This paper describes the reference frames we work with in our planner and controller. As you work through the paper, also look at Atsushi Sakai’s implementation:

As we go, we’ll review the background differential geometry necessary to understand the paper and implementation. If you’re looking for a good differential geometry book, I recommend Manfredo do Carmo’s Differential Geometry of Curves & Surfaces.
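To make the geometry concrete, here is a small NumPy sketch (the function name and finite-difference scheme are my own) that estimates the unit tangent, unit normal, and signed curvature along a sampled 2D path, the basic ingredients of the moving frames used in planning:

```python
import numpy as np

def frenet_frame_2d(xs, ys):
    """Estimate unit tangents, unit normals, and signed curvature
    along a sampled 2D curve, using finite differences."""
    dx, dy = np.gradient(xs), np.gradient(ys)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    speed = np.hypot(dx, dy)
    tx, ty = dx / speed, dy / speed    # unit tangent T
    nx, ny = -ty, tx                   # unit normal N (T rotated +90 degrees)
    # Signed curvature: kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2)
    kappa = (dx * ddy - dy * ddx) / speed**3
    return tx, ty, nx, ny, kappa

# Sanity check on a circle of radius 5: curvature should be close to 1/5.
t = np.linspace(0, 2 * np.pi, 1000)
tx, ty, nx, ny, kappa = frenet_frame_2d(5 * np.cos(t), 5 * np.sin(t))
```

The curvature formula here is the standard one for plane curves from do Carmo’s first chapter; note that it’s invariant to how the curve is parameterized, which is why index-spaced finite differences still give the right answer.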

Week 2

Theme: Point cloud registration and sensor calibration. In order to locate obstacles around the car in a useful way, we need to be able to express lidar measurements in the reference frame of our car. This requires us to calibrate our lidar to our IMU (which serves as the center of our vehicle frame). We’re going to look at four resources to help us address this problem. First we need to understand the Iterative Closest Point algorithm, which solves the problem of computing an optimal alignment of two shapes. Read Section 1.3 of the following, which provides definitions, notation, and an outline of the ICP algorithm:

Once you’re familiar with the problem ICP is solving, read this early paper on lidar-IMU calibration to see one method for solving the lidar-IMU calibration problem:

Finally, to gain an understanding of sensor calibration on a modern platform, look at these resources from Apollo:

The first is a description of how Apollo handles lidar-IMU calibration, which is similar (but not identical) to Levinson’s approach. The second guide covers calibration for other sensor pairs (camera-camera, camera-lidar, radar-camera).
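To make the ICP step concrete, here is a minimal point-to-point ICP sketch in NumPy. It’s a toy, not a production implementation: it uses brute-force nearest neighbors and no outlier rejection, where real pipelines use k-d trees and often a point-to-plane error.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t aligning corresponding
    points P onto Q (both (N, d) arrays), via the SVD (Kabsch) method."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(P, Q, iters=30):
    """Basic point-to-point ICP: iteratively align cloud P onto cloud Q."""
    dim = Q.shape[1]
    R_total, t_total = np.eye(dim), np.zeros(dim)
    for _ in range(iters):
        # Correspondence step: match each point in P to its nearest point in Q.
        d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(axis=2)
        matches = Q[d2.argmin(axis=1)]
        # Alignment step: best rigid transform for the current matches.
        R, t = best_rigid_transform(P, matches)
        P = P @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

Once calibration has given you the lidar-to-IMU extrinsics (R, t), expressing a lidar measurement in the vehicle frame is then just p_vehicle = R p_lidar + t.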

Week 1

Theme: Probabilistic Robotics. Modern robotics uses probability as the framework for almost everything. The book Probabilistic Robotics is still, 12 years after publication, the best resource for learning how probability can be used to solve core robotics problems. This week we’re reviewing chapter two, on the Bayes filter. The notation and vocabulary set out in that chapter are what we’ll be using for most of our own work on social robotics for cars.
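As a warm-up for the chapter, here is the filter’s predict/correct loop on a two-state door world. The transition and sensor probabilities are modeled on the chapter’s running example; treat them as illustrative and check them against the text.

```python
def bayes_filter_step(bel, action, measurement):
    """One Bayes filter step over a two-state world: bel = (P(open), P(closed)).
    Predict with the action model, then correct with the measurement model."""
    # Prediction: "push" opens a closed door with probability 0.8;
    # "do_nothing" leaves the state alone.
    if action == "push":
        open_, closed = bel[0] + 0.8 * bel[1], 0.2 * bel[1]
    else:
        open_, closed = bel
    # Correction: likelihoods (p(z | open), p(z | closed)) for each reading.
    likelihood = {"open": (0.6, 0.2), "closed": (0.4, 0.8)}
    l_open, l_closed = likelihood[measurement]
    open_, closed = l_open * open_, l_closed * closed
    eta = open_ + closed               # normalizer
    return open_ / eta, closed / eta

belief = (0.5, 0.5)                    # uniform prior
belief = bayes_filter_step(belief, "do_nothing", "open")  # P(open) rises to 0.75
belief = bayes_filter_step(belief, "push", "open")        # then to about 0.983
```

Two noisy “open” readings plus one push drive the belief from 0.5 to about 0.98, which is the whole point of the filter: neither the sensor nor the actuator is trusted on its own, but the recursion accumulates evidence.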

We’ll also start reviewing the code that currently runs on the car, and students should be prepared to present an overview of their assigned modules from our internal codebase on Friday.

Week 0

Theme: Robot architecture. To the best of my knowledge, all of the architectures currently deployed on autonomous vehicles are based on some variation of the Sense-Plan-Act framework. There are decent reasons for this, but we should keep in mind that this isn’t the only way to design a robot, and in fact most of the robots deployed in the civilian world use a more reactive approach. To get started, we’ll read a classic Rodney Brooks paper reviewing these approaches and presenting the Subsumption Architecture, along with a more recent blog post asking why the architecture never quite got to full intelligence:
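The subsumption idea (layered behaviors, with higher layers overriding the ones below) is simple enough to caricature in a few lines. This sketch uses priority arbitration rather than Brooks’s actual wiring of suppression lines between concurrently running state machines, and the behavior names and sensor fields are invented:

```python
# Each layer proposes a command when it fires, or None when it doesn't.
def avoid_obstacles(sensors):
    if sensors["range_front"] < 0.5:   # something close dead ahead
        return "turn_away"
    return None

def follow_wall(sensors):
    if sensors["range_side"] < 1.0:    # a wall within tracking range
        return "track_wall"
    return None

def wander(sensors):
    return "drive_forward"             # the lowest layer always fires

def subsumption_control(sensors):
    # The highest-priority layer that fires subsumes everything below it.
    for layer in (avoid_obstacles, follow_wall, wander):
        command = layer(sensors)
        if command is not None:
            return command
    return "stop"
```

With an obstacle dead ahead, the avoidance layer wins regardless of what the lower layers propose; remove the obstacle and control falls through to wall-following, then wandering.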

General Resources

There are a few resources that we’ll refer back to repeatedly over the semester. In no particular order, here are some survey papers we’ll revisit as we go:

And here are some code repositories that we’ll refer to:

  • Baidu’s Apollo - In addition to our ROS-only codebase, we also have a configuration for the car where we use Apollo. We’ll make frequent references to this code to understand what our car is (and isn’t) doing.

  • Stanford Self-Driving Code - This is a GitHub mirror of the original, which is still on SourceForge (you can find it if you’re willing to tolerate SourceForge). Old code, but full of interesting ideas, and a good look at how perception was done before deep learning took over.

  • Atsushi Sakai’s PythonRobotics - A great resource for reference implementations of robotics algorithms, in Python.