We’ve all heard about self-driving cars: autonomous vehicles capable of driving themselves. Although we know they exist, few of us give much thought to how they are built. The Harris Institute’s newest Research Associate, William Nyffenegger, led a team of undergraduate students from Florida Institute of Technology and Florida State University in the 25th Annual Intelligent Ground Vehicle Competition, in which schools from across the country were tasked with building an autonomous vehicle capable of adapting to a course it had never seen before.
Only 25 of the original 50 teams that applied made their way to Oakland University in Rochester, Michigan for the competition, with companies like Magna, Polaris, Fiat Chrysler, and Northrop Grumman all paying close attention to the outcome. Students from Florida Institute of Technology and Florida State University worked together to make a run at the podium. Whereas competing teams traditionally prepare for two or three years and often reuse code from existing libraries, Nyffenegger and his team built everything from scratch, and they did it in only a year.
“Everything we did is new,” said Will Nyffenegger, team lead. “Most of the stuff we did has not been done before. We used technologies that people don’t use and decided to go more in-depth. The fact that we were able to put it together and go means that we did pretty well.”
Two of the main technologies that set this team apart from the competition were its computer vision system and its SBMPO (Sampling-Based Model Predictive Optimization) motion planner.
Computer vision is one of the hardest challenges autonomous car companies face, because these cars have to work in many different conditions and account for a practically infinite number of situations. An autonomous vehicle has to identify lanes and obstacles regardless of whether it is raining or sunny outside. An autonomous car driving through a neighborhood cannot hit a pedestrian crossing the street or cause any accidents, so these failure cases have to be engineered out.
The team’s vision software detects lines: anywhere there is contrast, the robot can see an edge. By keying on contrast rather than color, the robot is not fazed by the many color shifts between dawn and dusk. While others use this technique as well, the key difference is that the Florida Tech team does all of it on a graphics processing unit (GPU). The GPU, the kind of chip typically found in mobile phones or game consoles, runs operations in parallel, performing thousands of operations at once instead of one at a time.
“We can tell you what’s in this hallway as fast as you can take pictures. Or faster, actually. And we can tell you with accuracy inside of a centimeter,” said Nyffenegger.
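The core idea, detecting lines by contrast rather than color, can be sketched in a few lines. This is a plain CPU illustration using NumPy (not the team’s actual GPU pipeline; the function name and threshold are illustrative):

```python
import numpy as np

def detect_edges(gray, threshold=0.25):
    """Mark pixels where local contrast (intensity gradient) is strong.

    gray: 2-D float array of intensities in [0, 1].
    Returns a boolean mask of edge pixels.
    """
    # Central-difference gradients: contrast changes along x and y.
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]
    magnitude = np.hypot(gx, gy)
    # Thresholding gradient magnitude keys on contrast, so the result
    # is robust to overall brightness shifts between dawn and dusk.
    return magnitude > threshold

# A dark image with a bright right half: only the boundary columns
# show contrast, so only they are flagged as edges.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = detect_edges(img)
```

Because every pixel’s gradient is independent of the others, this exact computation maps naturally onto a GPU, which is what makes frame-rate detection feasible.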
Normally, the objective of an autonomous car is to get from A to B as quickly as possible. If you’re in a car, you don’t want your trip to take longer than it needs to; if you have a search and rescue robot, you want it to reach its target as fast as it can. Much like in a video game, motion planning determines where the robot can move next, based on its own capabilities and on any surrounding obstacles.
Typical motion planning works on positions: if I want to go from square one to square two, is square two clear? Nyffenegger’s team instead looks at what the robot can and cannot do and searches over possible movements, essentially doing path planning and motion planning at the same time. Rather than positions, the plan is built from commands telling the robot where to go and how fast. Because similar commands (say, a quarter-inch turn versus a 0.251-inch turn) can be grouped, the planner can pick 7-10 commands to expand at each step and run them 100-1,000 times a second, a rate not possible with conventional path planning.
Once the robot reaches its destination without hitting an obstacle, its heading and the successful path are saved for future use. Because the robot recognizes that there are different ways to arrive at a point, it can pass through the same point multiple times, a situation that would cause other robots to fail.
According to Nyffenegger, “Because we do this, if you map the whole course, the robot can plan at once and be done. And let’s say the robot screwed up, because robots tend to do that, our planning is so fast that the robot can adjust as fast as you can blink. If you provided the data, this thing will not hit an obstacle because it is able to adjust that fast.”
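The command-space idea above can be sketched as a small search: instead of expanding grid cells, the planner expands a handful of executable commands from each state, forward-simulates them, and discards any that hit an obstacle. This is a minimal illustration of the concept, not the team’s SBMPO implementation; the command set, robot model, and tolerances are all assumed for the example:

```python
import heapq
import math

# Each command is (speed, change in heading, radians): a small fixed
# set of motions the robot can actually execute (illustrative values).
COMMANDS = [(1.0, dtheta) for dtheta in (-0.5, -0.25, 0.0, 0.25, 0.5)]

def simulate(state, command, dt=1.0):
    """Forward-simulate one command from state (x, y, heading)."""
    x, y, theta = state
    speed, dtheta = command
    theta += dtheta
    return (x + speed * math.cos(theta) * dt,
            y + speed * math.sin(theta) * dt,
            theta)

def plan(start, goal, obstacles, radius=0.5, tol=1.0, max_expansions=5000):
    """Best-first search over command sequences, not grid cells.

    Returns a list of commands reaching within `tol` of the goal while
    staying `radius` away from every obstacle, or None on failure.
    """
    def dist_to_goal(s):
        return math.hypot(s[0] - goal[0], s[1] - goal[1])

    frontier = [(dist_to_goal(start), 0, start, [])]
    counter = 1  # unique tie-breaker so heapq never compares states
    for _ in range(max_expansions):
        if not frontier:
            break
        _, _, state, cmds = heapq.heappop(frontier)
        if dist_to_goal(state) <= tol:
            return cmds
        for cmd in COMMANDS:
            nxt = simulate(state, cmd)
            # Reject commands whose resulting position hits an obstacle.
            if any(math.hypot(nxt[0] - ox, nxt[1] - oy) < radius
                   for ox, oy in obstacles):
                continue
            heapq.heappush(frontier,
                           (dist_to_goal(nxt), counter, nxt, cmds + [cmd]))
            counter += 1
    return None

# Drive from the origin toward a goal 5 units ahead, skirting one obstacle.
path = plan(start=(0.0, 0.0, 0.0), goal=(5.0, 0.0),
            obstacles=[(2.5, 0.2)])
```

Because each expansion is just a few arithmetic operations, a planner structured this way can be re-run continuously, which is what lets the robot replan “as fast as you can blink” when something goes wrong.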
All of the necessary data is collected with a combination of a stereoscopic camera, a LIDAR sensor, and a high-precision GPS. The camera has two angled lenses and an image sensor, giving a better estimate of the distance between the robot and the lanes. The LIDAR, similar in principle to a whale’s echolocation but using light, sends out pulsed lasers to measure the distance to obstacles. The final piece, the high-precision GPS, tells the team where the robot is with extreme accuracy. All of these sensors run on the robot at high rates to provide as much data as possible. In the end, the combination of high-tech equipment and the relative lack of time to tune it cut short the joint Florida Tech–Florida State team’s impressive first appearance at the competition.
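All three sensors ultimately feed the same obstacle map. As a minimal illustration of one step in that pipeline, a LIDAR sweep of range readings can be converted into obstacle coordinates in the robot’s frame (a hypothetical helper, not the team’s driver code; the scan layout is assumed):

```python
import math

def lidar_to_points(ranges, angle_min, angle_increment, max_range=10.0):
    """Convert a LIDAR sweep of ranges (metres) into (x, y) obstacle
    points in the robot's frame.

    Readings at or beyond max_range are treated as "no return" and
    dropped, since they indicate open space rather than an obstacle.
    """
    points = []
    for i, r in enumerate(ranges):
        if r >= max_range:
            continue
        angle = angle_min + i * angle_increment
        points.append((r * math.cos(angle), r * math.sin(angle)))
    return points

# Three beams: straight ahead, 90 degrees left, and one with no return.
scan = [2.0, 1.0, 10.0]
pts = lidar_to_points(scan, angle_min=0.0, angle_increment=math.pi / 2)
```

Point lists like this are exactly what a command-space planner consumes as its obstacle set.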
“Now the problem with all this fancy stuff is that you have to test a lot. You have to parametrize it, and a first-year team doesn’t really have the time to do that,” said Nyffenegger.
Although a last-minute setback kept the joint Florida Tech–Florida State team off the podium in its first year of competition, the experience gained offers high hopes for the future. Next year Florida Tech will return as an independent team, carrying forward the lessons of the first run along with some necessary improvements.
The goal for next year is to build a smaller, more modular robot that allows software adjustments on the fly, with components tested thoroughly before competition. In addition, Nyffenegger and the team hope to open-source the code they created for this project. Open-sourcing the code lets collaborators around the world propose improvements, though Nyffenegger is confident their method is the best currently out there. What began as five students and a professor with no idea how to build an autonomous car has laid a strong foundation for the future.