Blog

Training the “brain” of the self-driving car

on October 5, 2017

When a self-driving car travels down a road, even the simplest environment generates massive amounts of data every minute, as sensors and cameras continually capture a 360-degree view of the car’s surroundings.

Torc uses deep learning to train the autonomous system to recognize and classify road signs, such as speed limit signs, in real time.

The “eyes” of our car are radar, LiDAR, and cameras. Together, they collect information about the road in real time, including objects, signs, lane lines, and traffic lights. The next step is making sense of all that data. Our self-driving cars use a key computing component that originated in video game technology: the graphics processing unit (GPU).

The GPU is best known for video game graphics, where its ability to process large amounts of data in parallel is used to generate the pixels and shapes that make up the game. As GPUs have become more powerful, other applications for the technology have emerged, including artificial intelligence and self-driving cars.

We have been using NVIDIA Pascal architecture GPUs to perform training and inference for our autonomous system since the start of our current self-driving car program. They are used both in servers outside the vehicle, to train and refine our algorithms, and in the car itself, to detect and classify sensor data.

Torc Chief Technology Officer Ben Hastings says, “NVIDIA GPUs enable us to rapidly train and deploy neural networks and other massively parallel algorithms that allow our vehicles to make sense of the world around them.”

To create a system that can make smart decisions on the road, we use deep neural networks, which are designed to learn in a way loosely inspired by the human brain. Our algorithms are trained on servers of GPUs that simulate road scenarios. Through deep learning, we can rapidly improve the system’s classification and decision making without having to physically drive the autonomous car through every possible scenario. For example, we can train the system to recognize speed limit signs by feeding the network data about a variety of signs. Once on the road, it can then recognize a new speed limit sign in real time, without information about every individual sign having to be pre-programmed.
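The idea of training on examples so the system generalizes to signs it has never seen can be illustrated with a deliberately tiny sketch. This is not Torc’s pipeline: real systems use deep convolutional networks on camera images, while the toy model below trains a single softmax layer on synthetic feature vectors standing in for three hypothetical speed-limit classes.

```python
import numpy as np

# Toy illustration only: classify synthetic "sign" feature vectors into
# speed-limit classes. A real system would use a deep convolutional
# network on camera images instead of a one-layer model on made-up data.

rng = np.random.default_rng(0)
num_classes = 3   # e.g. hypothetical 25, 45, and 65 mph sign classes
features = 8      # toy feature dimension

# Synthetic training set: each class clusters around its own mean vector.
means = rng.normal(size=(num_classes, features))
X = np.vstack([m + 0.1 * rng.normal(size=(200, features)) for m in means])
y = np.repeat(np.arange(num_classes), 200)

# One-layer softmax classifier trained by gradient descent on cross-entropy.
W = np.zeros((features, num_classes))
b = np.zeros(num_classes)
onehot = np.eye(num_classes)[y]
for _ in range(300):
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = (p - onehot) / len(X)   # gradient of mean cross-entropy loss
    W -= 1.0 * (X.T @ grad)
    b -= 1.0 * grad.sum(axis=0)

# A "new sign" never seen during training, drawn near class 1's cluster,
# is still classified correctly -- the model generalizes from examples.
new_sign = means[1] + 0.1 * rng.normal(size=features)
pred = int(np.argmax(new_sign @ W + b))
print(pred)
```

The same principle scales up: instead of eight hand-made features, a deep network learns its own features from pixels, and GPUs make the matrix arithmetic in the training loop fast enough to run over millions of images.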

The applications also extend to object rendering and overlays on our displays of real-time video of the road. As the sensors and cameras feed information to the system, GPUs translate the raw data into imagery that displays what the car “sees” in a way that more closely mimics what a human would see.

Companies like NVIDIA are continuing to evolve their GPU design and performance from more general-use equipment to units specifically designed for self-driving car systems. For example, powerful yet energy efficient GPUs are essential for mass-producing self-driving cars, especially when used on electric vehicles. New innovations are being made every day, and every improvement is another step toward making autonomous transportation accessible to everyone.

Jim Dudley
