Making Autonomous Vehicles Smarter

Combining real-world info and simulated conditions to build a data set.

Deep learning (DL)—one of the newest trends in artificial intelligence—has driven remarkable progress in the development of autonomous driving systems, but the driverless car has a long way to go before it crosses the finish line.

One factor holding the technology back is its need for massive amounts of data for training, testing and validation. The growing fleet of driverless test vehicles is helping to meet this requirement. Industry experts estimate that just one of these vehicles can generate as much as 10TB of data per hour. But as astounding as it may sound, it’s not enough. Research teams need more data. In fact, they require another source of data to help the technology realize its full potential.
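For a sense of scale, a quick back-of-the-envelope calculation shows how fast those numbers compound. The fleet size and duty cycle below are illustrative assumptions, not figures from the article:

```python
# Rough storage estimate using the article's 10 TB/hour figure.
TB_PER_HOUR = 10           # per vehicle, from the industry estimate above
VEHICLES = 100             # hypothetical fleet size
HOURS_PER_DAY = 8          # hypothetical daily test driving
DAYS_PER_YEAR = 365

total_tb = TB_PER_HOUR * VEHICLES * HOURS_PER_DAY * DAYS_PER_YEAR
print(f"~{total_tb / 1e6:.1f} exabytes per year")  # ~2.9 EB
```

Even under these modest assumptions, a small fleet produces data on the exabyte scale every year.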

“Simulation is a key tool in correlating physical signals, AI scenarios and deep learning models.”

— Dhananjai Saranadhi, AutoX

Developers, however, don’t just need more data. They require a global collection of data—one that comes closer to accounting for the nearly countless variables encountered by vehicles on the road. To meet these demands, designers are turning to sophisticated simulators that create digital worlds, providing more insight into the dynamic forces at play in today’s driving environments. In response to the growing contributions of these simulators, developers have begun to embrace a new design technique.

“The industry is using an approach of combining real-world data with simulated conditions,” says Tom Wilson, vice president of Automotive at Graphcore. “The AI models need more data to become more robust, and the only way to drive maximum robustness is to generate as much data—both real-world and simulated—as is possible. This is true for training and for performance validation.”

To appreciate the contribution that simulators make, you have to examine how they function within the context of the DL process and look at the impact that the data they provide has on the autonomous driving system.

Deep Learning Basics

Even a cursory look at DL drives home the importance of data to the technology. The fact is that if big data hadn’t come along, DL wouldn’t have become the disruptive technology it is today. But it did, and it has. So what are the nuts and bolts of this technology, and how do they work?

The real world frequently imposes conditions that challenge an autonomous driving system’s ability to “see” and interpret the driving environment. Armed with rich databases enhanced with simulation data, developers can test and validate autonomous systems to ensure that they can function safely under various ambient conditions, such as the low-light scene shown here. Image courtesy of NVIDIA.

A subset of machine learning, DL helps computers learn to recognize patterns in the digital manifestations of things like sensor data, images and sounds. This enables the system to formulate predictions and ultimately make decisions.

This learning occurs in an iterative, layered process built on a hierarchy of neural network layers. In the first layer, the system receives input data from sensors and passes it on to what are called “hidden layers.” In these middle tiers, the computer performs a succession of computations on the inputs, interpreting sensory data through machine-perception algorithms. Each hidden layer trains on a distinct set of features, based on the previous layer’s output.

During this process, the algorithms label and cluster raw input according to similarities among the example inputs; given a labeled dataset to train with, they can then classify data. The more hidden layers in the neural network, the more complex the features the network can recognize, ultimately constructing a more holistic picture of conditions. Finally, in the output layer, the neural network produces its output: essentially the prediction or decision.
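A minimal sketch of this input/hidden/output structure, written in plain NumPy, might look like the following. The layer sizes and class labels are toy values chosen for illustration, not any vendor’s production model:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Illustrative sizes: 16 sensor features in, 3 output classes
# (e.g., "brake", "steer left", "steer right" -- hypothetical labels).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(32, 16)), np.zeros(32)   # hidden layer 1
W2, b2 = rng.normal(size=(32, 32)), np.zeros(32)   # hidden layer 2
W3, b3 = rng.normal(size=(3, 32)),  np.zeros(3)    # output layer

x = rng.normal(size=16)       # input layer: raw sensor features
h1 = relu(W1 @ x + b1)        # each hidden layer transforms the
h2 = relu(W2 @ h1 + b2)       # previous layer's output
y = softmax(W3 @ h2 + b3)     # output layer: class probabilities
print(y)                      # the network's "prediction"
```

Training would adjust the weights from labeled examples; this sketch shows only the forward pass that turns sensor inputs into a prediction.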

Neural networks learn correlations between relevant features and make connections between what those features represent. The more data a neural network trains on, the smarter it gets and the more accurate it becomes, even if the algorithms are flawed. In fact, a bad algorithm trained on large amounts of data often outperforms a good algorithm trained on a small collection of data.

A Matter of Perception

One of the first areas where developers apply DL is a function called perception, which entails collecting data and extracting relevant information from the driving environment. Perception requires autonomous driving software to make sense of the inputs of the sensors mounted on the vehicle.

Autonomous vehicles must be able to take sensor signals, identify a broad range of objects—ranging from pedestrians, bicycles and all types of motor vehicles to curbs, traffic signs and lane markings—and determine their positions, trajectories and velocities (if movement is a factor). These tasks are largely performed by DL, particularly for sensors such as cameras, whose output takes the form of rich visual information.
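As a rough illustration of what this stage produces, the sketch below pairs detections of the same object across consecutive camera frames to estimate its velocity. The `Detection` structure and the matching step are simplified stand-ins; in practice a trained DL detector and tracker supply them:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str    # e.g., "pedestrian", "bicycle", "car"
    x: float      # position in meters, in the vehicle's frame
    y: float

def estimate_velocity(prev, curr, dt):
    """Finite-difference velocity between two matched detections."""
    return (curr.x - prev.x) / dt, (curr.y - prev.y) / dt

# A pedestrian matched across two camera frames 0.1 s apart
prev = Detection("pedestrian", x=12.0, y=3.0)
curr = Detection("pedestrian", x=11.9, y=2.8)
print(estimate_velocity(prev, curr, dt=0.1))  # approx (-1.0, -2.0) m/s
```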

To train the DL algorithm in these cases, developers must feed thousands of different images into the algorithm. The images should depict a broad variety of objects, differing in sizes, colors, shapes, ambient lighting, weather conditions and orientations.

“This is where simulation plays an important role,” says Sandeep Sovani, director, Global Automotive Industry, at ANSYS. “All these sorts of things need to be fed into the algorithm. Now we can capture this and create a database for this by actually going out and taking pictures of all these thousands and thousands of variations. But that’s very complicated. So instead, we can simply program a simulation. We can change the color of clothes on the pedestrians, change their height, change the time of day—all these types of things we can set up to churn out images, one after the other, by tens of thousands in simulation. It makes the process very efficient.”
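A sketch of the kind of parameter sweep Sovani describes might look like this. The `render_scene` call is a hypothetical stand-in for a simulator’s rendering API, and the parameter values are illustrative:

```python
import itertools

# Scene parameters to vary, per Sovani's examples. Values are illustrative.
clothing_colors = ["red", "blue", "black", "yellow"]
pedestrian_heights_m = [1.2, 1.5, 1.8]
times_of_day = ["dawn", "noon", "dusk", "night"]
weather = ["clear", "rain", "fog"]

def render_scene(color, height, time_of_day, wx):
    """Hypothetical simulator call; returns an image and its labels."""
    labels = {"color": color, "height_m": height,
              "time": time_of_day, "weather": wx}
    return None, labels   # a real simulator would return rendered pixels

# One labeled image per combination: 4 * 3 * 4 * 3 = 144 scenes here;
# real sweeps add many more axes to reach tens of thousands of images.
for combo in itertools.product(
        clothing_colors, pedestrian_heights_m, times_of_day, weather):
    image, labels = render_scene(*combo)
```

Because the simulator generates the scene, every image comes with perfect ground-truth labels for free, which is exactly what supervised training needs.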

A 3D Virtual Environment

AutoX, an autonomous vehicle startup, has created such a simulator. The 3D virtual environment promises to help engineers take a single real-world driving maneuver, such as a turn, and evaluate the impact of thousands of variables. The company has found that the library of data generated by each scenario allows its engineers to train algorithms and better understand how its DL software and test vehicles react to key factors in a scenario, letting designers tune the vehicle’s responses to achieve maximum safety.

One of the simulator’s most distinguishing features is that it is based on multi-sensor, 3D simulated inputs. The simulator combines these inputs to form a coherent picture that the DL algorithms can use to enhance the design process.

To enrich the simulator’s effectiveness, AutoX models the physical properties and kinematics of various objects (e.g., vehicles, signs and pedestrians), which the company says makes testing more realistic. The full stack of driverless software, including the sensing module, can be placed in the simulator to maximize the simulation’s realism.
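Modeling an object’s kinematics can be as simple as integrating a motion model on each simulation tick. The sketch below uses a generic kinematic bicycle model, a common textbook choice; AutoX has not published its internal formulation, so this is purely illustrative:

```python
import math
from dataclasses import dataclass

@dataclass
class VehicleState:
    x: float = 0.0        # position, meters
    y: float = 0.0
    heading: float = 0.0  # radians
    speed: float = 10.0   # m/s

def step(state, steer_rad, accel, dt=0.05, wheelbase=2.7):
    """Advance one tick with a kinematic bicycle model (illustrative)."""
    state.x += state.speed * math.cos(state.heading) * dt
    state.y += state.speed * math.sin(state.heading) * dt
    state.heading += state.speed / wheelbase * math.tan(steer_rad) * dt
    state.speed += accel * dt
    return state

car = VehicleState()
for _ in range(100):      # 5 simulated seconds at 20 Hz
    step(car, steer_rad=0.02, accel=0.0)
print(round(car.x, 1), round(car.y, 1))
```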

Simulating Sensor Input

NVIDIA’s DRIVE Sim software module aims to simulate the sensors being used on an automated vehicle. To make this segment of the simulation both realistic and effective, the module must be fed inputs from an eclectic collection of data sources, ranging from traffic and sensor models to area maps and scenario libraries. Image courtesy of NVIDIA.

AutoX asserts that efficient, real-time and realistic simulators not only speed up the development of artificial intelligence modules, but they also enhance engineers’ ability to evaluate rare scenarios that exist on the fringes of driving events.

“Simulation is a key tool in correlating physical signals, AI scenarios and deep learning models,” says Dhananjai Saranadhi, senior mechanical engineer at AutoX. “Any data collected during real-world driving is stored, and can then be used to recreate scenarios in simulation. The vehicle’s dynamics have also been captured and integrated into the system. Other vehicles in the simulation have a basic level of AI that allows them to follow (or break) rules, and their paths can be manipulated to create very particular scenarios that the engineering team wishes to study. Simulation thus provides the ideal environment to study the effect of external agents, new software features and deep-learning models on the vehicle’s driving performance.”

The Proof is in the Data

Autonomous driving systems offer a number of intriguing advantages for users and automakers alike. To cover the distance from drawing board to broad adoption, however, developers will have to find a way to prove that autonomous driving systems can deliver on their potential. Traditionally, that would translate into accruing an extraordinary number of autonomously driven miles, and therein lies the problem.

In its report titled “Simulation in Automotive: Training and Validating Autonomous Control Systems,” ABI Research estimates that accruing the billions of miles necessary to establish autonomous systems’ trustworthiness would require “the deployment of at least 3 million unproven autonomous vehicles over the course of 10 years.”
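To see why that figure is prohibitive, consider a rough calculation. The per-vehicle mileage below is an assumption for illustration, not a number from the ABI report:

```python
# Back-of-the-envelope scale check on ABI's deployment estimate.
vehicles = 3_000_000               # "at least 3 million unproven vehicles"
years = 10
miles_per_vehicle_year = 12_000    # assumed: roughly average annual mileage

total = vehicles * years * miles_per_vehicle_year
print(f"{total / 1e9:.0f} billion miles")   # 360 billion miles
```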

If you consider the market pressures developers have to contend with, it becomes clear that this approach won’t cut it. Taking this tack, there simply isn’t enough time to evaluate the elusive “corner cases”: rare situations such as the sun shining directly into a car’s camera or the effects of inclement weather.

Because of these factors, the industry has started seeing simulation as a technology that will enable the testing and validating of autonomous driving systems. Simulation will not only build up the volume of data on the driverless experience, but it will also allow developers to test their designs in rare and potentially dangerous scenarios without risk of damage or loss of life.

A Constellation of Simulation Tools

One of the simulators currently on the market that is tailored for testing and validation is NVIDIA’s suite of cloud-based tools called DRIVE Constellation. This platform uses two servers to simulate real-world autonomous driving. One server runs DRIVE Sim software, which simulates the vehicle’s sensory inputs, providing data from virtualized camera, LiDAR and radar systems that mimic data captured during actual driving.

The second server runs DRIVE AGX Pegasus, the complete autonomous vehicle software stack. Pegasus processes the simulated sensor data from the first server and then sends control commands back to the simulator, driving the virtual vehicle just as it would an actual car on the road. The driving decisions from Pegasus are fed back to the simulator 30 times per second, enabling hardware-in-the-loop testing: real-time simulation that shows how the controller responds to virtual stimuli.
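Conceptually, the closed loop between the two servers resembles the sketch below. The function names are hypothetical stand-ins, not NVIDIA’s API; only the 30 Hz rate comes from the article:

```python
import time

TICK_HZ = 30                 # decisions fed back 30 times per second
DT = 1.0 / TICK_HZ

def simulate_sensors(world):
    """Stand-in for DRIVE Sim: virtual camera/LiDAR/radar frames."""
    return {"camera": None, "lidar": None, "radar": None}

def drive_stack(sensors):
    """Stand-in for the AV software stack running on Pegasus."""
    return {"steer": 0.0, "throttle": 0.2, "brake": 0.0}

world = {"vehicle": None}    # simulated scene state
for _ in range(TICK_HZ * 10):            # 10 seconds of simulated driving
    t0 = time.monotonic()
    sensors = simulate_sensors(world)    # server 1: sensor simulation
    commands = drive_stack(sensors)      # server 2: driving decisions
    # the commands are applied to the virtual vehicle, closing the loop
    time.sleep(max(0.0, DT - (time.monotonic() - t0)))
```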

“It’s very difficult to verify and validate vehicle self-driving capabilities solely using on-road testing,” says Danny Shapiro, senior director of Automotive at NVIDIA. “Coupling actual road miles with simulated miles in the data center is the key to testing and validating autonomous vehicles. DRIVE Constellation bridges this verification and validation gap. It enables developers to test and validate the actual hardware and software that will operate in an autonomous vehicle before it’s deployed on the road. Furthermore, the platform can simulate rare and dangerous scenarios at a scale simply not possible with on-road test drives. The platform can simulate billions of miles in virtual reality, running repeatable regression tests and validating the complete AV system.”

Just the Beginning

Market research firms such as ABI Research predict that simulation will become a cornerstone technology for autonomous driving to achieve its full potential, serving with DL as the underpinning for training, testing and validating these systems.

According to ABI, simulation will prove to be a “must-have for adopters of deep-learning approaches.” Simulation helps developers determine whether the algorithms will make the right decisions. It saves money and, more importantly, helps ensure that the algorithm is safe once it is on the road.

Today, you can find multiple simulation platforms on the market. Some are offered as part of broader autonomous driving packages, while specialized vendors, such as Cognata and Metamoto, meet the need for virtual evaluation environments with simulation-as-a-service products.

Simulation’s profit potential has not been lost on the big industry players. Those who develop autonomous driving systems, meanwhile, gauge simulation’s worth by its ability to accelerate design and development efforts.

About the Author

Tom Kevan
Tom Kevan is a freelance writer/editor specializing in engineering and communications technology.