Understanding Unsettled Challenges in Autonomous Driving Systems
Recently published SAE EDGE Research Reports, co-authored by the test and research team at AAA Northern California, Nevada & Utah, highlight the key issues that the autonomous vehicle industry continues to face.
Authored by Atul Acharya
- SAE EDGE Reports on Simulation and Balancing ADS Testing
The promise of highly automated vehicles (HAVs) has long been to reduce crashes, ease traffic congestion, and ferry passengers and deliver goods safely, all while providing more accessible, affordable mobility that enables new business models.
However, the development of these highly complex, safety-critical systems is fraught with deeply technical, and often obscure, challenges. These systems typically comprise four key modules, namely:
(i) the perception module, which understands the environment around the automated vehicle using sensors like cameras, lidars, and radars,
(ii) the prediction module, which predicts where all other dynamic actors and agents (such as pedestrians, bicyclists, vehicles, etc.) will be moving in the next 2-10 seconds,
(iii) the planning module, which plans the AV’s own path, taking into account the scene and dynamic constraints, and
(iv) the control module, which executes the planned trajectory by sending commands to the steering, throttle, and braking actuators.
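The four modules above form a pipeline, with each stage consuming the previous stage's output. A minimal sketch of that flow is shown below; all class and function names, and the trivial placeholder logic inside each stage, are hypothetical and purely illustrative, not any particular company's architecture:

```python
from dataclasses import dataclass
from typing import List, Tuple

# Illustrative, simplified data types (names are assumptions).
@dataclass
class Detection:                      # perception output: one observed agent
    kind: str                         # e.g. "pedestrian", "vehicle"
    position: Tuple[float, float]

@dataclass
class PredictedTrack:                 # prediction output: agent + short-horizon forecast
    agent: Detection
    future_positions: List[Tuple[float, float]]

@dataclass
class Command:                        # control output sent to actuators
    steering: float                   # radians
    throttle: float                   # 0..1

def perceive(sensor_frame: dict) -> List[Detection]:
    """(i) Perception: turn raw camera/lidar/radar data into detections."""
    return [Detection(kind=o["kind"], position=o["pos"])
            for o in sensor_frame["objects"]]

def predict(detections: List[Detection]) -> List[PredictedTrack]:
    """(ii) Prediction: forecast each agent's motion over the next 2-10 s.
    (Naive placeholder: assume each agent stays where it is.)"""
    return [PredictedTrack(agent=d, future_positions=[d.position])
            for d in detections]

def plan(tracks: List[PredictedTrack]) -> List[Tuple[float, float]]:
    """(iii) Planning: choose the AV's own path given scene constraints.
    (Placeholder: drive straight ahead along x.)"""
    return [(float(x), 0.0) for x in range(3)]

def control(path: List[Tuple[float, float]]) -> Command:
    """(iv) Control: convert the planned trajectory into actuator commands."""
    return Command(steering=0.0, throttle=0.3)

# One tick of the pipeline:
frame = {"objects": [{"kind": "pedestrian", "pos": (10.0, 2.0)}]}
cmd = control(plan(predict(perceive(frame))))
```

In a real ADS, each stage is vastly more sophisticated, but this staged structure is what makes module-level simulation (e.g., feeding synthetic sensor frames into perception) possible.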
If you are an AV developer working on automated driving systems (ADS), or perhaps even advanced driver assistance systems (ADAS), you are already using various tools to make your job easier. These tools include simulators of various kinds — such as scenario designers, test coverage analyzers, sensor models, vehicle models — and their simulated environments. These tools are critical in accelerating the development of ADS. However, it is equally important to understand the challenges and limitations in using, deploying and developing such tools for advancing the benefits of ADS.
We in the AV Testing team at GoMentum actively conduct research in AV validation, safety, and metrics. Recently, we had an opportunity to collaborate with leading industry and academic partners to highlight these key challenges. Two workshops, organized by SAE International and convened by Sven Beiker, PhD, founder of Silicon Valley Mobility, were held in late 2019 to better understand various AV testing tools. The workshop participants included Robert Siedl (Motus Ventures), Chad Partridge (CEO, Metamoto), Prof. Krzysztof Czarnecki and Michał Antkiewicz (both of University of Waterloo), Thomas Bock (Samsung), David Barry (Multek), Eric Paul Dennis (Center for Automotive Research), Cameron Gieda (AutonomouStuff), Peter-Nicholas Gronerth (fka), Qiang Hong (Center for Automotive Research), Stefan Merkl (TUV SUD America), John Suh (Hyundai CRADLE), and John Tintinalli (SAE International), along with AAA’s Atul Acharya and Paul Wells.
One of the key challenges in developing an ADS is building accurate, realistic, reliable, and predictable models for the various sensors (cameras, lidars, radars), for actors and agents (vehicles of various types, pedestrians, etc.), and for the world environment around them. These models are used for verification and validation (V&V) of advanced features. Balancing the model fidelity (“realism”) of key sensors and sub-systems while developing the product is a key challenge. The workshop addressed these important questions:
- How do we make sure simulation models (such as for sensors, vehicles, humans, environment) represent real-world counterparts and their behavior?
- What are the benefits of a universal simulation model interface and language, and how do we get to it?
- What characteristics and requirements apply to models at various levels, namely, sensors, sub-systems, vehicles, environments, and human drivers?
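To make the fidelity trade-off concrete, consider a toy lidar range model. The sketch below is purely illustrative (every parameter value is an assumption, not a real sensor specification): a low-fidelity model might add only Gaussian range noise and occasional dropouts, while a higher-fidelity model would also capture beam divergence, surface reflectivity, weather effects, and multi-path returns, at correspondingly higher development and compute cost.

```python
import random

def lidar_range_model(true_range_m, noise_sigma_m=0.02, dropout_prob=0.01,
                      max_range_m=120.0, rng=None):
    """Toy lidar range model for illustration only.

    Simulates one beam return: beyond max range or on a random dropout,
    the beam produces no return (None); otherwise the true range is
    perturbed by additive Gaussian noise. All defaults are assumptions.
    """
    rng = rng or random.Random()
    if true_range_m > max_range_m or rng.random() < dropout_prob:
        return None                  # no return for this beam
    return true_range_m + rng.gauss(0.0, noise_sigma_m)

# Deterministic usage example with a seeded generator:
reading = lidar_range_model(50.0, rng=random.Random(42))
```

The V&V question the workshop raises is precisely whether such a model's noise, dropout, and range parameters faithfully represent the real sensor's behavior, and at what fidelity level that representation is "good enough" for the feature being validated.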
To learn more about these and related issues, check out the SAE EDGE Research Report EPR2019007:
Balancing ADS Testing in Simulation Environments, Test Tracks, and Public Roads
If you are a more-than-curious consumer of AV news, you might be familiar with various AV companies proudly stating that they have tested with “millions or billions of miles” in simulation environments, or “millions of miles” on public roads. How do simulation miles translate into real-world miles? Which matters more? And why?
If you have thought of these questions, then the second SAE EDGE report might be of interest.
The second workshop focused on a broader theme: how should AV developers allocate their limited testing resources across the different modes of testing, namely simulation environments, test tracks, and public roads? Each mode has its own benefits and limitations, and can accelerate or hinder the development of an ADS accordingly, so this question is of paramount importance when the balance of limited resources is askew.
This report seeks to address the three most critical questions:
- What determines how to test an ADS?
- What is the current, optimal, and realistic balance of simulation testing and real-world testing?
- How can data be shared in the industry to encourage and optimize ADS development?
Additionally, it touches upon other challenges such as:
- How might one compare virtual and real miles?
- How often should vehicles (and their subsystems) be tested? And in what modes?
- How might (repeat) testing be made more efficient?
- How should companies share testing scenarios, data, methodologies, etc. across industry to obtain the maximum possible benefits?
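On the first of those questions, one naive way to put virtual and real miles on a common scale is to discount simulated miles by a fidelity factor before adding them to real-world miles. The sketch below is a toy illustration only; the function name and the discount value are arbitrary assumptions, not an industry methodology, and the report's point is exactly that no agreed-upon conversion yet exists:

```python
def effective_test_miles(real_miles, sim_miles, sim_discount=0.01):
    """Toy comparison of real and simulated miles (illustrative only).

    Discounts each simulated mile to a fraction of a real mile; the
    0.01 default is an arbitrary assumption, not a published figure.
    A serious comparison would weight miles by scenario novelty and
    difficulty rather than by raw distance.
    """
    return real_miles + sim_discount * sim_miles

# Under this (assumed) discount, 100,000 simulated miles and
# 1,000 real miles would count the same:
combined = effective_test_miles(500, 50_000)
```

Even this crude model makes the report's resource-allocation question visible: changing the assumed discount factor radically changes which testing mode looks most cost-effective.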
To learn more about these and related challenges, check out the SAE EDGE Research Report EPR2019011: