
A Roadmap for Accelerating Safe Automated Driving Deployment

By Atul Acharya

The automated vehicle (AV) industry has been grappling with critical questions about the development of automation for some time. Regulators in various jurisdictions have been grappling with concerns of their own: How might the benefits of automation be disseminated widely, yet safely, among the general public? How should regulators decide which vehicles may be deployed? How should AVs be evaluated for safety in the first place? What should a technical evaluation process that is fair to all look like? These questions are hardly the regulators’ alone; AV developers and other stakeholders share them, because they are either direct beneficiaries of safe technologies or directly responsible for creating them.

Earlier this year, the World Economic Forum (WEF) launched an initiative to help regulators, whether local, national, or international, create a data-driven policy framework that addresses these questions. In partnership with the consulting firm McKinsey & Co. and the technology platform company Deepen.ai, the Forum launched the Safe Drive Initiative (SafeDI) to formalize just such a framework. The Forum invited several industry participants, including AAA Northern California’s Autonomous Vehicle team, leading AV developers, policy makers, academics, and safety experts, to help develop this framework.

AAA Northern California’s team was led by Atul Acharya, Director of AV Strategy and Operations, and Xantha Bruso, Manager of AV Policy. As a key contributor to the steering committee, the team helped guide the framework’s development, drawing on expertise gained from testing AVs at GoMentum Station to shape the scenario-based assessment framework. Going deeper, the committee asked critical questions such as:

  • How should AVs be evaluated for safe automation? 
  • How should the operational design domain (ODD) be specified such that an equivalence can be established between testing mode and deployment mode?
  • How should regulators design a scenario-based assessment framework, given that the vast majority of scenarios (approximately 1-10 million) may never be tested on roads?
  • In what combination of testing modes, such as simulation, closed-course testing, and open-road testing, should AVs be tested?
  • What safety metrics matter when AV performance is assessed? And which metrics should be made public?
  • How should regulators ask for such metrics when they do not necessarily know all the technical implementation details?
  • What is the role of independent testing in evaluating AVs?
  • How should scenarios be shared within the industry so that safety is not a competition but an essential requirement?

Over the course of 2020, the steering committee met monthly to guide the framework development process. The committee created several technical work groups, composed of experts from academia and industry, that each explored technical aspects of the framework: defining the ODD; elaborating scenario-based assessment; exploring available and upcoming technical safety standards, such as ANSI/UL 4600; and surveying AV policy regimes, from light-touch (e.g., US-based) to high-touch (e.g., Singapore- and EU-based) approaches, while identifying gaps in these policies.

The group defined a four-stage, graduated approach to testing and assessing AVs, taking into account the requirements of various stakeholders, including the general public, the ultimate beneficiaries of automation. Broadly speaking, the Safe Drive Initiative seeks to improve regulators’ decision-making on automated vehicle technologies.

The guiding principles of the framework include:

  • Multi-stakeholder approach – regulators and AV developers should benefit from the framework and find the guidance both practical and implementable
  • Scenario-based assessment – use of key scenarios within the deployment ODD to evaluate the AV’s performance, while noting that such a scenario database is a starting point, not an end goal
  • Common set of metrics – leveraging a common set of metrics for AV assessment, such as ODD excursions and operational safety, some established and others still emerging in new standards (one possible data shape is sketched after this list)
  • Covering simulation, closed-course testing, and on-road testing – using all three modes of evaluation to ensure testing is both efficient and effective
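
To make these principles concrete, here is a minimal sketch of what a per-scenario assessment record combining them might look like: a key scenario drawn from the deployment ODD, one of the three testing modes, and a small common set of metrics. The field names, metrics, and aggregation below are our own illustrative assumptions, not part of the SafeDI specification.

```python
# Illustrative sketch only: these fields and metrics are assumptions for
# discussion, not the SafeDI framework's actual specification.
from dataclasses import dataclass
from enum import Enum


class TestingMode(Enum):
    SIMULATION = "simulation"
    CLOSED_COURSE = "closed_course"
    ON_ROAD = "on_road"


@dataclass
class ScenarioResult:
    scenario_id: str                 # key scenario from the deployment ODD
    mode: TestingMode                # which testing mode produced this result
    odd_excursions: int              # count of exits from the declared ODD
    min_time_to_collision_s: float   # one example operational safety metric
    passed: bool                     # outcome against predefined criteria


def summarize(results: list[ScenarioResult]) -> dict:
    """Aggregate a common set of metrics across all tested scenarios."""
    total = len(results)
    return {
        "scenarios_tested": total,
        "pass_rate": sum(r.passed for r in results) / total if total else 0.0,
        "total_odd_excursions": sum(r.odd_excursions for r in results),
    }
```

A shared record shape like this is what would let regulators compare results across developers and across testing modes without needing access to implementation details.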

The approach defined in the SafeDI framework is broadly divided into four stages:

  1. Prepare: convene the necessary stakeholders, define the end goal, and establish the process
  2. Define: establish the required behavioral competencies for the AV, the geographic areas, and the parameters for each interim milestone
  3. Measure: specify on-road, controlled-environment, and simulation tests, and determine success/advancement criteria
  4. Execute: conduct tests, collect required data from AV developers as necessary, and improve the safety assurance process as needed (a minimal sketch of this staged progression follows)
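
As a minimal sketch (our own illustration, not a normative definition from the framework), the graduated structure can be thought of as a gate at each stage that opens only when that stage’s advancement criteria are met:

```python
# Minimal sketch of the four-stage, graduated structure. The stage names come
# from the SafeDI framework; the advancement rule is our own assumption.
from enum import Enum


class Stage(Enum):
    PREPARE = 1   # convene stakeholders, define the end goal, set up process
    DEFINE = 2    # behavioral competencies, geographic areas, milestones
    MEASURE = 3   # specify tests and success/advancement criteria
    EXECUTE = 4   # run tests, collect data, improve safety assurance


def next_stage(current: Stage, criteria_met: bool) -> Stage:
    """Advance one stage only when the current stage's criteria are met;
    Execute is terminal and iterates on the assurance process instead."""
    if not criteria_met or current is Stage.EXECUTE:
        return current
    return Stage(current.value + 1)


# Example: a program that has met its Define-stage criteria moves to Measure.
assert next_stage(Stage.DEFINE, criteria_met=True) is Stage.MEASURE
```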

This framework is designed to provide high-level guidance to regulators. As such, it is flexible enough for regulators to adapt to their jurisdictions, yet detailed enough to accommodate underlying technology changes. The committee recognizes that no one-size-fits-all solution will suit every jurisdiction, and that customization at each stage must be balanced with standardization and harmonization at the highest levels.

For full details of the policy framework, see the following resources on the WEF’s website:

Safe Drive Initiative 

Safe Drive Initiative: Creating safe autonomous vehicle policy 

Safe Drive Initiative: Scenario-based AV Policy Framework

Implications 

The SafeDI framework enables regulators to have AVs evaluated, potentially by independent testing organizations, so that they can focus their efforts on guiding AV developers rather than performing the tests themselves. To that end, the framework encourages the use of new and upcoming standards, such as ANSI/UL 4600, in the safety evaluation of AVs.

It is our hope that this approach will lead to a safer, more inclusive deployment of automated vehicles.

GoMentum Presents at SAE World Congress WCX 2020 Digital Summit

GoMentum Station presents two key papers at SAE WCX 2020: one on testing automated driving systems and one on a novel collision avoidance safety metric.

We are excited to announce that research performed by AAA’s AV Testing team, in collaboration with two key partners, is being presented at the SAE World Congress 2020.

Among the many technical standards groups and industry conferences, SAE and its World Congress stand apart. This year, in place of the Detroit event, we are excited to support SAE WCX virtually and showcase our research via the Digital Summit.

Both papers, along with their oral presentations, are available for on-demand viewing on the SAE WCX website in the Body / Chassis / Occupant and Pedestrian Safety Structure technical sessions category.

Modes of Automated Driving System Scenario Testing: Experience Report and Recommendations (SAE Paper 2020-01-1204) 

This research, performed in collaboration with Professor Krzysztof Czarnecki, Michal Antkiewicz, and the team at the University of Waterloo’s Waterloo Intelligent Systems Engineering (WISE) Lab, explores testing autonomous vehicles in four different modes, including simulation, mixed-reality testing, and test-track testing. The team ran UW’s automated vehicle, dubbed “UW Moose”, through six rigorous scenario tests in the different modes, then compared and contrasted the modes’ benefits and drawbacks. The paper closes with 12 recommendations on choosing testing modes for automated driving systems.
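
The paper’s 12 recommendations are best read in full, but the sketch below conveys the flavor of the trade-off the paper examines: choosing a testing mode per scenario by balancing fidelity, cost, and physical risk. The numeric attributes and the selection heuristic here are invented for illustration and are not taken from the paper.

```python
# Illustrative only: the modes echo those discussed in the paper, but the
# numeric attributes and the selection heuristic are invented for this sketch.
from dataclasses import dataclass


@dataclass(frozen=True)
class Mode:
    name: str
    fidelity: float   # realism of results, 0..1
    cost: float       # relative cost per scenario run, 0..1
    risk: float       # physical risk to hardware and people, 0..1


MODES = [
    Mode("simulation", fidelity=0.6, cost=0.1, risk=0.0),
    Mode("mixed_reality", fidelity=0.8, cost=0.4, risk=0.2),
    Mode("test_track", fidelity=0.9, cost=0.7, risk=0.4),
]


def pick_mode(required_fidelity: float, max_risk: float) -> Mode:
    """Pick the cheapest mode meeting the fidelity and risk constraints."""
    candidates = [m for m in MODES
                  if m.fidelity >= required_fidelity and m.risk <= max_risk]
    if not candidates:
        raise ValueError("no testing mode satisfies the constraints")
    return min(candidates, key=lambda m: m.cost)


# Example: an early-stage scenario tolerating lower fidelity runs in simulation.
print(pick_mode(required_fidelity=0.5, max_risk=0.1).name)  # -> simulation
```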

The SAE paper 2020-01-1204 may be purchased on the SAE website.

Development of a Collision Avoidance Capability Metric (SAE Paper 2020-01-1207)

This research paper discusses the development and application of a novel metric for evaluating and quantifying the capability of a vehicle/controller (including a human driver) to avoid potential future collisions. The metric was developed in partnership with DRI and is potentially applicable to any scenario, including those with multiple actors and roadside objects.
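
Without reproducing the paper’s formulation, the toy sketch below illustrates the general idea behind such a capability metric: sample candidate avoidance maneuvers for a simple point-mass vehicle approaching a stationary obstacle, and report the fraction of maneuvers that avoid a collision. The kinematics, maneuver grid, and scoring are our own simplifications, not the DRI-developed metric.

```python
# Toy illustration of a collision avoidance capability score. All modeling
# choices here are simplifications for this sketch, not the paper's metric.
import itertools


def min_separation(v0, gap, brake_decel, lateral_speed,
                   obstacle_halfwidth=1.0, dt=0.05, horizon=5.0):
    """Simulate a point mass braking and moving laterally; return its minimum
    longitudinal distance to a stationary obstacle while in its path."""
    x = y = 0.0
    v = v0
    min_d = gap
    t = 0.0
    while t < horizon:
        v = max(0.0, v - brake_decel * dt)   # brake, never reverse
        x += v * dt
        y += lateral_speed * dt
        if abs(y) < obstacle_halfwidth:      # still laterally overlapping
            min_d = min(min_d, gap - x)
        t += dt
    return min_d


def capability_score(v0=15.0, gap=20.0):
    """Fraction of sampled (braking, steering) maneuvers avoiding collision."""
    brakes = [2.0, 4.0, 6.0, 8.0]            # m/s^2
    laterals = [-2.0, -1.0, 0.0, 1.0, 2.0]   # m/s
    outcomes = [min_separation(v0, gap, b, s) > 0.0
                for b, s in itertools.product(brakes, laterals)]
    return sum(outcomes) / len(outcomes)


print(f"capability score: {capability_score():.2f}")
```

A real metric needs richer vehicle dynamics, moving actors, and controller limits; the point here is only that “capability” can be framed as coverage of the space of physically available avoidance maneuvers.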

The SAE paper 2020-01-1207 may be purchased on the SAE website.

To discuss this and other research at GoMentum, feel free to contact Atul Acharya or Paul Wells at [email protected]

Introducing The GoMentum Blog

The Bay Area has been the epicenter of the self-driving vehicle industry since Waymo’s Chauffeur project in 2013. Over the last seven-odd years the industry has evolved tremendously; sixty-six companies are currently permitted to test vehicles on California public roads. For Waymo and startups alike, public road testing is but one environment in a broader testing regime. Simulation, of course, is another dominant mode, with industry leaders citing millions or billions of virtual miles driven. Likewise, controlled testing on private roads is another piece of a holistic approach to validation, and one that has been employed in the auto industry for decades (GM first bought Milford in 1936).

When the AV industry began, Bay Area teams were equipped primarily with research vehicles and needed only parking lots for controlled, full-vehicle testing before moving to public road driving. Over the last two years especially, with fleets of vehicles now driving on public roads and a focus on scale, most of the industry has outgrown parking lots. GoMentum, a military base turned proving ground in 2014, serves as a more robust tool for the growing needs of validation teams. AAA Northern California, Nevada & Utah became formally involved in 2018 and has since invested in infrastructure and operations to support efficient use of the site. We’re proud to partner with multiple companies.

As the use of GoMentum as a development tool grew, we also began to think about other opportunities, beyond the physical facility, that could advance the industry’s understanding of safety. Guided by questions that have crystallized in the field (most notably, how safe is safe enough?), GoMentum created a research agenda in 2019. This agenda sought to complement the work of ISO 21448, UL 4600, Pegasus, and Safety First for Automated Driving by focusing on three key areas: methods for full-vehicle validation, the relationship between physical and virtual testing, and safety metric development.

Having wrapped up several of these projects, we’re excited to share our results. Please stay tuned as we release more about our work and offer insights into questions like: How can AV safety be measured? What challenges arise when distributing validation across physical and virtual environments? What is the state of the current validation toolchain? Which requirements are important for vehicle safety? What must legislators and policy makers do to ensure the safety of the public?