By Atul Acharya
The automated vehicle (AV) industry has been grappling with critical questions about the development of automation for some time now. Regulators in various jurisdictions have been grappling with concerns of their own: How might the benefits of automation be disseminated widely, yet safely, among the general public? How should regulators decide which vehicles may be deployed? How should AVs be evaluated for safety in the first place? What would a technical evaluation process that is fair to all look like? These questions are hardly the regulators' alone; AV developers and other stakeholders share them, because all are either direct beneficiaries of the technology or directly responsible for making it safe.
The World Economic Forum (WEF), earlier this year, launched an initiative to help regulators — local, national, and international — create a data-driven policy framework to address these questions. In partnership with consulting firm McKinsey & Co. and technology platform company Deepen.ai, the Forum launched the Safe Drive Initiative (SafeDI) to formalize just such a framework. The Forum invited several industry participants, including AAA Northern California's Autonomous Vehicle team, leading AV developers, policymakers, academics, and safety experts, to help develop this framework.
AAA Northern California’s team was led by Atul Acharya, Director of AV Strategy and Operations, and Xantha Bruso, Manager of AV Policy. As a key contributor to the steering committee, the team helped guide the framework development by asking big questions. The expertise gained from testing AVs at GoMentum Station was critical in helping develop the scenario-based assessment framework. Going deeper, the committee asked critical questions, such as:
- How should AVs be evaluated for safe automation?
- How should the operational design domain (ODD) be specified such that an equivalence can be established between testing mode and deployment mode?
- How should regulators design a scenario-based assessment framework, given that the vast majority of scenarios (approximately 1–10 million) may never be tested on roads?
- What combination of testing modes — such as simulation, closed-course testing, and open road testing — should the AVs be tested in?
- What safety metrics matter when AV performance is assessed? And which metrics should be made public?
- How should regulators ask for such metrics when they do not necessarily know all the technical implementation details?
- What is the role of independent testing in evaluating AVs?
- How should scenarios be shared within the industry so that safety is not a competition but an essential requirement?
Over the course of 2020, the steering committee met monthly to guide the framework development process. The committee created several technical work groups, composed of experts from academia and industry, that each explored various technical aspects of the framework, such as defining ODD; elaborating scenario-based assessment; exploring available and upcoming technical safety standards, such as ANSI/UL 4600; and exploring AV policy regimes — with examples ranging from light-touch (e.g., US-based) to high-touch (e.g., Singapore/EU-based) approaches — and identifying gaps in these policies.
The group defined a four-stage, graduated approach to testing and assessing AVs, taking into account the requirements of various stakeholders, including the general public, the ultimate beneficiaries of automation. Broadly speaking, the Safe Drive Initiative seeks to improve regulators’ decision-making abilities on automated vehicle technologies.
The guiding principles of the framework include:
- Multi-stakeholder approach – regulators and AV developers should benefit from the framework and find the guidance both practical and implementable
- Scenario-based assessment – use of key scenarios within the deployment ODD to evaluate the AV’s performance, noting that such a scenario database is a starting point, not an end goal
- Common set of metrics – leveraging a common set of metrics for AV assessment, such as ODD excursions, operational safety, and more (some developed, others still emerging in new standards)
- Covering simulation, closed-course testing, and on-road testing – using all three modes for evaluation to ensure efficiency and effectiveness of testing
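To make the combination of scenario-based assessment and common metrics concrete, here is a minimal sketch of how a scenario run might be recorded and scored. The scenario names, metric fields (ODD excursions, minimum time-to-collision), and pass thresholds are all illustrative assumptions, not values defined by the SafeDI framework:

```python
from dataclasses import dataclass

@dataclass
class ScenarioResult:
    scenario_id: str
    mode: str              # "simulation", "closed_course", or "on_road"
    odd_excursions: int    # times the AV left its declared ODD
    min_ttc_s: float       # minimum time-to-collision observed, in seconds

def passes(result: ScenarioResult, max_excursions: int = 0, min_ttc_s: float = 2.0) -> bool:
    """Apply a common set of pass criteria to one scenario run.

    Thresholds here are hypothetical; a real framework would draw them
    from standards such as ANSI/UL 4600 and the deployment ODD.
    """
    return result.odd_excursions <= max_excursions and result.min_ttc_s >= min_ttc_s

results = [
    ScenarioResult("unprotected_left_turn", "simulation", 0, 3.1),
    ScenarioResult("pedestrian_crossing", "closed_course", 1, 1.4),
]
passed = [r.scenario_id for r in results if passes(r)]
```

Because the same record shape works across all three testing modes, results from simulation, closed-course, and on-road runs can be aggregated against one common set of metrics.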
The approach defined in the SafeDI framework is broadly divided into four stages:
- Prepare: convene necessary stakeholders, define the end goal, and establish process
- Define: establish the required behavioral competencies for the AV, and define geographic areas and parameters for each interim milestone
- Measure: specify on-road, controlled-environment, and simulation tests, and determine success/advancement criteria
- Execute: conduct tests, collect required data from AV developers as necessary, and improve the safety assurance process as needed
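The advancement criteria from the Measure stage could be checked mechanically once test results are in. The sketch below is a hypothetical gate: the mode names and pass-rate thresholds are invented for illustration, not taken from the framework:

```python
# Hypothetical advancement gate: a program moves to its next milestone
# only if it meets a minimum pass rate in every required testing mode.
REQUIRED_MODES = {"simulation": 0.99, "closed_course": 0.95, "on_road": 0.90}

def may_advance(pass_rates: dict) -> bool:
    """pass_rates maps a testing mode to the fraction of scenarios passed.

    A missing mode counts as a pass rate of 0.0, so an AV program cannot
    advance by skipping a required mode entirely.
    """
    return all(
        pass_rates.get(mode, 0.0) >= threshold
        for mode, threshold in REQUIRED_MODES.items()
    )
```

For example, `may_advance({"simulation": 0.995, "closed_course": 0.97, "on_road": 0.92})` succeeds, while a program that never ran on-road tests would be held back.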
This framework is designed to provide high-level guidance to regulators. As such, it is flexible enough for regulators to adapt to their jurisdictions, yet detailed enough to accommodate underlying technology changes. The committee recognizes that no one-size-fits-all solution will be sufficient for all jurisdictions, and that customization at each stage will be balanced with standardization and harmonization at the highest levels.
For full details of the policy framework, refer to the WEF’s website.
The SafeDI framework enables regulators to evaluate AVs, potentially by independent testing organizations, such that regulators may focus their efforts on guiding AV developers, rather than performing the tests themselves. As such, this framework encourages the use of new and upcoming standards, such as ANSI/UL 4600 in safety evaluation of AVs.
It is our hope that this approach will lead to a safer, more inclusive deployment of automated vehicles.