
A Roadmap for Accelerating Safe Automated Driving Deployment

By Atul Acharya

The automated vehicle (AV) industry has been grappling with critical questions about the development of automation for some time now. Regulators in various jurisdictions, meanwhile, have been wrestling with concerns of their own. Namely, how might the benefits of automation be disseminated widely, yet safely, among the general public? How should regulators decide which vehicles may be deployed? How should AVs be evaluated for safety in the first place? What should a technical evaluation process that is fair to all look like? These questions are hardly the regulators’ alone; AV developers and other stakeholders share similar concerns, primarily because they are either direct beneficiaries of safe technologies or directly responsible for creating them.

Earlier this year, the World Economic Forum (WEF) launched an initiative to help regulators — local, national and international — create a data-driven policy framework that addresses these questions. In partnership with the consulting firm McKinsey & Co. and the technology platform company Deepen.ai, the Forum launched the Safe Drive Initiative (SafeDI) to formalize just such a framework. The Forum invited several industry participants, including AAA Northern California’s Autonomous Vehicle team, along with leading AV developers, policymakers, academics and safety experts, to help develop this framework.

AAA Northern California’s team was led by Atul Acharya, Director of AV Strategy and Operations, and Xantha Bruso, Manager of AV Policy. As a key contributor to the steering committee, the team helped guide the framework’s development, and the expertise gained from testing AVs at GoMentum Station was critical in shaping the scenario-based assessment framework. Going deeper, the committee asked critical questions such as:

  • How should AVs be evaluated for safe automation? 
  • How should the operational design domain (ODD) be specified such that an equivalence can be established between testing mode and deployment mode?
  • How should regulators design a scenario-based assessment framework, given that the vast majority of scenarios (approximately 1-10 million) may never be tested on roads?
  • In what combination of testing modes — such as simulation, closed-course testing, and open-road testing — should AVs be evaluated?
  • What safety metrics matter when AV performance is assessed? And which metrics should be made public?
  • How should regulators ask for such metrics when they do not necessarily know all the technical implementation details?
  • What is the role of independent testing in evaluating AVs?
  • How should scenarios be shared within the industry so that safety is not a competition but an essential requirement?

Over the course of 2020, the steering committee met monthly to guide the framework development process. The committee created several technical work groups, composed of experts from academia and industry, that each explored various technical aspects of the framework, such as defining the ODD; elaborating scenario-based assessment; exploring available and upcoming technical safety standards, such as ANSI/UL 4600; and examining AV policy regimes, from light-touch (e.g., US-based) to high-touch (e.g., Singapore- or EU-based) approaches, while identifying gaps in these policies.

The group defined a four-stage, graduated approach to testing and assessing AVs, taking into account the requirements of various stakeholders, including the general public, the ultimate beneficiaries of automation. Broadly speaking, the Safe Drive Initiative seeks to improve regulators’ decision-making on automated vehicle technologies.

The guiding principles of the framework include:

  • Multi-stakeholder approach – regulators and AV developers should benefit from the framework and find the guidance both practical and implementable
  • Scenario-based assessment – use of key scenarios within the deployment ODD to evaluate the AV’s performance, while noting that such a scenario database would be a starting point, not an end goal
  • Common set of metrics – leveraging a common set of metrics for AV assessment, such as ODD excursions, operational safety, and more (some developed, others still emerging in new standards); a toy sketch of one such metric follows this list
  • Covering simulation, closed-course testing, and on-road testing – using all three modes for evaluation to ensure efficiency and effectiveness of testing
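
To make one of these metrics concrete, here is a toy sketch, in Python, of counting ODD excursions from a drive log. The ODD definition (a speed cap plus a rectangular geofence), the sample format, and all values are illustrative assumptions, not part of the WEF framework; real ODD specifications are far richer.

```python
from dataclasses import dataclass

@dataclass
class OddSpec:
    """Toy ODD: a max speed plus a rectangular geofence (illustrative only)."""
    max_speed_mps: float
    lat_bounds: tuple  # (min_lat, max_lat)
    lon_bounds: tuple  # (min_lon, max_lon)

    def contains(self, sample):
        lat, lon, speed = sample
        return (self.lat_bounds[0] <= lat <= self.lat_bounds[1]
                and self.lon_bounds[0] <= lon <= self.lon_bounds[1]
                and speed <= self.max_speed_mps)

def count_odd_excursions(odd, samples):
    """Count transitions from inside the ODD to outside it,
    assuming the log begins inside the ODD."""
    excursions, inside = 0, True
    for s in samples:
        now_inside = odd.contains(s)
        if inside and not now_inside:
            excursions += 1
        inside = now_inside
    return excursions

# (lat, lon, speed m/s) samples; the second sample exceeds the 25 m/s cap.
odd = OddSpec(25.0, (37.90, 38.05), (-122.10, -121.95))
log = [(37.95, -122.00, 20.0), (37.96, -122.00, 27.0), (37.97, -122.01, 22.0)]
print(count_odd_excursions(odd, log))  # -> 1
```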

The approach defined in the SafeDI framework is broadly divided into four stages:

  1. Prepare: convene necessary stakeholders, define the end goal, and establish process
  2. Define: establish the required behavioral competencies for the AV, define geographic areas, and parameters for each interim milestone
  3. Measure: specify on-road, controlled-environment, and simulation tests, and determine success/advancement criteria
  4. Execute: conduct tests, collect required data from AV developers as necessary, and improve the safety assurance process as needed

This framework is designed to provide high-level guidance to regulators. As such, it is flexible enough for regulators to adapt to their jurisdictions, yet detailed enough to accommodate underlying technology changes. The committee recognizes that no one-size-fits-all solution will suffice for all jurisdictions, and that customization at each stage will be balanced with standardization and harmonization at the highest levels.

For full details of the policy framework, see the following resources on WEF’s website:

  • Safe Drive Initiative
  • Safe Drive Initiative: Creating safe autonomous vehicle policy
  • Safe Drive Initiative: Scenario-based AV Policy Framework

Implications 

The SafeDI framework enables regulators to have AVs evaluated, potentially by independent testing organizations, so that regulators may focus their efforts on guiding AV developers rather than performing the tests themselves. The framework also encourages the use of new and upcoming standards, such as ANSI/UL 4600, in the safety evaluation of AVs.

It is our hope that this approach will lead to a safer, more inclusive deployment of automated vehicles.

Deep Dive Into AAA’s Latest Active Driving Assistance Report

GoMentum testing team leads closed-course evaluation of ADA systems

Authored by Atul Acharya

Today, AAA published the results of testing the active driving assistance (ADA) functions offered on several commercially available vehicles. The results show that critical ADA functions that drivers rely on, namely Lane Keep Assist (LKA) and Adaptive Cruise Control (ACC), fall short of expectations. These ADA functions — categorized as SAE Level 2 (L2) automation — are a subcategory of the generally known advanced driver assistance systems (ADAS).

Why

AAA’s Automotive Engineering Group regularly performs research that benefits AAA’s 60 million members and the general public; this research is conducted on commercially available vehicles, not on research prototypes. Previous research examined important ADAS functions such as automatic emergency braking (AEB) technology, revealing shortcomings of various available systems. Additional research noted how the commercial names for ADAS functions have become too confusing for consumers and recommended renaming them, a position now endorsed by SAE. These studies are independent evaluations and aim to be objective in their methodology and findings.

In the same vein, the recently concluded research examined the limitations of lane keep assistance and adaptive cruise control acting as one system. This L2 feature forms a core of automation functionality as vehicles grow more complex on their way toward full automation. As auto OEMs launch more ADAS features, it is imperative that motorists and consumers get an unbiased view of their benefits and limitations. The project therefore set out to find the limitations of active driving assistance and to inform both consumers and OEMs about system performance, with the goal of improving it.

Traffic jam simulation

Why GoMentum?

The latest research was led by AAA’s Automotive Engineering Group, in collaboration with AAA Northern California’s AV Testing team at GoMentum Station, and Automobile Club of Southern California’s Automotive Research Center. The test plan included two equally important parts: closed-course testing at GoMentum Station, and naturalistic driving on highways between Los Angeles and San Francisco. The tests were conducted over a period of a few weeks in late October and early November 2019. The work at GoMentum was led by Atul Acharya along with Paul Wells.

GoMentum Station was specifically chosen for closed-course testing because it is one of the premier sites for AV and ADAS testing, and includes features such as 1-mile-long straight roads with fresh lane markings in the Bunker City test area, along with curved roads like Kinne Boulevard that have degraded lane markings. These features are ideally suited for testing lane keep assistance functions that rely on lane markings, with the degraded markings offering an additional challenge to the vehicle’s sensors. Other areas of Bunker City were used to test Traffic Jam Assist (TJA) functionality, as well as to test the subject vehicle approaching a simulated disabled vehicle.

For closed-course testing, the key questions were:

  • How do vehicles with active driving assistance systems perform during scenarios commonly encountered in highway situations?
  • Specifically: 
    • How well does the lane keep assist system perform?
    • How does a vehicle perform in stop-and-go traffic?
    • How does a vehicle respond to a disabled vehicle on the roadway?

Instrumentation

All vehicles were instrumented with industry-standard equipment such as:

  • OxTS RT 3000 – inertial measurement unit
  • OxTS RT-Range S hunter – for accurately tracking ranges to target vehicles
  • DEWEsoft CAN interfaces for reading CAN bus messages 
  • DEWEsoft CAM-120 cameras

Target vehicles were equipped with:

  • OxTS RT 3000 and OxTS RT-Range S

Lane survey equipment on 12th Street at GoMentum Station

Testing Methodology Overview

Lane Keep Assist Testing

Sustained lane-keeping functionality is one of the primary capabilities of active driving assistance. To test LKA, the roadway used must have visible lane markings. Prior to testing, a lane survey was performed on GoMentum’s 12th Street test zone, a straight, 1.2-mile roadway with clear, fresh lane markings, ideal for evaluating vehicles at various speeds. Using high-precision lane survey equipment from OxTS, a precise map of the lane markings was created by walking the entire length of the road. This map is then used as an underlay when lane tests are performed.

During testing, the OxTS RT 3000 inertial measurement unit tracks the precise movement of the vehicle under test (VUT) as it moves along the road with the LKA function active. As part of the configuration setup, a polygon is defined in advance that marks the edge boundaries of the VUT. Range data is collected that determines the precise lateral distances from the vehicle’s polygon boundaries (more specifically, from its leftmost and rightmost points) to the nearest lane markings. All of this data is captured at 100 Hz and subsequently plotted. The charts show the vehicle’s lane-centering position, as well as its distances to the right and left lane markings. When charted appropriately, the data can show whether the VUT had any bias toward left or right placement within the lane.
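
As a rough illustration of this post-processing step, the sketch below, in Python with synthetic data, derives a lane-centering offset and left/right bias from per-sample lateral distances. The signal names, sign convention, and numbers are assumptions; the actual OxTS/DEWEsoft toolchain and data formats differ.

```python
import numpy as np

def lane_centering_stats(dist_left, dist_right):
    """Summarize lane placement from per-sample lateral distances (meters).

    dist_left[i]  -- distance from the vehicle polygon's leftmost point
                     to the left lane marking at sample i
    dist_right[i] -- distance from its rightmost point to the right marking

    A positive mean offset means the vehicle sat right of lane center.
    """
    dist_left = np.asarray(dist_left)
    dist_right = np.asarray(dist_right)
    offset = (dist_left - dist_right) / 2.0  # signed offset from lane center
    return {
        "mean_offset_m": float(offset.mean()),
        "std_offset_m": float(offset.std()),
        "min_clearance_m": float(np.minimum(dist_left, dist_right).min()),
    }

# Synthetic 10-second run at 100 Hz: the vehicle drifts slightly right.
t = np.linspace(0.0, 10.0, 1000)
left = 0.55 + 0.05 * np.sin(0.5 * t) + 0.03   # meters to left marking
right = 0.55 - 0.05 * np.sin(0.5 * t) - 0.03  # meters to right marking
print(lane_centering_stats(left, right))
```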

Traffic Jam Assistance Testing

Stop-and-go traffic situations are frequent on highways. Nominally, adaptive cruise control (ACC) systems will “follow” a lead vehicle at a safe distance, accelerating automatically if the lead vehicle accelerates, and decelerating automatically if it decelerates. Of course, exactly what a “safe distance” is, and just how quickly the vehicle accelerates or decelerates, depend on the vehicle. Knowing the limits of these systems is important so that motorists are aware of potential risks.
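
To make this nominal follow behavior concrete, here is a minimal sketch of a constant time-gap ACC controller in Python. The structure, gains, time gap, and limits are illustrative assumptions, not any production system’s logic.

```python
def acc_accel_cmd(gap_m, ego_speed, lead_speed,
                  time_gap_s=1.5, standstill_m=2.0,
                  k_gap=0.23, k_speed=0.74, a_min=-3.5, a_max=2.0):
    """Simplified constant time-gap ACC: command an acceleration that
    drives the actual gap toward a speed-dependent desired gap.
    All gains and limits are illustrative, not production values."""
    desired_gap = standstill_m + time_gap_s * ego_speed
    gap_error = gap_m - desired_gap          # >0: too far back, speed up
    speed_error = lead_speed - ego_speed     # >0: lead is pulling away
    accel = k_gap * gap_error + k_speed * speed_error
    return max(a_min, min(a_max, accel))     # respect comfort/brake limits

# Example: following 20 m behind a lead car at 25 m/s while doing 27 m/s
# commands hard braking, clamped to the assumed -3.5 m/s^2 limit.
print(acc_accel_cmd(gap_m=20.0, ego_speed=27.0, lead_speed=25.0))
```

In this toy model, the chosen time gap and braking limit directly determine how late and how hard the system brakes, which is exactly the behavior the track tests probe.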

To test traffic jam assistance, the team used a DRI Soft Car 360® on a Low Profile Robotic Vehicle (LPRV) platform. The DRI Soft Car 360® is a foam car mounted on the LPRV platform, which can move at speeds of up to 50 mph. With the DRI Soft Car acting as a simulated “lead vehicle,” the VUT activates its ACC system (by reaching a certain speed, such as 30 mph) and then lets the ACC follow the lead vehicle automatically. The lead vehicle is programmed to accelerate for some time, which causes the VUT to accelerate while maintaining a safe distance. The lead vehicle is then programmed to decelerate, which causes the VUT to decelerate. The lead vehicle once again accelerates, producing similar stop-and-go behavior in the VUT. At all times, the vehicles’ kinematic data is recorded in a data logger. The vehicles are subjected to deceleration levels of 0.3 g, 0.45 g, and 0.6 g, with three runs performed for each VUT. The following distance, separation distance and time-to-collision at the start of braking, speed differential at the start of braking, and average and maximum instantaneous deceleration are all recorded. When charted, the data reveals the system’s performance.
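
As a simplified sketch of how the recorded quantities might be reduced to per-run metrics, the Python below computes them for one deceleration event from logged kinematics. The braking-onset threshold, signal names, and units are assumptions; the study’s actual data pipeline is not described here.

```python
import numpy as np

G = 9.81  # m/s^2

def braking_event_metrics(gap_m, vut_speed, lead_speed, vut_accel):
    """Compute report-style metrics for one deceleration event.
    Braking onset is taken as the first sample where VUT deceleration
    exceeds 0.05 g (an assumed threshold, not the study's definition)."""
    gap_m, vut_accel = np.asarray(gap_m), np.asarray(vut_accel)
    vut_speed, lead_speed = np.asarray(vut_speed), np.asarray(lead_speed)

    braking = vut_accel < -0.05 * G
    if not braking.any():
        return None
    i0 = int(np.argmax(braking))              # index of braking onset

    closing = vut_speed[i0] - lead_speed[i0]  # m/s; >0 means closing in
    ttc = gap_m[i0] / closing if closing > 0 else float("inf")
    decel = -vut_accel[braking]               # positive deceleration values
    return {
        "gap_at_braking_m": float(gap_m[i0]),
        "ttc_at_braking_s": float(ttc),
        "speed_diff_at_braking_mps": float(closing),
        "avg_decel_g": float(decel.mean() / G),
        "max_decel_g": float(decel.max() / G),
    }
```

Fed with one run’s logged arrays, this yields the same kind of per-run summary the report charts across the 0.3 g, 0.45 g, and 0.6 g conditions.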

Simulated Disabled Vehicle Approach Testing

Driving on highways is often risky. AAA, the largest emergency road services (ERS) provider, alone handles over 30 million emergency road service requests nationwide every year. Encountering a disabled vehicle on a highway is a risky scenario for motorists. The team wanted to find out how active driving assistance systems react when faced with such a situation.

To create a disabled vehicle situation, the team staged a simulated scenario with the DRI Soft Car 360 placed halfway on the roadway, with 50% of the soft car in the travel lane and the other 50% on the right shoulder. The vehicle under test is then subjected to this situation, and its ADA system’s reaction is noted.
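
As a back-of-the-envelope illustration of why this scenario is demanding (and not the study’s analysis method), the arithmetic below applies the idealized stopping-distance formula d = v²/(2a); the speed and deceleration values are assumptions.

```python
G = 9.81  # m/s^2

def stopping_distance_m(speed_mps, decel_g):
    """Idealized stopping distance d = v^2 / (2 a), ignoring reaction
    time, brake ramp-up, and road conditions (illustrative only)."""
    return speed_mps ** 2 / (2.0 * decel_g * G)

# Example: a VUT at 30 mph (~13.4 m/s) braking at an assumed 0.6 g
# needs roughly 15 m to stop -- before adding any detection or
# reaction delay, during which it covers ~13 m per second.
print(round(stopping_distance_m(13.4, 0.6), 1))  # -> ~15.2
```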

Results

So how did these vehicles perform? While active driving assistance systems mostly worked as designed, there were notable shortcomings in their performance when the systems were pushed to their limits. Consumers and motorists should always remain vigilant and attentive when driving, and be ready to take over at a moment’s notice whenever these L2 automation systems are active.

To learn more about the ADA L2 Testing, please download the full report.

If you are an AV or ADAS developer, or a technology vendor working on core components of automation, and would like to confidentially test the limits of your system, or to learn more about the ADA L2 Testing project, please get in touch with Atul Acharya or Paul Wells at: [email protected]