In August, the California Public Utilities Commission made history when it voted to allow two self-driving car companies, Waymo and Cruise, to commercially operate their “robotaxis” around the clock in San Francisco.
Within days, Cruise vehicles were involved in at least 10 incidents in which cars stopped mid-route, blocking city streets. Regulators responded by demanding the company cut its active fleet in half.
Despite these challenges, other cities — including Las Vegas, Miami, Austin and Phoenix — are allowing autonomous vehicle startups to conduct tests on public roads.
Self-driving car proponents see the jump from laboratories to real-world testing as a necessary step that has been a long time coming. The first autonomous vehicle was tested on the Autobahn in Germany in 1986, but progress stalled in the 1990s due to technological limitations.
After the Defense Advanced Research Projects Agency (DARPA) held a 2007 competition featuring autonomous driving capabilities, it seemed like the era of driverless cars had finally arrived. The competition kickstarted a Silicon Valley race to develop the first commercial driverless car. Optimism abounded, with engineers, investors and automakers predicting there would be as many as 10 million self-driving cars on the road by 2020.
“The question for the last 30 years is — how long is this going to take?” said Javier von Stecher (PhDPhys’08), senior software engineer at Nvidia who has worked on self-driving car technology at companies including Uber and Mercedes-Benz. “I think a lot of people were oversold on the idea that we could get this working fast. The biggest shift I’ve seen over the past decade is people realizing how hard this problem really is.”
The stakes may be high, but that’s not deterring CU Boulder researchers. From creating systems and models to studying human-machine interactions, university teams are working to advance the field safely and responsibly as self-driving cars become a fixture in our society.
Their next big question: Can we learn to trust these vehicles?
The idea behind autonomous vehicles is simple. An artificial intelligence system pulls in data from an array of sensors including radar, high-resolution cameras and GPS, and uses this data to navigate from point A to point B while avoiding obstacles and obeying traffic laws. Sound simple? It’s not.
When a self-driving car encounters an unexpected obstacle, it makes split-second judgment calls — should it brake or swerve around it? — that develop naturally in humans but are still beyond even the most sophisticated AI systems.
Moreover, there will always be an edge case that the AI-powered car hasn’t seen before, which means the key to safe autonomous vehicles is building systems that can correctly favor safe choices in unfamiliar situations.
Majid Zamani, associate professor and director of CU Boulder’s Hybrid Systems Control Lab, studies how to create software for autonomous systems such as cars, drones and airplanes. In an autonomous vehicle, sensor data flows into the AI system, which uses it to make driving decisions. But how the AI arrives at those decisions is a mystery. This, said Zamani, makes it difficult to trust the AI system — and yet trust is critically important in high-stakes applications like autonomous driving.
“These are what we call safety critical applications because system failure can cause loss of life or damage to property, so it’s really important that the way those systems are making decisions is provably correct,” Zamani said.
In contrast to AI systems that use data to create models that are not intelligible to humans, Zamani advocates for a bottom-up approach where the AI’s models are derived from fundamental physical laws, such as those governing acceleration and friction, which are well understood and unchanging.
“If you derive a model using data, you have to be able to ensure that you can quantify how much error is in that model and the actual system that uses it,” Zamani said.
Mathematically demonstrating the safety of the models used by autonomous vehicles is important for engineers and policymakers who need to guarantee safety before they’re deployed in the real world. But this raises some thorny questions: How safe is “safe enough,” and how can autonomous vehicles communicate these risks to drivers?
Computer, Take the Wheel
Each year, more than 40,000 Americans die in car accidents, and according to the National Highway Traffic Safety Administration (NHTSA), about 90% of U.S. auto deaths and serious crashes are attributable to driver error. The great promise of autonomous vehicles is to make auto deaths a relic of history by eliminating human errors with computers that never get tired or distracted.
The NHTSA designates six levels of “autonomy” for self-driving cars, which range from Level 0 (full driver control) to Level 5 (fully autonomous). For most of us, Level 5 is what we think of when we think of self-driving cars: a vehicle so autonomous that it might not even have a steering wheel and driver’s seat because the computer handles everything. For now, this remains a distant dream, with many automakers pursuing Level 3 or 4 autonomy as stepping stones.
“Most modern cars are Level 2, with partial autonomous driving,” said Chris Heckman, associate professor and director of the Autonomous Robotics and Perception Group in CU Boulder’s computer science department. “Usually that means there’s a human at the wheel, but they can relegate some functions to the car’s software such as automatic braking or adaptive cruise control.”
While these hybrid AI-human systems can improve safety by assisting a driver with braking, acceleration and collision avoidance, limitations remain. Several fatal accidents, for example, have resulted from drivers’ overreliance on autopilot — an overreliance rooted in human psychology and in drivers’ misunderstanding of what the AI can actually do.
This problem is deeply familiar to Leanne Hirschfield, associate research professor at the Institute of Cognitive Science and the director of the System-Human Interaction with NIRS and EEG (SHINE) Lab at CU Boulder. Hirschfield’s research focuses on using brain measurements to study the ways humans interact with autonomous systems, like self-driving cars and AI systems deployed in elementary school classrooms.
Trust, Hirschfield said, is defined as a willingness to be vulnerable and take on risks, and for decades the dominant engineering paradigm has focused on fostering total trust in autonomous systems.
“We’re realizing that’s not always the best approach,” Hirschfield said. “Now, we’re looking at trust calibration, where users often trust the system but also have enough information to know when they shouldn’t rely on it.”
The key to trust calibration, she said, is transparency. When an autonomous vehicle can show the driver information about how it’s making decisions or its level of confidence in its decisions, the driver is better equipped to determine when they need to grab the wheel.
Studying user responses is challenging in a laboratory setting, where it’s difficult to expose drivers to real risks. So Hirschfield and researchers at the U.S. Air Force Academy have been using a Tesla modified with a variety of internal sensors to study user trust in autonomous vehicles.
“Part of what we’re trying to do is measure someone’s level of trust, their workload and emotional states while they’re driving,” Hirschfield said. “They’ll have the car whipping around hills, which is how you need to study trust because it involves a sense of true risk compared to a study in a lab setting.”
Although Hirschfield said that researchers have made a lot of progress in understanding how to design autonomous vehicles to foster driver trust, there is still a lot of work to be done.
Sidney D’Mello, a professor at the Institute of Cognitive Science, studies how human-computer interactions shift the way we think and feel. For D’Mello, it’s unclear whether the current crop of self-driving cars can shift from today’s engineering-forward approach to a new driver-focused paradigm.
“I think we need an entirely new methodology for the self-driving car context,” D’Mello said. “If you really want something you can trust, then you need to design these systems with users starting from day one. But every single car company is kind of stuck in this engineering mindset from 50 years ago where they build the tech and then they present it to the user.”
The good news, D’Mello said, is that automakers are starting to take this challenge seriously. A collaboration between Toyota and the Institute of Cognitive Science is focused on designing autonomous vehicles that foster trust in the user.
“The autonomous model typically implies the AI is in the center with the human hovering around it,” said D’Mello. “But this needs to be a model with the human in the center.”
Even when users learn to trust autonomous vehicles, living with driverless cars and reconceptualizing how they relate to them is complex. But there’s a lot we can apply from research on prosthetics, said Cara Welker, assistant professor in biomechanics, robotics and systems design.
Much like autonomous vehicles analyze surroundings to make navigation and control decisions, robotic prostheses monitor a wearer’s movements to understand appropriate behavior. And just as teaching users to trust prosthetics requires strong feedback loops and predictable prosthetic behavior, teaching drivers to trust autonomous vehicles means providing drivers with information about what the AI is doing — and it requires drivers to reconceptualize vehicles as extensions of themselves.
“There’s a difference between users being able to predict the behavior of an assistive device versus having some kind of sensory feedback,” Welker said. “And this difference has been shown to affect whether the people think of it as ‘me and my prosthesis’ instead of just ‘me, which includes my prosthesis.’ And that’s incredibly important in terms of how users will trust that device.”
How, then, will drivers evolve to experience cars as extensions of themselves?
In 2018, a pedestrian was killed by a self-driving Uber in Arizona, which marked the first fatality attributed to an autonomous vehicle. Although the car’s backup driver pleaded guilty in the case, the question of who is responsible when autonomous vehicles kill is far from settled.
Today, there is limited regulation dictating autonomous vehicle safety and liability. One problem is that vehicles are regulated at the federal level while drivers are regulated at the state level — a division of responsibility that doesn’t account for a future where the driver and vehicle are more closely aligned.
Researchers and automakers have voiced frustration with existing autonomous driving regulations, agreeing that updated regulations are necessary. Ideally, regulations would ensure driver, passenger and pedestrian safety without quashing innovation. But what these policies might look like is still unclear.
The challenge, said Heckman, is that the engineers don’t have complete control over how autonomous systems behave in every circumstance. He believes it’s critical for regulations to account for this without insisting on impossibly high safety standards.
“Many of us work in this field because automotive deaths seem avoidable and we want to build technologies that solve that problem,” Heckman said. “But I think we hold these systems [to] too high of a standard — because yes, we want to have safe systems, but right now we have no safety frameworks, and automakers aren’t comfortable building these systems because they may be held to an extremely high liability.”
Other industries may offer a vision for how to regulate the autonomous driving industry while providing acceptable safety standards and enabling technological development, Heckman said. The aviation industry, for example, adopted rigorous engineering standards and fostered trust among engineers, pilots, passengers and policymakers.
“There’s an engineering principle that trust is a perception of humans,” Heckman said. “Trust is usually built through experience with a system, and that experience confers trust on the engineering paradigms that build safe systems.
“With airplanes, it took decades for us to come up with designs and engineering paradigms that we feel comfortable with. I think we’ll see the same in autonomous vehicles, and regulation will follow once we’ve really defined what it means for them to be trustworthy.”