Illustration: Getty Images.

How self-driving cars got stuck in the slow lane

The technology behind autonomous vehicles has proved devilishly hard to perfect. And progress hasn’t been helped by Tesla boss Elon Musk’s army of superfans

“I would be shocked if we do not achieve full self-driving safer than a human this year,” said the Tesla chief executive, Elon Musk, in January. For anyone who follows Musk’s commentary, this might sound familiar. In 2020, he promised autonomous cars that same year, saying: “There are no fundamental challenges.” In 2019, he promised that Teslas would be able to drive themselves by 2020 – converting into a fleet of 1m “robotaxis”. He has made similar predictions every year going back to 2014.

From late 2020, Tesla expanded beta trials of its “Full Self-Driving” software (FSD) to about 60,000 Tesla owners, who must pass a safety test and pay $12,000 for the privilege. The customers will pilot the automated driver assistance technology, helping to refine it before a general release.

With the beta rollout, Tesla is following the playbook of software companies, “where the idea is you get people to iron out the kinks”, says Andrew Maynard, director of the Arizona State University risk innovation lab. “The difficulty being that when software crashes, you just reboot the computer. When a car crashes, it’s a little bit more serious.”

Placing fledgling technology into untrained testers’ hands is an unorthodox approach for the autonomous vehicle (AV) industry. Other companies, such as Alphabet-owned Waymo, General Motors-backed Cruise and AV startup Aurora, use safety operators to test technology on predetermined routes. While the move has bolstered Tesla’s populist credentials with fans, it has proved reputationally risky. Since Tesla put its tech into the hands of the people, a stream of videos documenting reckless-looking FSD behaviour has racked up numerous views online.

There’s the video of a car in FSD mode veering sharply into oncoming traffic, prompting the driver to swerve off the road into a field. The one that shows a car repeatedly attempting to turn on to train tracks and into pedestrians. Another that captures the driver struggling to regain control of the car after the system prompts him to take over. What would appear to be the first crash involving FSD was reported to the US National Highway Traffic Safety Administration (NHTSA) in November last year; no one was injured, but the vehicle was “severely damaged”.

Tesla boss Elon Musk has promised the arrival of self-driving cars several times over the years. Photograph: Stephen Lam/Reuters

FSD is proficient at driving on motorways, where it’s “straightforward, literally”, says Taylor Ogan, a Tesla FSD owner and chief executive of Snow Bull Capital. On more complex, inner-city streets, he says, the system is more unpredictable. Continuous software updates are supposed to iron out glitches. For example, the NHTSA forced Tesla to prevent the system from executing illegal “rolling stops” (moving slowly through a stop sign without ever coming to a full stop), while an “unexpected braking” problem is the subject of a current inquiry. In Ogan’s experience of trialling FSD, though, “I haven’t even seen it get better. It just does crazier things more confidently.”

Maynard says the “learner driver” metaphor holds for some of FSD’s issues, but falls apart when the technology engages in indisputably non-human behaviour: a seeming indifference to getting dangerously close to pedestrians, for example, or the time a Tesla ploughed into a bollard that FSD failed to register. Similar problems have emerged with Tesla’s Autopilot software, which has been implicated in at least 12 accidents (with one death and 17 injuries) owing to the cars being unable to “see” parked emergency vehicles.

There’s reason to believe that the videos that make their way online are some of the more flattering ones. Not only are the testers Tesla customers, but an army of superfans acts as an extra deterrent to sharing anything negative. Any reports of FSD behaving badly can trigger a wave of outrage; any critical posts on the Tesla Motors Club, a forum for Tesla drivers, are inevitably greeted by people blaming users for accidents or accusing them of wanting Tesla to fail. “People are terrified that Elon Musk will take away the FSD that they paid for and that people will attack them,” says Ogan.

This helps to shield Tesla from criticism, says Ed Niedermeyer, the author of Ludicrous: The Unvarnished Story of Tesla Motors, who was “bombarded by an online militia” when he started reporting on the company. “Throughout Tesla’s history, this faith and sense of community… has been absolutely critical to Tesla’s survival,” he says. The proof, he adds, is that Musk can claim again and again to be a year from reaching full autonomous driving without losing the trust of fans.


But it’s not just Tesla that has missed self-imposed autonomous driving deadlines. Cruise, Waymo, Toyota and Honda all said they would launch fully self-driving cars by 2020. Progress has been made, but not on the scale anticipated. What happened?

“Number one is that this stuff is harder than manufacturers realised,” says Matthew Avery, director of research at Thatcham Research. While about 80% of self-driving is relatively simple – making the car follow the line of the road, stick to a certain side, avoid crashing – the next 10% involves more difficult situations such as roundabouts and complex junctions. “The last 10% is really difficult,” says Avery. “That’s when you’ve got, you know, a cow standing in the middle of the road that doesn’t want to move.”

It’s the last 20% that the AV industry is stuck on, especially the final 10%, which covers the devilish problem of “edge cases”. These are rare and unusual events that occur on the road such as a ball bouncing across the street followed by a running child; complicated roadworks that require the car to mount the kerb to get past; a group of protesters wielding signs. Or that obstinate cow.

Self-driving cars rely on a combination of basic coded rules such as “always stop at a red light” and machine-learning software. The machine-learning algorithms imbibe masses of data in order to “learn” to drive proficiently. Because edge cases only rarely appear in such data, the car doesn’t learn how to respond appropriately.
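
To make this concrete, here is a minimal sketch in Python of the split between hard-coded rules and learned behaviour. Nothing in it comes from any manufacturer’s actual software; the scene labels, counts and threshold are invented purely to show why a situation that appears only a handful of times in training data is the one the system handles least reliably.

```python
from collections import Counter
import random

random.seed(0)

# Hypothetical training log: one label per recorded driving scene.
COMMON_SCENES = ["follow_lane", "stop_at_red", "approach_roundabout"]
EDGE_CASES = ["cow_blocking_road", "ball_then_child", "protest_with_signs"]

training_data = random.choices(COMMON_SCENES, k=999_900) + random.choices(EDGE_CASES, k=100)
scene_counts = Counter(training_data)

def planned_action(scene: str) -> str:
    # 1) Hard-coded rule: always honour a red light.
    if scene == "stop_at_red":
        return "brake_to_stop"
    # 2) Stand-in for the learned model: behaviour is only dependable for
    #    scenes the training data covered many times over.
    if scene_counts[scene] > 1_000:
        return "drive_as_learned"
    # 3) Edge case: almost no examples to learn from, so the response is a guess.
    return "unreliable_guess"

for scene in ["stop_at_red", "follow_lane", "cow_blocking_road"]:
    print(f"{scene:>20}: seen {scene_counts[scene]:>7,} times -> {planned_action(scene)}")
```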

An Uber self-driving car at its Pittsburgh technology centre in 2016. Photograph: Angelo Merendino/Getty

The thing about edge cases is that they are not all that rare. “They might be infrequent for an individual driver, [but] if you average out over all the drivers in the world, these kinds of edge cases are happening very frequently to somebody,” says Melanie Mitchell, computer scientist and professor of complexity at the Santa Fe Institute.
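
Mitchell’s point is easy to check with back-of-the-envelope arithmetic. The figures below are illustrative assumptions rather than data from the article: an event that a single driver might meet once in a million miles still crops up thousands of times a day across a national fleet.

```python
# Illustrative assumptions only - none of these figures come from the article.
per_driver_miles_per_year = 10_000        # assumed annual mileage per driver
edge_case_rate_per_mile = 1 / 1_000_000   # assume one edge case per million miles
drivers = 200_000_000                     # assumed number of active drivers

per_driver_per_year = per_driver_miles_per_year * edge_case_rate_per_mile
fleet_per_day = drivers * per_driver_per_year / 365

print(f"Expected edge cases per driver per year: {per_driver_per_year:.2f}")  # ~0.01
print(f"Expected edge cases across the fleet per day: {fleet_per_day:,.0f}")  # ~5,479
```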

While humans are able to generalise from one scenario to the next, if a self-driving system appears to “master” a certain situation, it doesn’t necessarily mean it will be able to replicate this under slightly different circumstances. It’s a problem that so far has no answer. “It’s a challenge to try to give AI systems common sense, because we don’t even know how it works in ourselves,” says Mitchell.

Musk himself has alluded to this: “A major part of real-world AI has to be solved to make unsupervised, generalised full self-driving work,” he tweeted in 2019. Failing a breakthrough in AI, autonomous vehicles that function on a par with humans probably won’t be coming to market just yet. Other AV makers use high-definition maps – charting the lines of roads and pavements, placement of traffic signs and speed limits – to partly get around this problem. But these maps need to be constantly refreshed to keep up with ever-changing conditions on roads and, even then, unpredictability remains.

The edge-case problem is compounded by AV technology that acts “supremely confidently” when it’s wrong, says Philip Koopman, associate professor of electrical and computer engineering at Carnegie Mellon University. “It’s really bad at knowing when it doesn’t know.” The perils of this were evident in the 2018 Uber crash, in which a prototype AV killed Elaine Herzberg as she walked her bicycle across a road in Arizona. An interview with the safety operator behind the wheel at the time describes the software flipping between different classifications of Herzberg’s form – “vehicle”, “bicycle”, “other” – until 0.2 seconds before the crash.
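
The failure mode Koopman describes can be sketched in a few lines of Python. This is not Uber’s perception code; it simply shows how a per-frame classifier with near-tied scores and no “unknown” option can flip between labels from one frame to the next. Every label and number in it is invented for illustration.

```python
import random

random.seed(1)
LABELS = ["vehicle", "bicycle", "other"]

def classify_frame() -> str:
    # Stand-in for a per-frame perception model: for an ambiguous object
    # (a person walking a bicycle), the scores are nearly tied, so the
    # winning label can change from frame to frame. There is no "unknown"
    # option - the system always commits to its current best guess.
    scores = {label: random.uniform(0.0, 1.0) for label in LABELS}
    return max(scores, key=scores.get)

history = [classify_frame() for _ in range(10)]  # ten consecutive frames
print(" -> ".join(history))
print("label changes between frames:", sum(a != b for a, b in zip(history, history[1:])))
```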


The ultimate aim of AV makers is to create cars that are safer than human-driven vehicles. In the US, there is about one death for every 100m miles driven by a human (including drunk driving). Koopman says AV makers would have to beat this to prove their technology was safer than a human. But he also believes the comparative metrics the industry relies on, such as disengagement data (how often a human needs to take control to prevent an accident), elide the most important issues in AV safety.

“Safety isn’t about working right most of the time. Safety is all about the rare case where it doesn’t work properly,” says Koopman. “It has to work 99.999999999% of the time. AV companies are still working on the first few nines, with a bunch more nines to go. For every nine, it’s 10 times harder to achieve.”
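
Koopman’s “nines” can be set against the article’s benchmark of roughly one death per 100m human-driven miles with some rough arithmetic. Treating each nine as per-mile reliability, and the failure being counted as a fatal one, are simplifying assumptions made only for illustration.

```python
# One human-caused road death per ~100m miles in the US (figure from the article).
HUMAN_MILES_PER_DEATH = 100_000_000

# Each extra nine multiplies the required reliability by 10 - Koopman's point
# that "for every nine, it's 10 times harder to achieve".
for nines in range(6, 12):
    miles_per_failure = 10 ** nines   # e.g. 9 nines ~= one failure per 1bn miles
    comparison = "beats" if miles_per_failure > HUMAN_MILES_PER_DEATH else "trails"
    print(f"{nines} nines: one failure per {miles_per_failure:>15,} miles "
          f"-> {comparison} the human benchmark")
```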

Some experts believe AV makers won’t have to completely crack human-level intelligence to roll out self-driving vehicles. “I think if every car was a self-driving car, and the roads were all mapped perfectly, and there were no pedestrians around, then self-driving cars would be very reliable and trustworthy,” says Mitchell. “It’s just that there’s this whole ecosystem of humans and other cars driven by humans that AI just doesn’t have the intelligence yet to deal with.”

Cruise founder Kyle Vogt at the launch of the Cruise Origin. Photograph: Stephen Lam/Reuters

Under the right conditions, such as quiet roads and favourable weather, self-driving cars can mostly function well. This is how Waymo is able to run a limited robotaxi service in parts of Phoenix, Arizona. However, this fleet has still been involved in minor accidents and one vehicle was repeatedly stumped by a set of traffic cones despite a remote worker providing assistance. (A Waymo executive said the company was not aware of these incidents happening more often than they would with a human driver.)

Despite the challenges, the AV industry is speeding ahead. The Uber crash had a temporarily sobering effect; manufacturers suspended trials afterwards owing to negative press, and Arizona’s governor suspended Uber’s testing permit. Uber and another ride-hailing company, Lyft, both then sold their self-driving divisions.

But this year has marked a return to hubris – with more than $100bn invested in the past 10 years, the industry can hardly afford to shirk. Carmakers General Motors and Geely and AV company Mobileye have said people may be able to buy self-driving cars as early as 2024. Cruise and Waymo both aim to launch commercial robotaxi operations in San Francisco this year. Aurora also plans to deploy fully autonomous vehicles in the US within the next two to three years.


Some safety experts are concerned by the lack of regulation governing this bold next step. At present, every company “basically gets one free crash”, says Koopman, adding that the regulatory system in the US is predicated on trust in the AV maker until a serious accident occurs. He points to Uber and AV startup Pony.ai, whose driverless test permit was recently suspended in California after a serious collision involving one of its vehicles.

A side-effect of Tesla sharing its technology with customers is that regulators are taking notice. Tesla has so far avoided the more stringent requirements of other AV makers, such as reporting crashes and systems failures and using trained safety professionals as testers, because of the claim that its systems are more basic. But California’s Department of Motor Vehicles, the state’s autonomous driving regulator, is considering changing the system, in part because of the dangerous-looking videos of the technology in action, as well as investigations into Tesla by the NHTSA.

The dearth of regulation so far highlights the lack of global consensus in this space. The question, says Maynard, is “is the software going to mature fast enough that it gets to the point where it’s both trusted and regulators give it the green light, before something really bad happens and pulls the rug out from the whole enterprise?”

