Self-Driving Cars: A Reality Check

Today, the media is awash with buzz about the inevitable arrival of autonomous automobiles, personal vehicles that could transport passengers under complete computer control. Writing for Forbes, David Galland predicts that ten million autonomous cars will be on American streets by 2020 (Galland). He expects the adoption of autonomous cars to have profound, transformative effects on our society, by “reducing the number of traffic accidents by upward of 90%,” offering new mobility options for seniors and people with disabilities, eliminating the need for expensive and scarce downtown parking, and “[banishing] the whole idea of rush hour … to the history books” (Galland). Ford plans to sell “true self-driving cars” without controls for human drivers such as pedals or steering wheels by 2021 (Isidore). Not to be left behind, US Senators Gary Peters and John Thune have announced they plan to introduce new legislation to foster the development of autonomous vehicles that will “[leave] room for innovators to reach their full potential” (“Joint Effort”). They believe that autonomous cars “have the potential to dramatically reduce the … lives lost on our roads and highways every year and fundamentally transform the way we get around” (“Joint Effort”).

But before we speculate on the long-term impacts of autonomous cars, and especially before we formulate sweeping national policies concerning them, we ought to consider just how soon they will become reality. There are difficult ethical, technical, and human interface challenges that the industry has not yet addressed and hard questions that our society has not yet answered. Should autonomous vehicles favor the survival of passengers or pedestrians in the event of an accident? How will we produce and maintain high-resolution maps of every road on which autonomous vehicles will be expected to operate? How will we keep passengers alert and prepared to retake control in the event of an emergency? We are not five years away from autonomous cars, as Ford claims, much less six months away from “full self-driving” Tesla vehicles, as CEO Elon Musk claims (@elonmusk). The barriers to designing safe and reliable autonomous cars are so massive that they will preclude their mainstream introduction for many decades, if not indefinitely.

SAE International, an automobile standards organization, has developed a widely used scale that categorizes vehicles by their level of automation. Level 1 and 2 vehicles can exercise limited control over steering, acceleration, or both under the constant supervision of a human driver, as is the case in some of today’s luxury cars. Level 3 vehicles, with “conditional automation,” can perform all driving tasks, including steering, accelerating, changing lanes, and making turns, but must request intervention from human drivers in exceptional situations. Level 4 vehicles, with “high automation,” can perform these tasks in most situations without falling back on human intervention. Level 5 vehicles, with “full automation,” can navigate anywhere, anytime, throughout all phases of a trip, representing the pinnacle of autonomy (“Automated Driving”).
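
To make the distinctions concrete, the scale can be summarized as a simple lookup. The Python sketch below is my own shorthand for the standard, not part of SAE J3016 itself; the names and helper function are invented for illustration.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """Rough shorthand for the SAE J3016 automation levels (illustrative only)."""
    DRIVER_ASSISTANCE = 1       # limited steering or speed control; human supervises
    PARTIAL_AUTOMATION = 2      # limited steering and speed control; human supervises
    CONDITIONAL_AUTOMATION = 3  # drives itself, but a human must take over on request
    HIGH_AUTOMATION = 4         # drives itself in most situations, no human fallback
    FULL_AUTOMATION = 5         # drives itself anywhere, anytime

def needs_human_fallback(level: SAELevel) -> bool:
    """Levels 1 through 3 all depend on an attentive human driver as the backup."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION

print(needs_human_fallback(SAELevel.CONDITIONAL_AUTOMATION))  # True
print(needs_human_fallback(SAELevel.HIGH_AUTOMATION))         # False
```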

Clearly, achieving level 4 automation or better is necessary to bring about the happily autonomous world that Galland envisions, in which seniors catch self-driving rides across town and travelers are “delivered to their destination in comfort by a self-driving Uber and wave goodbye to the car as it drives off” (Galland). It’s also a prerequisite for Ford’s cars of the future if they are indeed to lack steering wheels and brake pedals. But while much effort has been expended to make level 4 and 5 autonomous vehicles technically feasible, the ethical implications of such systems have received little consideration. A study by Thierry Fraichard, a research scientist in robotics, suggests that autonomous vehicles, no matter how sophisticated, could never prevent all possible collisions. “… It appears that absolute motion safety (in the sense that no collision will ever take place whatever happens in the environment) is impossible to guarantee in the real world,” he writes (Fraichard 11). “Today, most autonomous vehicles relies [sic] upon probabilistic modeling and reasoning to drive themselves. Probabilities are ideal to handle uncertainty but they will never allow strict motion safety guarantees” (11). In other words, it is impossible to know the future with absolute certainty. The computer programs that will drive autonomous vehicles may be more precise and perceive events faster than humans, but they must still contend with the physics and unpredictability of the real world, which no mathematical or computational model can forecast.
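
The arithmetic behind Fraichard’s point is simple. The brief Python calculation below is my own back-of-the-envelope illustration, not a model from his report, and the per-decision risk figure is an assumption: if every driving decision carries any nonzero residual probability of collision, the cumulative risk over millions of decisions can never be driven to zero.

```python
# Back-of-the-envelope illustration (not Fraichard's model): with a residual
# collision probability p per driving decision, the chance of at least one
# collision over n independent decisions is 1 - (1 - p)**n, which is never zero.
p = 1e-9  # assumed residual risk per decision; the exact number is hypothetical
for n in (1e6, 1e9, 1e12):  # decisions over a vehicle's, then a fleet's, lifetime
    cumulative_risk = 1 - (1 - p) ** n
    print(f"{n:.0e} decisions -> cumulative collision risk ~ {cumulative_risk:.6f}")
```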

Given that fully autonomous vehicles will eventually hit something, questions arise over how they should seek to minimize damage, injuries, and loss of life. It is easy to imagine unfortunate situations, such as total loss of control or an imminent crash, in which tragedy is unavoidable – but computer algorithms, not humans exercising free will, will be making the life-changing decisions. In “Why Ethics Matters for Autonomous Cars,” Patrick Lin, a California Polytechnic State University philosophy professor, presents a variety of frightening scenarios in which autonomous vehicles may be forced to pick and choose victims and survivors. For example, if an autonomous car were on course to strike either a young girl or an old grandmother, the girl’s “entire life in front of her – a first love, a family of her own, a career, and other adventures and happiness” would be weighed against the grandmother’s “right to life and as valuable a life as the little girl’s” (Lin 70). Or an autonomous car might make use of “crash-optimization strategies” by targeting lighter vehicles or vehicles with better safety reputations in the event of a collision (72). Is it right to choose the grandmother over the girl (or vice versa), or to systematically punish bicyclists and SUV drivers? How autonomous cars should act in such situations is a question of ethics and what we value, not smarter code or sharper sensors. As Lin aptly notes, “If ethics is ignored and the robotic car behaves badly, a powerful case could be made that auto manufacturers were negligent … and that opens them up to tremendous legal liability” (82). Until our society resolves these thorny ethical dilemmas and clarifies how it expects autonomous cars to behave, level 4 or 5 autonomous cars will not be suitable for general use.
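
To see how quickly these value judgments become engineering decisions, consider a purely hypothetical Python sketch of a “crash-optimization” cost function. The weights, fields, and scenario below are invented for illustration and reflect no real manufacturer’s software; the point is only that someone must choose the numbers, and the numbers encode an ethics.

```python
# Hypothetical crash-optimization cost function (illustrative only). Whoever
# picks these weights is deciding, in advance, who bears the harm.
def crash_cost(target):
    cost = target["expected_injury_severity"] * 10.0  # weight on human injury
    cost += target["vehicle_mass_kg"] / 1000.0        # penalize striking heavy objects
    if target["good_safety_rating"]:
        cost -= 1.0  # discount well-protected vehicles, which systematically targets them
    return cost

options = [
    {"name": "small car with a 5-star safety rating", "expected_injury_severity": 0.3,
     "vehicle_mass_kg": 1200, "good_safety_rating": True},
    {"name": "bicyclist", "expected_injury_severity": 0.9,
     "vehicle_mass_kg": 90, "good_safety_rating": False},
]
print(min(options, key=crash_cost)["name"])  # whichever option the chosen weights favor
```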

These scenarios may seem abstract until level 4 or 5 vehicles become reality. Recent years have borne witness to much high-profile progress and media hype about the development of autonomous cars, but such reports routinely overstate and grossly exaggerate the state of the art. In reality, the industry has a dirty little secret: autonomous cars will not be able to travel everywhere, on every road. We hear constantly about advances in sensor and camera technology that promise a breakthrough in autonomous vehicle technology; for example, Tesla recently announced that all of its electric cars in production have the 360-degree cameras and ultrasonic sensors necessary for “full self-driving capability” (“All Tesla Cars”). But the hardware on the car is only one side of the equation. Autonomous cars also require immensely detailed three-dimensional scans of the roads and environments in which they operate (Boudette). These digital maps include the locations of every road sign, building, and traffic signal along the way, because current technology is not sophisticated enough to recognize every road feature on the fly. In principle, they’re similar to digital collections of street imagery, such as Google Street View, except far more detailed and intricate. To produce its maps, Google uses specialized laser scanning equipment that costs upwards of $100,000 per outfitted vehicle (Boudette). Even then, human classifiers must pore over all the data, tagging signs and features by hand in a time-consuming and laborious process (Boudette).
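
To give a sense of what “immensely detailed” means in practice, here is a minimal Python sketch of what a single hand-tagged map feature might look like. The field names are my own invention; real high-definition map formats are proprietary and far richer.

```python
# Minimal, hypothetical record for one hand-tagged HD map feature.
from dataclasses import dataclass

@dataclass
class MapFeature:
    kind: str               # e.g., "stop_sign", "traffic_signal", "lane_boundary"
    lat: float              # surveyed position; HD maps aim for centimeter-level accuracy
    lon: float
    elevation_m: float
    tagged_by_human: bool   # per Boudette, classifiers still tag many features by hand

intersection = [
    MapFeature("traffic_signal", 42.3314, -83.0458, 6.5, tagged_by_human=True),
    MapFeature("stop_sign", 42.3316, -83.0461, 2.1, tagged_by_human=True),
]
print(sum(f.tagged_by_human for f in intersection), "features needed manual review")
```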

The autonomous vehicle industry admits that such maps will be indispensable for the foreseeable future. “If we want to have autonomous cars everywhere, we have to have digital maps everywhere,” says Amnon Shashua, chief technology officer at Mobileye (Boudette). Writing for The New York Times, automotive journalist Neal Boudette sums up the problem: “The reason digital maps are so important is that even the most advanced sensors, like radar and cameras, are not enough to enable a car to navigate a chaotic and changing world …” (Boudette). Autonomous cars may be programmed with the most sophisticated driving abilities and the most detailed road maps available, but reality is as unpredictable as collisions are inevitable. Every moment, accidents damage infrastructure, construction disrupts traffic, and bad weather renders roads dangerous or impassable. Even the simple act of tearing down a picket fence along the street would invalidate the maps. If we’re going to have autonomous cars, we face the daunting challenge of creating these maps for every road and keeping them up to date. This may be justifiable for large urban areas where autonomous cars will be common, but it will probably be cost-prohibitive for small towns and rural byways. Eventually, we might have level 4 and 5 autonomous cars, but we may not be able to ride in them everywhere.
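
The staleness problem can be pictured as a simple consistency check, as in the illustrative Python sketch below. This is not any vendor’s algorithm; it only shows why a torn-down fence forces the car to distrust its map.

```python
# Illustrative map-staleness check (not any vendor's algorithm): compare what the
# sensors currently detect against what the stored HD map expects at this spot.
expected = {"traffic_signal@42.3314,-83.0458", "picket_fence@42.3320,-83.0450"}
observed = {"traffic_signal@42.3314,-83.0458"}  # the fence was torn down yesterday

missing = expected - observed     # mapped features the car can no longer see
unexpected = observed - expected  # real-world features the map knows nothing about
if missing or unexpected:
    print("Map is stale here: request a remap or fall back to cautious driving.")
```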

It’s true that the industry is taking steps to ease the mapping process. Companies such as Here, Mobileye, and Civil Maps are installing cameras on today’s cars and trucks to collect data on many roads quickly, and researchers hope to develop new computer algorithms to reduce the need for human tagging (Boudette). But the safe operation of autonomous cars will require 3D road maps that are consistently accurate. In 2016, Joshua Brown was killed when his Tesla Model S collided with a truck while under the control of its “Autopilot” driver assist function. According to Tesla, the Autopilot software may have perceived the truck to be an overpass, and hence did not apply the brakes. (Neither did Brown.) Autopilot did not have a road map, which could have provided enough information for the software to avoid the collision (Boudette). Google’s autonomous cars rely completely on their maps to identify critical traffic control devices like traffic signals and stop signs. If a Google autonomous car encounters an unmapped traffic signal, it might fail to recognize it and run a red light (Gomes). The trouble with the crowdsourced imagery and computer analysis techniques currently being pursued is that they may not be sufficient to produce maps that are always right. Even if they are, the public may be unwilling to accept that such a critical and volatile component of the autonomous car would not be vetted for quality control by humans. Thus, the maps needed by autonomous cars will likely remain very expensive to produce.

Level 4 and 5 autonomous vehicles present difficult ethical problems that remain unsolved, and their need for detailed road maps will confine their use to select locations. But what about level 3 automation, in which computers drive vehicles most of the time, but human drivers intervene in exceptional situations? Though it may seem easier to design a level 3 autonomous car, level 3 vehicles face the extreme difficulty of maintaining the attention of their human supervisors. “A car with any level of autonomy that relies upon a human to save the day in an emergency poses almost insurmountable engineering, design, and safety challenges, simply because humans are for the most part horrible backups,” writes Alex Davies, a transportation reporter for Wired. “They are inattentive, easily distracted, and slow to respond” (Davies). Ford faces this exact problem testing its experimental autonomous cars. Ford engineers, despite specialized training, cannot help but fall asleep and lose focus while monitoring the cars, even when partners are introduced to watch the watchmen (Naughton). It’s simply not reasonable to ask a human passenger who has been relaxing in an autonomous car for hours to be prepared to retake control at a moment’s notice. As a result, Google, Uber, Ford, and other key players in the autonomous vehicle industry have effectively given up on practical level 3 automation (Davies). Autonomous cars will not gradually evolve from level 2 to level 3 to levels 4 and 5; we’ll have limited driver assistance one day, then full automation the next. Making that transition will be extremely challenging given the ethical and technical issues inherent in fully autonomous vehicles, and it will be an all-or-nothing proposition.
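
A rough calculation shows how little margin a level 3 handoff leaves. The speed and reaction times in the Python sketch below are my own assumptions, not measurements from Ford, Wired, or any study; the point is simply that a few seconds of warning is not much for a disengaged passenger.

```python
# Back-of-the-envelope handoff arithmetic with assumed (hypothetical) numbers.
speed_mps = 30.0           # roughly 67 mph on a highway
hazard_distance_m = 120.0  # distance at which the system realizes it needs help
time_available_s = hazard_distance_m / speed_mps  # 4.0 seconds of warning

reaction_s = 2.5  # assumed time for a drowsy passenger to look up and grab the wheel
maneuver_s = 2.7  # assumed time to actually brake or steer out of trouble

print(f"time available: {time_available_s:.1f} s, time needed: {reaction_s + maneuver_s:.1f} s")
print("handoff succeeds" if reaction_s + maneuver_s <= time_available_s else "handoff fails")
```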

Autonomous cars are coming. But as citizens and lawmakers, we should be skeptical about the grandiose claims made by the autonomous vehicle industry that have “gotten totally out of sync with reality,” according to University of California autonomous driving researcher Steven Shladover (Simonite). By 2021, we might be able to ride in a level 4 Ford autonomous car, but it will be a “low-speed taxi service limited to certain roads” that Shladover says we should “[not] expect … to come in the rain” (Simonite). Galland’s self-driving utopia, in which mobility is painless and traffic congestion is nonexistent, will remain a quixotic fantasy for some time. But the grand delusion that the arrival of autonomous cars is imminent is not just a harmless misconception; it’s having very real consequences for public policy. Senators Peters and Thune are pursuing the deregulation of the autonomous vehicle industry under the guise that the “slow pace of regulation could become a significant obstacle to the development of new and safer vehicle technology” (“Joint Effort”). Transportation commentators like Randal O’Toole argue that investments in public transit “are likely to soon be obsolete” because “self-driving cars will dominate the roads sooner than most people think” (O’Toole). Planning for a future filled with white elephants is foolish. If we gamble everything on the advent of autonomous vehicles, allowing the industry to wreak havoc on our roads and communities and neglecting pressing transportation investments, we will be much like the passengers of Ford’s autonomous cars of 2021: going nowhere fast and left out in the rain.

Works Cited

“All Tesla Cars Being Produced Now Have Full Self-Driving Hardware.” Tesla, 19 Oct. 2016, www.tesla.com/blog/all-tesla-cars-being-produced-now-have-full-self-driving-hardware. Accessed 8 Mar. 2017.

“Automated Driving: Levels of driving automation are defined in new SAE International standard J3016.” SAE International, 2014, www.sae.org/misc/pdfs/automated_driving.pdf. Accessed 28 Feb. 2017.

Boudette, Neal. “Building a Road Map for the Self-Driving Car.” The New York Times, 2 Mar. 2017, www.nytimes.com/2017/03/02/automobiles/wheels/self-driving-cars-gps-maps.html. Accessed 7 Mar. 2017.

Davies, Alex. “The Very Human Problem Blocking the Path to Self-Driving Cars.” Wired, 1 Jan. 2017, www.wired.com/2017/01/human-problem-blocking-path-self-driving-cars. Accessed 9 Mar. 2017.

@elonmusk. “@tsrandall 3 months maybe, 6 months definitely.” Twitter, 23 Jan. 2017, 7:00 p.m., twitter.com/elonmusk/status/823727035088416768.

Fraichard, Thierry. “Will the Driver Seat Ever Be Empty?” Institut National de Recherche en Informatique et en Automatique, research report RR-8493, 2014, hal.inria.fr/hal-00965176v2/document. Accessed 28 Feb. 2017.

Galland, David. “10 Million Self-Driving Cars Will Hit The Road By 2020 – Here’s How To Profit.” Forbes, 3 Mar. 2017, www.forbes.com/sites/oliviergarret/2017/03/03/10-million-self-driving-cars-will-hit-the-road-by-2020-heres-how-to-profit. Accessed 7 Mar. 2017.

Gomes, Lee. “Hidden Obstacles for Google’s Self-Driving Cars.” MIT Technology Review, 28 Aug. 2014, www.technologyreview.com/s/530276/hidden-obstacles-for-googles-self-driving-cars. Accessed 28 Feb. 2017.

Isidore, Chris. “True self-driving cars will arrive in 5 years, says Ford.” CNN, 16 Aug. 2016, money.cnn.com/2016/08/16/technology/ford-self-driving-cars-five-years. Accessed 7 Mar. 2017.

Lin, Patrick. “Why Ethics Matters for Autonomous Cars.” Autonomes Fahren, edited by Markus Maurer et al., Springer, 2015, pp. 69-85.

Naughton, Keith. “Ford’s Dozing Engineers Side With Google in Full Autonomy Push.” Bloomberg, 17 Feb. 2017, www.bloomberg.com/news/articles/2017-02-17/ford-s-dozing-engineers-side-with-google-in-full-autonomy-push. Accessed 28 Feb. 2017.

O’Toole, Randal. “Transit is dead. Let’s prepare for the next mobility revolution.” The Washington Post, 1 Mar. 2016, www.washingtonpost.com/news/in-theory/wp/2016/03/01/transit-is-dead-lets-prepare-for-the-next-mobility-revolution. Accessed 27 Apr. 2017.

“Sens. Peters and Thune Announce Joint Effort on Self-Driving Vehicles.” Gary Peters: United States Senator for Michigan, 13 Feb. 2017, www.peters.senate.gov/newsroom/press-releases/sens-peters-and-thune-announce-joint-effort-on-self-driving-vehicles. Accessed 7 Mar. 2017.

Simonite, Tom. “Prepare to be Underwhelmed by 2021’s Autonomous Cars.” MIT Technology Review, 23 Aug. 2016, www.technologyreview.com/s/602210/prepare-to-be-underwhelmed-by-2021s-autonomous-cars. Accessed 28 Feb. 2017.