With the caveat that driverless electric cars, if they work the way they do in the movies*, could serve as a reasonable stop-gap between now and when we finally get around to re-engineering our cities so that cars aren’t an almost universal requirement (a re-engineering that driverless vehicles may well further delay), and the further caveat that brandishing a firearm at people and deliberately trying to run them off the road is not okay…
*As a side note, near-future sci-fi movies tend to have autonomous vehicles operating only in highly controlled environments, like the restricted-access freeways in Minority Report and I, Robot. Everywhere else, characters are driving manually.
2018 has been a year loaded with hype bubbles around autonomous vehicles popping left and right as tech and car companies finally realize that creating an AI smart and general-purpose enough to operate a multi-ton vehicle at speed, alongside people, in an uncontrolled environment is actually way more complicated than they thought it would be (if I had a nickel, Silicon Valley…). Uber’s overtly reckless and unregulated testing killing a pedestrian in Tempe is only the most egregious example. Tesla’s self-driving technology, while actually quite limited, has been so over-sold that people have relied on it to drive them directly into trucks it mistook for a cloudless sky, and into concrete crash barriers, at highway speeds. GM’s Cruise has had trouble differentiating traffic-sign poles and plastic bags from actual obstructions in the roadway, resulting in erratic and sometimes dangerous driving; collisions thus far have ultimately been the legal fault of the driver following the Cruise, who expected it to behave like a normal vehicle. Waymo’s own autonomous taxi service, announced to great press accolades as launching by the end of this year, is still limited to the people who were already in the closed beta test for the service, and there’s still a safety driver behind the wheel (they said there wouldn’t need to be one). The self-driving car industry seems to be approaching this problem with the same “move fast and break things” attitude that Facebook has toward its social network, and I think it’s understandable that people are just a little bit put off by that.
Compounding the technical problems is the complete lack of transparency and oversight surrounding the development and testing of these platforms, which understandably makes people anxious too. Nobody but Waymo, Uber, Cruise, and Tesla knows how good their technology actually is, and this isn’t exactly a “revolt against the Jacquard loom” scenario where people are just afraid of change. People are being forced to interact with unproven technology in dangerous situations without their consent, or sometimes even their knowledge. California is the only state, afaik, with strict reporting requirements for autonomous vehicle incidents (both reporting every incident and reporting it in a timely manner), which is why almost everyone moved their testing to Arizona, where the state government basically welcomed them in and promised not to pay any attention to what was going on.
There are also the distressing Big Brother implications of these vehicles, which almost never seem to make headlines. Generally speaking, people who get up in arms about red light cameras, the UK’s addiction to CCTV, and the creepy GPS-enabled driving trackers being pushed by auto insurance companies get a lot of sympathy around here, and yet the fully 3D, real-time, multi-sensor surveillance necessary for autonomous vehicles to even operate (let alone be audited in the event of a collision) is somehow expected to be exempt from that concern.
Yes, there are stupid people who do dangerous things on the roads every day. But those things are still being done by people. We’re quite good at assigning blame for incidents and punishing those responsible when they’re people. Assigning blame and prosecuting whoever is responsible when a faceless corporation with beta-quality (at best) black-box machine learning algorithms fails to properly identify a pedestrian in the roadway because “well, we’re going to have to start testing on the open road someday, so why not now?” is a much thornier issue.
I’m not disputing that people are bad drivers. We are. We’re awful at it. But getting a computer to drive safely and accurately in an uncontrolled environment means accomplishing about ten different things at the same time, all of them wickedly complex and deeply inter-related, and most of them things we as humans are still way better at than computers (object recognition and categorization, understanding when someone is about to step off the sidewalk versus just standing there, knowing when someone is waving you on at an intersection, having a reasonable suspicion that the BMW driver is about to cut you off, interacting with cyclists at intersections, knowing where the road is when it’s covered with snow, etc.). LIDAR and the eternal vigilance of computers mean that they can, in theory, do a much better job than we can, but contrary to what these companies’ PR departments say, I don’t think they’re anywhere close to being ready for prime time. Reality is basically nothing but edge cases, and most of these companies still seem to be having trouble with the basics.