So far, that technology has been vastly overrated, though. I’d argue any ideology that assumes perfectibility, despite being made by, you know, us imperfect assholes, is going to fail to meet those assumptions.
Unless, of course, it benefits the military.
I really don’t. Given the state of things now, I’m not at all confident it’s going to be one of those that makes it into the future.
Sounds like we finally find a point of agreement here!
Yes, but the non-Tesla companies have still made real progress. There are fleets of driverless taxis driving around San Francisco right now, and fortunately none of them has killed anyone there yet. So whether we think it’s a good idea or not, it’s definitely happening.
That will never happen, because it disallows Tesla from pushing updates to the software whenever they wish. It might start to make people think they “own” the things they buy, rather than just leasing them until such time as The Overlord decides to take control and order the car to drive away because you are a day late on some payment.
I don’t see self-driving cars as a remedy for traffic problems or energy consumption or environmental disaster or most of the other problems currently caused by cars. But one area where they have the potential to bring transformative good is in providing freedom for people who are otherwise unable to safely drive themselves due to age and/or physical disability. For a legally blind person, just having the freedom to drive themselves to the grocery store or visit a relative could be a life-changing innovation.
That, I will agree, would be a good. But then again, so would enhancing public transit for the disabled. I know in the ATL metro, MARTA provides on demand services for the disabled.
If you believe Elon Musk, your car will be busy making you money as a taxi, so there should be fewer cars on the road. Or selling excess power back to the grid. Or something…
ETA: you’re absolutely right about the need for better public transportation. Maybe we can use this ICE to EV migration period to re-think things?
They will still have to communicate with each other and with GPS satellites, and they will have to perceive the world via sensors. All of these are attack vectors, for example through spoofed inputs, even if you disable everything else.
Remember that a lot of these cars were developed by German companies specifically with the Autobahn in mind. It’s relatively easy to create a smart cruise system for a road where you will never have oncoming traffic, everyone is more or less following the rules and the road doesn’t have any sharp turns or sudden stops (except for congestion and roadworks).
(Incidentally I want to scream at everyone I see here in Norway who has a Porsche “why would you buy a car that you can literally never drive in the way it was supposed to be driven? Buy an Italian or English or Japanese roadster, those were made to zip around winding country roads!”)
I don’t, because he’s a disingenuous twat-waffle who got his seed money from apartheid.
Or maybe we can just… expand public transit? But I get it… it’s not cool and shiny, and technically exciting… all it does is solve the problems of people who aren’t tech-dude-bros, so we have to act like it’s a dumb idea even though pretty much every other country on earth has figured out how to do it…
I ordered a car from an American car manufacturer, who is developing it for the American interstate freeway system. It’s similar to the autobahn, with limited access, opposing lanes separated by physical dividers, and fences keeping pedestrians out.
Plus, I live in a “flyover state” (in the boring area between the coastal regions), where there are only a few small metropolitan areas, and they are separated by hours of 70+ mph/110+ km/h driving on arrow-straight freeways. It’s an ideal environment for smart cruise assistance.
I’m personally less concerned about the regulatory challenges of new technology since (while the techniques remain under active development and are almost certainly too immature for prescriptive standards of the “the NTSB declares that your LIDAR shall be capable of generating a point cloud of this density” flavor) various autonomous vehicle systems mostly aim to either enhance a human operator or serve as a replacement for one; and we’ve been mandating minimum standards for, and conducting mass black-box testing of, human operators for quite some time.
If effective regulation of the systems that control vehicles on public roads required understanding their technology, it could easily take literally centuries of constant effort in neurology to get to the point where we could issue a driver’s license (just get your learner’s permit and then an exhaustive fMRI battery so we can evaluate your neural network, easy!). Since that’s functionally impossible, we don’t do that; we just specify some high-level tests that operator systems have to pass; and, more or less reactively, regulate specific things like DUI and certain medical conditions that raise seizure risks or similar; and maintenance conditions like failing brakes.
I expect that some of the maintenance-related regulations will be more complex for self-driving systems, probably closer to an FAA attitude than to ‘do your taillights work and do your brake pads at least exist?’; but it’s not hard in principle to conceive of a “vendor handles getting the system to pass the high-level performance tests and shows that it is engineered to meet reliability requirements/operator is required to perform maintenance as necessary to keep the system in the field substantially equivalent to the one that was tested” set of requirements that doesn’t delve too deeply into the black box of exactly how someone’s machine vision/sensor fusion witchcraft works.
What I do worry about is the risk assessments and value judgements that will be baked in. To put it as bluntly as moral clarity requires: we let pedestrians and cyclists die, in quantity (disproportionately black and poor, because of course), for no better reason than that car buyers prefer armored luxury tanks with aggressive styling; which favors designs that are actively dangerous (e.g. very tall vertical front grilles that just hit you like a wall, rather than a low bumper and a sloped front, not to mention the typically excessive vehicle mass).
Aside from the technical challenges, regulation of autonomy systems is going to have a (probably somewhat obscured by dry technical language and dueling statisticians) underlying value determination about who matters and how much. It won’t be polite to say it in so many words; but some performance standards and reliability thresholds will boil down to “how many fatalities per mile for the convenience of not having to get nagged by ‘autopilot’ to keep your hands on the wheel?” “How many child-shaped objects so we can cut the cost of the sensor package by omitting front LIDAR coverage?”
I always remember the quote attributed to Anthony Levandowski, on hearing the first report of a driver killed while using a Tesla autopilot system: “I’m pissed we didn’t have the first death.” He was upset that his own operation wasn’t moving fast enough or breaking enough things and had been too risk-averse. That sort of attitude will obviously go somewhat less well with prospective buyers than with techbro leadership; but even there we’ve made a lot more progress on reducing the lethality of car crashes to occupants (both through improved chassis designs/crumple zones/airbag systems and through sheer size) than we have on reducing the lethality of crashes in general; so there’s no reason to believe that self-interest or self-preservation will lead either developers or buyers toward as much caution as the public interest might require.
There’s science to back this up, too. Studies have shown that ride sharing has made traffic substantially worse, somewhat counterintuitively, and simulations show self-driving “cars as a service” will make this a lot worse still.
An issue that I foresee once autonomous self-driving cars are viable is that instead of finding parking, people will tell their cars to circle the block indefinitely. In crowded areas/venues, that would be problematic.
I’m not entirely opposed to autonomous cars… I’m opposed to mindlessly throwing technology at problems that can’t be solved by technology alone. This seems to be the core mindset of the silicon valley approach to things - just throw some code at it and it’s magically fixed. If we just “disrupt” what is already there, that’s all that needs to be done… Most problems in a society need a combination of thoughtful planning, creative thinking, looking at what works elsewhere, some genuine understanding of human beings and how we function, private and public sectors working together (instead of at cross-purposes) and yes, some technological innovation in some cases. To be fully honest with you, I’m a little frustrated with people who think our problems can just be technologied away and that we no longer need a robust public sector with people having other kinds of expertise.