So now SUV = Sports Utilitarian Vehicles?
I knew this was going to happen.
In practical use, this splits into what the self-driving car can see, infer, and know. Google Maps can already gauge general traffic density, presumably from cell phones and the like, so a car could know the general traffic in advance. But if something suddenly appears in front of it, like a body-sized shape falling off a truck, it can't know whether that is a person, an animal, etc. If a human-driven vehicle appears suddenly, it may not be able to know how many people are on board. Or perhaps it could infer that if they're all carrying their phones and tablets. And what about the empty car sent on an errand?
A reverse problem today is the backup camera. Another driver seeing you back up without turning your head may assume you're not looking. That will change as we get used to the new tech, but the tech keeps changing, and so do our responses.
Seems like a tough problem for a mix of lawyers, engineers, ethicists, etc.! And then there's us: what will consumers accept? Will we really want the car to slow down in bad weather?
Yes. Because tap water and food are highly unlikely to cause immediate irreparable injury or death from impact at high speeds, and any idiot who seriously raises the Trolley Wankery in any context is not someone I want to have access to sharp objects, let alone involved with determining appropriate programmatic responses to vehicular traffic.
You can test water and food in the lab. You don't have to worry about someone coming by and eating the test material. You can perform said tests more cheaply and on a much greater scale than you can afford to build and maintain a fleet of cars.
Computers are stupid. I mean that in the most literal sense of the word. They follow instructions to the letter, and do nothing else, which is at the crux of this whole silly discussion. That's fine on a closed course, or on open roads that aren't clogged thanks to an accident, or closed off entirely due to construction, but in real life there's a related issue: people are stupid. They speed, they make illegal turns, and no one follows all of the traffic laws 100% of the time. I really don't think either group will be able to cope with the other in large-scale implementations in the West. In regions where traffic laws are considered vague rumour, you'd be better off attaching lawn chairs to flying drones.
People, this is corporate America. This won't come down to philosophy or what is right. This is about money. Buy the low-end model, you get a self-sacrificing car. Splurge on the deluxe model: look out, pedestrians, here I come to avoid a collision with something bigger than me.
My property should protect me and mine before anything or anyone else. SOP when encountering a deer on a highway is to brake as much as possible and avoid swerving. This basically guarantees the deer dies (or is at least horribly maimed) but reduces the chance of the automobile flipping into the ditch and killing the passengers.
I don't see why an autonomous car should behave any differently just because people are involved. The analysis and hand-wringing should focus on why the hell the other party was in the way.
The example in the article is awful. The full context is:
How will a Google car, or an ultra-safe Volvo, be programmed to handle a no-win situation (a blown tire, perhaps) where it must choose between swerving into oncoming traffic or steering directly into a retaining wall?
Two problems: the first is that the safest thing to do when a tire blows is actually just to keep driving. The second is that if you drive into a retaining wall, you're very likely to survive without killing anyone else unless you're at high speed.
This points to the real problem of pretending trolley problems have any bearing on real-world driving: true no-win scenarios are exceedingly rare, and when they arise, they usually arise because someone else has already fucked up. Travel at safe speeds and check your tires and you'll never be in this situation, ever. Once you've reached a crisis point, the number of lives saved by whichever no-win decision doesn't really matter and you might as well flip a coin.
I've had tires blow at highway speed three times. I survived each time by doing what you're supposed to do, none of which includes panicking, swerving, or crashing. That is a very stupid example that reads like it was written by someone who's never even passed driver's ed.
That order of priorities would have the autonomous vehicle correct its skid through the bus stop full of cute primary school kids in order to avoid wiping itself out on the oncoming truck...
Actually, there was a case at one of the local labs. Before the cops had their own forensic labs, they brought samples there to be checked. One day they brought a bottle of pickled shrooms, to be tested for poison. The guy in charge left the bottle on the table. The next day another guy came in, singing praises to the shroomies. So the mushrooms were proved non-poisonous by an in vivo bioassay.
As for testing cars, what about putting the car's brain into a virtual-reality jar? Disconnect the sensors, replace them with computer-simulated inputs (framebuffers for cameras, 3D-model-derived response curves for laser scanners and depth cameras, simulated data from vehicular networks...). Make a simulation of the road situation, feed the brain with simulated sensor responses, see what the thing is doing. I believe a game engine could be repurposed for this.
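A minimal sketch of the idea in Python; the SimulatedLidar class and the decide() hook are made-up stand-ins for illustration, not any real vendor's API:

```python
import math
from dataclasses import dataclass

@dataclass
class SimulatedLidar:
    """Stand-in for the real laser scanner: ranges come from a 3D world model."""
    obstacles: list  # (x, y) positions in metres, invented for this demo

    def scan(self, car_x, car_y):
        # Distance to the nearest simulated obstacle.
        return min(math.hypot(ox - car_x, oy - car_y) for ox, oy in self.obstacles)

def decide(distance_m, speed_ms):
    """Hypothetical decision hook standing in for the car's 'brain'."""
    stopping_distance = speed_ms ** 2 / (2 * 7.0)  # assume ~7 m/s^2 braking
    return "BRAKE" if stopping_distance >= distance_m else "CRUISE"

# Feed the brain simulated sensor frames and log what it does.
lidar = SimulatedLidar(obstacles=[(80.0, 0.0)])
car_x, speed, dt = 0.0, 30.0, 0.1  # 30 m/s is roughly highway speed
for step in range(40):
    action = decide(lidar.scan(car_x, 0.0), speed)
    print(f"t={step * dt:4.1f}s  x={car_x:6.1f}m  v={speed:5.1f}m/s  {action}")
    if action == "BRAKE":
        speed = max(0.0, speed - 7.0 * dt)
    car_x += speed * dt
```

Swap the toy world model for a proper game engine and the toy hook for the real control stack, and you can replay the same nasty scenario thousands of times without bending any sheet metal.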
Such a sim rig will be a must-have for autonomous-car hacking, and for making sure the vehicles are doing our bidding and not that of some bunch of philosophers or, worse, lawyers whose skin is not in the game.
There has been for many years a market for modifications to engine control software to boost an engine's performance.
Why not upload people into their cars?
Possibly repurposed, vat-grown simian brains? Meat puppet cars?
Maybe that's why the easiest answer to the problem is collective ownership. In that case, the owners of the vehicle fleet owe a duty of care to society, and a utilitarian approach to accident mitigation would be appropriate.
The rule could then be a simple "whatever you need to do to have as few injuries or deaths as possible".
Everyone who rides would have read the EULA, right?
Because that tech is not available yet, unlike self-driving cars and their environment simulators.
Biotech brains are difficult to make and control. You cannot easily back up the state and make a new one with the same state from scratch. Photolithography will be king for some more years; self-assembly of 3D structures will likely follow. Actual grown neural networks on a biological basis, I wouldn't bet on much; the disadvantages are too many.
With better vision, better information, better reaction times, better road sensing and car handling, I'm going to trust the safety of these cars over myself. And I'm especially going to trust autonomous cars over many of the half-wits I see on the road daily. What if the half-wit is the one with the emergency and decides to swerve into my lane instead of taking himself off the bridge?
The optimism is stunning in its assumption that there will be enough control in an emergency to make a choice. I think it's enough to try to avoid the accident or minimize damage to the vehicle as much as possible. You know, like we do when we drive.
The car will be able to make a final decision much faster than a human will. However, in these split-second scenarios it is more likely the car still doesn't have enough time to react and will simply hit whatever is in front of it no matter what the decision was. The realistic goal is more like taking out one pedestrian instead of two, or hitting another car in the engine or trunk instead of the passenger compartment.
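A toy version of that kind of damage-minimising pick, with completely invented options and numbers, not anybody's actual safety model:

```python
# Made-up maneuver options with invented collision odds and injury counts.
OPTIONS = {
    "brake_straight": {"p_collision": 0.9, "injuries_if_hit": 1.0},
    "swerve_left":    {"p_collision": 0.6, "injuries_if_hit": 2.0},
    "swerve_right":   {"p_collision": 0.7, "injuries_if_hit": 0.5},
}

def expected_harm(option):
    # Expected injuries = chance of collision times injuries if it happens.
    return option["p_collision"] * option["injuries_if_hit"]

best = min(OPTIONS, key=lambda name: expected_harm(OPTIONS[name]))
print(best)  # -> "swerve_right" with these invented numbers
```

The hard part isn't the min(); it's getting trustworthy probabilities and injury estimates in the milliseconds the car actually has.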
The idea of who dies is more of a moral dilemma and is only confused by tech we don't understand yet... it's not the tech causing these decisions.
This whole question really bugs me. Or more specifically, any time spent discussing it feels like a less-than-valuable distraction. I should work on a succinct informative summary.
Something something Sophie's Choice.
The whole rule-bound idea for self-driving cars kinda reminds me of the way corporations are required to maximize shareholder profit no matter who gets hurt. Now, if corporations are people, why shouldn't self-driving cars be people too? Then the dream of generations of men can be realized, and they can marry their cars!
Exactly!
Every time I see an article on self-driving cars these days, they always put forward some form of the same straw man. "Given the choice, can a robot be taught that a cyclist is squishier than a Suburban, a school bus is worth more than an ice cream truck, or that it should sacrifice a single passenger for a busload of orphans?" The trouble is, even humans don't do that. Ever. You stay in your lane, and if things happen you either slam on the brakes, or don't. That's the best the human brain has come up with in all this time.
Not from impact at high speeds, but how about cancer? How about death from infectious diseases? Or is speed the primary criterion upon which you make risk-based assessments?
Long-term hazard testing is expensive, and often frustratingly inconclusive. You can test cars on tracks, or in simulators where other cars are piloted by real humans. Said simulators aren't half bad either.
This is not precisely true, now that computers are being programmed to "learn" (i.e., reprogram themselves to satisfy conditions). But even if it were... so you don't fly in planes? TCAS is a collision avoidance system installed on (as far as I know) all passenger aircraft, and pilots are now trained to obey TCAS instructions over those of actual humans, ever since two planes collided in mid-air precisely because a pilot was listening to a traffic controller rather than his computer. Computers are also considered far more trustworthy than humans in nuclear reactors and a whole host of other applications where human lives are at stake. That computers follow instructions is not a meaningful critique of their use, especially since they are most likely already in your car, controlling a number of systems, including when your car changes gears and, if you have ABS, how well your car stops. Unless you're prepared to go back to the bad old days of learning how to pump your brakes so you don't go into an uncontrolled skid, I'd say the idea of computers controlling vital processes is here to stay.
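For what it's worth, the "pump your brakes" job the computer already does looks roughly like this; the 20% slip threshold and pressure steps are illustrative guesses, not any manufacturer's tuning:

```python
def abs_step(wheel_speed, vehicle_speed, brake_pressure):
    """One crude ABS control tick: release pressure when the wheel slips too much."""
    slip = 1.0 - wheel_speed / max(vehicle_speed, 0.1)  # 0 = rolling, 1 = locked
    if slip > 0.2:                          # wheel is locking up: ease off the brake
        return max(0.0, brake_pressure - 0.10)
    return min(1.0, brake_pressure + 0.05)  # otherwise reapply toward full demand

# Example tick: wheel at 8 m/s while the car does 20 m/s -> heavy slip, release.
print(abs_step(8.0, 20.0, 0.8))  # -> ~0.7
```

Run at a few hundred ticks a second, that's the computer out-pumping any human foot, which is the whole point.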