Well that’s disappointing. How do they handle themselves during really common stuff like road construction? Can they tell the difference between a road crew worker holding a “stop” sign and one holding a “slow” sign?
This reminds me of people I know who will criticize a building’s architecture before it is done. What, does he not want them to take on a difficult project because they’re not there yet?
The real issue here is the stupid trolling headlines that now dominate the internet. Gets me every time.
If we are killing around 34,000 people every year in vehicles with human drivers, acceptably … how many people can be killed by AUTOs with a similar level of acceptability?
I get that self-driving cars are still a long way out in the future, but these kinds of challenges are true of just about any major undertaking. I wouldn’t call this a “dirty” secret; it’s just the reality of a massively complex project.
Remember that it took 11 Apollo missions and dozens of manned and un-manned Mercury missions to finally set foot upon the moon. The point being that you have to start somewhere.
The liability issues will probably force self-driving elements to be implemented gradually anyway. I’ve always figured we’d just see more and more automated aspects of driving until, about 15 years down the line, the cars will be driving themselves most of the time even if they’re still ostensibly “manually operated.”
That gives plenty of time to adjust to their needs, including mapping more roads and figuring out how to get them to deal with parking lots and construction sites.
Nope, not at all. In fact, when they see an orange cone they go berserk, transforming into Starscream and machine-gunning the workers.
It’s a glitch they’re still working out.
Well, to continue your architecture analogy, the problem here is that people are generally under the impression that the building is finished albeit largely inaccessible, and even the reporters taken to view the work being done don’t seem to be aware that what they’ve been shown is just the facade held up by some girders rather than the functional, final building.
Can’t the towed-in lights just be standardized with transmitters/radio beacons, broadcasting data about the road situation to update the vehicle computers? There are already standards brewing for vehicle-to-vehicle communication and VANETs, so cars can “talk” with each other while on the road. Can’t this be leveraged?
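The beacon idea above could be as simple as a small standardized payload that a temporary signal broadcasts and nearby cars decode. A minimal sketch, assuming a made-up JSON message format (the field names here are illustrative only, not from any real V2X standard such as SAE J2735):

```python
import json

# Hypothetical beacon payload for a towed-in temporary traffic signal.
# All field names are invented for illustration.
def make_roadwork_beacon(lat, lon, lane_closed, advisory_speed_mph):
    """Serialize a road-situation broadcast a car could pick up by radio."""
    return json.dumps({
        "type": "temporary_signal",
        "location": {"lat": lat, "lon": lon},
        "lane_closed": lane_closed,
        "advisory_speed_mph": advisory_speed_mph,
    })

def parse_roadwork_beacon(payload):
    """A receiving car decodes the beacon to update its local road model."""
    msg = json.loads(payload)
    if msg["type"] == "temporary_signal":
        return (msg["location"]["lat"], msg["location"]["lon"],
                msg["advisory_speed_mph"])
    return None  # ignore message types this car doesn't understand
```

The appeal is that the road crew’s equipment, not a central database, is the source of truth, so the information is available the moment the signal is towed into place.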
I was thinking the same thing. If you were to drop a brand-new signal on the street overnight on my commute route, I would guarantee 25% of the people would miss it as well.
It seems clear that a database of changes to roads should be a government function. It’s not like I can just put up a stop sign wherever I feel like it without local government approval.
Nowhere in this lousy article did it mention the possibility of having the self-driving cars serve also as mappers. Both the cars that people actually use to get around, and ones that are sent out just to map. Either way we end up with a potentially huge fleet of mappers and the difficulty claimed goes way, way down.
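The fleet-as-mappers idea can be sketched in a few lines: every car reports road changes it observes, and a change is promoted into the shared map only once several independent cars confirm it. This is a toy illustration; the confirmation threshold and observation format are made up, not from any real mapping pipeline:

```python
from collections import defaultdict

# Accept a reported change only after this many distinct cars confirm it
# (arbitrary illustrative threshold).
CONFIRMATIONS_NEEDED = 3

class MapChangeAggregator:
    def __init__(self):
        self._reports = defaultdict(set)  # change -> set of reporting car IDs
        self.confirmed = set()            # changes promoted to the shared map

    def report(self, car_id, change):
        """A car reports an observed change, e.g. ('stop_sign', 'Main&5th')."""
        self._reports[change].add(car_id)
        if len(self._reports[change]) >= CONFIRMATIONS_NEEDED:
            self.confirmed.add(change)

agg = MapChangeAggregator()
for car in ("car1", "car2", "car2", "car3"):  # car2 reporting twice counts once
    agg.report(car, ("stop_sign", "Main&5th"))
```

Requiring multiple independent confirmations also addresses the obvious worry of one car’s sensors (or one prankster) polluting the map.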
A lot of temporary stop signs, detours and even traffic signals are rapidly erected to deal with unexpected circumstances. An emergency road crew may have to shut down a lane and add a temporary signal on short notice due to a flood or accident or whatnot, and they may or may not have time or ability to update a national database when they do it. If self-driving cars can’t handle a situation like that then they’re not ready for prime time yet.
So in other words, they are Doing It Wrong.
Now, a large segment of the population may be inclined to answer “More than 34,000?” when advised that it will mean they can browse Facebook or get freaky much more safely in a moving car than when they have to drive themselves. Since it could never happen to them, I’m sure many would agree that 68,000 deaths is an acceptable trade for such freedom.
But that increase would be mitigated somewhat by the advance, since some of those deaths in the current system are attributable to people browsing Facebook and getting freaky while driving.
ANYWAY back to writing my business plan / proposal for a new no-fault insurance scheme that I’m hoping Google will get behind. It’s called “As a matter of fact, I was totally drunk at the time Automobile Insurance” or AAMOFIWTDATTAI for short.
That’s what I thought the plan would be: incorporate all self-driving vehicles to provide real-time updating. But then I thought about how vulnerable that would leave us to Cylons & what have you.
Can’t wait to hack that!
Car 286483 “Proceeding to destination through intersection 9476 apace with right of way”
My car “I think you ought to know I’m feeling very depressed.”
Car 286483 “Unrecognized format. Please re-send. Confirm you intend to reduce speed”
My car “I’ve calculated your chance of survival, but I don’t think you’ll like it.”
Car 286483 “…”
Not surprised this is a crazy difficult task.
Could the “familiar territory” also be a result of minimizing testing variables so they can work on smaller sets of problems? How many miles did these cars travel on a closed oval before they ever made it to the street? Baby steps I guess.
With those 34,000 people, there are a large number of individual drivers with their own individual insurance, liabilities in place, etc.
If they were all in self-driving cars, would Google be responsible for all of those? It’s going to be problematic to put drivers in cars they can expect to self-drive and yet expect those drivers to handle (and be liable for) any unexpected actions of those cars. That’d be like making passengers liable for any mistakes the driver makes.
There were only four manned Apollo missions before Apollo 11 (Apollos 7-10). Apollos 2 and 3 were cancelled after the Apollo 1 fire on the launch pad, while Apollos 4-6 flew as unmanned test flights. Additionally there were six manned Mercury missions, and ten manned Gemini missions.
Your basic point is correct, though.
I agree with this sentiment (on both points). I think the difficulty is that with a project of this scope, you need it to be pretty damn close to 100% before you can actually release automated cars into the wild and unpredictable frenzy of the modern world. There are just too many variables, and the end users will always manage some way to break your system, intentionally or unintentionally.
It makes sense, then, that you have to cover a lot of the same ground; you can’t really put it into a real-world scenario (with big machines that can kill people, and people who typically don’t like death) until you’re very, very close to an actual release. For the future of the world, the concept is too important not to take seriously.