If Elon likes to do one thing it’s crash a ton of hardware and sift through the ashes for usable data
literally the only reason to have an “autopilot” is so the driver doesn’t have to pay attention
these half measures are a terrible idea
building it the other way around, so the driver is constantly engaged but the computer will override once in a while if the driver makes a mistake, might work better, but then again we don’t trust these computers to do that right, so maybe the whole thing should just go away
But last time I checked a teenager is still a human being. And a driver in training (i.e., has a learner’s permit) has a licensed driver in the car as well.
I guess it depends on your definition of who’s liable if the autopiloted car’s driver isn’t quick enough.
I’d still trust the teenager.
There’s plenty of reason to have an ‘autopilot’ beyond just ‘not paying attention’. Improving overall performance, decreasing the amount of attention and effort necessary, improving safety outcomes, etc.
‘Half measures’ are important when trying to actually design a whole, reliable system. Complexity is a hydra that’s defeated in stages, not in one great leap.
Jaguar’s been trying to build exactly what you say for a while, but the problem is that the automation doesn’t understand context better than the driver does, so how do you ensure it overrides in a reliable manner? Even more exhausting.
we’re not talking about features in development for future release after the bugs are worked out
these are production vehicles being driven by paying customers on public roads
Seems he’s already pretty busy, watching oncoming traffic for the right gap, watching the Tesla display(s), possibly checking his drone footage and battery status. And probably checking his YouTube like count, Instagram likes, etc.
It just seems to me he’s putting others in more than zero danger, none of which they signed up for.
ETA: I sound grumpy because this dude’s a hazard. I’m hoping to see self-driving (I nap while it gets me there) in my lifetime. But I fear it’s a long way away.
Yeah, if there is one thing we’ve learned from automated systems like autopilots, it’s that humans are very, very, very, very, very bad at monitoring and checking automated systems.
Basically, once you have an automated system you should never, ever rely on the operator/pilot/driver to stop it from doing something stupid. Because often they won’t.
Also, the better your system is, the more discipline it takes to check on it; it is very boring to watch a robot move around for hours on end waiting for that one-in-a-million screw-up.
You’re right, in that moving to a semi-automated system moves people from driver to supervisor. What people are better at is understanding the context of what’s going on and knowing how to take over when conditions shift in strange ways.
Which is why you use a driver monitoring system to help close the loop and assist the monitoring process, so that the required workload is more like that of an engaged passenger than a focused driver. There’s definitely a middle ground between attentive daily driver and staring at your phone.
Tesla’s approach is very cavalier, and you’re entirely right to be skeptical of it, but there’s a reason you don’t see these same horror stories from Super Cruise. There’s a responsible way to introduce and improve these systems, one that links automation and human in a cooperative partnership rather than all or nothing. At the end of the day, it all comes back to the Ironies of Automation and the Skill Rule Knowledge framework: the more complex the environment, the more the automation is going to struggle, but there’s real merit in making sure drivers don’t have to directly handle the long, tedious highway drives that can be much simplified.
Cars simply aren’t L3 or higher until the manufacturer is liable for injuries instead of the driver.
I’ve worked in automation most of my adult life. Early on, an older colleague said to me, “Whatever the customer thinks is easy is usually pretty hard. Whatever the customer thinks is tricky is usually pretty easy.”
Yes, people can’t calculate hyperbolic cosines in their heads a zillion times per second, so it looks powerful. But the first thing you get used to when programming is that these machines are blazingly fast idiots with no judgement whatsoever.