Tesla autopilot involved in far more crashes than previously reported

Must it? When do you think that will start? Because in the history of the automobile, I’m not aware of one ever having been idiot-proof. Not even close.

I mean, an idiot-proof car would be nice to have, but I suspect it’s kind of like a truly secure operating system. (You know the old saw: you can truly secure a computer if you remove power, cast it in concrete, and sink it in the Mariana Trench. Otherwise, the best you can get is secure-ish. So it goes with making multi-ton chunks of metal moving at high speeds safe, Larry Niven’s Safe at Any Speed notwithstanding.)

1 Like

Considering that, for the most part, we can’t explain how sample-trained neural networks even come to any specific decision, this would definitely be a challenge. I do like the idea of there being a smaller “Confidence Confidence” meter right beside the “Confidence Meter”. And so on, recursively, until it’s just meters all the way down.

5 Likes

Error bars

12 Likes

We just take the number of non-crash minutes, divide it by the number of crash minutes, and display the result on an analog meter without a scale. Marketing tells me that it’s an “advanced Bayesian inference engine”…

(edit: I definitely left a divide-by-zero error in my laughable oversimplification for comic effect; not because I’m an incompetent hack. Definitely. We, um, pad the number of crash minutes with a positive nonzero ‘scalar safety offset’, for safety, and then divide the number of non-crash minutes by the enhanced safety number of crash minutes. Yes, just like I said the first time.)
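(For the curious, here’s a minimal sketch of that “advanced Bayesian inference engine”, assuming the padded denominator described above; the function and parameter names are made up for illustration, not any real API.)

```python
# Toy sketch of the meter described above: divide non-crash minutes by
# crash minutes, padding the denominator with a small positive "scalar
# safety offset" so a spotless record doesn't divide by zero.

def confidence_meter(non_crash_minutes: float, crash_minutes: float,
                     safety_offset: float = 1.0) -> float:
    """Return an unscaled 'confidence' reading that never divides by zero."""
    return non_crash_minutes / (crash_minutes + safety_offset)

# Example: 10,000 safe minutes and zero crash minutes gives a finite
# needle position (10000.0) instead of a crash of its own.
print(confidence_meter(10_000, 0))
```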

3 Likes

Yes. Idiocy is a given. It’s the default in our world. Assume people are dumber than you think and you’ll always end up prepared and rarely disappointed.

Any feature of a device as complicated as a car should be expected to be abused by lazy, panicked or inattentive drivers.

The question that needs to be asked is:

“How well can we keep this moron driver from killing themselves and other motorists?”

3 Likes

drawn in perhaps by the boing boing being shown on the late show? (that’s my guess anyway)

4 Likes

Oh, I didn’t think of that. Good catch!

3 Likes

i didn’t see it in the op, but i do wonder what kind of accidents these were and whether driver inattention was part of the problem or not. i could imagine it could be, but there’s also the possibility that the algorithm just goes wrong sometimes. [ like suddenly swerving across lanes ]

3 Likes

I think there’s a minimum level of engagement with the driving experience below which accidents increase drastically. For me, being in the steering “loop” is plenty to keep me focused on safe driving (38 years driving, no accidents, no tickets!); the velocity loop is usually handled by cruise control. If I didn’t have to actively steer, I’d be asleep within minutes.

Taking the driver out of the active steering loop requires either a bulletproof autopilot (which we’re very far from, IMHO) or a driver who remains a safe observer, ready to step in and take control immediately when required. Which is apparently one second before a Tesla expects impact, if I read the article correctly.

I doubt I could maintain that level of focus unless I’m doing something. My experience as a passive passenger is usually sleeping. Which is what the autopilot fanbois think we’ll be able to do soon.

3 Likes

Can confirm. That’s why if you aren’t a fool you keep at least one hand on the wheel and are ready to take back control at any time. Sounds awful, works fine. (Except for some number of the statistics in the article I suppose, but dead men tell no tales.)

I mean, it doesn’t happen often. But it does happen.

3 Likes

That’s the main problem, though. People aren’t fools for thinking a feature named Autopilot will, under some conditions, take over driving for them.

That’s like saying someone who grabs a pack labeled “Parachute” to jump out of a plane, only to find it’s a “branded” napkin set, is a fool.

4 Likes

Close. It’s like saying someone who grabs some “Parachute” branded napkins to jump out of a plane, after spending the entire flight using them as napkins, and noticing how easily they shred, is a fool. I’m at peace with that characterization.

The analogy, the way you worded it, only works for a brand-new driver who’s never used the system before.

1 Like

Nonsense. If the driver doesn’t have an accident while using Autopilot to do their driving, they get positive reinforcement and continue to use it that way - at least until they do get in an accident or near-miss, at which point it’s often too late.

1 Like

I’m just curious, how many hours of Autopilot driving do you have?

Are we just asking questions? How many Autopilot accidents have you been in?

1 Like

None. And yet I’m not deluded into thinking Autopilot is some kind of magic driving robot, which is what your hypothesis predicts I should believe.

The point is that your earlier comment only makes sense if you assume it’s hard to tell Autopilot can behave erratically if not supervised. It is not hard to tell, nor does it take long to work it out – somewhere back in the comments I threw out ten to twenty minutes behind the wheel, and I’ll stand by that.

It’s easy to convince yourself of all kinds of things, by the power of pure logic, if you haven’t tried doing them. Experience counts for something in coming to a well-informed opinion.

1 Like

This is disingenuous. Obviously, you can increase safety without eliminating all danger.

3 Likes

Yes, you’re very smart. [/Peter Falk]

But not all drivers or consumers are. That’s why naming the feature Autopilot is irresponsible. None of the other manufacturers’ similar features have names implying the car drives itself. Assuming everyone has the same experience as you is…interesting. You may not die in a fiery Autopilot Tesla crash, but you sure seem eager to die on this hill.

2 Likes

Speaking of disingenuous, that wasn’t the position I was taking, now was it? If you look at the post to which I replied, the poster was asserting that cars must be made idiot-proof. I paraphrase, but not by much. There’s considerable daylight between “good luck with that” and “cars can’t be made safer”.

You didn’t paraphrase, you set up a strawman. Maybe you didn’t intend to, but that was the effect. Hyperbole eliminates the middle ground where, I agree, the useful solutions lie.

2 Likes