Tesla on autopilot almost crashes into passing train

Originally published at: https://boingboing.net/2024/05/20/tesla-on-autopilot-almost-crashes-into-passing-train.html

7 Likes

As a ferroequinologist I cannot condone this.

18 Likes

“Attempt to kill me once, shame on you…”

22 Likes

“FSD”? clearly i missed a step. When did they become “Full” self-driving??
(“you @#$ fool! ‘full’ in this case doesn’t mean ‘entirely’”)

FSD

| Acronym | Definition |
|---|---|
| FSD | Fifty Shades Darker (book and film) |
| FSD | Fire Safety Director (various locations) |
| FSD | Fonctionnaire de Sécurité de Défense (French: Defense Security Officer) |
| FSD | Fox Sports Detroit |
| FSD | Frame Shift Drive (spaceships) |
| FSD | Federal Service Desk (US General Services Administration) |
| FSD | Full Self-Driving (Tesla, Inc.) |
| FSD | Fail Safe Defaults |
| FSD | Fail Safe Device |
| FSD | File Set Descriptor |
| FSD | Full Set of Data |
| FSD | Free Software Directory (Free Software Foundation) |
| FSD | Family Support Division |
| FSD | Facilities Services Division (various organizations) |
| FSD | Foundation for Sustainable Development |
| FSD | Fédération des Sports de Danse de France (French: Federation of Dance Sports of France) |
| FSD | File System Driver |
10 Likes

They didn’t, but they were marketed that way.

15 Likes

Tesla on autopilot

It’s worth noting that what Tesla calls “Autopilot” is different from “Full Self Driving (Supervised),” as it’s now named. Autopilot is basically lane-keeping and adaptive cruise control, like many other cars have, plus a few other bells and whistles. “Full Self Driving (Supervised)” is the inaccurately named feature that takes full control of driving and doesn’t always do a very good job of it, so the driver has to supervise closely and be prepared to take over at any moment. That sounds to me like it would be more stressful than driving yourself!

16 Likes

Yet another reminder that self-driving cars do not exist yet and any claiming to be such are just corporate experiments using human beings as expendable guinea pigs.

20 Likes

Isn’t “Full Self Driving” i.e. fully autonomous an AI-complete problem? If we had that, we’d also have working (as opposed to the mess we do have): reliable and accurate auto-translation, human-language computer operating systems, and generalized problem-solving:

If/when these issues are solved, not just cunningly bypassed so investors/users are tricked, we’ll know. After all, as a species, we’ll just have become obsolete – humans can’t be overclocked and run in massively parallel processing clusters*. I hope.

*No, caffeine and teamwork don’t count.

6 Likes

From the forum, a Helpful Commenter says,

You are fully responsible as the driver when FSD is engaged.

This is, in a nutshell, the problem with getting closer to (but not yet attaining) level 5 automation: it is an established fact of human attention research that as attention demands drop, so does vigilance. It is contrary to human capability to expect high vigilance in low-attention scenarios, meaning situations where the need for the operator to observe, decide, and act is infrequent or undemanding. With fewer demands to attend to, vigilance drops off severely. This is, in fact, the central problem of closed-loop automation as it approaches open-loop.

For Tesla (and, let’s be honest, all vehicle companies working towards, but not having yet achieved, level 5 automation), this becomes an excuse to drop blame in the human operator’s lap when automation fails.

(Tesla calling it “Full self-driving” certainly isn’t helping matters.)

24 Likes

It’s messed up that they call it “self” driving when you’re not driving yourself.

1 Like

No worries. The train isn’t bulletproof. You’ll be fine - probably plow right through that poor train.

8 Likes

It’s probably worth noting that all of those are weasel words.

12 Likes

It’s simulated FSD, like ChatGPT is simulated AI but not called that because reasons.

7 Likes

I think the “hypothesised” in the quoted article is doing a lot of heavy lifting.

Lots of stuff we once thought needed “real AI” ended up being stuff that “merely” machine learning could do, and in particular the last time the AI hype cycle got red hot it was because ML was doing “real things” (a lot of “machine vision” tasks fell to ML!) and many people thought (and/or were fooled into thinking) that ML was the key to “real AI”. Turns out ML solved a lot of problems, or came much closer, including voice recognition. It also fell far, far short of “real AI”.

Time passed, and now LLMs are doing the rounds of “look real AI!”, and I’m not going to say “for sure no, it’ll fall over just like all the other times!”, but I am going to kind of point at the prior rounds of this hype cycle and raise an eyebrow.

I mean, modern LLM-powered voice recognition is amazing.

On the other hand, I don’t see anything in an LLM that can bridge the gap from “fancy autocomplete” to “LLM chatbots that can actually reason abstractly and solve real problems that didn’t happen to be in its training corpus, or a trivial variation thereof”.

I’m going to advance a theory that “AI-complete” really just ends up being a list of things we can’t really do with today’s state of the art “AI stuff”, but they will not fall as a group, but end up getting picked off in ones or twos, and in small groups.

So I think whatever AI technique (if any?) solves “full self driving” is not going to be the same thing that solves generalized abstract reasoning, and high-quality human language translation will also be a separate technique. I mean, we already have machine vision that would have convinced computer scientists in the 1980s we had solved all the AI stuff for reals. It’s being sold as $3 phone apps to recognize plants or birds, or bundled into the basic photo app.

8 Likes

John Finnemore has the answer:

4 Likes

If it’s not safe to trust it, and it repeatedly kills people, does changing the name make it safer?

11 Likes

It makes it safer for Tesla to keep selling it, since when a car with it crashes they can claim that they warned you it needed constant supervision!

2 Likes

I’d say that the outcomes of people getting into accidents or near-accidents matter more than whatever magical phrases Tesla comes up with to try to mitigate its responsibility to the public.

9 Likes

If you mean between “Autopilot” and “Full Self Driving (Supervised)” then presumably they were aware of the difference since they paid extra to upgrade from the former to the latter!

1 Like