Changing a 35 MPH traffic sign to 85 MPH with a piece of tape causes Tesla to drive 85 MPH

Oh, next year. Two, tops.
I’m quite certain of it; it has been that way for the last 50 years or so.

2 Likes

IME, the fact that you actually know enough about Tesla to consider buying their stock makes you a “biased observer” and therefore makes your opinion untrustworthy.

If all you knew about Tesla was the sound bites you’d picked up from BB or Gizmodo or TechCrunch or such, you’d be just as qualified to have an opinion as everyone else here.

(Also, many seem to presume that if you own TSLA stock, of course you would lie and spin-doctor and distort, in order to boost the stock price. Because that’s what one naturally does, amirite? Frankly, I’d find that presumption offensive if it didn’t tell me more about the accuser than the accused.)

I don’t really think the Boingers themselves have a vendetta against Tesla; they just have an addiction to sensationalistic spin for clicks.

Some of the BB commentariat, OTOH, seem to have memorized every long-debunked Koch Bros/Big Oil/Russian troll farm anti-EV talking point and disinfobite, and it’s like playing endless whack-a-mole to even engage in conversation here.

I used to take the concern-trolls’ bait of “OMG, what will the stockholders think!?!?” by telling them what this actual stockholder (and others of my acquaintance) actually DOES think, but that made me an untrustworthy and biased (and presumably dishonest!) source in the eyes of some of the more contentious commentators here.

So, f*ck that. Life’s too short.

1 Like

I’m sure they do. The question is whether there are manufacturing steps that reduce cost but leave no real hints as to what they are in the final product.

Some things are absolutely not that way, like “forget making each battery cell individually armored; make them hold together, but rely on being in an armored box for most of the real protection” (I think that was something like a 40% reduction in cost, even after accounting for the external box).

You may know that, and I may know that, and Tesla may be forced to admit and acknowledge that every time someone dies while Autopilot is engaged, but people have far more belief in the capabilities of Tesla’s Autopilot than they should, and Tesla is absolutely culpable for that misplaced faith both by calling it “Autopilot” in the first place (no matter the technical definition of the term, the common vernacular use of the word “autopilot” is “something I don’t have to be involved in”) and by heavily promoting its “full self-driving” functionality in all Tesla vehicles (while in the fine print disclaiming that this is just hardware future-proofing and should not be used autonomously and/or does not actually exist yet). The NTSB actually made them change the ad copy on their Autopilot page because this is what it used to say:

Only as a result of federal insistence has their language around this technology gotten even remotely circumspect.

The problem with self-driving cars is that as a pedestrian (or hell, even as a fellow motorist), I am actually unable to opt out of participating in Tesla’s (or Uber’s, or GM’s, or Google’s) beta-test of the future because they are performing that beta test in a public space and not really providing any sort of notice or warning to people that they’re doing so. It may be all well and good* to blame people for getting themselves killed when their own Tesla’s Autopilot doesn’t seem to understand that left-lane exits exist on interstates, because they opted into a Beta Test of the Future™, but I don’t think anyone consulted with the woman who was run over by Uber’s pilot program vehicle in Tempe, Arizona (which, I should note, was also speeding at the time) before she went to cross the street with her bicycle.

Google (now Waymo, not Waze as I said in a previous post, I got my W-companies-owned-by-Alphabet mixed up) started this whole thing off a few years back by promising that its self-driving cars were already better and safer than human drivers, and Tesla leans super hard into this argument as well despite having to fend off a regular stream of Autopilot-related casualties and fatalities. And yes, humans are terrible drivers because they suck at some of the very things computers excel at: constant vigilance, total situational awareness, and microsecond reaction speeds. Computers don’t have blind spots, they don’t have to “check their mirrors”, they can respond essentially instantaneously to any perceived risk, and they don’t panic. Unfortunately, computers are also still very easily punked by things that humans are inherently good at parsing, like plastic bags blowing around, spoofed speed limit signs, badly-projected lane markers, or a pedestrian pushing a bicycle across a street (and not, as Uber’s software thought, “other”, “vehicle (stationary)”, “other/vehicle”, “bicycle”, “unknown”, and “bicycle”, throwing out positional and translational data each time the classification changed so that it didn’t realize it was about to kill someone until just slightly over a second before it did so).
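If it helps make that last bit concrete, here’s a toy sketch of why throwing out the track history on every reclassification is so catastrophic. Everything here is invented for illustration (the class labels loosely echo the ones quoted above, the timings and positions are made up), and it is emphatically not how any real perception stack is written:

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    """Toy object track: observed positions plus the last class label.
    Purely illustrative; not how any real perception stack is structured."""
    label: str = "unknown"
    history: list = field(default_factory=list)  # (time_s, position_m) samples

    def update(self, t, x, label, reset_on_reclassify):
        # The failure mode described above: wiping the history whenever the
        # classifier changes its mind means a velocity can never be estimated.
        if reset_on_reclassify and label != self.label:
            self.history.clear()
        self.label = label
        self.history.append((t, x))

    def velocity(self):
        # Need at least two samples of the *same* track to estimate motion.
        if len(self.history) < 2:
            return None
        (t0, x0), (t1, x1) = self.history[0], self.history[-1]
        return (x1 - x0) / (t1 - t0)

# A pedestrian crossing at a steady 1.4 m/s, reclassified on every frame.
observations = [
    (0.0, 0.0, "other"),
    (0.5, 0.7, "vehicle"),
    (1.0, 1.4, "other"),
    (1.5, 2.1, "bicycle"),
    (2.0, 2.8, "unknown"),
    (2.5, 3.5, "bicycle"),
]

forgetful, persistent = Track(), Track()
for t, x, label in observations:
    forgetful.update(t, x, label, reset_on_reclassify=True)
    persistent.update(t, x, label, reset_on_reclassify=False)

print(forgetful.velocity())   # None -> "no idea where it's headed"
print(persistent.velocity())  # 1.4  -> clearly on course to cross our path
```

The forgetful tracker never accumulates two samples of the same object, so it never produces a trajectory at all; the persistent one sees someone steadily walking into the car’s path after a second of data.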

What this and a lot of other testing has illustrated is that autonomous driving companies (especially Tesla and Uber) are not building products that fail safely, and are only reacting defensively to threats as the real world presents them. That’s a serious problem. It shouldn’t be possible to punk a self-driving car into doing something unsafe, especially if it’s something that a human could see right through. Even putting aside active malice, these systems have to be capable of sanitizing all of their inputs and behaving in a predictable manner in potentially adverse natural conditions. The fact that a Tesla could ever be so easily punked into accelerating to 85 MPH is a clear sign that that’s not happening, and that’s really worrying.
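For a sense of what “sanitizing inputs” could look like in practice, here’s a minimal, purely hypothetical sketch (not Tesla’s or anyone else’s actual logic, and the thresholds are made up) of the kind of plausibility gate you’d want between “the camera read a sign” and “change the target speed”:

```python
# Hypothetical plausibility gate for a vision-read speed limit sign.
# Not any vendor's real code; it just illustrates "sanitize the input
# and fail safe" versus "trust whatever the camera saw".

PLAUSIBLE_LIMITS_MPH = {15, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80}
MAX_JUMP_MPH = 20  # posted limits rarely change by more than this in one sign

def accept_sign_reading(read_mph, current_limit_mph, road_class):
    """Return the limit the car should actually use, falling back to the
    previous limit whenever the new reading looks implausible."""
    if read_mph not in PLAUSIBLE_LIMITS_MPH:
        return current_limit_mph              # not a real posted value: reject
    if abs(read_mph - current_limit_mph) > MAX_JUMP_MPH:
        return current_limit_mph              # 35 -> 85 in a single sign: reject
    if road_class != "freeway" and read_mph > 55:
        return current_limit_mph              # a surface street is never 85: reject
    return read_mph

# The taped-over sign from the headline, read on a surface street:
print(accept_sign_reading(85, 35, "surface"))  # -> 35, keep the old limit
print(accept_sign_reading(45, 35, "surface"))  # -> 45, plausible change
```

None of those checks require understanding the world the way a human does; they just require treating a single camera reading as untrusted input rather than ground truth.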

* It’s not.

11 Likes

I’m not dead, yet!

But seriously, most speeders speed by 5 to 15 mph over, not 50.

3 Likes

2 Likes

If my cruise control system unexpectedly caused my car to accelerate to 50 miles an hour over the posted speed limit then I’d call that one helluva dangerous glitch.

7 Likes

It’s a well-concealed fact that Tesla’s AI is actually just the REAL Elon Musk (not the android who dates Grimes) hooked up to a Cerebro-like device. It’s only natural that he should make the occasional mistake.

2 Likes

Thank you for taking my (failed) joke so seriously.
Could you specify how long the middle stroke would have to be before it becomes OK for humans to mistake it for an 8?

In case this joke fails, too: It is important to think about the safety of “stupid” machines; this artifact seems like a (to me, amusingly) contrived & specific example - just like your qualifiers: “Relatively few humans”, “they just elongated the middle stroke”.
There is no way to 100% prevent malicious manipulation, for both machines & humans, and this one is yet another in a chain of exploits, which will be fixed, hopefully systematically.

Next week’s headline: this scriptkid hacked their report card & tricked the report card scanner!

This. And it’s only going to get worse, because closed-track conditions can’t replace real-world testing. The real world is messy and people are going to die. If you want autonomous driving you’re gonna have to break some eggs, and those eggs just happen to be humans.

The counterpoint is that people already suck at driving and kill other people on the road. This is true, but most people expect machines to be better. When a driver kills someone they are generally taken off the road; when a model of autonomous car kills someone, do we take all of them off the road? If so, how quickly?

3 Likes

It would be easier to change all those D’s to P’s. No wonder he got all (D? P? R? No, A, definitely A!) pluses.

1 Like

To be more accurate, a self-driving car has no context if it hasn’t been trained to have any. Like you say, it relies only on its inputs, but the inputs can include the relative speed of other cars, recent speed limits, visible curves, and even “all that stuff off on the left and right of the road surface”.

Only, however, if all those inputs are given to it, available in the training data, and required to make correct choices in the training data.

In some very interesting ways, training an AI is like teaching a person. In some very interesting ways, it is utterly unlike teaching a person. If you teach your kid how to drive and fail to teach them what to do when a traffic light is out, they are likely to watch what other people are doing at the time and copy it. An AI is going to do whatever it did in the closest bit of its training data. If whatever your kid did is the wrong thing (they may have chosen the wrong person to copy, or didn’t take into account all the circumstances when they copied) and nearly causes an accident, your kid is likely to notice that and try some other thing next time (or actually ask you/look it up when they get home). If the AI does the wrong thing and nearly causes an accident, it won’t learn a thing and will do the same thing next time.

(on the other hand an AI doesn’t get bored, and if you do train it how to behave correctly in unusual circumstances it doesn’t forget that training just because it hasn’t happened “for real” in “months/years/decades”, and it doesn’t panic)
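Real driving stacks aren’t literal lookup tables, of course, but a toy nearest-neighbour “policy” shows that “closest bit of its training data” behaviour, and why a near-miss doesn’t change anything while the car is driving. All the features and labels below are invented purely for illustration:

```python
# Toy "nearest bit of training data" policy. Real systems generalize in far
# more sophisticated ways, but they share this property: the learned
# behaviour is frozen when the car is on the road, so a near-miss teaches
# it nothing. Every feature and label here is invented for illustration.

TRAINING_DATA = [
    # (light_state: 0=red, 1=green, 2=dark/out, cross_traffic: 0/1) -> action
    ((0, 1), "stop"),
    ((1, 0), "go"),
    ((1, 1), "yield"),
]

def nearest_action(situation):
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(TRAINING_DATA, key=lambda example: sq_dist(example[0], situation))[1]

# The light is out (2) and there is cross traffic (1): never seen in training.
# The policy grabs the nearest example it has -- (1, 1) -> "yield" -- and it
# will grab the same one every time, no matter how badly that went last time.
print(nearest_action((2, 1)))  # -> "yield"
```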

I think the real solution there is, once the “proof of concept: can drive in clear weather on roads in decent condition” stage is done, to start on training data for crappy weather and crappy roads (maybe as multiple phases).

I mean, figuring out where to drive with no visible lane markers is harder, so you may as well make sure you get driving with visible lane markers working first. Likewise, driving in rain/snow/hail/whatnot is harder than driving in the clear, so you may as well make sure that driving in the clear is a solvable problem before moving on to the harder problems.

That might not actually be the plan of any of these organizations (it’s not like I speak for them or anything!), but I think it is a viable plan proceeding from the current state. I would actually hope they are collecting training data now and maybe doing full simulation studies at the moment, while they are still working on real-life trials in fair weather. Again, beats me what they are actually doing or planning to do.

As soon as it does, we’ll know the “I” in AI has finally started working.

3 Likes

I keep reading AI as Al, as in Gore, not as in Artificial Intelligence.

3 Likes

I quite often think of Al as in “Al Bundy” when I read stuff about how “smart” algorithms have allegedly become.

2 Likes

Pfft. When has an algorithm ever scored four touchdowns in a single game?

3 Likes

This topic was automatically closed after 5 days. New replies are no longer allowed.