It seems pretty clear from this and other incidents involving self-driving cars that the field as a whole is incapable of producing software that makes the hardware fail safely when presented with unanticipated inputs, whether through neglect, poor imagination, inadequate education in engineering principles, or (as was the case in the Uber incident that killed a pedestrian) a desire to look good for shareholders/stakeholders.
I’m not sure I follow. Are you saying that the reason I’m puzzled by BB’s vendetta against Tesla is that I bought some stock that has done really well over the last couple of months? If that’s so, we should probably all disclose our personal holdings and the specific industries in which we are employed, to run conflict checks on any comments we make, no?
This is incredibly puzzling. No autonomous system should be taking action based on a single input (in this case a crudely modified sign).
The system has multiple data sources regarding how fast it should be going at any one time: the sign, the delta between the current sign and the last measured sign, the speed of surrounding traffic, the speed it went the last time it was in this location, the state database, the database of known speed limits for the area, the speed the driver drove when last in the area, and information from other autonomous vehicles currently or previously in the area. Jesus Christ, there’s so much data there that the posted limit would be an obvious outlier. Comparing the sign to the full body of data would immediately flag it as anomalous.
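Even a crude consensus check would catch it. A minimal sketch, assuming each source can be boiled down to an MPH estimate (all names and the tolerance here are hypothetical, not anything Tesla actually ships):

#include <algorithm>
#include <cmath>
#include <vector>

// Median of the non-camera estimates: database limit, fleet history,
// surrounding traffic speed, the driver's own history, etc.
// Assumes a non-empty vector; the caller guards against that.
double medianMph(std::vector<double> estimates) {
    std::sort(estimates.begin(), estimates.end());
    const size_t n = estimates.size();
    return (n % 2 == 1) ? estimates[n / 2]
                        : (estimates[n / 2 - 1] + estimates[n / 2]) / 2.0;
}

// Flag the camera-read sign if it disagrees wildly with the consensus.
bool signIsAnomalous(double signMph, const std::vector<double>& others,
                     double toleranceMph = 10.0) {
    if (others.empty()) return false;  // nothing to cross-check against
    return std::fabs(signMph - medianMph(others)) > toleranceMph;
}

With that in place, a sign reading 85 on a road where every other source says roughly 35 gets flagged instead of obeyed.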
Are BB’s incredibly numerous and righteous links to news articles related to Trump, racists, corruption, etc. considered BB vendettas? If not, then BB’s far fewer posts re: Tesla should not be assumed to be a “vendetta”… unless one is biased.
That was pretty candy ass.
No doubt. I’m just of the camp that compares their progress against the current state of human drivers, and in that arena there is no contest, IMHO.
Ok, I’ll assume then that my interpretation of your comment is correct. We’d all better get to work on our financial disclosures, I suppose.
ETA: yeah, of course those other stories are vendettas. Justified vendettas that are completely understandable and morally correct. The Tesla stuff is just…odd.
I thought of that too, but it could be that the database isn’t (yet?) the legal baseline for speed limits.
Something has to be the baseline against which offending can be assessed and charged. Currently that is (I think?) whatever the signs on the side of the road say, which makes complete sense, since the VAST majority of drivers are currently interpreting their driving environment through their eyes rather than rushing off to google the local speed limit every few minutes. And, given that, it therefore makes sense(?) that Tesla is trying to create a digital equivalent of our analogue behaviour.
Edit: on the other hand, this doesn’t excuse the lack of gross error checks. On the other other hand, was this test by McAfee conducted on public roads (I sure hope it /wasn’t/) or on a closed test track which either isn’t part of the database that can be checked against or has an absurdly high ‘official’ speed in the database?
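For what it’s worth, a gross error check wouldn’t require the database to be the legal baseline; it only has to serve as a sanity bound. A hypothetical sketch (the threshold is invented for illustration):

#include <cstdlib>

// Accept a camera-read sign only if it's within a plausible delta of the
// last limit on record for this stretch of road. The database stays
// advisory; the sign still wins whenever the two roughly agree.
bool acceptSignReading(int signMph, int lastKnownLimitMph) {
    const int kMaxPlausibleJumpMph = 20;  // assumed, not a real spec
    return std::abs(signMph - lastKnownLimitMph) <= kMaxPlausibleJumpMph;
}

A 35 → 85 jump fails that check; an ordinary 35 → 45 transition passes.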
I am willing to bet that both Ford and GM have more than a few engineers who know how to pull a battery apart to find out what the secret sauce inside is.
Well, that just leads to little mini Teslas running all over the place.
In my redneck corner of these here united states, there’s gonna be some seriously confused Teslas, seein’ as how every damn body shotguns the holy hell outta them signs.
I think the fundamental problem for autonomous cars, as in every other field of applied AI and ML, is that our AIs and ML algorithms are fucking stupid and have no concept or understanding of context. They’re capable of identifying things and responding to inputs, but they don’t actually have any kind of mental model for what those things are, or in what contexts they make sense. Google’s image AI that kept finding and enhancing nonexistent eyeballs in JPG artifacts was cute and maybe a little creepy, but there’s not really a whole lot of daylight between it and the stuff that’s in these cars telling them what’s going on outside.
This was a case of the researchers maliciously inducing an unintended effect in the Autopilot AI, but it could just as easily have been a branch or a power-line shadow that caused the software to misidentify the posted speed limit and immediately accelerate, regardless of whatever else might have been present to suggest that this was the wrong thing to do. If this were a major road through a residential area, a human might understand through context that accelerating to 85 MPH would be inadvisable. A self-driving car, however, lacks that capacity for context, and so all it can do is rely on its inputs and fail-safes.
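You can’t give the car genuine context, but you can fake a sliver of it with per-road-class plausibility caps. A rough sketch (the road classes and caps are made-up numbers, not anyone’s actual spec):

#include <algorithm>

// Crude stand-in for "context": never trust a sign reading above the
// highest limit that is plausible for the kind of road we're on.
enum class RoadClass { Residential, Arterial, Highway, Unknown };

int plausibleCapMph(RoadClass rc) {
    switch (rc) {
        case RoadClass::Residential: return 35;
        case RoadClass::Arterial:    return 55;
        case RoadClass::Highway:     return 85;
        default:                     return 25;  // unknown road: be conservative
    }
}

int effectiveLimitMph(int signMph, RoadClass rc) {
    return std::min(signMph, plausibleCapMph(rc));
}

It isn’t understanding, but it would stop a misread sign from sending the car to 85 MPH through a residential neighborhood.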
There are companies whose self-driving tech I begrudgingly acknowledge as decent in the conditions under which they’ve been tested*, like Waymo, but Tesla and Uber are both waaaaay down at the bottom with a big red 120-point “INCREDIBLY IRRESPONSIBLE” header over their entries on the list. Uber because it’s a tech-bro company that’s desperate to find a way to turn a profit before its investors finally catch onto the scheme, and Tesla because Elon Musk is recklessly irresponsible with his timelines and dedicated to ideas about self-driving that other companies have moved beyond after concluding they’re impractical (like relying only on visible-light cameras instead of supplementing them with radar and LIDAR). Neither is conducive to the kind of quality engineering you need when propelling multi-ton boxes of metal around at lethal velocities in close proximity to other human beings without any means of separation (hell, Musk can’t even deliver on the promise of a fully-autonomous self-driving car factory, but he swears your Tesla will be able to flawlessly find itself a parking spot at the mall within the year).
* Notably, everyone’s testing in SoCal and Arizona where it’s 12-hour days and minimal chances of rain, let alone snow. I can only imagine it’s because without the ability to assemble other clues from contextual information, they’d be even worse at staying on the road than humans are when there aren’t any visible lane markers. The only “solution” I’ve seen anyone propose is re-paving every single road in the country with, essentially, giant embedded RFID markers that can be read by the car so it knows where it is. That seems practical.
Part of the test was altering the sign in a way that would still be readable to humans.
Some anecdotal evidence against this, though: one time several years ago, some dickweed decided to put a 15 MPH sign in his front yard along a 45 MPH road. Being someone who lived there, I knew the road was 45 MPH and didn’t slam on my brakes just because a sign said the limit had suddenly dropped by 30 MPH. None of the other drivers slowed down for it either, and the fake sign was removed by the next day. We’re not exactly googling the local speed limit, but we’re able to remember what speed limits are and do basic checks against that.
But it’s around the corner, not already installed on 2016 Teslas.
How far away is that corner? Hard to tell just yet.
But this doesn’t tell you anything useful about that.
I kind of find this hard to believe (though not enough to do a ton of research), because even the Waze app on my smartphone keeps track of the speed limits on the roads I drive and turns red when I go over that speed. And there are no cameras involved.
Seems fishy to me.
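The Waze side of it is simple enough, anyway: the app doesn’t need a camera, it just map-matches your GPS position to a road segment and looks the limit up in its database. A toy sketch (the segment IDs and limits are invented):

#include <map>
#include <string>

// Toy Waze-style lookup: limits keyed by road segment, no camera involved.
const std::map<std::string, int> kLimitsBySegment = {
    {"I-95:mile-42",     65},
    {"MainSt:block-300", 35},
};

// Returns the stored limit in MPH, or -1 if the segment isn't known.
int lookupLimitMph(const std::string& segmentId) {
    const auto it = kLimitsBySegment.find(segmentId);
    return (it == kLimitsBySegment.end()) ? -1 : it->second;
}

The red warning is then just a comparison of your current GPS speed against that value.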
Sure, but that is a very small edge case, and also you aren’t, as you noted, interrogating an external database to find the correct answer. I feel this is one of those exception-that-proves-the-rule situations the aphorism was explicitly created for.
But this isn’t an autonomous car. It’s a fancified cruise control.
This is a developmental beta test of some (but not all) of the tech that will eventually lead to autonomous FSD. The connected fleet is gathering massive amounts of data and finding the edge cases and corner cases that would never occur on a test track.
Want to beta-test the future? Buy a Tesla.
Or not. It’s up to you.
In other words:
if (horn->isActive()) horn->honk();  // act on the single raw input, no cross-checks
Great question: Why rely on the camera when the nav system should know?
Better question: When did TSLA become known for the highest quality?
Best question: Why does a luxury car manufacturer matter to us peons?