More automation isn’t always bad, and I don’t think anyone has been claiming that. Certain types of cruise control and lane-assist systems are just fine because they’re collaborative: they do some of the work to assist the driver, but the driver remains physically and mentally engaged with the act of driving. What makes this latest Tesla system different from previous driver-assist features is that, for long periods of time, it does all the work of driving. Merely requiring the occupant to keep a hand in contact with the wheel does not mean the driver is mentally engaged in any way, so the driver may well be unable to react quickly and save the day when something unexpected happens, as all the studies indicate.

If I understand you correctly, you’re arguing that even if that’s true, it’s better than the status quo, because so many people are awful, inattentive drivers anyway, so using a level-2 system as if it’s a level-3 system is still a step up in safety? I don’t think that’s true, but if that’s what you mean, then at least we agree that people won’t be using this level-2 system the way it’s intended.
Where in your equation do you factor in the failure rate for this level 2 system? Say it were to crash, on average after x number of hours of driving without human intervention. What’s your value of x before you figure it’s better than a human driver? And what gives you any confidence at all that this beta software, which is explicitly not designed to work without human intervention, exceeds that value?
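The break-even question above can be sketched numerically. A minimal sketch with entirely hypothetical numbers: the 500,000-mile human baseline and the candidate values of x below are placeholders for illustration, not sourced Tesla or NHTSA data.

```python
# Back-of-the-envelope break-even comparison for the "value of x" question.
# ALL numbers here are hypothetical placeholders, not real-world statistics.

def crashes_per_million_miles(miles_between_crashes: float) -> float:
    """Convert mean miles between crashes into a crash rate per million miles."""
    return 1_000_000 / miles_between_crashes

# Hypothetical baseline: an average human driver crashes once every
# 500,000 miles (placeholder value, not a sourced figure).
human_rate = crashes_per_million_miles(500_000)

# Hypothetical L2 system used hands-off: one crash-without-intervention
# every x miles. Try a few values of x to see where break-even would sit.
for x in (100_000, 500_000, 2_000_000):
    system_rate = crashes_per_million_miles(x)
    verdict = "safer than baseline" if system_rate < human_rate else "not safer"
    print(f"x = {x:>9,} miles -> {system_rate:.2f} crashes/M miles ({verdict})")
```

The point of the sketch is only that the argument hinges on knowing x at all: without a measured intervention-free crash rate, "better than a human driver" can't be evaluated.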
Ask me that question six months after it is widely released. Let’s see how the number of accidents from individuals in Tesla vehicles compares to the baseline.
Oh but wait, that number is already 10 times in favour of Tesla, so we’ll have to give them a hell of a handicap first - I guess: “How much less than 10x safer are Teslas than other cars after enabling full self driving?”
IMHO, people really, really underestimate how many accidents are caused by shitty human drivers.
Also, data for how often their latest system crashes without human intervention in urban driving situations with pedestrians present, etc., is not available yet, but you should still be able to answer my question: what does that number need to be before you feel it’s responsible to release a level 2 self-driving system that will clearly be used in practice as a level 3 system? My answer: if it’s going to be used as a level 3 system (as we all know it will be), then it should be treated as one, and be required to prove that it meets the testing and legal requirements a level 3 system would have to meet.
There’s no doubt that we’ll get good level 3 and 4 systems in the coming years. Waymo has made huge strides in that direction. But Tesla has not shown that they’re there yet, so releasing it as a level 2 system is just a cop-out that can and will endanger people.
So you are objectively trying to say that driving in a Tesla is less safe than driving another vehicle? Because if you aren’t, then IMHO this entire line of reasoning is tilting at windmills.
Sooner or later the ableist FUD has to stop. My disabled partner would really like to be independently mobile, and given that just today we had an article on why US public transportation is so terrible, and that we’re only in the first pandemic of this century, I think it’s fair to say that unless driving a Tesla is less safe than not driving one (and I think folks would have to manipulate the data pretty hard to reach that conclusion), I applaud anything that gets this tech into the real world to gather data.
Because you know what people said when the current autopilot features came out? This. This exact same thing. And I’m sorry that the real-world data disagrees with that analysis. But the world needs to move forward, we owe it to those who cannot do for themselves, and unless this technology makes these cars less safe than the baseline, which I very, very, very much doubt given the already proven safety record of the existing tech (whether it’s 2x or 5x or 10x safer), then more power to them. May they make the world more accessible to more people, and hopefully save some lives in the process.
“Ableist?” Where the heck are you getting that from? I’ve never had any problem with the responsible development of true self-driving systems such as what Waymo is doing, which will surely be a great benefit for people who are currently unable to drive themselves. I think I’ve made it pretty clear at this point that my issue is with a level 2 system being treated as a level 3 system.
You don’t have to believe randos in a comment thread, but there’s an entire field of industrial process design with decades of experience in this kind of human-factors work. L2/L3 has a vigilance problem, which is why Google and (I think?) several others are skipping it entirely as a solution. Either it’s autonomous or it’s not. Tesla, on the other hand, is going “YOLO, brah, L2 is an Autopilot.”