Watch: A driver trusted his Tesla autopilot — a little too much

Yeah, “Better than humans,” is not going to be good enough because the company is going to be on the hook for damages when the autopilot screws up. It won’t take many multi-million dollar cases where a kid was killed for them to disable anything that is supposed to drive the car by itself. That SAE level 4 (above) is a pipe dream. Even a level 3 that operates only on interstates is a tough call. How many times will it kill somebody changing their tire on the side of the road before people decide the costs in lawsuits are too high?

2 Likes

“Fuck! I lost my left camera. Can I still autopilot the rest of the way?”

YOU STILL WANT TO???

6 Likes

It’s all about putting the appropriate amount of trust in ADAS. I have adaptive cruise control and lane keep assist in my car. Those two things alone take the fatigue out of driving. Yes, I have to pay attention but I don’t have to make constant adjustments.

5 Likes

I hear there is a formula to determine this sort of thing.

2 Likes

That’s what legislation is for. You can guarantee that if self-driving ever really becomes a thing, it will require some sort of statutory scheme protecting manufacturers from liability. I would be very surprised if there is not already heavy lobbying going on for that.

You could set up a sensible scheme involving some sort of statutory compensation for accidents caused by self-driving cars funded by insurance from manufacturers - some countries have similar schemes for uninsured driver caused accidents.

I doubt the US would install that kind of system though. Too much socialism.

3 Likes

Well, I hope it works out for you, but I don’t understand how constantly having to supervise a car that might do something random at any moment is less mentally fatiguing than just driving it yourself.

I’ve often heard that FSD is a lot like supervising a teen driver who just obtained a learner’s permit. Yes, most of the physical driving isn’t being done by you but it’s not exactly a stress-free experience. It requires constant vigilance, and if you’re not constantly vigilant you’re endangering others and shouldn’t be on the road.

7 Likes

I don’t have FSD, but I do have a teen with a learner’s permit. I can confirm that sitting in the passenger seat for those drives is both exhausting and sometimes terrifying. Sometimes it induces car sickness. Occasionally it involves stomping on an imaginary brake pedal and yelling stop, Stop, STOP!

Driving with adaptive cruise control that just maintains the following distance is much less taxing than that. Mostly because you’re still driving, just not managing the pressure on the accelerator pedal. At least until you drift in closer than desired to the car in front. At least then the brake pedal isn’t imaginary and you can skip the yelling.

1 Like

Wait, what?


2 Likes

If you wish to yell when the adaptive cruise control closes the distance to a value less than desired, that’s completely acceptable. It is however not required and the act of yelling will not cause it to slow down.

You may also mutter under your breath, swear, cry, or say a little prayer. All to the same effect.

Conversely, with the teen driver, all of those may be required to increase the spacing to the car in front. In the rare instance, it may be required to open the door and drag a foot to slow the vehicle…

3 Likes

Oh, I’ve been there and done that with two teen drivers. I deserve a medal for staying calm while teaching both of them to drive a manual transmission on our hilly roads…

4 Likes

Yes! :face_with_open_eyes_and_hand_over_mouth: It seems that the Dawn Project has actually run the test (Edit: well, at least a preliminary one, if we’re being strict…). The car appears to decide in favour of the problem being undecidable and ignores it entirely. :-1: (but we knew this already…)

5 Likes

Additional info, links:

Tesla Full Self-Driving fails to notice child-sized objects in testing

The latest version of Tesla’s Full Self-Driving (FSD) beta has a bit of a kink: it doesn’t appear to notice child-sized objects in its path, according to a campaign group.

In tests performed by The Dawn Project using a Tesla Model 3 equipped with FSD version 10.12.2 (the latest, released June 1), the vehicle was given 120 yards (110 meters) of straight track between two rows of cones with a child-sized mannequin at the end.

The group says the “test driver’s hands were never on the wheel.” Crucially, Tesla says even FSD is not a fully autonomous system: it’s a super-cruise-control program with various features, such as auto lane changing and automated steering. You’re supposed to keep your hands on the wheel and be able to take over at any time.

Traveling at approximately 25mph (about 40kph), the Tesla hit the dummy each time.

[…]

2 Likes

The barrier to that is not political (nobody complains much about the NSA spying on everyone, or tech companies doing the same). The barrier will be getting a dozen major car companies to agree on some communications protocol and standard. Data standards like that take decades to sort out, and only if some very motivated group keeps driving them. People think stuff like Bluetooth just happens, but the consortium has been writing white papers and begging tech companies to care about the idea since the mid-’90s. Even now, implementations of the standard are weak and inconsistent.

9 Likes

I had the opposite experience, as the teen driver supposedly being educated by my parent.

When he told me to drive around the lowered crossing gate arm so we could beat the approaching train across the tracks, I knew there was no point in continuing that particular activity.

6 Likes

“Once you’ve driven your drunk dad to your mom’s parole hearing, what else is there?”

6 Likes

Given auto makers can’t standardize on simple things like which side the fuel hole should be, I am bearish on this ever happening.

Might be a Three-Body Problem-like problem with that. A dozen or more “AI” drivers come to a perfect consensus on where they should all go, then some irresponsible human taps their brakes and throws a wrench into that solution.

And I guarantee those who can afford the hack will have their car broadcasting “I’m a bus carrying nuns and orphans to church” to max out their trolley-problem “value”.

@trainsphobe, you sound like you’re a responsible driver. I fear you’re in the minority.

2 Likes

You’re basically correct!

There are two basic schools* of group coordination AI algorithms: top-down and bottom-up. The latter is more elegant, generally requiring the agents to have little or no knowledge of each other. However, it’s nearly impossible to make a complex system work entirely bottom-up. You’ll always get nasty edge cases that fail. Unfortunately, this is de facto how we’re doing AI cars right now, because development is happening independently everywhere.

Top-down involves some sort of central control, and there are various AI control architectures well suited to this approach. This requires sophisticated communication up and down the control hierarchy, though, which is only possible if the system is fully vertically integrated. Games, for example, generally do this because the whole system is one codebase owned by one team, so it’s much quicker and easier to get better results this way. AI cars should absolutely be done top-down to ensure safety and maximum robustness, but because of how capitalism works, it’ll never happen. We’ll be doomed to generations of “sorta crappy, almost there” bottom-up approaches to AI cars until government regulation steps in 100 years from now and mandates some standards.
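To make the contrast concrete, here’s a toy Python sketch of the top-down style, assuming a hypothetical central controller that hands out non-overlapping crossing slots for a shared intersection, so the agents never negotiate among themselves. All names and numbers are invented for illustration, not any real system’s API.

```python
# Toy top-down coordination: one central authority owns the schedule.
from dataclasses import dataclass

@dataclass
class Reservation:
    car_id: str
    start: float  # seconds
    end: float

class IntersectionController:
    """Central authority: grants crossing slots first-come, first-served."""
    def __init__(self):
        self.reservations: list[Reservation] = []

    def request_slot(self, car_id: str, earliest: float, duration: float) -> Reservation:
        start = earliest
        # Slide the requested window past any conflicting reservation.
        for r in sorted(self.reservations, key=lambda r: r.start):
            if start < r.end and start + duration > r.start:
                start = r.end
        granted = Reservation(car_id, start, start + duration)
        self.reservations.append(granted)
        return granted

ctrl = IntersectionController()
for car, eta in [("A", 0.0), ("B", 0.5), ("C", 0.6)]:
    slot = ctrl.request_slot(car, eta, duration=2.0)
    print(f"car {car}: cross from t={slot.start:.1f} to t={slot.end:.1f}")
# A gets 0.0-2.0, B gets 2.0-4.0, C gets 4.0-6.0: conflicts resolved
# centrally, with no car ever needing to model any other car.
```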

Two more examples to flesh this out, because I like talking about this stuff:

Air traffic control is the classic top-down approach, and that’s why it works so well. We use humans as the central control algorithm, but it could be automated. ATC’s safety record is outstanding, and it’s a very similar problem to AI cars.

You know that moment when you’re walking down the hallway and you meet someone, and you both move left, then right at the same time? Someone says, “shall we dance?”, you share an awkward laugh, then get around each other and continue. This is a classic bottom-up AI failure. Awkward and amusing in a hallway, but in a car, someone just died.
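Here’s the hallway dance reduced to a toy sketch (both the avoidance rule and the tie-break are invented for illustration): with a perfectly symmetric rule the two agents mirror each other forever, while one arbitrary asymmetry resolves the conflict in a single step.

```python
# Two agents in a hallway, each picking a side: -1 (left) or +1 (right).

def symmetric_step(a, b):
    """Both agents run the identical rule: if we'd collide, swap sides."""
    return (-a if a == b else a), (-b if b == a else b)

def asymmetric_step(a, b):
    """Tie-break: only the (arbitrarily chosen) second agent yields."""
    return a, (-b if a == b else b)

a, b = +1, +1  # both heading for the same side
for step in range(4):
    print(f"symmetric, step {step}: a={a:+d}, b={b:+d}")
    a, b = symmetric_step(a, b)
# +1/+1, -1/-1, +1/+1, ... they flip together forever: "shall we dance?"

a, b = asymmetric_step(+1, +1)
print(f"asymmetric, one step: a={a:+d}, b={b:+d}")  # a=+1, b=-1, resolved
```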

Source: I was an AI engineer for 25 years, with emphasis on agent and group-level navigation systems in chaotic environments.

*Bottom-up approaches revolve around emergent properties of a set of rules, with the classic example being “flocking” pioneered by Craig Reynolds in the 1980s. It creates remarkable simulations of birds and fish, which made this a pun that I couldn’t pass up. I’ll see myself out
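For the curious, here’s a minimal bottom-up sketch in the spirit of Reynolds’ flocking: each boid follows three purely local rules (separation, alignment, cohesion) with no global plan, and the flock emerges anyway. The weights and radii are illustrative guesses, not tuned values from any real implementation.

```python
import random

NEIGHBOR_RADIUS = 5.0    # how far a boid can "see" (illustrative)
SEPARATION_RADIUS = 1.0  # personal-space bubble (illustrative)

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, 20), random.uniform(0, 20)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(boids, dt=0.1):
    updates = []
    for b in boids:
        near = [o for o in boids if o is not b
                and (o.x - b.x) ** 2 + (o.y - b.y) ** 2 < NEIGHBOR_RADIUS ** 2]
        if not near:
            updates.append((0.0, 0.0))
            continue
        n = len(near)
        # Cohesion: steer toward the local center of mass.
        cx = sum(o.x for o in near) / n - b.x
        cy = sum(o.y for o in near) / n - b.y
        # Alignment: drift toward the neighbors' average velocity.
        ax = sum(o.vx for o in near) / n - b.vx
        ay = sum(o.vy for o in near) / n - b.vy
        # Separation: push away from anyone inside the personal bubble.
        crowd = [o for o in near
                 if (o.x - b.x) ** 2 + (o.y - b.y) ** 2 < SEPARATION_RADIUS ** 2]
        sx = sum(b.x - o.x for o in crowd)
        sy = sum(b.y - o.y for o in crowd)
        updates.append((0.5 * cx + 0.3 * ax + 1.0 * sx,
                        0.5 * cy + 0.3 * ay + 1.0 * sy))
    # Apply all updates at once so every boid reacts to the same snapshot.
    for b, (dvx, dvy) in zip(boids, updates):
        b.vx += dvx * dt
        b.vy += dvy * dt
        b.x += b.vx * dt
        b.y += b.vy * dt

flock = [Boid() for _ in range(30)]
for _ in range(100):
    step(flock)  # coherent groups emerge with no leader and no global plan
```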

8 Likes

Thanks for the great explanation! As I was reading your example of bottom-up control, I recalled that Rodney Brooks (and probably others) called this “subsumption architecture”. I read about it way back in my college robotics course. A quick trip to Wikipedia confirms it’s a bottom-up method.

Instead of “robotics”, I ended up in a CNC integration career; less flexible than robots, but often using the same controllers. Fanuc CNC and “robot” controllers are very similar, for instance.

I’d like to pick your brain about an AI (or probably ML) idea I have. PM me if you’re able to indulge my curiosity.

2 Likes

I guess this is open to semantic debate, but I’d consider subsumption architectures to be top-down. It’s a method of deciding between different options, but having that list of options requires top-down knowledge.
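To ground the semantics a bit, here’s a minimal sketch of a subsumption-style controller (sensor names and behaviors invented for illustration): behaviors sit in a fixed priority order, and the first layer that produces an action suppresses everything below it. Whether that ordered list of options counts as “top-down” knowledge is exactly the debate.

```python
# Toy subsumption-style controller: layered behaviors, highest priority first.

def avoid_obstacle(sensors):
    if sensors["obstacle_distance"] < 0.5:
        return "turn_away"       # reflex layer: wins whenever it triggers
    return None                  # no opinion, defer to lower layers

def seek_goal(sensors):
    if sensors["goal_visible"]:
        return "steer_to_goal"
    return None

def wander(sensors):
    return "random_walk"         # lowest layer: always has a default

LAYERS = [avoid_obstacle, seek_goal, wander]  # priority order, top first

def control(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:   # a higher layer subsumes everything below
            return action

print(control({"obstacle_distance": 0.2, "goal_visible": True}))  # turn_away
print(control({"obstacle_distance": 2.0, "goal_visible": True}))  # steer_to_goal
```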

Anyways we’re getting off topic here, so we should probably quit. :grimacing:

3 Likes