Waymo self-driving car hit bicyclist

OK, so I suppose you’re saying that we’d need to go through the process to see if something systematic in the company is at play, which makes sense.

So, here’s a follow-up question. Assuming nothing overtly off is going on with the company: there is no cover-up nor fraud nor anything else illegal. The system fails in some identifiable way that the designers simply didn’t account for. What is the correct amount of punishment?

I worry a great deal about this. In this particular case the flawed reasoning seems obvious: there’s absolutely no reason the Waymo car couldn’t have just waited longer until it could see, which is always the safer choice. But there will always be cases in real-world settings where, after the fact, we can argue the reasoning could have been better. It’s the nature of the problem: every reasoning system in a real-world setting is a beta test to some degree, and many of these cases don’t occur until they occur (just like we don’t realize we have a distracted driver on our hands until they hit someone).

I suppose in the end, we really just need to make sure that the testing benchmarks are public and reproducible and force companies to use them. Fund more public research on testing FSDs.

It’s not a question of punishment in any scenario. It’s a question of public safety and regulation regarding the vehicle type and/or fleet in question. If the operator is penalised in other ways in the course of ensuring those things, so be it.

No-one is expecting perfection in the operation of any type of vehicle – that’s just a variation on the Nirvana fallacy. Mitigating factors are also taken into consideration. But if there’s a systematic problem with either a human driver or an automated one, consequences will accrue on a case-by-case basis.

These are not new problems. The aircraft industry and pharma constantly deal with them. But tech companies think they’re somehow immune because they’re “disruptors”.

5 Likes

It would be nice to have an arrangement in place where companies were required to use public and reproducible testing benchmarks, but they were still liable for reasoning mistakes, creating an incentive for those companies to help fund public research into better benchmarks.

1 Like

I’m speaking for myself here and not @graccus, but I think that’s the wrong question. I think there needs to be someone at the company who is individually accountable for potential harms of the product. Such accountability would follow the same path as if they were operating the product. If the harm was due to some extraneous factor, then they wouldn’t be liable for it, just like an individual not associated with the company wouldn’t be held liable.

If, however, the accountable person or their peers made decisions that were directly responsible for the harm, then they would be liable to the same degree as an individual operator, per occurrence.

In the case of Waymo, if the FSD vehicle ran over a cyclist in a crash that would result in the driver of the vehicle facing fines, jail time, or loss of license, then so would, for example, the Waymo CEO - as an individual, not distributed to the company.

ETA: and I think this should apply to all product liability law, not just FSD vehicles. It should apply to Boeing when they cheap out on safety parts, it should apply to Jeep when they knowingly go to market when their parking brake randomly disengages, it should apply to J&J when they continue selling talcum powder after testing shows it causes cancer, etc etc etc. It’s a complete rethinking of how product liability works.

2 Likes

Agreed, but that’s bad ol’ government regulation. Musk and Andreessen and their ilk see it as onerous and unfair.

2 Likes

You’d think these Galtian heroes of future generations would cement their status by dropping the corporate shield. [Narrator: you wouldn’t]

Personal Responsibility for thee and arm’s length liability for me…

1 Like

Yeh, completely broken culture there. Eff that sh*t. It’s really unfortunate that the US government has given up so much involvement in research recently, particularly in AI.

1 Like

When the snow melts I’ll grab a photo of a roundabout, and a mini roundabout.

Mini roundabouts tend to be a blob of paint, and about as respected.

Our cyclist guidance is to avoid going on a roundabout, but it is allowed. If I’m not feeling like a break right then, I’ll leave the cycle lane to join the main road to use the roundabout.

Our Denver mini roundabouts were intentionally placed on bike routes, replacing what were previously 2-way stop signs. Example: 35th Ave is a “city bikeway” going east-west. It doesn’t even have bike lanes painted, but it has bike-route markers painted in the roadway. It is one lane each direction, no yellow or white line down the middle, speed limit 25, cars parked on both sides, skinny road. At intersections, it either had the right of way and the cross streets stopped, or there was a 4-way stop.

Denver removed the through-street stops and placed the mini roundabouts. At first they left the cross-street stop signs in place, and that was a great improvement on the road. Cross traffic stopped, and through-traffic autos had to slow considerably for the roundabout, but on a bicycle you could easily maintain your chosen speed through a roundabout.

Then the city took out the cross-street stops too, made the roundabouts ‘all yield’, and since then they have been truly scary. I’ve been run off the road there, run onto the sidewalk, cut off by cars that then stopped, had cars come at me head-on on the wrong side of the roundabout… all since they made them ‘all yield.’

1 Like

We change road rules and introduce regulations on what people have to wear (seat belts, helmets) all the time, in the interest of making the roads safer for each other.

Rejecting a solution because ‘freedums’ is about the worst counterargument.

You’re right. That said, it would not be easy to compel all cyclists and pedestrians to tag themselves with a radio location beacon.

A better scheme might be to set up 3D millimeter-wave radar everywhere, feeding self-driving vehicles a continuous stream of real-time location information for every object on and near the road. It could be done with a cell-based system, like mobile phones.
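For concreteness, here’s a minimal sketch of what one such roadside radar “cell” might broadcast. Everything here is an assumption for illustration: the message format, field names, and JSON framing are invented, not taken from any real V2X standard.

```python
import json
import time
from dataclasses import dataclass, field, asdict
from typing import List

@dataclass
class ObjectTrack:
    track_id: int
    obj_class: str     # e.g. "pedestrian", "cyclist", "vehicle"
    x_m: float         # position east of the cell antenna, metres
    y_m: float         # position north of the cell antenna, metres
    vx_mps: float      # velocity east, metres per second
    vy_mps: float      # velocity north, metres per second

@dataclass
class CellBroadcast:
    cell_id: str
    timestamp_s: float
    tracks: List[ObjectTrack] = field(default_factory=list)

def build_frame(cell_id: str, tracks: List[ObjectTrack]) -> str:
    """Serialise the cell's current object tracks into one JSON frame
    that nearby vehicles could subscribe to, phone-network style."""
    return json.dumps(asdict(CellBroadcast(cell_id, time.time(), tracks)))

# Example: one roadside cell reporting a cyclist and a parked car.
print(build_frame("cell-0042", [
    ObjectTrack(1, "cyclist", 12.5, -3.0, 4.2, 0.1),
    ObjectTrack(2, "vehicle", -8.0, 1.5, 0.0, 0.0),
]))
```

A real system would also need track handoff between cells and authenticated transport, which this sketch ignores entirely.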

Just inject the tags, under the description of a vaccine perhaps?

1 Like

Yo! Given that I’ve been hit by cars in the past and that I’m still not

Freedoms aren’t the counterargument, and I think my posting history here shows I’m perfectly comfortable with regulation of road use and design related to road use. My argument is that putting the burden on all other road users is bad policy and risks even worse outcomes.

As a general rule regulatory policy works best when it directly targets the person performing the action. In this case we want to reduce self driving cars killing people. You sometimes flip to regulations aimed at other parties in the transaction, but that is generally because some other need is in play. Testing some of those considerations makes it even more clear that the right side to regulate is the car.

  1. Sometimes you will regulate an infrequent activity to allow a more common one to proceed more easily. Think construction noise in a residential neighborhood. Self-driving cars are a tiny minority of road users, a tiny fraction of all vehicles, and they will remain so because of one very simple fact: everyone who gets out of their self-driving car becomes a pedestrian. This isn’t a useful exception for self-driving cars.

  2. Sometimes you will regulate one set of actors to protect a more vulnerable class. Self-driving car owners and developers are largely adults who chose this; there’s no special reason to assume they need additional protections.

  3. Sometimes you’ll target a third party because it is more technically feasible to regulate that way. Creating a transponder that can be carried, recognized, and located within a noisy EM spectrum, in areas with high reflectivity, isn’t a small task. Creating one cheap enough to require of every child, pet, old car, cyclist, and garbage can, while it maintains a charge, is a monumental task. And that task is paired with creating a standardized reading and logic system to deal with it. That isn’t an easier option.

The locus of regulation belongs on the experimental road users, not everyone else they are dragging into their beta test.

6 Likes

Yeah, AirTags as they currently stand use a button battery that lasts for around a year, so recharging isn’t really the issue. If wearing one made me visible to cars, I’d never leave home without it. And given that AirTags already work in a noisy EM spectrum without a specific commercial frequency, using ‘consumer airspace’, I imagine that’s also not an issue.

Your argument about ‘self-driving cars being people who chose this’ is really silly; you can make the same sophistry about bicycles. If you assume for even a moment that self-driving cars get approval (and it’s hard to imagine they won’t make it there), the commercial applications alone are going to make them a large part of our road traffic. We aren’t seeing billions of investment being tossed into them because someone wants one on a lark.

To a certain extent we already put the onus of being safe onto pedestrians and non-four-wheel traffic: people are expected not to run into traffic and, for the most part, to follow laws about road usage (such as crosswalks and not crossing against lights). A solution that might solve the problem can’t be shot down just because it isn’t completely self-contained, which is what your argument was. I didn’t even suppose it was the ultimate fix; your response was to reject it out of hand because it would require something more than just existing from other traffic on the road. That’s not a fair stance to take at all.

And the laws for drivers are “if there’s a pedestrian in the crosswalk YOU STOP” in many places.

An interim solution that puts the cost and onus on the people not involved in the testing, including ones who may not even know it’s going on?


5 Likes

AirTags have a 1-year battery life assuming the use case is comparable to Apple’s benchmark spec: one precision find per day. The use case for something that acts as a constant beacon is much more power hungry. And yes, if your solution is required for cyclists, it is also required for pedestrians; if a car can’t see one, it can’t see the other. Again, I don’t currently need one as a pedestrian or cyclist, so I get the extra cost of the beacon and the regulatory burden of requiring it, while the person in the self-driving car gets all of the benefits.
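To put rough numbers on that, here’s a back-of-envelope sketch. The CR2032 capacity is the usual nominal figure; the constant-beacon current is purely an assumed illustration, not a measured AirTag number.

```python
# Nominal CR2032 coin-cell capacity in milliamp-hours.
BATTERY_MAH = 220.0
HOURS_PER_YEAR = 365 * 24

# A tag that lasts ~1 year on this cell implies this average draw:
avg_draw_ma = BATTERY_MAH / HOURS_PER_YEAR
print(f"Implied average draw for a 1-year tag: {avg_draw_ma * 1000:.0f} uA")

# Assumed average current for a tag that beacons its position
# continuously (active transmission plus ranging): illustrative only.
CONSTANT_BEACON_MA = 5.0

life_hours = BATTERY_MAH / CONSTANT_BEACON_MA
print(f"Same cell as a constant beacon: ~{life_hours:.0f} hours "
      f"({life_hours / 24:.1f} days)")
```

That works out to roughly 25 µA average for the one-find-per-day case versus under two days of life for the assumed always-on beacon. Even if the 5 mA guess is off by a lot, the gap is orders of magnitude, which is the point.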

4 Likes

We aren’t even talking about pedestrians - we are talking about bicycles - and probably motorcycles.

The use case here is something that could be powered in the same manner as a bike light and that might make cyclists 99% safer from self-driving cars; the benefit is not being hit even if you make a mistake as a human.

Going back to the start: bike? Yep. Cheap/free? Yep (the idea being that the self-driving car operators doing the testing would have to pay for it). Required? NOPE, meaning it would be self-selecting.

I highly appreciate that you took the argument to carrying one as a pedestrian when that wasn’t even my idea. Strawman somewhere else.

You are looking at this from a point of extreme privilege.

Let’s rephrase the implications a little crudely to make it obvious: “Poor people don’t deserve to be protected from self driving cars”

4 Likes