Waymo self-driving car hit bicyclist

As a cyclist in an area with a lot of Waymos, I find them disconcerting to ride around because they actually follow all traffic laws. It’s the uncanny valley of driving.

4 Likes

The whole purpose of a company has become to deflect accountability. That said, humans typically do require some form of organization to achieve things that individuals cannot.

If self-driving cars can be made significantly, statistically safer than human driven cars, is that a collective achievement that would reduce the need to blame an individual? I’m not defending Waymo or CEOs. This is more a philosophical question about our need to blame individuals.

Using made-up numbers in a hypothetical future: if human drivers are collectively responsible for 100,000 deaths a year, and an infrastructure of self-driving cars could be built that reduced that number to, say, 1,000, should a CEO be personally culpable for those 1,000 deaths or lauded for saving 99,000 lives?

Does a human-built self-driving car infrastructure need to be perfect to be acceptable or just demonstrably better?

I’m in no way suggesting we’re there yet or even will be in our lifetimes.

1 Like

Not to mention the traffic circles here in Seattle, where drivers (always in an air-hauling truck) cut the circle on a left turn to avoid going right around it (the legal way). My friend lost a few months’ work (and almost his life) to someone cutting the circle while he was riding…

2 Likes

It became that a long time ago. The liability shield is effectively the raison d’être for the corporation in any society running a Western form of jurisprudence.

The philosophical problem here is that many of the individuals in question subscribe to some variation of “effective accelerationism” and are happy to sacrifice thousands of lives now in exchange for a glorious utopian future for hundreds of millions (pay no attention to the billions of dollars they expect to take in for themselves in the short and medium term).

They like playing numbers games about noble sacrifices but can’t even bother to honour the names of those killed or maimed in the public beta tests they foist on everyone else.

ETA: As far as I know, these great benefactors of humanity aren’t even using their beta tests to offer free rides for all in areas underserved by public transit.

8 Likes

Let’s compare to air travel, which is significantly safer than travel by car, yet where greed and bad decisions can still cause needless death and injury. I’m not sure we will ever reach a point where individual accountability can reasonably go away (not that it exists in that space now; it should!). Otherwise, companies will make decisions based on greed, and those decisions will cause harm to the public.

4 Likes

Yes. Corporations are inherently liability shields. Companies are not necessarily.

We’re talking about corporations here, which in the modern business and legal context are indistinguishable from companies. They’re colloquial synonyms.

Try to run a company of any sort in America without incorporating it and no amount of semantic hair-splitting or appeals to philosophy or to the history of collaborative enterprises will prevent the tax authorities or courts from coming down hard on you.

2 Likes

And yet philosophy persists and is valuable. Hence my response in the form of a hypothetical to Duke Trout. I don’t think you and I are engaging in the same conversation. And that’s fine.

I didn’t say otherwise. And I understand your reluctance to address the techbro philosophy I described above.

2 Likes

HAHAHAHHAHAHAH!!! Try disinterest rather than reluctance.

That much is obvious, even (or especially) because it has more relevance to the discussion of a public beta test harming people here and now than does hair-splitting about @duketrout’s using “company” instead of “corporation”.

3 Likes

Were the 1,000 deaths the result of skipping four bolts that were designed into the vehicle for safety, in order to save $10 per vehicle? Then unquestionably YES, they should be held accountable.

It’s like you’re trying to build a hypothetical construct made of some kind of dried grass, easy to rhetorically defeat, and substitute it for my actual position…

6 Likes

Sounds like what Mitchell and his co-counsel Alito have been doing in the Supreme Court today.

2 Likes

I’m interested in this subject of accountability and recompense in the case of automation-based injury. Let’s assume (and I know there’s some strong disagreement and feeling here, but it’s completely fair to make this rational argument) that at some point in the near future FSD exceeds human safety. That is, by any measure, FSD cars cause injury or death to humans at a lower frequency than human drivers do.

Some notable scenarios on each side:

- The FSD system technically follows the law but is unable to reason correctly about human behavior, e.g., about the possibility of a cyclist neither stopping at a stop sign nor keeping a safe following distance.
- A human driver gets distracted by their phone and rear-ends a cyclist. An angry commuter, after having to wait behind a pair of cyclists occupying the lane (as permitted by law), passes them at a close distance to express their anger, misjudges the distance, and strikes one of the cyclists.

What are people’s thoughts on accountability and recompense in the case of the FSD system? What is “fair”, given the service is measurably safer?

I’m not sure Just Asking Questions is better…

2 Likes

The operator of the vehicle, whether a human or a corporate person, should be liable either way. No free passes just because automated vehicles statistically get into fewer accidents than human-operated ones, and certainly not because a corporation says they’re operating the vehicle in order to make the world a better place.

I can see insurance companies applying a distinction in terms of rates on the basis of stats, but not the courts in terms of decisions.

3 Likes

Right, but how does accountability play out in the FSD case? In the human case, we can take away the driver’s license or put them in jail, ultimately removing them from the pool of drivers but not changing the overall statistics. In the FSD case, removing the system from deployment fundamentally changes the statistics (it makes roads less safe; see the sketch below). Or is there a way to leave the system in place while creating strong incentives to improve the system within some reasonable amount of time?
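A minimal back-of-the-envelope sketch of that claim, with entirely made-up numbers (human_rate, fsd_rate, and fsd_miles are all assumptions chosen for illustration, not real Waymo or NHTSA figures):

```python
# Hypothetical numbers only: if a statistically safer automated fleet
# is pulled from the road, its miles revert to (assumed) riskier human
# drivers, so expected deaths go up.
human_rate = 1.2e-8   # assumed human-driver deaths per vehicle-mile
fsd_rate = 0.3e-8     # assumed (lower) automated-fleet deaths per vehicle-mile
fsd_miles = 2e9       # assumed annual miles driven by the automated fleet

deaths_with_fleet = fsd_rate * fsd_miles    # fleet keeps driving
deaths_if_removed = human_rate * fsd_miles  # same miles, human drivers

print(f"expected deaths with fleet deployed: {deaths_with_fleet:.0f}")
print(f"expected deaths if fleet removed:    {deaths_if_removed:.0f}")
```

With these assumed rates, removal roughly quadruples expected deaths on those miles, which is the whole tension: the usual punishment (taking the operator off the road) is itself statistically costly.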

It depends on a number of factors, including frequency of incidents. If there’s fraud or a cover-up involved, the executives or (in a closely held company) major shareholders may face direct personal consequences. Otherwise, you might see a recall of the vehicles from the streets, financial penalties, removal of fleet operation licenses, etc. and perhaps the corporation shutting down. As long as there’s strong regulation and a relatively uncorrupt system, though, no court is going to shrug it off or give the automated vehicle operator special treatment, even on the basis of stats that are claimed to signify a greater public good. Look at what’s happened to Boeing recently; they knew better than to argue that those jets had to be kept in service for the greater good.

As noted above, actuaries look at the stats in a different way. Ultimately the insurers may be the final arbiter of whether a company can operate an automated fleet legally and in a financially sustainable way.

tl;dr: longtermist philosophical defenses aren’t going to have much of an impact in terms of liability, as much as the techbro CEOs and investors (and their fanbois) wish they would.

4 Likes

Quite the opposite, as you asked those questions in response to my post and even quoted the relevant sections you were querying. If you were just submitting some philosophical maunderings for consideration, it would have been better to respond to the OP.

1 Like

Apologies if I didn’t make it clear enough. I did include multiple caveats to my comments about what I wasn’t saying.

1 Like