Waymo self-driving car hit bicyclist

AirTags have a one-year battery life, assuming the use case is comparable to Apple's benchmark spec of one precision find per day. The use case for something that acts as a constant beacon is much more power-hungry. Yes, if your solution is required for cyclists, it is also required for pedestrians: if you can't see one, you can't see the other. And again, I don't currently need one as a pedestrian or cyclist, so I get the extra cost of the beacon and the regulatory burden of requiring it, while the person in the self-driving car gets all of the benefits.


We aren’t even talking about pedestrians - we are talking about bicycles - and probably motorcycles.

The use case here is something that could be powered in the same manner as a bike light and that might make cyclists 99% safer from self-driving cars. The benefit is not being hit, even if you make a mistake as a human.

Going back to the start: on a bike? Yep. Cheap/free? Yep. The idea is that the self-driving car company doing the testing would have to pay for it. Required? Nope, meaning it would be self-selecting.

I highly appreciate that you took the argument to carrying one as a pedestrian when that wasn’t even my idea. Strawman somewhere else.

You are looking at this from a point of extreme privilege.

Let’s rephrase the implications a little crudely to make it obvious: “Poor people don’t deserve to be protected from self driving cars”


I’m going to be a pessimist here and show an absolute lack of faith that this can be accomplished under the current US legal and judicial systems. The self-driving car may be the service that Waymo provides, but that service is just a byproduct of whatever proprietary software makes the thing work. I can see government regulations getting to the point of making sure the car has functioning turn signals. But I think any regulations past that are going to get hit with a double-whammy of “protecting trade secrets” and being generally incomprehensible by anyone trying to impose said regulations.


You don’t necessarily need to regulate the software, just the effects.

Require incident reporting, with full version info. Hammer the company for violations, especially persistent ones.
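A minimal sketch of what "incident reporting with full version info" could look like as structured data. This is purely illustrative: the field names and values are assumptions, not any actual regulator's or Waymo's schema.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical incident record: regulate the effects, not the software
# internals. Every field name here is illustrative only.
@dataclass
class IncidentReport:
    vehicle_id: str
    timestamp_utc: str        # ISO 8601
    location: str
    description: str
    software_version: str     # the "full version info" argued for above
    map_data_version: str
    severity: str             # e.g. "near-miss", "injury", "fatality"

report = IncidentReport(
    vehicle_id="AV-0042",
    timestamp_utc="2024-02-06T21:15:00Z",
    location="example intersection",
    description="contact with cyclist during unprotected left turn",
    software_version="planner 5.3.1 / perception 8.0.2",
    map_data_version="sf-2024-01",
    severity="injury",
)

print(json.dumps(asdict(report), indent=2))
```

With versions attached to every report, a regulator can spot persistent violations tied to a particular software release without ever reading the proprietary code.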


Was anyone here responsible for this? /sarcasm


This is a reasoning machine, so we need to ensure it behaves correctly. In order to do this before deployment it needs to be openly tested. These tests need to be comprehensive, robust, reproducible, and open, with rigorous open review. The companies who hope to deploy these systems need to be required to help fund these benchmarks.
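To make "comprehensive, reproducible, and open" concrete, here is a toy sketch of a seeded scenario benchmark. The `plan` function is a stand-in for the vendor's system under test; its interface and the pass criterion are assumptions for illustration, not a real test suite.

```python
import random

# Stand-in for the system under test: stop whenever a cyclist is
# within 10 m. A real driving stack would be vastly more complex.
def plan(scenario):
    return "stop" if scenario["cyclist_distance_m"] < 10 else "proceed"

def make_scenarios(seed, n=100):
    # A fixed seed makes every run reproducible, so open reviewers
    # can regenerate exactly the same test set.
    rng = random.Random(seed)
    return [{"cyclist_distance_m": rng.uniform(0, 50)} for _ in range(n)]

def run_benchmark(seed=42):
    scenarios = make_scenarios(seed)
    failures = [s for s in scenarios
                if s["cyclist_distance_m"] < 10 and plan(s) != "stop"]
    return len(scenarios), len(failures)

total, failed = run_benchmark()
print(f"{total} scenarios, {failed} failures")
```

Because the scenario generator is seeded and published, anyone can rerun the exact benchmark against a new software version and verify the reported failure count.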



In the sense that a flatworm is a reasoning animal, maybe. Flatworms actually learn to avoid negative experiences, though, and I don't think this does, so it's probably too strong a word.


When a person throws up or urinates in a self driving car; who cleans it up before it picks you up?


Well clearly the reasoning machine would be self-cleaning… just got to get the companies to test that out first… /s


Sorry, I couldn't resist responding to something so flat-ly wrong :sweat_smile:

Learning and reasoning are different though related concepts. Reasoning is more directly related to planning, inference, belief, and decision making. Learning is how the system arrives at the way those components are processed. The point of learning ahead of time, which these systems have certainly done while maximizing a set of objectives that includes mitigating risk, is to minimize the need to learn later.
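The distinction can be sketched in a few lines of toy code: learning produces parameters offline, while runtime reasoning (inference over a belief state, then a decision) only uses them. All numbers and names here are illustrative assumptions, not a model of any real driving stack.

```python
# Learning happens offline: it produces the parameters.
LEARNED_RISK_WEIGHT = 5.0   # stands in for trained model weights

def update_belief(belief, observation):
    # Inference: fold a noisy observation into the belief state
    # (here, an estimated probability that a cyclist is present).
    return 0.8 * belief + 0.2 * observation

def decide(belief):
    # Decision making: weigh learned risk against making progress.
    risk = LEARNED_RISK_WEIGHT * belief
    return "yield" if risk > 1.0 else "proceed"

# Runtime reasoning uses the learned weight but does no learning.
belief = 0.0
for obs in [0.0, 1.0, 1.0]:   # sensors increasingly suggest a cyclist
    belief = update_belief(belief, obs)
print(decide(belief))
```

Nothing in the runtime loop changes `LEARNED_RISK_WEIGHT`; if the system behaves badly, that weight has to be fixed by more learning before the next deployment, which is exactly the gap described below.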

The problem with this system isn't that it isn't capable of learning. Nor is it that it can't reason: it can plan, make inferences about the objects it can see, maintain a belief state, and make complex decisions in a way that far exceeds a flatworm. The problem is that it doesn't reason well enough and needs to learn more, as is evident from its behavior in a real system.

You don’t want these things figuring it out post hoc after they’ve hurt someone: you want them to reason about risk well enough to avoid it 99.999% of the time.


I’m a little surprised that it took this long before angry citizens destroyed one of these cars. But trying to get through a rowdy and dense crowd made up of both Lunar New Year and Super Bowl celebrants sure seems like a bad choice.


I thought they had 1.5 mechanical turks monitoring the cars?

Any human would have told it to turn around and take a really wide route around. (Or previously set the whole area as a no-go zone in their system for the celebration.)

