I’m going to be a pessimist here and show an absolute lack of faith that this can be accomplished under the current US legal and judicial systems. The self-driving car may be the service that Waymo provides, but that service is just a byproduct of whatever proprietary software makes the thing work. I can see government regulations getting to the point of making sure the car has functioning turn signals. But I think any regulations past that are going to get hit with a double-whammy of “protecting trade secrets” and being generally incomprehensible to anyone trying to impose said regulations.
You don’t necessarily need to regulate the software, just the effects.
Require incident reporting, with full version info. Hammer the company for violations, especially persistent ones.
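To make “full version info” concrete, here’s a rough sketch of what a required report could carry (the field names are my own illustration, not any regulator’s actual schema):

```python
# Hypothetical sketch of a mandatory incident report; field names are
# illustrative assumptions, not any agency's real reporting schema.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class IncidentReport:
    vehicle_id: str            # which physical vehicle was involved
    software_version: str      # exact build of the driving stack at the time
    map_version: str           # version of the map data in use
    occurred_at: datetime      # when the incident happened
    location: tuple            # (lat, lon) where it happened
    description: str           # what the vehicle did and what resulted
    operator_intervened: bool  # whether a remote/safety operator stepped in

report = IncidentReport(
    vehicle_id="W-1234",
    software_version="2025.06.1",
    map_version="sf-2025-05",
    occurred_at=datetime(2025, 2, 9, 22, 15),
    location=(37.7749, -122.4194),
    description="Vehicle entered a street closed for a crowd event.",
    operator_intervened=False,
)
```

The point is that the regulator can then tie a pattern of incidents to a specific software build, without ever needing to see the proprietary code itself.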
Was anyone here responsible for this? /sarcasm
This is a reasoning machine, so we need to ensure it behaves correctly. To do this, it needs to be openly tested before deployment. These tests need to be comprehensive, robust, reproducible, and open, with rigorous open review. The companies that hope to deploy these systems need to be required to help fund these benchmarks.
In the sense that a flatworm is a reasoning animal. Maybe. Flatworms actually learn to avoid negative experiences, though, and I don’t think this does, so “reasoning” is probably too strong a word.
When a person throws up or urinates in a self-driving car, who cleans it up before it picks you up?
Well clearly the reasoning machine would be self-cleaning… just got to get the companies to test that out first… /s
Sorry, I couldn’t resist responding to something so flat-ly wrong (sorry, couldn’t resist the pun)
Learning and reasoning are different though related concepts. Reasoning is more directly related to planning, inference, belief, and decision making. Learning is about arriving at how those components are processed. The goal of learning, which these systems have certainly done while optimizing a set of objectives that includes mitigating risk, is to minimize the need to learn later.
The problem with this system isn’t that it isn’t capable of learning. Nor is it that it can’t reason: it can plan, make inferences about the objects it can see, maintain a belief state, and make complex decisions in a way that far exceeds a flatworm. The problem is that it doesn’t reason well enough and needs to learn more, as is evident from its behavior in real-world deployment.
You don’t want these things figuring it out post hoc after they’ve hurt someone: you want them to reason about risk well enough to avoid it 99.999% of the time.
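To make the learning/reasoning distinction concrete, here’s a toy sketch in Python (my own illustration, not how any real driving stack works): the reasoning step picks an action by scoring estimated risk from a learned model against a tight risk budget, and the learning step is what adjusts that model after the outcome is known.

```python
# Toy illustration of reasoning vs. learning in a planner.
# The risk model and its numbers are made up for the example.

# "Learned" component: estimated probability of harm for (action, situation).
risk_model = {
    ("proceed", "dense_crowd"): 0.30,
    ("stop", "dense_crowd"): 0.01,
    ("reroute", "dense_crowd"): 0.02,
}

def plan(situation, actions, risk_budget=1e-5):
    """Reasoning: pick the action with the lowest estimated risk,
    and refuse to move (stop) if even that exceeds the budget."""
    best = min(actions, key=lambda a: risk_model[(a, situation)])
    return best if risk_model[(best, situation)] <= risk_budget else "stop"

def learn(situation, action, observed_harm, lr=0.1):
    """Learning: after the fact, nudge the risk estimate toward what happened."""
    key = (action, situation)
    risk_model[key] += lr * (observed_harm - risk_model[key])

# Reasoning happens before acting; learning only happens once the outcome
# is known, which is why the reasoning has to be good enough up front.
choice = plan("dense_crowd", ["proceed", "stop", "reroute"])
learn("dense_crowd", choice, observed_harm=0.0)
```

The 1e-5 budget is just the 99.999% figure above turned into a number: if nothing the planner can think of is safe enough, the right move is to not move.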
I’m a little surprised that it took this long before angry citizens destroyed one of these cars. But trying to get through a rowdy and dense crowd made up of both Lunar New Year and Super Bowl celebrants sure seems like a bad choice.
I thought they had 1.5 mechanical turks monitoring the cars?
Any human would have told it to turn around and take a really wide route around. (Or previously set the whole area as a no-go zone in their system for the celebration.)