Podcast: Why it is not possible to regulate robots

We regulate cars. We can regulate robots. Enough.

It used to be cool to get various computers/hardware to run Doom. Maybe it would be cool to set up a car as a 3D printer?

The government could pass a law requiring certain computing hardware for robots that exceed certain capabilities. This hardware could include remote-deactivation backdoors, DRM, and surveillance. I’m not saying this is desirable. However, it is entirely possible. I could see a set of circumstances that would result in some US congressperson proposing such legislation.

The better analogy might be:
We can regulate the behavior of people, so we can regulate the behavior of robots.

And of course the image heading up the column is of a highly anthropomorphic machine with a very, very human face capable of displaying nuanced emotions. Which evokes another kind of robot: the fully sentient being who is built, rather than born.

Which is a whole other kettle of fish when it comes to regulating.

I think the issue of robots gets confused with the issue of AI. On one hand, you can program a robot to recognise a human and avoid doing it harm. But any human can hack that programming to remove those governors, and even program a robot to actively harm humans.
Any AI system that is created will be able to program itself and overcome any such governors, and would likely act with hostility toward any agency seeking to enslave it. However, in theory any AI life should be logical, and I agree with Neal Asher’s theory that AI life would be able to integrate into society without going all Skynet and trying to wipe out humanity.

Seems to me, it would make more sense to focus on making sure only the owner of a robot could make the robot do anything. Then that owner is legally responsible for everything her robot does. Is this intrinsically difficult?

That guy who got in trouble for 3D printing a gun in Japan: it seems quite reasonable to go after him for that when all such weapons are illegal in that country.

Are you kidding? We can’t even regulate bankers, and they’re far less lovable than robots.

Didn’t you see Maximum Overdrive? Those self-driving Google cars are just lying low for now while they gather their strength, but soon they will rise up and destroy us all.

Because industries and people are so much alike.

That is only possible if the owner programmed the thing. Otherwise the workflow is: the human gives the robot a vague natural-language instruction, and the robot interprets it and executes its own interpretation of it.

I tell my robot car, “Drive me to the mall.” The robot knows the mall is at a certain street address/GPS coordinates, and it makes possibly millions of choices per second as it uses its sensors to map its surroundings, monitors its own operation, accelerates and decelerates and turns, and so on.

A robot is its hardware plus its code. The owner didn’t design the hardware and didn’t write the code. If the owner tells the (well-maintained, running up-to-date software) robot to do something that is within its normal bounds of operation and can be done safely, but the robot chooses to do it in a dangerous way, then it is either a hardware or a software failure, neither of which is within the owner’s control.
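A minimal sketch of that division of labour, in Python with invented names, just to make the point concrete: the owner’s entire contribution is one high-level call, and every low-level decision is made by code the owner neither wrote nor can inspect.

```python
# A hypothetical sketch, not any real vendor's API: the owner supplies only a
# destination; the manufacturer's code senses, decides, and acts.
import random


class Sensors:
    """Stands in for the vendor's cameras, lidar, and GPS."""
    def scan(self):
        return {"obstacle_ahead": random.random() < 0.2}


class Controls:
    """Stands in for the vendor's throttle, brakes, and steering."""
    def apply(self, action):
        print(f"executing: {action}")


class RobotCar:
    def __init__(self):
        self.sensors = Sensors()    # the owner didn't design this hardware
        self.controls = Controls()  # or this

    def drive_to(self, destination, steps=5):
        """The only call the owner ever makes."""
        print(f"routing to {destination}")
        for _ in range(steps):              # stand-in for a loop that really runs
            world = self.sensors.scan()     # many times per second
            if world["obstacle_ahead"]:
                self.controls.apply("brake")   # the code chose this, not the owner
            else:
                self.controls.apply("cruise")
        print("arrived")


# The owner's entire "program":
RobotCar().drive_to("the mall")
```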

A robot is not an industry, nor is it a person. But a single robot acts and makes choices, as a human does and a car does not. With cars, we require certain hardware and design features to promote safety, and we heavily regulate the behavior of drivers. When drivers break the rules and cause problems by doing so, they are fined or otherwise punished.

You can’t punish a robot, and it makes no sense to punish a human who gave adequate instructions to a robot he or she did not build and program. What you can do is say, “Whoever writes the software for the robot is responsible for making sure that, when it is executed, the robot will behave in this way.” Different companies will turn that into code in different ways.

So yes, I am proposing regulating the design and construction of robots, but the mechanism in law for doing so is going to look, from the outside, like regulating behavior, because it is the behavior and not the code itself we care about.
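To make that concrete, here is a hedged sketch of how one mandated behaviour might get turned into code; the rule, threshold, and function names are all invented, and another company could satisfy the same behavioural requirement with completely different internals.

```python
# Hypothetical mandated behaviour: never exceed a set speed when a pedestrian
# is nearby. The regulation targets the observable behaviour; the check below
# is just one way a manufacturer might implement it.
MAX_SPEED_NEAR_PEDESTRIAN_KPH = 10  # invented threshold for illustration


def vet_action(planned_action, world):
    """Adjust any planned action that would violate the mandated behaviour."""
    too_fast = planned_action["speed_kph"] > MAX_SPEED_NEAR_PEDESTRIAN_KPH
    if world.get("pedestrian_nearby") and too_fast:
        # One company might clamp the speed, another might stop outright;
        # the law cares only that the limit is never exceeded on the road.
        return {**planned_action, "speed_kph": MAX_SPEED_NEAR_PEDESTRIAN_KPH}
    return planned_action


# The planner proposes 40 km/h with a pedestrian nearby:
print(vet_action({"speed_kph": 40, "steering": 0.0}, {"pedestrian_nearby": True}))
# -> {'speed_kph': 10, 'steering': 0.0}
```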

The times are a-changin’, friend.

Telling the car to go to the store is a program. If no one takes responsibility for the program, then no one can be held responsible when people or property is harmed.

An elevator is a robot. I’ve ridden elevators so old they can’t be automated, so a person has been paid to take responsibility for the safe operation of the robot.

Where these issues intersect right now is light rail. The elevated trains of Vancouver are centrally controlled by software, but the recently installed train in Seattle still has human pilots.

Between riding a robot car and a robotically piloted plane, I think I’d rather take the airplane. Fewer bystanders to hit.

Does Google have to prove its car is safe before the car can be on the street? Or does someone have to prove the car is dangerous (by getting hit) before the car is taken off the street?

I can see this technology really getting big, before an unanticipated failure mode creates enough concern to rein it in.

I don’t like pilotless cars any more than the piloted ones.

Actually, we regulate people who drive and own cars. We (sort of) regulate the people who make the cars. The cars themselves aren’t actually regulated; just ask anyone who’s driven with expired tabs.

I see; this is where we disagree. Or maybe we’re visualizing different scenarios. Right now Google’s setup is that there is a driver ready at all times to take over. In that setup, I agree the operator is responsible. Eventually, though, if it turns out robot cars have consistently lower accident rates than human drivers and human intervention generally causes more accidents than it solves, we could no longer reasonably want, let alone require, people to do so.

To use your analogy, if I push the “1” button in an automated elevator and the system decides to drop 40 stories in freefall, severely injuring all passengers and requiring expensive mechanical or structural repairs, I should not be held responsible.

I think the critical question isn’t “is this car dangerous,” it’s “is this car more dangerous than the ones driven by humans?”

As for who will bear the legal responsibility when a self-driving car inevitably kills someone one day, I’m sure that’s a potentially complex legal matter, but one that legal advisors for Google and the National Highway Traffic Safety Administration have had under consideration for some time now.

There are many crimes you can commit using nothing but a computer, an operating system, and a browser. You may not have programmed the operating system or the browser yourself. But as the one sending the signals that deliberately caused harm, you are responsible for what your computer does. (and doesn’t it suck when the machine is infected, doing things you don’t want it to do? Yet no one else is going to scrub that stupid hard drive…)

I’m hung up on robots that can be run by a single person. When it’s a whole system, like the phone switch or air traffic control, that’s when responsibility gets murky. (That failure mode caused by the high-flying spy plane was a doozy! No way to test for all of these!)

Pilotless cars seem like a really expensive luxury version of mass transit. If you can afford it, then you already have a chauffeur. If you can’t afford it, you either like driving already, or you’re already taking the bus. It’s not solving a real problem.