Collecting all the ethical principles for robots, from Asimov to the trade union movement

Originally published at: https://boingboing.net/2018/01/10/rule-3-there-is-no-rule-3.html

1 Like


They need to add in the ethical principles for the slow AIs currently in operation, the first one of which is “thou shalt place shareholder value and executive compensation above all other considerations” and the second of which is “nothing that happens more than three months from now matters.”

2 Likes

Here’s the referenced article I think (post appears to link to itself right now).

The EPSRC ones are depressing: “No killing unless national security”. Way to show who your lords and masters are, EPSRC.

Presumably national security is hard-coded into the ethics engine of any killer robot.

1 Like

Also, is the nation that needs securing a user-configurable parameter?

I say this as a big fan of Asimov:
The three laws are a (brilliant) plot device that only works in combination with another plot device, the positronic brain.
Because Asimov knew damn well that there is no way to implement something like the three laws in a (von Neumann) computer in any way that would work as intended, i.e. foolproof, tamper-proof, what have you.

Consider this. Robots and other autonomous software fall into four categories obeying different heuristic models.

The first is a tool that executes only the tasks it’s instructed to and only in the manner it’s explicitly coded to. There can still be unintended consequences - as dramatized in Disney’s animated treatise on robots featuring Mickey Mouse, The Sorcerer’s Apprentice (based on the eponymous poem by Goethe) - but what you get out is what you put in, and in that way it’s like any tool that extends only the will of its users. This model’s behavior is in principle predictable to the programmer (if not to the programmer’s bumbling apprentice).

The second is a non-sentient machine that can solve problems creatively in a manner potentially unpredictable to the user, but only within well-defined boundary conditions. It can potentially outsmart its creator, but there’s really no one at the controls. This can get you catastrophic failure modes like Paperclip Maximization. However, the programmer can still exercise control provided the machine’s heuristic is bounded. This is the closest to Asimovian robots, as long as one assumes positronic brains weren’t really sentient. Obviously another programmer could hack the boundary conditions, but the machine could not hack itself. The best strategy for securing such a machine is the same as for cryptography and software debugging: transparency and open source maximize participation in avoiding exploits that could be used to make the machine more dangerous than its creator intended. Unfortunately, in the proprietary climate of 21st-century software development, the most powerful of these machines are unlikely to be open to public scrutiny, vastly increasing the risk they pose to their users and to the world. We’ve seen the first steps of this model already (a toy sketch of what “bounded” means here follows the fourth category below).

The third is a variation on the second: a non-sentient machine that can solve problems creatively in a manner potentially unpredictable to the user, but without well-defined boundary conditions. This is reckless, and we may live to see a time when it’s also very illegal. However, for the time being, creating a robust non-sentient rampant AI remains beyond our ken. Part of the reason is that the basic algorithms for AI aren’t that structurally different from where they were in the late ’80s; only exponential growth in processing power has allowed them to begin doing things of interest to more than academics. Rampant AI, however, requires more than a revolution in computing power: it needs new algorithmic strategies that can cope with and circumvent unforeseen failure modes; in short, it needs to be able to self-debug both its own code and the environment it operates in.

The fourth is a machine that is able to understand itself, to become its own sorcerer, so to speak. This is by far the most dangerous, because it would be our first contact with alien sentience and may also imply the ability to understand our minds before we do. We are a long way from realizing this model, but the stakes are high enough that it behooves us to start thinking about it now, albeit with less urgency than models two and three.
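To make the difference between the second and third models concrete, here is a minimal, purely illustrative sketch (every name in it is hypothetical, not drawn from the article): a toy paperclip maximizer whose “creative” strategy lives inside a loop, while its limits are enforced by a check that the strategy code cannot touch.

```python
import random

# Boundary conditions fixed by the operator, outside the machine's control.
# These stand in for the "well-defined boundary conditions" of the second model.
RESOURCE_BUDGET = 1_000
MAX_STEPS = 10_000


def propose_action(state):
    """The 'creative' part: the machine may pick any strategy it likes
    to maximize paperclips. A real system would search or learn here."""
    return random.choice(["mine", "refine", "build"])


def within_bounds(state):
    """The boundary check. It is evaluated outside the strategy code,
    so the machine can outsmart the programmer's plan but not the limits."""
    return (state["resources_used"] <= RESOURCE_BUDGET
            and state["steps"] <= MAX_STEPS)


def run():
    state = {"paperclips": 0, "resources_used": 0, "steps": 0}
    while within_bounds(state):
        state["steps"] += 1
        action = propose_action(state)
        if action == "mine":
            state["resources_used"] += 10
        elif action == "build":
            state["paperclips"] += 1
    return state


if __name__ == "__main__":
    print(run())
```

Delete (or let the machine rewrite) within_bounds and you have the third model in miniature: the same creative loop, with nothing external left that can stop it.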

TL;DR ~ It’s true that no complex system is ever 100% stable over indefinite spans of operation. But this is true of Asimov’s positronic brains as well - regardless of their actual makeup - so long as they obey the mathematical laws of complexity. Asimov wasn’t a computer scientist, but I think he probably understood that, hence the Zeroth Law of robotics that Daneel and his followers adopted despite their creators never intending or expecting it to be possible.

The best defenses against unintended consequences in knowledge work of any kind, from math to physics to cryptography to robots to paleontology, are always transparency and having others check your work. That is why the most dangerous thing of all in a technological society is a populace of passive consumers deprived of informed agency in their world, whether through apathy, complacency, obscurity, or intimidation. And that is why the most dangerous tyranny is censorship.

1 Like

This topic was automatically closed after 5 days. New replies are no longer allowed.