Ten recommendations to make AI safe for humanity

Originally published at: https://boingboing.net/2017/11/01/no-black-boxes.html

AI safe for humanity? Where is the post apocalyptic singularity fun in that?


A lot of these seem like the sort of perfectly sensible precautions you would take in supervising any new employee.


Even if all those recommendations are followed successfully, it won’t stop AI from taking all our jerbs, leading to an inevitably rough transition period where we work out a new wealth distribution system. Based on history, I don’t see it going well.

Anyone got an answer for that?

Good article, if you haven’t seen it:


Mandatory SMBC references:

I came here to make this same comment.

I leave satisfied.

Put another way: The bigger threat of AI is the quickly approaching utter destruction of our economic system, at a rate that our current governing bodies are not equipped to deal with.

Core public agencies… should no longer use ‘black box’ AI and algorithmic systems.

There’s the full-employment clause for you engineers.


Ten? Asimov only needed three.

Misunderstanding: it’s actually two rules, they’re just numbered in binary.


I’m in favor of not having AI be ruinous to humanity; but this report casts a rather grim pall over the prospect.

The advice seems to boil down to:

  • “Don’t do this profitable and convenient thing” (1, 2, 3, 5, 10)

  • “Deliver better quality and reliability from incomprehensible masses of neural-network spaghetti than you currently bother with in even the simplest areas of software engineering.” (1, 2, 3, 5, 6, 7, 9, 10)

  • “Don’t use technology to indulge or automate the abuses you already enjoy inflicting” (1, 2, 3, 4, 5, 6, 7, 8, 9, 10)

That…won’t be fun to obtain compliance on.


Team Neurology might need to start trying harder if we want to be able to hire engineers despite a ban on black-box expert systems. Perhaps a new breed of civil-service nematodes? OpenWorm might have those figured out in time…

How is ANYTHING in this article species-specific to humans? Avoiding black boxes and biases is done specifically to prevent framing such systems to the advantage of one group over others. If done thoroughly, it means that your species is also not privileged in comparison to others.

Cool. Now nationalise Google and fucking Facebook.

Let them have the jerbs. When they get good at them then we’ll see what else they can do.

This topic was automatically closed after 5 days. New replies are no longer allowed.