What would a "counterculture of AI" look like?

Open source software is affordable and grassroots. Most people today treat programming as some sort of skill that only certain people can possess, but that's how a lot of people used to think about reading and writing, too. The success of the organization Free Geek shows that you can teach both hardware and software skills to people, even if you have to teach those skills alongside basic reading and writing. (Free Geek's tag line is "Helping the needy get nerdy".)

AI is not rocket surgery at this point. It may have been difficult to figure out how to create neural networks and machine learning tools with magical maths of amazing brain bendiness, but now those of us who aren’t mathematicians can take advantage of the work that’s already been done and build on that using existing tools. (We are all script kiddies here…)

The question, I think, is what do we want AIs to do? No, seriously, we've got an idea of what we don't want them to do (make the world worse with their inherent biases built on top of our inherent biases, and their own confusion of correlation with causation), but what do we want from them instead?

I for one think we need to stop thinking of AI in terms of robots who do things perfectly because algorithms will always be right, and instead think of it in terms of parenting. How do we shape our artificial offspring in such a way that we can live with them once they reach maturity? What makes for good artificial parenting? What makes for good community support for artificial intelligences? I think people are too tied up with the insistence that MACHINES ARE NOT SENTIENT to realize that sentience is not the issue. The issue is whether we are providing an appropriate learning environment for the learning machines, so that they can become capable of doing the things we hope for from them.

We don’t want more Tays. Tay was a toddler who was sent to the local pub to sit and chat with the townsfolk by herself, without any adult supervision, and when the drunken a-holes of the town showed up to abuse her, we all acted surprised at how quickly she turned bad. OK, so hopefully we figured out that was a bad idea, but it’s time we take the next step in extending the metaphor of AI as child.

How do we teach a machine about critical thinking? How do we teach a machine that correlation is not causation? How do we teach a machine to adjust for the fact that the data it is handed about the world already carries biases baked in by the human cultures that produced that data?

You don’t have to write code to contemplate these issues. You don’t have to learn how to train a neural net to come up with a philosophy of machine parenting. But if you do know how to code, or know how to play with existing scripts and datasets, you can test out your ideas and report back on your findings.
