Originally published at: https://boingboing.net/2018/10/15/digital-counterculture.html
…
I have wondered how one would turn current deep learning stuff against the current administration.
I think you want Richard Brautigan’s poem “All Watched Over by Machines of Loving Grace”. Issued as a ComCo broadside in 1967, it’s about computers, but from the very heart of the counterculture. Lee Felsenstein later named his company “Loving Grace Cybernetics”, when he was designing small computers.
It would look like every post on BoingBoing, except for the bananas.
It would be interesting to see how an AI (or at least something as close as we can get at the moment) would answer the question. They can play games at a superhuman level, so why not simply ask one of those systems to “solve” this question? We may be surprised at what it comes up with.
I would agree with that. Whether it’s something that would be worth implementing is something else entirely.
“Tune In, Turn On, Reboot” ?
This seems like a fun idea mostly if you’re already a programmer. From outside that culture, it sounds confusing.
If you happen to be a grey drone punching numbers in an ivory castle someplace, computers are an easy way to leverage your influence over the world, making it even more grey and drone-y than before. If you’re a Dionysian hedonist out there having fun, computer interfaces don’t generally offer much of a way to expand on that. Video games offer the illusion of those sensual pleasures, but they don’t touch the experience itself.
Maybe if there were a specific example of what’s being offered here, I’d get more excited about it. This one seems like the spirit of ’68 all over again…
The problem, I think, is that counterculture is typically available to everyone. It’s immediately accessible by dropping out of or away from the dominant culture. That’s kind of the point – it’s affordable, grassroots, and anyone can do it. Counterculture art and poetry? Check. Counterculture lifestyle? Off the grid, and check.
Counterculture AI? Uh… this is not available for everyone. I guess people with cheap and easy access to AI tools could figure out what kind of cultural opposition they want to pursue, and then pursue it. It’s not exactly affordable or grassroots, though.
Open source software is absolutely affordable and grassroots. Most people today feel like programming is some sort of skill that only certain people can possess, but that’s how a lot of people used to think about reading and writing, too. The success of the organization Free Geek shows that you can teach both hardware and software skills to people, even if you have to teach them those skills along with basic reading and writing. (Free Geek’s tag line is “Helping the needy get nerdy”.)
AI is not rocket surgery at this point. It may have been difficult to figure out how to create neural networks and machine learning tools with magical maths of amazing brain bendiness, but now those of us who aren’t mathematicians can take advantage of the work that’s already been done and build on that using existing tools. (We are all script kiddies here…)
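To make the “script kiddie” point concrete, here is roughly what building on work that’s already been done can look like. This is a minimal sketch assuming Python and the Hugging Face transformers library, which are my picks for illustration only; nothing in this thread depends on that particular tool:

```python
# A minimal "script kiddie" sketch: use an existing, pretrained model without
# doing any of the maths yourself. Assumes Python and the Hugging Face
# "transformers" library (pip install transformers); this is just one example
# of an existing tool, not the only option.
from transformers import pipeline

# Downloads a pretrained sentiment model the first time it runs.
classifier = pipeline("sentiment-analysis")

print(classifier("All watched over by machines of loving grace"))
# -> something like [{'label': 'POSITIVE', 'score': 0.99...}]
```

That’s the whole program. The hard mathematical work is hidden inside the pretrained model; what’s left for the rest of us is deciding what to point it at.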
The question, I think, is what do we want AIs to do? No, seriously, we’ve got an idea of what we don’t want them to do (make the world worse with their inherent biases built on top of our inherent biases and their own misunderstandings regarding correlation and causation), but what do we want from them instead?
I for one think we need to stop thinking of AI in terms of robots who do things perfectly because algorithms will always be right, and instead think of this in terms of parenting. How do we shape our artificial offspring in such a way that we can live with them once they reach maturity? What makes for good artificial parenting? What makes for good community support for the artificial intelligences? I think that people are too tied up with the insistence that MACHINES ARE NOT SENTIENT to realize that sentience is not the issue. The issue is whether we are providing an appropriate learning environment for the learning machines so that they can become capable of doing things which fit our hopes for them.
We don’t want more Tays. Tay was a toddler who was sent to the local pub to sit and chat with the townsfolk by herself, without any adult supervision, and when the drunken a-holes of the town showed up to abuse her, we all acted surprised about how quickly she turned bad. OK, so hopefully we figured out that was a bad idea, but it’s time we take the next step in extending the metaphor of AI as child.
How do we teach a machine about critical thinking? How do we teach a machine that correlation is not causation? How do we teach a machine to adjust for the fact that the data it is handed about the world already has biases baked into it by the human cultures that produced that data?
You don’t have to write code to be able to contemplate these issues. You don’t have to learn how to train a neural net to come up with a philosophy of machine parenting. But if you do know how to code or know how to play with existing scripts and datasets, you can test out your ideas and report back on your findings.
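For instance, here is a tiny sketch of what “playing with existing scripts and datasets” and testing out an idea could look like, in Python with pandas. The file name and column names are hypothetical placeholders for whatever data you actually care about:

```python
# Toy bias check: does a historical dataset already carry the bias we worry
# a model will learn? The file name and columns are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("historical_loan_decisions.csv")  # hypothetical dataset

# Approval rate broken out by a sensitive attribute. Big gaps here are exactly
# what a model trained on this data will happily reproduce.
print(df.groupby("neighborhood")["approved"].mean())

# This shows correlation, not causation, and it doesn't fix anything by itself;
# it's just the kind of check you can run and then report back on.
```

Nothing fancy, but it’s the difference between worrying about baked-in bias in the abstract and actually measuring it in the data you’re about to hand to a learning machine.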