Originally published at: https://boingboing.net/2018/03/01/what-happens-if-you-give-an-ai.html
…
The idea that a corporation can’t be held to account in any meaningful way is just untrue. Corporations aren’t held to account, but they could be. If a corporation were run by an AI instead of by a human, then the public would have no problem calling for its head. The people who see an important parallel between the government’s ability to shut down a corporation and its ability to seize a person’s home for no reason simply won’t see that parallel if the corporation doesn’t have a human behind it, so the political opposition to simply stripping the corporation of its legal rights would be essentially nil. Whatever agency did so could seize its assets as a windfall, and whoever set the corporation up would lose whatever stake they had put into it and say, “I guess I won’t do that again.”
Sure, it could turn out very doom-and-gloom, but it’s not an unsolvable problem, and I think it’s actually a problem likely to be solved very quickly, since the same people who normally defend the rich would fear such a corporation.
Wasn’t there a Stanislaw Lem story about the evolution of robotic law, documenting past cases where a robot was programmed to kill a human and then reassemble itself as a toaster to escape prosecution? The story progressed to some kind of satellite that collided with others, gained sentience, started adding to itself with space junk, and eventually declared itself a sovereign nation-state in orbit, all while lawyers, courts, and bureaucrats struggled to keep up with the changes. It’s a fun read, but it poses all the same questions folks are tangling with now. If I remember correctly (and I often don’t), it was published in an issue of the New Yorker in the late ’70s or early ’80s. And maybe it wasn’t Stanislaw Lem, but there were robots. I remember that much.
A: You end up with the Autofac episode of Electric Dreams.
When a corporation is large enough, it’s practically already AI.
I wouldn’t even say “practically”; it is an intelligent, non-human meta-entity whose primary motive, profit, puts it at odds with general human welfare. At least that’s how I see it.
It tries to hack the Pentagon. Duh.
Exactly so. We see very little actual criminal prosecution of corporations; have there been any major ones since Enron and Arthur Andersen in 2001? Certainly none came out of the entire subprime shitshow, which was full of criminal activity. These giant corporations are like a cancer that has disabled the immune system and redirects blood flow and nutrients (our economy) to its own metastatic growth. I don’t see how an AI could act any more sociopathically.
Reading the headline + byline, I can’t help but think that “Clive Thompson” is just a poorly disguised pen name for “Cave Johnson” …
Arguably, Facebook is already run by AI.
Lopucki / Lopuci / LoPucki
(For the record, “LoPucki” appears to be the correct spelling & orthography. Huh.)
Here’s what’s perplexing about this whole train of thought. Corporations exhibit that behavior because they are already run by nonhuman algorithms. That’s always been the case.
If you dispute an erroneous phone bill, every individual you deal with might say “that’s definitely a mistake”, but their first (and in most cases last) instinct will be that you must still pay the bill. It’s the system – the algorithm – that is running things, and you can’t argue with it because it doesn’t exist on the human plane.
Hannah Arendt and Franz Kafka were talking about this a hundred years ago; it seems like we’ve taken a step backwards, and are talking about the longstanding, problematic status quo as if it were a sci-fi threat that hasn’t happened yet.
Damn! It’s hard enough to get rid of crappy legacy code as it is.
It’s what Timothy Morton calls a hyperobject, and our in/ability to wrap our brains around the scale and workings of such a thing is indeed a big part of the problem.
If you give an AI capitalist corporate-type goals, and then give it the legal standing to pursue them, that’s probably bad. On the other hand, it seems to me that the prospect of endowing a hyperobject with legal rights is not necessarily a bad thing a priori; see, for example, recent moves toward granting the Ganges-Yamuna riverine ecosystem legal standing of its own, on par with legal personhood…
We just need good AIs with guns.
and that’s my cue.
I was going to use the “Dick, you’re fired!” scene, but figured, why bother?
I’m guessing something like yesterday’s X-Files episode, “Rm9sbG93ZXJz.”
Charles Stross recently made the case that for all intents and purposes large corporations are already AIs.
http://www.antipope.org/charlie/blog-static/2018/01/dude-you-broke-the-future.html
Maximizing shareholder value as a runaway positive feedback loop: we have that already with robber barons like Gates and Zuckerberg. Corporations don’t live in a vacuum; they are part of a larger ecosystem that usually keeps things from going off the rails. And when there is a unicorn like Facebook, it will eventually reach the edge of its petri dish, run out of food, and collapse. When you see people in Silicon Valley freak out about things like this, they are projecting themselves and their shallow motives onto AIs. And since they are largely there just to become the next robber baron, they see AIs as competition, which to them is scary.
The paperclip maximizer thought experiment is a straw man. Runaway positive feedback loops always run into something that stops them. That doesn’t mean they can’t do enormous damage before they collapse, as we’re seeing today. But when they do collapse, they leave an enormous opportunity for new innovations to come in and fill the empty niches. I believe that the existing giants will, in the end, not be able to monopolize AI as they appear to be doing now, because they are too tied to business models and ecosystems from the past. Once their food supply (advertising) runs out, they will prove too big and slow to scale the new profitable business models before they collapse. The new models will work best in the beginning, at the edges of the network, at small scales. It will take time to learn to scale that, more time than the existing big companies have. Maximizing shareholder value is based on very short time frames. So new players will come in who learn to scale and monopolize and centralize, and it will all start over again.
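To make the “edge of the petri dish” point concrete, here’s a minimal toy sketch; it’s purely my own illustration, and every parameter in it (growth rate, food supply, die-off rate) is an arbitrary assumption, just enough to show exponential growth hitting a hard resource limit and collapsing:

```python
# Toy model of a runaway maximizer with a finite food supply.
# All parameters are arbitrary assumptions chosen for illustration.

def run_maximizer(growth_rate=1.5, food=1_000_000.0, size=1.0, steps=40):
    """Each step the entity tries to grow by `growth_rate`, consuming one
    unit of food per unit of new size; when the food runs out, it starves."""
    history = []
    for _ in range(steps):
        desired = size * (growth_rate - 1.0)  # runaway positive feedback
        actual = min(desired, food)           # the edge of the petri dish
        food -= actual
        size += actual
        if actual < desired:                  # food exhausted: die-off begins
            size *= 0.5                       # arbitrary collapse rate
        history.append(size)
    return history

sizes = run_maximizer()
print(f"peak size: {max(sizes):,.0f}, final size: {sizes[-1]:,.0f}")
# Growth is exponential right up to the resource limit, then it crashes:
# enormous damage first, collapse after.
```

The specific numbers don’t matter; the shape does. Whatever the gain, a hard resource ceiling turns the exponential into a spike followed by a collapse.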
Summing up: unless corporations are brought to book, we’re all royally fucked.