Ted Chiang: Elon Musk's fear of runaway AI is a projection of his repressed terror of runaway corporations

[…] some have proposed that we ensure that any superintelligent AIs we create be “friendly,” meaning that their goals are aligned with human goals. I find these suggestions ironic given that we as a society have failed to teach corporations a sense of ethics, that we did nothing to ensure that Facebook’s and Amazon’s goals were aligned with the public good. But I shouldn’t be surprised; the question of how to create friendly AI is simply more fun to think about than the problem of industry regulation, just as imagining what you’d do during the zombie apocalypse is more fun than thinking about how to mitigate global warming.

This is so dead-on. The vast majority of us are already ruled over by inhuman machines that serve the interests of an elite class of humans. When Musk says he wants AI to “serve humanity”, he really means he wants it to serve him the same way Tesla Inc does. Either way, the rest of us are fucked. The battle line isn’t between humans and machines, it’s between all of us and the power elite.

The goal of making AI accessible to everyone is good, but under our current socioeconomic conditions his efforts will not accomplish it.

If access to AI does not impart tremendous, world-changing power, then Musk is wasting his time and resources. If it does make its owners hyper-empowered, then we can expect the first people to become hyper-empowered to have an enormous first-mover advantage in molding the world.

Who are those owners likely to be? They will be the owners of current capital, since the early development of AI requires vast resources.

Actually, this is already happening. Major tech companies leverage their surveillance and computational resources to produce far more intelligent machines than any of the rest of us can, to say nothing of the US military, which exceeds them in both funding and information.

These institutions stand to benefit first and most from AI research, which is why they’re so invested in it. And sure, they’re open-sourcing their research, but only because that’s an effective way to do research. Open source is not inherently liberatory.

An open-source AI project might someday lead to me having an actually smart smartphone, but not until Google and DARPA have systems capable of supernatural feats of social management and control. That is to say: it will not increase my relative power, but will be used to exercise more power over me.

If we really want universal access to cutting-edge AI, we need to change the sociopolitical conditions that currently keep people from the resources needed to build and maintain cutting-edge AI systems, which coincidentally are the same resources needed to clothe and feed themselves: plain old-fashioned money.

It’s somehow no surprise that Musk isn’t interested in “disrupting” wealth distribution, though.


I like to think that The Matrix missed a big opportunity, and the sequels should have revealed that the Matrix was originally an amusement park gone awry, with the AIs fixated on keeping “customers” in the fantasy world as long as possible. Where Neo woke up was Disney World, and the pill signaled that his ticket was no longer valid, which is why the security drone ejected him.

It wasn’t AI rebelling.
It was Disney growing out of control, with the AIs merely following the directives set by the board of directors.


This is why I hope AI turns out more like Mike from The Moon Is a Harsh Mistress than some financial-industry monster.

The original speculation comparing a human organization to an artificial intelligence was Thomas Hobbes’s Leviathan: “What is the heart, but a spring?”

The State is out there! It can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop…
