Corporations were the first AIs. Unfortunately, parasites took them over and are using them to smash non-corporate entities.
Heh, didn’t see that your Locus article came to the same conclusion. Great article!
My money is on Amazon to give birth to Skynet.
The main reason we’re still talking about them is because they flow out of our media faucet like tap water.
Other movies and fiction have done the same thing much better. But we don’t see those repeated eight times over the weekend on channel 3.
Which others would you suggest?
Not Disney then?
Which is generally the problem - an AI that doesn’t think like us is unlikely to have any empathy for us. Give it the responsibilities of SkyNet and it’ll wipe us out in a heartbeat. It doesn’t hate us, it just projects a 14.2% improvement in efficiency if we’re not around.
Whereas the Facebook example suggests that they’re actively training their systems to be evil…
Have you considered the possibility that the movies are repeated because they are popular, and not the other way around?
A) Who’s still talking about The Matrix (other than to complain about the disappointment of the sequels)?
B) Why are you trying to find deeper meaning in something as base as cool sci-fi, awesome robots and essentially monsters?
Soap box syndrome…
Several thousand people in the past 24 hours:
A lot more than that, even:
Nice article, but I think you quoted the wrong part.
Could this be a self-reinforcing feedback loop? Things that seem popular get shown more and therefore become more popular; people remember them (provided they don’t suck enough to be unpopular on their own merits), and because they’re well known enough to stand out in the sea of also-rans, they stay popular…?
There are no sequels. Repeat after me, there are NO sequels.
People keep wasting time trying to find meaning where there is none. The most egregious example is existence itself.
I was dragged to see the new Terminator movie, it was OK.
Then, looking online, I realized it’s supposed to be part of a trilogy. WHY? Enough is enough.
I think that a more generic response than ‘corporations’ is ‘loss of control of our lives’. Many SF stories invoke the evil corporation as the villain since villains (and their minions) need to be embodied in order to provide a target for the hero. A corporation is something we all know and understand to have selfish motivations, so it provides a quick shortcut to the setup.
An AI designed by a corporation to maximize profits while reducing expenses at any cost would indeed be a fearful thing if given free rein to do so, but we actually do still have a tenuous grip on the cutoff switch. Corporations are creations of the government, and can be modified or destroyed at will. The real line to be crossed into hopelessness occurs when we all give up on the idea that we have any influence on the government at all.
Governments are instruments of the ruling class. I wouldn’t put much hope there. I can’t even see them responding to existential threats to everyone including the ruling class: nuclear weapons, global warming, ocean acidification, etc.
Our would-be corporate overlords as would-be killer AI is great casting: the AI is often fiddling with controls to avoid losing control. They have complicated relationships with women. They pose as all-powerful but later seem hollow, mythologized, and dysfunctional. All are fixed by shared knowledge and a few lines of code.
So … would Terminator sequels stop if voters voted to limit limited liability, socialize public costs and appoint receivers for the assets of misbehaving corporations?
Tenuous. Does a legislative cutoff switch work against a constitutionally protected entity?
We don’t need artificial intelligences that think like us, after all. We have a lot of human cognition lying around, going spare – so much that we have to create listicles and other cognitive busy-work to absorb it. An AI that thinks like a human is a redundant vanity project – a thinking version of the ornithopter, a useless mechanical novelty that flies like a bird.
We need machines that don’t fly like birds. We need AI that thinks unlike humans.
Quoted for truth! It just isn’t very practical to have machines look or think like people. I’d even go so far as to say that it would be grotesque.
What’s the difference? Machines completely lack empathy for us now, but don’t engage in genocide. Emotions are an emergent property of animal biology, so applying them to machines would only be a cheap approximation for something that doesn’t fundamentally apply to them.
The real rationalization for killer AIs is much simpler: humans are tempted to train AIs to kill humans deliberately. It decreases risk to the attackers, and provides even more deniability than police or soldiers. As far as making machines act on human behaviors goes, there is a lot more money and effort being directed at teaching them how to kill. So solving the killer AI problem is merely an extension of the killer human problem.
We are building the damn thing as we speak.
The crystallization point isn’t too far off…