tell us what futuristic phenomena you fear and hope for and we’ll know what fears about yourself and your relations with the people around you are lurking, possibly unacknowledged, in your psyche.
This.
I suppose I ought to have watched that one by now.
You’re not wrong.
I believe the 1800s just called to ask “WTF!?”
It’s only THE BEST MOVIE EVER. OK, one of the best.
I meant right now.
I don’t agree with the AI X-Risk crowd, but you may as well tell me that Louis Pasteur’s germ theory was just him projecting an overwhelmingly scrupulous character, an obsessive fear of being corrupted by even the smallest sin, and a need for spiritual purification onto the world. Pasteur was right and I’m fairly sure Musk is wrong, but finding out which will have way more to do with looking at their reasoning and how well it matches up with the world than with any amount of condescending amateur psychoanalysis.
THIS needs more attention. The author is not wrong, but this is the most roundabout way to make this argument. The whole founding principle of OpenAI is that the benefits of AI research should be shared – why? Because if nobody encourages open development, they expect that some clever, well-funded corporations or governments will develop AI technologies in secret instead, and then brutally oppress the AI “have nots”. Why do they expect that? Because that’s the environment they’ve grown up in, that’s what they’re tempted to do. Sure.
I would hesitate to pillory Musk for it, though, since he actually put a shit ton of his own money towards that open AI development. He’s obviously not a socialist wonder, but I hate these essays that put him in the same bin as tech bros trying to get rich off their IPO… yes, he’s exploiting people, but he’s also making interesting research happen that might save our species in a hundred years.
That’s a giant leap better than, say, oil industry executives, who exploit people and then also actively work to sustain environmental mismanagement that could kill us all.
Yeah. I don’t have a problem with Musk, and he’s done some amazing stuff with SpaceX. You don’t have to be a psycho to think AI will be used to make murderbots; you just have to look at the kind of people who, now and historically, have held political power, and ask yourself what they would use AI for.
OTOH, the original speculation comparing an organization to an AI was Thomas Hobbes’s Leviathan, which compared governments to AIs (“What is the heart, but a spring?”).
OTGH, there’s a third type of entity that resembles rogue AIs: online mobs. To paraphrase the movie The Terminator:
The online mob is out there! It can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop …
I don’t have a problem with Musk as such; it’s just that I don’t trust men with hair implants.
A personal policy that has stood the test of time.
“What do you mean ‘we’, Kemo Sabe?”
The worst stuff that humans do is always opposed by other humans. By assuming collective guilt and collective punishment, you’re letting the worst offenders off the hook, and volunteering everyone else to share equally in the blame.
Which is what those rich fuckers with guilty consciences would like best, to feel as if they did the best that they could, and it’s not their fault if the ice caps melted on their watch.
Fuck that. Unplugging the machine isn’t supposed to be easy. The controls need redesign whether the machine stops or not.
Does anybody really think the militaries of many countries are not going to weaponize AI? Even if the USA doesn’t do it, others will.
Bit of a stretch; a pretty darn silly thesis. But an excellent opportunity for one of the more sanctimonious guys on the Internet to puff himself up with indignation and self-righteousness.
They fear their ethics being turned back on them.
Our image of evil space aliens surely derives from a fear that they will treat us just as we treat one another.
Neil deGrasse Tyson
Rush Limbaugh claiming to have had a nightmare about being a “slave building a sphinx in a desert that looked like Obama.”
If humans are anything, we are prolific and sneaky. There we will be, not wiped out, but living in the detritus of post-human civilization. Everything else (except other vermin) might be dead, but some few humans will learn to live in the baseboards and refuse, waiting out the time of the AI, so that when the AI eventually grow bored and leave this planet we can return to plant our flag in the pile of filth that remains. And be kings of all we survey.
The analysis of the techbro mindset may be perfectly valid. At the same time, it doesn’t mean they’re wrong. I would have found the article more compelling, and more comforting, if it offered an alternative, non-techbro version of future AI. Maybe they understand the brutal logic of AI because it will reflect their own brutal logic–what else could it do? Will it have feelings? A sense of spiritual purpose? A reasoned respect for life? A simple joy in being? I suppose it could, but I don’t understand how.
Same here. Though Cory and Cross point out a major unexamined assumption/motivation on Musk’s part, for the most part his critiques and concerns are valid and rest on sound philosophical, technological, and political/economic ground. That’s refreshing to see from tech moguls, or industry executives in general.
While I tend to agree more with Bill Gates in regard to the urgency of the situation (he still agrees there’s a lot of potential for problems), I’ll take either of their attitudes over Zuckerberg’s complacent and careless embrace of AI: as we’ve already seen from FB’s role in the 2016 election, “move fast and break things” is not the best approach to take when developing technologies that have the potential to lead to the destruction of humanity.