Originally published at: http://boingboing.net/2017/04/27/singularitarianism.html
…
Kurzweil makes the same arguments in his writings that you do. You might be referring to the mass media’s take on AI, but researchers and designers have a clear understanding of what AI is and what it will be able to bring forward. Even the definition of intelligence makes the case for limitations.
in·tel·li·gence
inˈteləjəns/
noun
1.
the ability to acquire and apply knowledge and skills.
"an eminent man of great intelligence"
synonyms: intellectual capacity, mental capacity, intellect, mind, brain(s), IQ, brainpower, judgment, reasoning, understanding, comprehension; More
Has the original article been taken down? I get a 404 on the link.
I’m telling the AI he said that.
I’ve been saying these things for years, and every nerd just smirks and says, “see you in the singularity.” But in this world, unlimited growth is impossible, and the one sure thing is that extrapolating current trends will not predict the future. The surest thing about the future is that it can’t be predicted. Wall Street has known this for a long time.
While there are some good points, and I agree that there’s more “woo” regarding AI than is healthy, I do wonder about the following:
“Emulation of human thinking in other media will be constrained by cost.”
Anyone who claims that because something is too expensive to do with software today it will always remain so hasn’t been paying attention to pretty much every damn piece of technology for more than thirty years.
“Intelligence is not a single dimension”, “Dimensions of intelligence are not infinite”, “Intelligences (sic) are only one factor in progress”
All of these things being true doesn’t preclude an AI from being either exceedingly dangerous or far beyond our own ability to solve certain problems, precisely because of its limitations and differences from human intelligence.
Those are some pretty weak arguments. It’s true that what we call intelligence is just proficiency in a finite number of intellectual fields. AI researchers and writers know this though. Nick Bostrom defines artificial superintelligence as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills."
Machines are already equal or superior to us in many subfields. If intelligence is just a collection of abilities as Kevin says (and I agree), I wonder which ability he thinks is one that a machine can never possess.
I’ve also never seen anyone claim that intelligence would or can be unlimited. In the wilder predictions, Von Neumann probes scavenge resources in the cosmos, but even then, intelligence is dependent on resources.
I got the same. Clearly the AI are on to us.
Maybe the superhuman AI won’t even want to solve all our problems. Sounds like a tedious task for something with a brain the size of a planet.
Me, too. Looks like all of backchannel.com is 404. Cory linked to another article there, this morning, on how losing net neutrality hurts rural areas, also 404, and all the comments on either of these seem only to refer to material in the posted pull quote, so I don’t know if anyone is getting through.
Maybe backchannel is one of those darkweb things that one needs to be on TOR to access?
If we were able to create super-intelligent AI and then proceeded to build the largest installation for one, I would wonder what its usefulness and purpose could be. I’m getting Matrix flashbacks here.
Whaaaat?! I’ll have to mull that over as I go for a 1000 mph joyride in my steam-powered car.
On top of everything you mention, there’s the fact that we can’t define what “human intelligence” means, so the idea of “human intelligence, but more so” is nonsense.
I do wonder how the end of Moore’s Law - which has already happened - will impact these kinds of narratives which are explicitly premised on it.
Cory Doctorow constantly declaring things as bullshit is bullshit. Seriously can you please write less clickbait-y headlines?
The whole model of intelligence == human-like is not very interesting or applicable, and needs to go. Marketing hype about AI is reliant upon the whole sci-fi cargo-cult mentality that most people associate AI with artificial humans or human-like intelligences. But since when are computer engineering terms determined by pop-culture usage rather than technical considerations?
I prefer that areas like machine learning, swarming/boids, artificial life, etc. are mostly unencumbered by the semantic cul-de-sac of bickering over what does or should count as intelligence. It’s like the technologist’s version of debating whether or not a given work is really “art”, and as such is a waste of time.
I wouldn’t be too certain about “the end” of Moore’s Law…
…not just yet.
I have been spending an inordinate amount of time recently exploring the current boundaries of AI and mostly what I see in the media is a bunch of hype and half-assed assertions that are based on our associations with sci-fi rather than science.
One area that AI research is largely ignoring is the definition of intelligence itself and what it means to call something “intelligent”. Current AI systems are incredibly good at being computers - they are wicked fast at doing computations and cognitive analysis of large datasets. They’re even very good at certain kinds of predictive analysis, such as forecasting and pattern correlation involving a huge number of variables. Things that humans could never do on their own.
But a computer will never be a human. Never. So artificial human is not even in consideration among serious researchers. So artificial intelligence = human intelligence is exactly the wrong comparison.
If you really want to explore some cutting-edge shit, check out what the MIT Media Lab is doing in the area of artificial emotional intelligence. Emotion is the other side of the human coin from cognition, so current AI is only looking at half the picture. Making computer systems emotionally aware will fundamentally change the way humans interact with computers. This is actually happening right now.
But can it enjoy life as well as a human? Or be as lazy as I can?
I recall reading a paper that pointed out that not only do we not really know what consciousness is, but there may be more than one kind, or it may be an illusion. Try not to think about thinking.
The You Are Not So Smart podcast covers (neuro)psychology concepts like these; it’s really interesting.
I’d love to read a transcript, but I’m too impatient for podcasts.