Kevin Kelly: "superhuman" AI is bullshit

If they can make it work, it only theoretically extends Moore’s Law for a few years, and that assumes the transistors are not just smaller but that more of them can actually be packed into a given area. That’s been the problem with silicon chips in recent years - as transistors get smaller, heat dissipation gets worse (they run into hard physical limits on heat dissipation), so even when the feature size is halved, you can’t fit twice as many in the same space.
The larger problem is that we’re about at the point where, whatever the chips are made of, it’s physically impossible to make the transistors any smaller because of quantum tunneling and the impossibility of making a transistor narrower than a single atom. It’s entirely possible someone will come up with a new way of doing computing involving, say, bouncing photons around so we don’t have the same size limitations, but Moore’s Law is very much about the physicality and practicalities of chip production, so there’s no particular reason to believe there would even be anything analogous for new computing architectures.
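To make the scale of that problem concrete, here’s a back-of-envelope sketch in Python. The numbers are deliberately rough assumptions on my part (a ~15 nm physical gate length as the starting point, an atom-scale floor of ~0.2 nm, and the classic two-years-per-halving cadence), not measurements:

```python
# Back-of-envelope: how many more feature-size halvings are left before a
# transistor feature approaches single-atom width?  All numbers are rough,
# illustrative assumptions.

feature_nm = 15.0        # assumed current physical gate length, in nm
                         # (marketing node names like "3 nm" are smaller than
                         #  the features actually printed on the chip)
atom_nm = 0.2            # rough spacing between neighbouring silicon atoms
years_per_halving = 2.0  # classic Moore's Law cadence

halvings = 0
size = feature_nm
while size / 2 >= atom_nm:
    size /= 2
    halvings += 1

print(f"About {halvings} halvings (~{halvings * years_per_halving:.0f} years "
      f"at the old cadence) before features are ~{size:.2f} nm wide, "
      f"i.e. roughly one atom across.")
```

And even that optimistic count ignores the heat-dissipation problem, which bites well before the single-atom limit.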

Oh man, consciousness is a whole other, horribly tangled, sticky thicket. Whatever it is, it’s almost certainly not anything like how it’s traditionally been viewed, and our perception of our own is most definitely some sort of illusion/delusion. But yeah, if you’re talking about truly “human equivalent” AI, you’re really dealing with two problems - intelligence and consciousness, both of which are highly problematic to even define.
I can recommend Rita Carter’s book Exploring Consciousness as a primer on all the issues, evidence and history of consciousness.


I’m getting Colossus: The Forbin Project 1) 2) flashbacks.

1) Think IoT with nukes.
2) This kinda dates me, doesn’t it.


There’s a difference between “unlimited growth” and “surpassing humans”, though. Humans might be intelligent, but it’s fairly easy to imagine something more intelligent. Just picture a being that’s capable of the sort of things computers already do easily - storing lots of information, complex calculations, etc. - then apply human intelligence to that capacity for mental gruntwork, and there you have it. Say you make a machine that can do the same sort of learning that humans do, but quickly enough to read and understand entire online journal databases in a few months. Or, hell, a few years. It’d be able to cross-reference and cite literature in ways no human could, perform better meta-analyses of past data, and integrate theories from multiple fields to come up with workable real-world solutions humans never could.

You could say that’s a machine that’s as smart as a human, but with larger capacity to store knowledge, and increased speed. But the distinction doesn’t make as much of a difference at that point.

I honestly don’t know enough about the topic to make any sort of informed argument, but it’d be nice if you could expand on this. I mean, obviously a computer will never be human in the sense that it won’t be made of meat and blood and bone. Are you saying it could never think like a human? (Or that it’s hypothetically possible for a computer to think like a human, but so unlikely that we may as well call it impossible?) Or that it could never think like a sapient non-human?

And by never, do you mean “we’ll never meet alien life” kind of “never”, where it’s theoretically possible but just not going to happen, or do you mean “never” as in the “we will never see a 50-foot spider, because it would crush its own legs” kind of never? If the former, I’ll basically take your word for it that it’s a practical impossibility. If the latter, though, I’d be interested in hearing why.

There is no fundamental reason to preclude that option. Organic matter can be grown artificially, after all.

I thought about saying “Won’t be made of meat and blood and bone, and have come out of the vagina of a human woman” but I figured someone would point out that some babies in the future won’t come out of vaginas either.

Anyway, while it may one day be possible to do that, it sounds like a very expensive way to do something not worth doing in the first place.

MACDUFF:
Despair thy charm,
And let the angel whom thou still hast served
Tell thee, Macduff was from his mother’s womb
Untimely ripped.

I’d take that with a large grain of salt. The overwhelming conclusion of the psych research community is that “emotional intelligence” does not exist [1].

[1] To be more precise, it’s: people differ in their ability to understand and manipulate emotion, but this is not a unitary thing that can be meaningfully expressed on a linear scale, and “intelligence” is a completely inappropriate metaphor to use in this area.

The general neuroscience-based conclusion these days is that “consciousness” is not the driving force of the mind. Instead, it’s a post-facto rationaliser that invents plausible justifications for why we behaved in the manner that we did.

The non-conscious mind is in the driver’s seat; consciousness is just the narrator. Free will is an illusion.


Silicon was already a pretty old-school substrate thirty years ago. But it’s relatively cheap, and there are a lot of sunk costs.

The first part, yes, definitely. The second is a semantic issue, as to whether or not one associates free will with sense-of-self. The meta-self can still have a sort of free will which is separate from one’s feeling of personal identity, but without an ego as such the meta-self is not concerned about it one way or another.

Not speaking for @anon81034786, but I think it is a matter of different “signal processing” domains. They just don’t work similarly. It is easy-ish to make a binary approximation of something, but it is only a static model. Much more involved is real-time modelling of an approximation of an analog physical system - modern computers still struggle to implement even fairly simple ones in real time.
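As a minimal sketch of what that gap looks like in practice (all of the parameters here are made up purely for illustration): take a simple analog system - a damped oscillator - and step it forward in discrete time. The finer the timestep, the closer the binary approximation gets to the continuous system, and the more arithmetic you need per simulated second, which is exactly the real-time cost being described.

```python
# Discrete-time approximation of a continuous damped oscillator:
#   x'' = -k*x - c*x'
# Smaller dt = a better approximation of the analog system, but more
# work per simulated second of "real time".

def simulate(duration_s, dt, k=40.0, c=0.5):
    x, v = 1.0, 0.0            # initial displacement and velocity
    steps = int(duration_s / dt)
    for _ in range(steps):
        a = -k * x - c * v     # continuous dynamics, sampled at discrete instants
        v += a * dt
        x += v * dt
    return x, steps

for dt in (1e-2, 1e-4, 1e-6):
    x, steps = simulate(1.0, dt)
    print(f"dt={dt:g}s: {steps:>7} steps per simulated second, final x={x:.4f}")
```

That’s one scalar equation with two state variables; an analog system with vast numbers of coupled, noisy, nonlinear components scales the same arithmetic up accordingly, which is where the “struggles to do it in real time” part comes from.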

It is kind of like making a 1:1 scale map of some vast, complicated landscape. Even succeeding for an instant would be terribly expensive and complicated (as well as arguably unnecessary!). And then there’s the fact that the landscape is a changing ecology, an emergent output of larger chaotic meta-systems, which you would also need to model. So to get a functional 1:1 map which changes over time, you either need to build a system much larger and more complex than what you hope to model - or else drastically simplify it somehow. That’s problematic when dealing with a quality such as consciousness. Most handwavium about human-level AI (as if that were A Thing) assumes we can realistically model the living human brain. Yet there is little consensus about how - or even IF - a brain is actually conscious.

When we instead just try to figure out what might be complex and yet balanced enough to make an interesting information system, things get more promising - but less likely to resemble a human. The efficient way to go about it is to exploit binary circuits and algorithms for what they are good at, instead of trying to replicate billions of years of squishy happy accidents. The results are unlikely to resemble anything like life/biology as we know it. It doesn’t need to maintain its own body, or survive, or reproduce. It is probably the first “alien” we will really “meet” - if we are even capable of perceiving each other’s existence.

You underestimate the potential market value of meat and bone sexbots.

When I was a kid, BBC2 ran a series of classic Sci-Fi movies at 6pm (I think every weeknight), so I’d watch them after tea. Colossus was one of them, and I thought it was excellent (and impressively bleak). Others I can remember were Forbidden Planet and When Worlds Collide, and I think The Day the Earth Caught Fire.
