Our Generation Ships Will Sink

Yeah, no.

5 Likes

We can’t replicate a pancreas, let alone a human. We can’t transplant a head (err, a body?) at all. Have you met someone with transplanted organs? It is amazing but still almost barbaric (apologies to surgeons here, but I doubt you disagree).

Do we even know how to make liver tissue, let alone tell nanobots how to make it? Heck, let’s talk about teeth. How would you tell a tiny, 8-bit nanobot to print a working tooth?

It isn’t impossible. It will all happen. But not in our lifetimes. Or our children’s lifetimes X 100.

3 Likes

Did we hit a zeitgeist moment with this post?

2 Likes

Yeah, I’m gonna step out for a bit on this one, I’ve said what I can say. Very little is impossible, but I just don’t believe in magic :smiley:

4 Likes

“I know it when I see it”.
Seriously. Humanity has failed at providing an exact definition of what “pornography” is, but we have succeeded at producing it (or so I hear). We know of many traits that an AI should have that present-day software doesn’t have. And while we might overlook a few and place too much importance on others, we’ve got our work cut out for us. Being able to define an exact threshold is entirely irrelevant.

Actually, a C64 has about as much memory as a single, not-too-thick paperback book without illustrations. The cover illustration won’t fit unless you compress it and leave out the book’s contents.
Which, I would argue, is nowhere near enough intellectual input for any kind of “brain” to become any kind of “intelligent”.
A natural intelligence (let’s take a cute newborn kitten as an example) processes way more information before it has gained enough experience to make any kind of sense of the world at all.
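For what it’s worth, the back-of-envelope arithmetic is easy to check. The page and character counts below are rough assumptions on my part, not measurements:

```python
# How much plain text fits in a C64's 64 KB of RAM?
# Assumes one byte per character and ~1,800 characters per dense
# paperback page -- both rough guesses, not measurements.
C64_RAM_BYTES = 64 * 1024        # 65,536 bytes
CHARS_PER_PAGE = 1800
pages = C64_RAM_BYTES / CHARS_PER_PAGE
print(f"~{pages:.0f} dense pages of plain text")
```

Depending on what you assume per page, 64 KB works out to a few dozen dense pages — a rather thin book indeed.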

Yeah, no.
(I finally got to use it, too!!!)

Are you suggesting that there might possibly be an algorithm for AI so simple that a human could play around with pencil and paper and then know whether that algorithm would produce something meaningful on real-world amounts of input? Because, I contend, that would be a serious limitation on how to think about these problems.

Any serious attempts at AI require experimentation, and that requires efficiency. As long as you don’t have the efficiency, you will never figure out the “efficacy” part.

But you won’t be able to figure out whether your model applies to reality using just a chalkboard, because reality is too damn big.

Which reminds me of my favourite BeOS system calls.

Not necessarily. I believe that an AI will have to deal with amounts of data similar to what our brain hardware does (storage requirements). I also believe that to figure this out, many experiments need to be run. If you need to wait for ten years to know whether version 97 of your attempt at rat-level AI found the cheese in the maze, you won’t make much progress. In fact, that research would go much more quickly if I could simulate rats at above real-time speed.
Also, I believe that emulating brain hardware is a promising method to better understand the low-level aspects of the brain.
Thirdly, I believe that emulating brain hardware is a worthwhile goal in itself, at least provided the progress of nanotechnology keeps up.

Transistors are just our current best method for building general computation devices. Intelligence is a computational process. Before you accuse others of having blind spots, maybe you should just tell us what you are seeing that others aren’t?
Of course, I know that many people object to the idea that intelligence is a computational process, but I have yet to see a shred of non-theological evidence that it isn’t.

Now, children’s lifetimes X 100 (and I’m hoping infant mortality doesn’t strike) is an awfully long time to make any “won’t happen” claims about. “Not in our lifetimes” seems a very safe bet for nano-assembly of entire humans, though.
The important question seems to be when we will be able to digitize the current state of a human brain. I still think this is a not-in-our-lifetimes thing, but if it happens, it will definitely rank among the most important human inventions.

1 Like

I am not citing anything, merely working from what little I know, which still goes beyond the relevance of the topic.

Human brains seem to function more as analog computers. Whatever “switching” there is takes place in spark gaps which change their structures as they fire. And rather than a simple two state on/off, they are mediated by a complex stew of compounds. If the brain could be said to ever occupy a discrete state, it would be easier to understand what it is doing, and/or reproduce it. But brains have, so far as I am aware, never been successfully modelled in such a way.

It seems this way because some people in AI assume that emulating brains is the way to achieve machine intelligence. But this itself is an unfounded assumption.

1 Like

Okay, I get to break my promise and reply, cause I honestly laughed. I enjoy contextual humor and spirited debates :smile:

I will cherry-pick one response though, and then we will probably just have to take this to a different thread (mainly so I don’t annoy more people).

The bottom line is that the brain processes information using a representation strategy that is neither analog nor digital. It is a different type of computation, involving circuits and networks composed of spiking neurons. One of the central tasks of neuroscience is to figure out how this information processing paradigm works.

Source: quora (take that as you may)

While I’m very sympathetic to your argument that binary logic is the best we have right now, it takes only a quick CS101 class to figure out that logic gates do not define intelligence.

And your mention of porn — know it when you see it — is staggeringly telling. We hold not only the black box (the emergent behaviors) but also the white box (every execution instruction), and it still doesn’t add up. We are missing something that more computation and storage is unlikely to find.

A ton of Nobel prizes are waiting to be claimed.

2 Likes

I got the impression from GEB that once you could model high-level conscious processes (which are informing the ‘switching’ of the aggregates out of which they are built), you could assemble a consciousness-generating mechanism out of any physical (or non-physical, I suppose) substrate. I think the examples Hofstadter uses are billiard balls knocking into one another and a series of buckets pouring water into one another… so long as those processes can contain (ah!) glyphs of themselves in a meaningfully non-abstract fashion.

I’ve really been meaning to re-read it; I bought ‘I Am a Strange Loop’ in order to dip my toes back in gently.

2 Likes

Diaspora had a few digitized people in it, but most of the characters were, IIRC, original digital lifeforms - not copied from living people. The book even begins with the instantiation of one, follows it through to self-awareness, and then to socialization.

I share (what I assume is) your skepticism on the project of digital copies of analog brains. But I do not doubt that something like life could exist on such systems. But it wouldn’t be us, or anything like us.

2 Likes

It might not be anything appreciably like us but that doesn’t preclude the possibility that it could emerge from and share a discrete, connected history with one or more of us. Whether it would be you kinda becomes irrelevant. It was you and now it’s something else.

1 Like

Taking this line of thought to Questions :smiley:

2 Likes

Yes. I know, so what? Computers can simulate lots of things that are analog, digital, or some mixture of the two. There is no reason to change the computational substrate of our tools just because the human brain uses a different one. Every computational substrate can trivially emulate every other computational substrate.

I took your “we don’t know what sentience is” as a claim that we cannot even accurately define what those emergent behaviours are. Which is true, but irrelevant. We have a rough idea of how a neuron works. And we have an intuitive grasp of what emergent behaviours we are looking for. The rest is an incredible amount of complexity, which might take a very long time to understand.
My point was that more computation and storage (than we had 15 years ago) is a necessary precondition for understanding that complexity. I was not saying “now that we have the computing power, our work is done”.
And I also contend that what happens in our brain is just computation, no magic sauce added.

ha ha ha.

Sure, don’t hold your breath. Nanotechnology in the way most people think of it has been vaporware since I was in high school at the end of the 80s.

3 Likes

Room Temperature Superconductors too.

2 Likes

Straight up. Speaking as someone who works in real nanotechnology (figuring out how to mass-produce transistors with gates <30 atoms wide), self-replicating nanomachines aren’t even on the horizon. As far as emulating brains goes, we don’t even know enough about them to have any idea what it would require, much less be able to build whatever it is if we did know.

8 Likes

Adding to what you wrote @yaphro:

When we approach ~90% knowledge of human biochemistry, then we’ll be able to start playing with artificial life. I have a few friends who are biochem/physiologists. They all say our current understanding of the molecular catalog within the human being is ~10%. We have sequenced the genome, but the way that genome controls the proteome, and the way the proteome affects the metabolome, are only ~10% known.

To put that in perspective, it took us about 50 years to get that first 10%. Assuming some kind of Moore’s law of biological discovery is at play, that puts us at 20% in 50 years, 40% in 100 years, 80% in 150 years, and 90% probably around 175 years from now. Optimistically. With no wrinkles such as political upsets or nuclear wars that wipe out research facilities and personnel.
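The doubling arithmetic above can be sketched directly. The 10% starting point and 50-year doubling time are just the assumptions from this post, nothing more:

```python
import math

# Toy model: knowledge of human biochemistry doubles every 50 years,
# starting from ~10% today. Pure assumption, for perspective only.
def knowledge_fraction(years_from_now, start=0.10, doubling_years=50):
    return min(1.0, start * 2 ** (years_from_now / doubling_years))

def years_to_reach(target, start=0.10, doubling_years=50):
    return doubling_years * math.log2(target / start)

for t in (0, 50, 100, 150):
    print(f"{t:3d} years: ~{knowledge_fraction(t):.0%}")
print(f"90% reached after ~{years_to_reach(0.90):.0f} years")
```

Strict doubling actually reaches 90% around year 158; rounding up to ~175 just adds some pessimistic slack.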

Maybe. There’s a lot of ifs in there. Lots of biochemists and physiologists would disagree and say I’m being too optimistic. I won’t be around to find out if I was right or not.

4 Likes

Based upon what? Life has never worked by starting with the most complex organism you can think of, and working back. What are the chances that this is simple cognitive bias of humans once again assuming that they are the standard of life and intelligence? I expect better from scientists.

People have made a lot of progress with regards to custom genes and single-celled organisms. This will achieve the goals of artificial life in my lifetime, if they haven’t arguably already done so.

Sure, but so what? Human beings are trivially easy to make the old-fashioned way. Knowing more about how humans work will no doubt revolutionize modifying ourselves, but it is a difficult, needlessly baroque way of creating other organisms. You work with the strengths and weaknesses of what you have, rather than picking an arbitrary ideal.

These discussions seem to get swamped, 95% or so, with talk about copying humans, but no matter how much I ask about it or point out more sound alternatives, hardly anybody seems willing even to consider that human functioning and properties might not have anything even remotely to do with artificial life or intelligence. Which, to me, seems obviously to be the case.

1 Like

I was responding specifically to the conjecture about artificial human life and trying to put that into perspective — not artificial life in general, or in its lower forms, such as an artificial bacterium. It would be preposterous to think that we need to understand the entire human before we understand the bacterium, and I wasn’t suggesting that, so don’t make that leap. We have soft AI already, and it’s only growing in power. We will probably have interactive robots with some kind of adaptable, conversant AI in just a few years. Perhaps we’ll even have things like artificial bacteria on the sooner side.

But artificial human-like life? I would hold that to a much higher standard. It has to be self-sustaining, able to reproduce, and able to interact in the world without human or other intervention. It has to be human enough to pass as human if it wants to.

That’s what I was responding to.

Now, you want to talk alternatives, so, dude, let’s talk about some alternatives!

2 Likes

Yeah, no.

/s

3 Likes

Great. A digital computer is perfectly capable of simulating an analog computer, up to arbitrary precision. So, what you are saying boils down to the well-known fact that a single neuron is damn complicated, but not at all beyond the scope of simulation by a digital computer.
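To make the “arbitrary precision” point concrete, here is a minimal sketch — a continuous, “analog” process (exponential decay, dV/dt = -V/tau, loosely a leaky membrane) approximated digitally with ever smaller time steps. The parameters are stand-ins, not a real neuron model:

```python
import math

def simulate_decay(v0, tau, t_end, dt):
    """Forward-Euler integration of dV/dt = -V/tau."""
    v = v0
    for _ in range(round(t_end / dt)):
        v += dt * (-v / tau)
    return v

exact = math.exp(-1.0)  # analytic solution at t = tau for v0 = 1
for dt in (0.1, 0.01, 0.001):
    err = abs(simulate_decay(1.0, tau=1.0, t_end=1.0, dt=dt) - exact)
    print(f"dt={dt}: error {err:.1e}")
```

Each tenfold refinement of the step size cuts the error by roughly a factor of ten: in principle, you can push the discrete approximation as close to the continuous dynamics as you like.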

I addressed that in my answer to @albill’s comment earlier. Emulating brains is one way of approaching the problem. It is not the only approach being tried; in fact, all the “practical” results in AI (rebranded as “machine learning”) come from approaches where no brain was simulated.
I do not believe, however, that general human-level intelligence can be achieved with significantly less processing power than an emulation of a human brain. An artificial intelligence running on a C64 cannot be much smarter than any being whose total accumulated life experience is less than 64 KB, or about one paperback novel.