Our Generation Ships Will Sink

E. coli is able to assemble organic material. Why nanobots and not just, you know, the boring yet enormously useful library of tools we already have? Or better, why not both?

Second, if shoveling organic matter is the basis for the understanding of AI that you seem to propose, why isn’t my garden sentient?

Is a brain-dead person sentient? Is a sleeping cat sentient? Is a bacterium sentient? Is a virus sentient? Is a prion sentient? Is a strand of RNA sentient?

There aren’t concrete answers to those questions, because we don’t know what intelligence – artificial or natural (seems like a weird distinction) – is in the first place.

But we will figure it out. It’s just gonna take (muuuuuuuch) longer than decades.

1 Like

Quite so. My mistake.

So you disagree with the idea of mathematics underlying all computation, and the universal “Turing machine”?

1 Like

How do you do any form of terraforming without sending life? Terraforming isn’t just bulldozing the sand around, we’ve got to send grasses, trees, microbes, the animals the trees require — basically the whole ecosystem, or none of the plants we send will survive.

I think sending a colony-worth of squirrels to collect the nuts, as needed by the trees, is probably around the same order of magnitude of difficulty as sending humans.

I guess you could start with plant life that doesn’t require any animals (if such plant life exists on earth), but even those plants still require a huge ecosystem of connected organisms.

3 Likes

I think a good choice would be species of lichen; some of them can thrive on barely cooled volcanic rock. But even such robust lifeforms have many needs and require a very Earth-like planet.

2 Likes

I think the key on both interstellar travel and terraforming is you need a way to make time not an issue.

Spend 20,000 years getting there. Spend 500,000 years terraforming.

A technology that can endure as an intelligence like that is plausible to me.

But, as said above: no monkeys in cans. They can’t deal with the time issue.

2 Likes

Try reading that again. Hint: the tree DOESN’T MOVE.

What? Why? What gives you that impression?
Are you drawing some conclusion from theoretical computer science that contradicts anything I have said? (If you answer this, you may assume that I am familiar with the subject.) Or anything anybody else here said? Because I haven’t noticed anything.

By the way, I took “computing power” above to refer to execution speed and available memory, not to computability.

Execution speed has nothing to do with whether something is computable – it has to do with how fast the computation runs.

You can run Doom with pencil and paper. It’s going to take a long time to frag anyone, but it’s still Doom.

AI is algorithms, not speed.

6 Likes

You typed faster than me.

If you can’t model it on a chalkboard, execution speed and capacity are meaningless. Now, I won’t place any limits on the number of chalkboards, but

sub isThisGoodOrBad($thingy) { #stuff }

is different from computation.

4 Likes

modern software design - “the interfaces are defined, we’re nearly ready to ship”

4 Likes

The method always returns True, so how could there be any bugs???

2 Likes

chosen by fair coin toss

3 Likes

The biggest part of a terraforming project is chemical/mechanical, not biological. Smashing comets into planets, burning/converting/processing atmospheric gases, that kind of thing.

I said you may assume I know the subject of theoretical computer science :wink:

First, you neglect storage space. You cannot simulate anything that has the complexity of a human brain on a Commodore 64, no matter how long you wait. In fact, as far as theoretical computability is concerned, any computer with finite memory (or any finite stack of paper, for that matter) is equivalent to a finite automaton. Only when you’ve got more memory than you need for a particular problem can you think of a real-world computer as a reasonable substitute for something Turing-equivalent.
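The finite-automaton point can be shown in a toy sketch (the transition functions here are made up for illustration): a machine with finitely many possible configurations must, by the pigeonhole principle, either halt or revisit a configuration and loop forever, which is exactly finite-automaton behavior.

```python
# Toy illustration of the pigeonhole argument: a machine with finitely
# many configurations either halts or revisits one and loops forever.
def run(step, state):
    """Run transition function `step` from `state`; `step` returns
    None to halt. A revisited configuration means an endless cycle."""
    seen = set()
    while state is not None:
        if state in seen:
            return "loops"       # same configuration twice: cycles forever
        seen.add(state)
        state = step(state)
    return "halts"

# A "computer" with 2 bits of memory: only 4 configurations exist.
counter = lambda s: (s + 1) % 4                 # counts mod 4, never halts
count_up = lambda s: s + 1 if s < 3 else None   # halts after reaching 3

print(run(counter, 0))    # -> loops (after at most 4 distinct states)
print(run(count_up, 0))   # -> halts
```

No matter how long you let the looping machine run, it never does anything new – that’s the sense in which waiting longer can’t substitute for more memory.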

Second, as we don’t know the right algorithms for AI yet, execution speed does matter. After all, we need to experiment with various methods. And it’s only about now that enough execution speed and storage space are available that experiments can be run on anything but “toy problems”.
Earlier AI research was hampered by the fact that it was just infeasible to try out an algorithm on any realistic amount of data. This is relevant both for big-data based machine learning approaches and for experiments in neuronal simulation. If you have a theory about how the neurons work and how they are wired, you can now try it out. This wasn’t always the case.
And I assume this is what @corwin_zelazney bases some of his optimistic claims on.

But this places a very serious artificial limitation on how to even think about these problems. You are arguing efficiency, not efficacy. A C64 plus all the pencils and paper at Office Depot has *a lot* of memory, if you choose to use it.

This is a blind spot so many of my robotics/modeler/mathematics friends have. They eventually see a transistor as the hammer for all their problems, when really they should be thinking about problems orthogonally. Which ironically is a measurement of natural intelligence :smiley: (did I mention we don’t even know what intelligence or sentience even is?)

4 Likes

They aren’t even alive! They’re just digital copies!

1 Like

So you think an AI must emulate our brain hardware and at real time speed to be an AI? Interesting.

1 Like

When we have nano-assemblers we will be able to “record” a full grown human, and replicate that person elsewhere.

And that technology is NOT that far off.