E. coli is able to assemble organic material. Why nanobots and not just, you know, the boring yet enormously useful library of tools we already have? Or better, why not both?
Second, if shoveling organic matter around is the basis for understanding AI, as you seem to propose, why isn’t my garden sentient?
Is a brain-dead person sentient? Is a sleeping cat sentient? Is a bacterium sentient? Is a virus sentient? Is a prion sentient? Is a strand of RNA sentient?
There aren’t concrete answers to those questions, because we don’t know what intelligence (artificial or natural, which seems like a weird distinction anyway) is in the first place.
But we will figure it out. It’s just gonna take (muuuuuuuch) longer than decades.
How do you do any form of terraforming without sending life? Terraforming isn’t just bulldozing the sand around, we’ve got to send grasses, trees, microbes, the animals the trees require — basically the whole ecosystem, or none of the plants we send will survive.
I think sending a colony-worth of squirrels to collect the nuts, as needed by the trees, is probably around the same order of magnitude of difficulty as sending humans.
I guess you could start with plant life that doesn’t require any animals (if such plant life exists on earth), but even those plants still require a huge ecosystem of connected organisms.
I think a good choice would be some species of lichen; a few can thrive on barely cooled volcanic rock. But even such robust lifeforms have many needs and still require a very Earth-like planet.
What? Why? What gives you that impression?
Are you drawing some conclusion from theoretical computer science that contradicts anything I have said? (If you answer this, you may assume that I am familiar with the subject.) Or that contradicts anything anybody else here has said? Because I haven’t noticed anything of the sort.
By the way, I took “computing power” above to refer to execution speed and available memory, not to computability.
The biggest part of a terraforming project is chemical/mechanical, not biological. Smashing comets into planets, burning/converting/processing atmospheric gases, that kind of thing.
I said you may assume I know the subject of theoretical computer science.
First, you neglect storage space. You cannot simulate anything that has the complexity of a human brain on a Commodore 64, no matter how long you wait. In fact, as far as theoretical computability is concerned, any computer with finite memory (or any finite stack of paper, for that matter) is equivalent to a finite automaton. Only when you’ve got more memory than you need for a particular problem can you treat a real-world computer as a reasonable stand-in for something Turing-equivalent.
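To put rough numbers on the storage point (back-of-envelope only; the synapse count is a commonly quoted ballpark, not a settled figure):

```python
# Back-of-envelope: can a Commodore 64 even hold brain-scale state?
c64_ram_bits = 64 * 1024 * 8     # 64 KB of RAM
synapses = 1e14                  # ~10^14 synapses, rough estimate
bits_per_synapse = 1             # absurdly generous: one bit each

shortfall = synapses * bits_per_synapse / c64_ram_bits
print(f"shortfall factor: {shortfall:.1e}")  # ~1.9e+08
# Hundreds of millions of C64s just to *store* one bit per synapse,
# before simulating any dynamics at all.
```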
Second, as we don’t know the right algorithms for AI yet, execution speed does matter. After all, we need to experiment with various methods. And it’s only around now that enough execution speed and storage space are available to run experiments on anything but “toy problems”.
Earlier AI research was hampered by the fact that it was just infeasible to try out an algorithm on any realistic amount of data. This is relevant both for big-data-based machine learning approaches and for experiments in neuronal simulation. If you have a theory about how the neurons work and how they are wired, you can now try it out. This wasn’t always the case.
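For a sense of scale, here is a minimal sketch of the kind of experiment that used to be out of reach: a toy leaky integrate-and-fire network. Every number in it (neuron count, weights, threshold, decay) is an arbitrary illustration value, not a claim about real neurons:

```python
import numpy as np

# Toy leaky integrate-and-fire network; all parameters are made up.
rng = np.random.default_rng(0)
n = 1000                                  # number of neurons
w = rng.normal(0.0, 0.02, size=(n, n))    # random synaptic weights
v = np.zeros(n)                           # membrane potentials
threshold, leak = 1.0, 0.95               # firing threshold, per-step decay

for step in range(100):
    spikes = v >= threshold               # which neurons fire this step
    v[spikes] = 0.0                       # reset the ones that fired
    drive = rng.normal(0.05, 0.01, n)     # external input, pure noise here
    v = leak * v + w @ spikes.astype(float) + drive
    if step % 20 == 0:
        print(f"step {step}: {int(spikes.sum())} spikes")
```

A thousand fully connected neurons is a joke next to a brain, but even this was a serious computation not that long ago; scaling it up is exactly where storage and speed bite.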
And I assume this is what @corwin_zelazney bases some of his optimistic claims on.
But this places a very serious artificial limitation on how to even think about these problems. You are arguing efficiency, not efficacy. A C64 has as much memory as the pencils and paper you can buy at Office Depot, which is *a lot* if you choose to use it.
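To make that concrete, here’s a toy sketch: a program whose working memory at any instant is a single byte, while the state it operates on lives in external storage (the FileTape class and the tape.bin scratch file are made up for illustration):

```python
# Toy "external tape": tiny working memory, big state on disk.
class FileTape:
    def __init__(self, path="tape.bin", size=10**6):
        self.f = open(path, "w+b")
        self.f.truncate(size)        # a megabyte of "paper"; could be more

    def read(self, i):               # fetch one cell
        self.f.seek(i)
        return self.f.read(1)[0]

    def write(self, i, byte):        # store one cell
        self.f.seek(i)
        self.f.write(bytes([byte]))

tape = FileTape()
tape.write(123456, 42)               # state lives on disk, not in RAM
print(tape.read(123456))             # -> 42
```

Whether that headroom actually answers the finite-automaton objection above is, of course, exactly what’s under dispute.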
This is a blind spot so many of my robotics/modeler/mathematics friends have. They eventually see a transistor as the hammer for all their problems, when really they should be thinking about problems orthogonally. Which, ironically, is a measure of natural intelligence (did I mention we don’t even know what intelligence or sentience is?).