Yeah, no.
We can't replicate a pancreas, let alone a human. We can't transplant a head (err, a body?) at all. Have you met someone with transplanted organs? It is amazing but still almost barbaric (apologies to surgeons here, but I doubt you disagree).
Do we even know how to make liver tissue, let alone tell nanobots how to make it? Heck, let's talk about teeth. How would you tell a tiny, 8-bit nanobot to print a working tooth?
It isn't impossible. It will all happen. But not in our lifetimes. Or our children's lifetimes X 100.
Yeah, I'm gonna step out for a bit on this one, I've said what I can say. Very little is impossible, but I just don't believe in magic.
"I know it when I see it".
Seriously. Humanity has failed at providing an exact definition of what "pornography" is, but we have succeeded at producing it (or so I hear). We know of many traits that an AI should have that present-day software doesn't have. And while we might overlook a few and place too much importance on others, we've got our work cut out for us. Being able to define an exact threshold is entirely irrelevant.
Actually, a C64 has as much memory as a single, not-too-thick paperback book without illustrations. The cover illustration won't fit unless you compress it and leave out the book's contents.
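For scale, a quick back-of-the-envelope sketch (the characters-per-page and compression figures are rough assumptions, not measurements):

```python
# How much plain English text fits in a C64's 64 KB of RAM?
# CHARS_PER_PAGE and COMPRESSION_RATIO are ballpark assumptions.
C64_RAM_BYTES = 64 * 1024    # 65,536 bytes, one byte per character
CHARS_PER_PAGE = 1800        # rough figure for a mass-market paperback page
COMPRESSION_RATIO = 3        # typical for compressed English prose

pages_raw = C64_RAM_BYTES // CHARS_PER_PAGE       # uncompressed text
pages_compressed = pages_raw * COMPRESSION_RATIO  # if you compress it
print(pages_raw, pages_compressed)                # 36 108
```

So uncompressed you get a few dozen pages, and even compressed it's a slim paperback at best.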
Which, I would argue, is nowhere near enough intellectual input for any kind of "brain" to become any kind of "intelligent".
A natural intelligence (let's take a cute newborn kitten as an example) processes way more information before it has gained enough experience to make any kind of sense at all of the world.
Yeah, no.
(I finally got to use it, too!!!)
Are you suggesting that there might possibly be an algorithm for AI so simple that a human could play around with pencil and paper and then know whether that algorithm would produce something meaningful on real-world amounts of input? Because, I contend, that would be a serious limitation on how to think about these problems.
Any serious attempt at AI requires experimentation, and that requires efficiency. As long as you don't have the efficiency, you will never figure out the "efficacy" part.
But you won't be able to figure out whether your model applies to reality using just a chalkboard, because reality is too damn big.
Which reminds me of my favourite BeOS system calls.
Not necessarily. I believe that an AI will have to deal with amounts of data similar to what our brain hardware does (storage requirements). I also believe that to figure this out, many experiments need to be run. If you need to wait ten years to know whether version 97 of your attempt at rat-level AI found the cheese in the maze, you won't make much progress. In fact, that research would go much more quickly if I could simulate rats at above real-time speed.
Also, I believe that emulating brain hardware is a promising method to better understand the low-level aspects of the brain.
Thirdly, I believe that emulating brain hardware is a worthwhile goal in itself, at least provided the progress of nanotechnology keeps up.
Transistors are just our current best method for building general computation devices. Intelligence is a computational process. Before you accuse others of having blind spots, maybe you should just tell us what you are seeing that others aren't?
Of course, I know that many people object to the idea that intelligence is a computational process, but I have yet to see a shred of non-theological evidence that it isn't.
Now, children's lifetimes X 100 (and I'm hoping that infant mortality doesn't strike) is an awfully long time to make any "won't happen" claims about. "Not in our lifetimes" seems a very safe bet for nano-assembly of entire humans, though.
The important question seems to be when we will be able to digitize the current state of a human brain. I still think this is a not-in-our-lifetimes thing, but if it happens, it will definitely rank among the most important human inventions.
I am not citing anything, merely working from what little I know, which still goes beyond the relevance of the topic.
Human brains seem to function more as analog computers. Whatever "switching" there is takes place in spark gaps which change their structures as they fire. And rather than a simple two-state on/off, they are mediated by a complex stew of compounds. If the brain could be said to ever occupy a discrete state, it would be easier to understand what it is doing, and/or reproduce it. But brains have, so far as I am aware, never been successfully modelled in such a way.
It seems this way because some people in AI assume that emulating brains is the way to achieve machine intelligence. But this itself is an unfounded assumption.
Okay, I get to break my promise and reply, 'cause I honestly laughed. I enjoy contextual humor and spirited debates.
I will cherry-pick one response though, and then we will probably just have to take this to a different thread (mainly so I don't annoy more people).
The bottom line is that the brain processes information using a representation strategy that is neither analog nor digital. It is a different type of computation, involving circuits and networks composed of spiking neurons. One of the central tasks of neuroscience is to figure out how this information processing paradigm works.
Source: Quora (take that as you may).
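For the curious, that "spiking" style of computation has simple toy models. Below is a minimal leaky integrate-and-fire neuron sketch; all the constants are illustrative, and real neurons are vastly messier:

```python
def simulate_lif(current, dt=0.001, tau=0.02, v_rest=-65.0,
                 v_reset=-65.0, v_thresh=-50.0, r_m=10.0):
    """Leaky integrate-and-fire toy neuron: return spike times (seconds)
    over one simulated second of constant input current."""
    v = v_rest
    spikes = []
    for step in range(int(1.0 / dt)):
        # Membrane voltage leaks toward rest while the input pushes it up.
        v += (-(v - v_rest) + r_m * current) * dt / tau
        if v >= v_thresh:            # threshold crossed: emit a spike...
            spikes.append(step * dt)
            v = v_reset              # ...and reset the membrane voltage
    return spikes

# Weak input never reaches threshold; stronger input spikes repeatedly.
```

The point is not that this is how brains work, only that the "neither analog nor digital" spiking paradigm is itself perfectly amenable to digital simulation.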
As for your argument that binary logic is the best we have right now (which I am very sympathetic to): it takes a quick CS101 class to figure out that logic gates do not define intelligence.
And your mention of porn ("know it when you see it") is staggeringly telling. We not only hold the black box (the emergent behaviors) but also the white box (every execution instruction), and it still doesn't add up. We are missing something that more computation and storage is unlikely to find.
A ton of Nobel prizes are waiting to be claimed.
I got the impression from GEB that once you could model high-level conscious processes (which are informing the "switching" of the aggregates out of which they are built), you could assemble a consciousness-generating mechanism out of any physical (or non-physical, I suppose) substrate. I think the examples Hofstadter uses are billiard balls knocking into one another and a series of buckets pouring water into one another... so long as those processes can contain (ah!) glyphs of themselves in a meaningfully non-abstract fashion.
I've really been meaning to re-read it; I bought "I Am a Strange Loop" in order to dip my toes back in gently.
Diaspora had a few digitized people in it, but most of the characters were, IIRC, original digital lifeforms - not copied from living people. The book even begins with the instantiation of one, follows it through to self-awareness, and then to socialization.
I share (what I assume is) your skepticism on the project of digital copies of analog brains. But I do not doubt that something like life could exist on such systems. But it wouldn't be us, or anything like us.
It might not be anything appreciably like us, but that doesn't preclude the possibility that it could emerge from and share a discrete, connected history with one or more of us. Whether it would be you kinda becomes irrelevant. It was you, and now it's something else.
Taking this line of thought to Questions
Yes. I know, so what? Computers can simulate lots of things that are analog, digital, or some mixture of the two. There is no reason to change the computational substrate of our tools just because the human brain uses a different one. Every computational substrate can trivially emulate every other computational substrate.
I took your "we don't know what sentience is" as a claim that we cannot even accurately define what those emergent behaviours are. Which is true, but irrelevant. We have a rough idea of how a neuron works. And we have an intuitive grasp of what emergent behaviours we are looking for. The rest is an incredible amount of complexity, which might take a very long time to understand.
My point was that more computation and storage (than we had 15 years ago) is a necessary precondition for understanding that complexity. I was not saying "now that we have the computing power, our work is done".
And I also contend that what happens in our brain is just computation, no magic sauce added.
ha ha ha.
Sure, don't hold your breath. Nanotechnology in the way most people think of it has been vaporware since I was in high school at the end of the '80s.
Room Temperature Superconductors too.
Straight up. Speaking as someone who works in real nanotechnology (figuring out how to mass-produce transistors with gates &lt;30 atoms wide): self-replicating nanomachines aren't even on the horizon. As far as emulating brains goes, we don't even know enough about them to have any idea what it would require, much less be able to build whatever it is if we did know.
Adding to what you wrote @yaphro:
When we approach ~90% knowledge of human biochemistry, then we'll be able to start playing with artificial life. I have a few friends who are biochemists/physiologists. They all say our current understanding of the molecular catalog within the human being is ~10%. We have sequenced the genome, but the way that genome controls the proteome, and the way the proteome affects the metabolome, are only ~10% known.
To put that in perspective, it took us about 50 years to get that 10%. Assuming some kind of Moore's law of biological discovery is at play, that puts us at 20% in 50 years, 40% in 100 years, and 80% in 150 years, with the 90% mark probably around 175 years from now. Optimistically. With no wrinkles such as political upsets or nuclear wars that wipe out research facilities and personnel.
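That doubling guess is easy to write down explicitly (the 10% starting point and the 50-year doubling time are, of course, pure assumptions):

```python
import math

START = 0.10         # assumed fraction known today, after ~50 years of work
DOUBLING_YEARS = 50  # assumed "Moore's law of discovery" doubling time

def knowledge(years_from_now):
    """Fraction of human biochemistry known, under the doubling assumption."""
    return START * 2 ** (years_from_now / DOUBLING_YEARS)

def years_until(target):
    """Years from now until the target fraction is reached."""
    return DOUBLING_YEARS * math.log2(target / START)

# knowledge(50) -> 0.20, knowledge(150) -> 0.80
# years_until(0.90) -> ~158 years, the same ballpark as the ~175-year guess
```

Taken literally, the formula puts 90% at about 158 years out, a bit sooner than 175, which is close enough for this kind of napkin math.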
Maybe. There's a lot of ifs in there. Lots of biochemists and physiologists would disagree and say I'm being too optimistic. I won't be around to find out if I was right or not.
Based upon what? Life has never worked by starting with the most complex organism you can think of, and working back. What are the chances that this is simple cognitive bias of humans once again assuming that they are the standard of life and intelligence? I expect better from scientists.
People have made a lot of progress with regard to custom genes and single-celled organisms. This will achieve the goals of artificial life in my lifetime, if it hasn't arguably done so already.
Sure, but so what? Human beings are trivially easy to make the old-fashioned way. Knowing more about how humans work will no doubt revolutionize modifying ourselves, but it is a difficult, needlessly baroque way to create other organisms. You work with the strengths and weaknesses of what you have, rather than picking an arbitrary ideal.
These discussions seem to get swamped with 95% or so about copying humans, but no matter how much I ask about it or point out more sound alternatives, hardly anybody seems willing even to consider that human functioning and properties might not have anything remotely to do with artificial life or intelligence. Which seems obvious to me to be the case.
I was responding specifically to the conjecture about artificial human life and trying to put that into perspective. Not artificial life in general, or in its lower forms, such as an artificial bacterium. It would be preposterous to think that we need to understand the entire human before we understand the bacterium, and I wasn't suggesting that, so don't make that leap. We have soft AI already, and it's only growing in its power. We will probably have interactive robots with some kind of adaptable, conversant AI in just a few years. Perhaps we'll even have stuff like an artificial bacterium on the sooner side.
But artificial human-like life? I would hold that to a much higher standard. It has to be self-sustaining and can reproduce and interact in the world without human or other intervention. It has to be human enough to pass as human if it wants to.
That's what I was responding to.
Now, you want to talk alternatives, so, dude, let's talk about some alternatives!
Yeah, no.
/s
Great. A digital computer is perfectly capable of simulating an analog computer, up to arbitrary precision. So, what you are saying boils down to the well-known fact that a single neuron is damn complicated, but not at all beyond the scope of simulation by a digital computer.
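As a toy illustration of that "arbitrary precision" point, here is an analog-style process (a voltage charging toward a target, the same form of equation as an RC circuit) approximated on a digital machine, with the error shrinking as the time step does. This is a sketch of the principle, not a neuron model:

```python
import math

def analog_exact(t, tau=1.0):
    # Closed-form solution of the "analog" equation dv/dt = (1 - v) / tau, v(0) = 0.
    return 1.0 - math.exp(-t / tau)

def digital_sim(t, steps, tau=1.0):
    # Forward-Euler discretization of the same equation on a digital machine.
    dt = t / steps
    v = 0.0
    for _ in range(steps):
        v += (1.0 - v) * dt / tau
    return v

def error(steps):
    return abs(digital_sim(1.0, steps) - analog_exact(1.0))

# More steps (a finer time step) means a smaller error, without bound.
```

Refining the step size buys as much precision as you care to pay for, which is all "simulating an analog computer to arbitrary precision" means.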
I addressed that in my answer to @albill's comment earlier. Emulating brains is one way of approaching the problem. It is not the only approach being tried. In fact, all the "practical" results in AI (rebranded as "machine learning") come from approaches where no brain was simulated.
I do not believe, however, that general human-level intelligence can be achieved with significantly less processing power than an emulation of a human brain. An artificial intelligence running on a C64 cannot be much smarter than any being whose total accumulated life experience is less than 64KB, or about one paperback novel.