Very big thinkers ponder: "What do you think about machines that think?"

Also, two books that touch on the theme:

Killer Robots: Legality and Ethicality of Autonomous Weapons
https://books.google.com/books?isbn=140949912X

What Should be Computed to Understand and Model Brain Function?
https://books.google.com/books?isbn=9812810307

Also, apparently there is a firefighting bot with a self-preservation instinct modelled after an insect.

My university tutor, good old Captain Cyborg, wrote a book on this (inevitably).

March of the Machines: The Breakthrough in Artificial Intelligence
http://books.google.com/books/about/March_of_the_Machines.html?id=RZJ8i6ZhWV8C

This is my point exactly. Thinking in your head is not supernatural either. Old houses were built by carpenters with hammers. Hammers aren't doing any thinking for us (except, as the old adage says, when all you have is a hammer, everything looks like a nail), but they are extending the capabilities of our bodies. Nail guns extend those capabilities further. Computers think and make decisions in exactly the same way a hammer pounds in a nail. A big part of thinking is just remembering things, and computers definitely remember a huge amount of stuff for us (though the written word has been remembering things for us for millennia).

When does something stop being "just a tool"? Is a prosthetic arm just a tool? If so then how is your living arm not just a tool? How is your visual cortex or the speech center of your brain not just a tool? Couldn't we accomplish the same thing through other means?

Being 20 years behind doesn't matter as long as we remain 20 years behind. If there is a gradual drift to further and further behind then you can start arguing about how the rate changes, but e^x is not all that different than e^(x-1), especially when you are living on the curve and don't know there is another place to be. I compare the computer in my pocket to the computer I had on my desktop in the 1980s and I note that it does not appear to be "more of the same".
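A toy sketch of that point (Python, illustrative numbers only): on an exponential curve the ratio between any two successive steps is the constant e, so from a vantage point on the curve every step looks just like the one before it.

```python
import math

# The ratio e^x / e^(x-1) is always the constant e, no matter where
# you stand on the curve: each step feels exactly like the previous one.
for x in (1, 10, 100):
    print(x, math.exp(x) / math.exp(x - 1))  # always ~2.718
```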

Yes, it's a misrepresentation. So let's leave Moore out of it. Pick any definition you like. Has computing power been increasing exponentially?

In the two years between 1980 and 1982 the amount of memory on my VIC-20 was either multiplied by 3.2 on my Commodore 64 or 44K was added to it. It's now been another 32 years and this cheap computer I'm working on has 4GB of memory. At a rate of times 3.2 every 2 years I get 7.3 TB; at a rate of 44K every 2 years I get 768K. One of those seems a good deal more orders of magnitude off than the other.
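For anyone who wants to check that arithmetic, here is a quick back-of-the-envelope sketch (Python; the 64K starting point and the 3.2x and 44K-per-two-years rates are the figures above, and the exact result shifts a little depending on 1000- vs 1024-based units):

```python
# Back-of-the-envelope check of the two projections above.
# Start from the Commodore 64's 64 KB in 1982; 32 years = 16 two-year steps.
start_kb = 64
steps = 32 // 2

exponential_kb = start_kb * 3.2 ** steps  # times 3.2 every 2 years
linear_kb = start_kb + 44 * steps         # plus 44 KB every 2 years

print(f"exponential: {exponential_kb / 1024 ** 3:.1f} TB")  # ~7.2 TB, the ballpark above
print(f"linear: {linear_kb} KB")                            # exactly 768 KB
```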

At any rate, we appear to fundamentally disagree about whether the source of innovation and advancement is the human mind or whether it is the totality of our circumstances. If we both maintain our positions on that issue then I don't think we have anything further to convince one another of.

Capability does not necessarily equal power. I tend to define power as the means to accomplish a task. The ease of computer design and production has mostly produced solutions looking for problems. The ubiquity of mobile phone tech is a perfect example of this. (Incidentally, you could barely give ARM processors away in the 90s, because portable low-power computers were "irrelevant"). Economy of scale has involved making computers ubiquitous, yet the average user has no interest in mathematics or logic processing. So, in some very real ways, contemporary computers are an exercise in doing far less with far more. So, power increases exponentially only for so long as the applications do - and they never have.

Do we? I wasn't aware we were disagreeing or trying to convince each other of anything. I think of such exchanges as exercises in "comparative realities".

I'd only typed out four pages before I realized that Blade Runner had already been made. :wink:

Oh, no he wasn't!

Perhaps. To extend the analogy, one could also scarcely build anything with a hammer as one's only tool. Programmable digital computers are remarkably efficient at certain kinds of symbol manipulation, but they hardly cover the same range of cognitive types that humans, and likely other organisms, do.

Iā€™d say that it never starts nor stops, it is merely an arbitrary frame of reference. Just witness the diverse notions of autonomy/utility in human relations. And it could be argued that even whatever seems to be a powerful specimen of human is merely a tool to spread DNA around.

What seems qualitatively different about them to you?

A zygote is a gamete's way of producing more gametes. This may be the purpose of the universe.

  • Lazarus Long

The pocket computer weighs less, but is far more wieldy and has a narrower edge. All in all, it's probably better for cracking heads.

But zygotes and gametes came out of a universe that had no zygotes or gametes, and it's quite possible that "thinking machines" will continue to build themselves in some distant future with no zygotes or gametes around. More like DNA is one of the many scaffolds which will be discarded once the future has been made out of it.

Well, sure. Heinlein was largely kidding. It's about as sensible a "purpose of the universe" as most theologies can offer.

This is one of the main reasons I never bought into Ray Kurzweil's notion of achieving some kind of machine consciousness/AI singularity once processing power matches that of the human brain (as in his famous chart of calculations per second over time).

Comparing a brain to a computer based on calculations per second is like comparing a jet plane to a sparrow or a race car to a horse. By many measures the machines are far superior, but we don't think of a plane as an "artificial bird" or a car as an "artificial horse."

This is debatable. Just as zygotes and gametes can easily be considered potential stages of a later, fully-developed organism, a universe which inevitably leads to gametes and zygotes can be said to have them implicit within it. They can be considered an unfolding of an existing set of conditions.

Of course, it is possible. As is the converse. Perhaps we will find that naturally-occurring mineral semiconductors have been engaged in such processes independently of human technology.

The scaffolding discarded for making the future is what we would call the past. And DNA is only "the past" if it has been superseded in goals, or means, or purpose. Or if DNA gave up on defining the time in which it exists. People have already made synthetic genes. Non-standard organic compounds are probably not far behind. Organisms are nanotech, and fairly sophisticated. How much corrective and augmentational molecular hardware can you work into your DNA before you decide that it is "something else"? Life and technology seem rather convergent to me.

I always thought the question at hand was something more like: "What's going to happen to [humans/society/humanity/economics] once robots can do literally everything a human could (and probably a lot of things a human couldn't) possibly do, but much better?"

In which case I'd say that the humans who control those machines (if they could be controlled) wouldn't need real people at all, but would still want to make sure the rest of society is powerless and economically and socially completely immobile, because why would they want to make themselves poorer (or rather, less comparatively rich) by sharing?

But I just read this on my artificial bulletin board :computer:, and later tonight I'll feed my artificial friend :dog2:, then listen to my artificial orchestra :iphone:, then watch a movie at the artificial playhouse :tv:.
:smiling_imp:

Are any of these true? Or are most people merely slow at adapting to their tech? So-called "real" bulletin boards and orchestras are artifice also. Everything is a real something; you just need to not be duped and to know what it actually is. Do modern phones have dials? No - they have sensors. Is tofu fake meat? No - it is real bean curd. The culture of industrial mass-production does not require people to know what things are or how they work, just how they are supposed to fill some traditional or novel lifestyle function. And apparently nothing sells like familiarity. To sell something novel, it is made out to be a spin on something familiar.

It is easy to scoff that some people would think of a plane as an artificial bird, etc. It is a cargo-cult mentality! But it happens. I am just happy that I finally have computing interfaces without any faux brushed metal. What was the classic 50s explanation of what computers are? "It's a sort of an electronic brain." But are they really? I haven't encountered much consensus between cognitive scientists and computer engineers about this. They don't work the same way, or even process information similarly. But either could be described in loose terms as "information systems".

Fer cryin out loud, I should have just said nothing instead of trying to push @Brainspore's buttons.

Holy crapola! What a large chunk to process.

Still working my way through, but my initial thoughts are to roundly dismiss the opinions of everyone who hasn't factored artificial prostheses for super intelligence into their considerations.

The strong AI problem may very well be too hard. For us. We idiots. Barely representing a semblance of intelligence.

As for motivation. An argument from economics? I think the motivations of a super intelligent human will be fairly opaque to us but I suspect they'll want someone interesting to talk to.

You're assuming the 99% are by definition poor because they don't make what the 1% makes. I suppose they are relatively poor by the 1%'s standards, but in absolute terms they are anything but. Consider that anywhere from 25% to 66% of Americans are middle class: http://en.wikipedia.org/wiki/American_middle_class.

Why do you assume billionaires are the only people who need orthodontists?

To maximize conspicuous consumption it actually doesn't have to be orthodontists who sweep the helipads of the rich. Any formerly upper middle class, highly skilled and educated worker will do. I just picked orthodontist at random.