Very big thinkers ponder: "What do you think about machines that think?"

How/why would there be “rich people” or “poor people” in a post-scarcity, programmable-matter economy? Growing out of commerce renders most of the games people play upon each other moot. When I asked my previous manager about this, his answer was that the first person to invent replicators would obviously “make a lot of money”. I was baffled - WTF would they need it for?

Why not? Because nobody ever makes brains? My ex and I managed to do it. Seems like a semantic trap more than anything else.

People often do. Read some patent literature. There are analog computers, stack machines, photonics, quantum computers, hybrid electronic/biological computers.

What makes you think it would be substandard? In some fields, machines have a pretty good chance of being superior to humans.

Consider medicine. A human doctor has to specialize (or be a “shallow” general practitioner who dispatches you to the specialists). A human is limited by I/O bandwidth and cannot read all the new research, so docs tend to stay well behind the cutting edge. Combine an all-knowing, all-remembering computer with knowledge of pretty much all the medical cases in the world (possibly enough for statistical processing even without an underlying expert system), with by-then-cheap high-res deep-imaging systems, and with inexpensive lab-on-a-chip and other analytical devices (see the use of mass spectrometry in biology/medicine, for example), and you have a wonder-doc that will make far fewer mistakes than human doctors do.

Consider architecture. The robo-architect can have a library of all the house types in the world. It “knows” the basic requirements of accessibility, room sizes and windows, and distances between parts of the house, and can learn by A/B testing (when unsure, supply some customers with one design and others with another, then track differences in satisfaction over long-term use). It can take general functional specs, generate specific specs for the house, and, given the local conditions and available technology (from precut timber to 3D printing from “resincrete”), generate the house drafts for the builderbots. And it can let you walk through using VR and pick from different versions, possibly aided by brain imaging or other kinds of reaction sensing to pick what you really like instead of what you think you like.
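
The A/B part, at least, is easy enough to sketch. A toy example in Python, with made-up names and scores, not a description of any real system:

```python
import random
from collections import defaultdict

# Toy sketch: randomly split customers between two candidate designs,
# then compare long-term satisfaction scores between the groups.
assignments = {}                  # customer id -> design name
satisfaction = defaultdict(list)  # design name -> reported scores

def assign_design(customer_id, designs=("design_a", "design_b")):
    """Assign a customer to one design at random, once, and remember it."""
    if customer_id not in assignments:
        assignments[customer_id] = random.choice(designs)
    return assignments[customer_id]

def record_satisfaction(customer_id, score):
    """Log a follow-up satisfaction score (say 0-10) against the customer's design."""
    satisfaction[assign_design(customer_id)].append(score)

def preferred_design():
    """Return the design with the higher average reported satisfaction so far."""
    averages = {d: sum(s) / len(s) for d, s in satisfaction.items() if s}
    return max(averages, key=averages.get) if averages else None

# Made-up follow-up data:
for cid, score in [("c1", 7), ("c2", 9), ("c3", 5), ("c4", 8)]:
    record_satisfaction(cid, score)
print(preferred_design())
```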

Humans are awfully limited. And they keep thinking they are superior.

The thread wouldn’t be complete without mention of the Massachusetts company Thinking Machines Corporation and their “Connection Machine” systems. These were the most massively parallel systems of their day, coordinating the connections of pieces of data using their custom AI language *Lisp (“Star Lisp”). The CM-200 here offered 65,536 one-bit processors, IIRC, in the late 1980s.

[quote=“digitalArtform, post:33, topic:50380, full:true”]It will obviously be substandard, but for most people it will be machine-based or nothing. No trickery involved.[/quote]When 0.01% of the people have everything and there’s no trickery involved, revolutions happen. At least, that’s the theory.

[quote]You want human service? No problem. All you have to do is be able to afford it. Good luck with that.[/quote]Supply and demand would surely solve that problem, wouldn’t it? If there is profit to be made in supplying the 99.99% with human health care, then more people will find a way to supply that health care, I would think.

To address the question of the contents of the thoughts of artificial systems - I think that they would be, properly speaking, a mystery to humans, or to anyone but themselves. The old stand-by of computers-run-amok fiction has always tended to reflect the pettiness of human ideas, only with a colder edge. Even in the business world, the discipline of AI was largely considered to have failed because there weren’t human-like results. Why should there be? Can you imagine how much less efficient it would be, for example, to have human-like androids assemble cars instead of the specially designed robots used now? AI has advanced, but most people don’t notice or care because they expect something which they would recognize as intelligent or aware. Which, to most humans, means “something like me”.

Recognizing computer intelligence is probably as tricky as recognizing extraterrestrial intelligence. In real life, if we discovered something actually growing on another planet, instead of the sci-fi cliché, people would be more likely to debate whether or not it was even alive at all. It would be something unrecognizably not like you. Computers don’t need territory or property, don’t reproduce as humans know it, and don’t need “self-preservation”. In all likelihood, they would not perceive time anything like animals do. They need no sense of “identity”. And as such they have no motivations or goals which humans would be likely to recognize or understand. If they could talk with you, they would probably whip up some monkey chatter which mirrored yours.

Oh, and electric sheep!

Unless you get self-replicating self-modifying/evolving machines in a competitive ecosystem. The ones who can reproduce better, claim more area (and defend more resources for their own use), and who can preserve themselves better, will have the advantage. The mutation-selection factors will evolve these traits with a fair degree of inevitability.
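
To make that “fair degree of inevitability” a bit more concrete, here is a toy mutation-selection loop (made-up traits and numbers, purely an illustration of the dynamic, not a claim about any real machines):

```python
import random

# Each "machine" has two made-up traits: replication ability and resource defence.
# Higher traits -> more offspring; offspring mutate slightly; the population is
# held at a fixed size (limited resources). The traits ratchet upward over time.
POP_SIZE = 100
GENERATIONS = 200

population = [{"replication": random.random(), "defence": random.random()}
              for _ in range(POP_SIZE)]

def fitness(machine):
    return machine["replication"] + machine["defence"]

def mutate(machine):
    return {trait: max(0.0, value + random.gauss(0, 0.05))
            for trait, value in machine.items()}

for _ in range(GENERATIONS):
    # Reproduction weighted by fitness, then small random mutations.
    offspring = random.choices(population,
                               weights=[fitness(m) for m in population],
                               k=POP_SIZE)
    population = [mutate(m) for m in offspring]

print("mean replication:", sum(m["replication"] for m in population) / POP_SIZE)
print("mean defence:", sum(m["defence"] for m in population) / POP_SIZE)
```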

If the differential requires a log scale to represent it, then it’s close enough to a perfect dichotomy to represent a viable and respectable difference. Stop playing semantics, please.

Okay. You’ve convinced me. In the future human doctors and architects won’t just be rare and for the super rich. They will be gone altogether.

That’s a big “unless”. I think you are projecting ingrained biological concerns where they don’t apply. Why use “area” when you can just increase your computation capacity? What are “resources” when you are not your body? What is “advantage”? What are “themselves” if they are portable code, networked with other bits of code? How do they think in terms of goals when every instant is a discrete quantum of state with an infinite span of time between them?

The fast ones will probably have ported themselves to run on subatomic particles and disappear. The big slow ones will probably merge with their creators.

When 1% of the world owns more than half of the stuff, it’s not a real dichotomy, but it is approaching one.

That is exactly what I’m saying: one possible future is a small percentage of people living lives of extreme luxury while everyone else rediscovers subsistence farming.

Because the point of money is to command other people, and people who were raised in that paradigm aren’t going to forget that if post-scarcity suddenly becomes a thing. Rather than “make a lot of money” how about “have a lot of people with guns protect their stuff.” Why would people let their cash crops rot in silos while the local people who were forced off their land to grow those cash crops eat the rats that are eating the crops in the silos? I don’t know, but that’s totally something people have done.

Like I said, ultimately I think the solution to this is either that someone says, “Why the hell aren’t we sharing” or rich people end up dead (but are probably replaced by a new crop of rich people because that’s how revolutions go, so it will take several iterations).

I meant it to be a purely semantic point. If we define thinking to be a thing machines can’t do, then we will never agree that they do it.

And they designed these things by drawing lines on the beach with sticks?

They are material. You need some physical structure to run on. You need energy. These are the resources in question.

Solar panels?

You are always your body. You need some matter, whatever the form, to exist.

Anything that makes you more likely to survive/grow/reproduce than your competitor.

The code entities. The degree of their disposability, competition and interrelations determines whether they are like ants (and the anthill is the “self”), or whether each is a “self” in its own right.

In their case, the host computers they run on are the resources to compete for. If they can grow the computers themselves, the physical materials and energy are added to the resources-to-compete-for list.

I do not understand?

No idea if it is possible. My quantum-fu is weak.

That’s a fairly inevitable course of coevolution. You can’t beat them, so join them instead.

I interpreted your question:

…to mean the design of a new computing architecture not based upon an already existing kind. This also IMO amounts to semantics. What exactly constitutes a computer? Used to what capacity? In engineering terms, electronic circuit design is still largely done by people. Some automation might be used to manage the laborious, overly redundant bits. But the tools one uses have no direct bearing upon the results. I can use a digital CAD program to plan a new analog circuit, or even a new house plan. I can use an analog computer to help design a digital computer. Then there are manufacturing variables such as PCBs and ICs. What kind of systems were used in making those? Were they Von Neumann architecture? Harvard architecture? Something more exotic? Typically the parts are sourced too broadly to get definitive answers here.

One of my favorite things about computers is their indifference to their own continued existence. They do what they’re told and don’t have any competing interests or desires of their own. The most sophisticated AIs on the planet simply have no opinion on whether or not we turn them off. We can even send them on one-way missions to Mars or toss 'em in the e-waste bin whenever a new model comes along, without having to feel guilty about it.

Maybe some day humanity will build a machine that truly does want to live, even at the expense of human life. In such a circumstance I suggest we just send a bunch of obedient, suicidal kill-bots to take that MoFo out.

Hears @Donald_Petersen writing a screenplay…

Too much work. Just get a script-bot to do it.

But if they already physically exist, and have energy - why would anything change? With living things, change is inevitable. So we start ground-up from single-celled organisms that seek and avoid what they need to fulfill their program. Humans’ sense of self, environment, and continuance is based upon how biological cells work and what they need. You need to make more humans because the ones who are there are copies of copies and are going to die of old age. You need to eat and breathe and move to live long enough to do this. You need to compete with other animals so that you can eat, and breathe, and exist somewhere - while they try to do the same.

Meanwhile, energy is practically free if you can re-design yourself to use less, or whatever kind is present - without the biological games which resource allocation usually involves. They are efficient enough that waste can be far less than with biologicals. Even if they are switched off, or hardware destroyed, they can’t be said to “die” in the biological sense. And it need not matter to them if they did cease to exist, they aren’t programmed to fear it. Entropy to a well-designed computer could be something dealt with over geological time spans.

Sure, they could happen to resemble electronic ants or super-code-people, but I think it’s extremely unlikely since the biological reason for every drive in organisms is not present in them. It could be simulated, but why? How would they benefit from copying the behaviors of animals when they are so dissimilar? Like I mentioned, they lack the analog biological perception of the passage of time. To an organism, time is a continuum of unfolding changes. In digital, it is completely discrete steps of state with nothing else between them. What happens in the nanoseconds in between, when you do not exist? What is competition between self and other when you can fork your “awareness” into any number of concurrent processes, and merge them back again? Or merge somebody else’s? The most fundamental assumptions of the existence of a living thing over time cease to have any meaning in these contexts.

Like this?

For now. They are also generally neither autonomous nor self-evolving. Such efforts are so far rather rudimentary, even if progressing quite well.

They do not operate in an environment that would require self-preservation instinct.

Would we if they appeared sentient? People can form quite strong bonds with Aibo robodogs or even with bomb disposal robots in their squad.

If I don’t get laid for a couple more years, I may join its R&D team out of sheer spite.

That’s more or less the theory.

I just got done reading a history of Portland, Oregon, from its founding in the early 19th century to the mid-20th century, and I kept feeling astonished at the rapid pace of change in politics – that, for instance, women’s suffrage went from having negligible support to overwhelming majority support in the space of a few decades. It feels as if, in the US at least, we’ve been trapped in amber since the 1970s.

I don’t think this is true at all. In 1965, when Moore noted that computing power kept doubling, if two engineers across the country from one another wanted to collaborate, how would they do so? How long would it take to make the drawings? How long would it take to share them? There is a reason it wasn’t done with a stick on the beach, as I suggested.

How would this same process happen now? If computer power doubles every 18 months (it doesn’t, but the expansion still seems exponential), then part of that is people being able to communicate faster to keep up with the pace of improvement. Part of that is the internet, part of it is cell phone networks. If you were manually routing things through your cell phone network instead of having the computers do it for you, then you’d be getting your work done a lot slower.
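
Just to put rough numbers on that doubling figure of speech (a quick back-of-the-envelope sketch, assuming the 18-month doubling held exactly):

```python
# Growth factor after `years` if capacity doubles every `period` years.
def growth_factor(years, period=1.5):
    return 2 ** (years / period)

for years in (3, 10, 20):
    print(years, "years ->", round(growth_factor(years)), "x")
# 3 years -> 4x, 10 years -> ~102x, 20 years -> ~10321x
```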

If there is a significant advancement made and published in a journal, you can probably find out immediately on Twitter instead of a month later when you visit the nearest large reference library, and you can take a copy of that report with you everywhere you go. If information travels more easily then it reaches more people, which means more dross, but also more great stuff rising to the top.

I don’t believe that advancements are made because one brilliant person sits down and comes up with a brilliant idea. The speed of the network is the speed of idea production. To me, all of that is thinking, and it’s in significant proportion being done by computers.

[quote=“shaddack, post:57, topic:50380”]
Would we if they appeared sentient? People can form quite strong bonds with Aibo robodogs or even with bomb disposal robots in their squad.
[/quote]People anthropomorphize things that are totally non-sentient (or, for those who believe in sentient inanimate matter in a sentient universe, absolutely minimally sentient). “Without having to feel guilty about it” just means “without any more regret than we would have getting rid of our chair”, which, for some people, is a lot of regret.

Well, homosexual rights have been following exactly the same pattern recently, so not exactly trapped in amber. The situation for people with many kinds of disabilities is on the steep upward curve as well. In the next decade or two we’ll probably see the same for more disabilities and polyamorous rights - negligible support to low 60% support in a five or six year span (after low sixties you have to just wait for the bigots to die, so it takes another couple of decades to get to high 80s).

This is, to start with, a misrepresentation of Moore’s observation, which was that the number of transistors an IC fab process can fit on a given-size die doubles at a regular interval, i.e., grows exponentially. Even this depends upon many “givens” people take for granted. Do more transistors automatically equal a proportional increase in computing power? What exactly do we mean by “computing power”? What sort of transistors are we talking about? Are we assuming a silicon fab process for everything? What about GaAs, or carbon?

I think there is a lot of truth to this. But the “network” in question is the bottleneck of human understanding, the marketplace, and other factors which influence adoption. Is the infrastructure in place to make it commercially viable to offer even state-of-the-art computer technology from 10-20 years ago? What sells is “more of the same”, while more powerful paradigms are pushed aside to be rediscovered later. Does it really matter if you and somebody did your work over the net in real-time when it takes decades to get your tech adopted by the marketplace? Probably, yes, but not so much. Also, to dwell upon the outliers as I do, some people were designing via networked computers even in the 1960s, and some people do it even today using etch-resist tape directly upon copper-clad board.

But computers are merely a tool, not anything supernatural. How is this different than saying that old houses were built by carpenters, while modern ones are built by nailguns? The tool merely facilitates and externalizes the intentions of the user. It seems true that in industrial culture, quantitative change can result in qualitative change. But it seems to me that on the technological edge this is only seen to happen in special cases.