Very big thinkers ponder: "What do you think about machines that think?"

I rest my case.

Right, except we are talking about a hypothetical future time when robots can do those tasks, including programming. At least I am talking about that scenario.

So you acknowledge this isn’t a new phenomenon. “What’s gonna happen when technology replaces the majority of current jobs?” has been something humans have been struggling with for centuries, but we always seem to find new tasks to keep us busy.

The question is, why would that happen? It has always happened in the past because the poor are part of the rich world. The scenario posits a technology that actually removes the poor from the world of the rich. I don’t know how realistic that is: the rich probably won’t invent this technology just for themselves, other people will need to understand it, and someone is eventually going to say, “Well, hell, we might as well share,” because with replicators it would only take one person to do so. But the basic question of “what happens if the rich don’t need the poor anymore?” is what makes people nervous (even if they don’t put it that way). “It hasn’t happened yet” isn’t quite the same as “it could never happen.”


Wait until AI starts kicking in. Assuming it does.

AI kicked in years ago; it’s just that machine intelligence is different from human intelligence.

If you measure “intelligence” based on how quickly a man or machine can solve a math problem or find a list of relevant search results or play chess then computers have been whooping our asses for a long time. As such, they’ve replaced or augmented careers in mathematics and archiving and countless other fields. But if you measure intelligence in terms of things like “designing a house someone actually wants to live in” or “educating children” or even “navigating unpredictable road conditions” then humans are still solidly in the lead.

To correct myself, there are a huge number of responses, and I think quite a few of them have good points, but yeah, half are just magicthink about what thinking is. Also, a lot of the time when people talk about AI they should be talking about AE, artificial ego, because that is what they really mean. I think we take it on faith that consciousness is the definition of, or at least necessary to, intelligence. So the ego thinks it’s the most important part, eh? What a surprise that is!

This is a good example for me. People have driven cars while unconscious. We need to separate our idea of “thinking” from our idea of “being the big shot of the brain.” Consciousness, in the best case for consciousness, is the CEO. It’s not the one who does the actual work. The work (i.e., “thinking”) can be done by our brains, computers, slime mold, or Rube Goldberg machines, and the CEO will pick between the thoughts and take all the credit.


I think AI is defined as “that thing we can do that computers aren’t better at (yet)”.


The world I’m worried about, which is the one I’m talking about, is the hypothetical future world defined by the characteristic that humans are not solidly in the AI lead when it comes to things like “designing a house someone wants to live in.”

[quote=“digitalArtform, post:29, topic:50380, full:true”]The world I’m worried about, which is the one I’m talking about, is the hypothetical future world defined by the characteristic that humans are not solidly in the AI lead when it comes to things like “designing a house someone wants to live in.”[/quote]But what does that mean, exactly? That humans would be more willing to trust an imperfectly designed model rather than considering their own intuition? Because we lost that race a few centuries ago. I think my favorite example will probably remain that thing two years ago, when people decided to design sweeping austerity policies around a spreadsheet error.

And so it goes with “thinking machines”. Decades from now we may very well have more advanced computers capable of doing many more things than they do right now, and some people will still be fretting over whether they are thinking or not. Sometimes I suspect that a lot of these “very big thinkers” would benefit from sitting down and taking a few programming courses.


It means that if you are rich you might live in a house designed by a human, or be diagnosed by a human, or be legally represented by a human, but most people won’t be able to afford that because they won’t be employed as doctors and lawyers anymore, unless they happen to be one of the few human doctors employed by the 0.01%.

Well, then the 0.01% just has to convince everyone else they don’t need a human doctor or human house designer, and make sure the illusion is good enough that no one notices. They could probably get away with that with even pretty terrible “AI” as long as the marketers do a sufficiently good job to make up for the deficiencies.

It will obviously be substandard, but for most people it will be machine-based or nothing. No trickery involved.

You want human service? No problem. All you have to do is be able to afford it. Good luck with that.

I thought we were talking about a future where machines were empirically better at these things than humans. Why would the rich pay more for worse outcomes? If the rich are paying doctors then the doctors will have money, which means they will want houses and TVs and cars, which means someone will build those, which is the society we have now.

The future I’m thinking of is one where medical advice would be dirt cheap if only the doctors weren’t spending all their time subsistence farming on the lawns of the former suburbs.


I’m talking about a world in which the human lead on these things could not be described as solid.

First, there are plenty of people between “rich” and “poor,” so it’s not quite the perfect dichotomy you’re making it out to be. Second, it sounds like you’re saying that if the rich no longer need the poor, the poor will somehow starve to death or mysteriously disappear. That simply won’t happen as long as it’s possible to obtain food the old-fashioned way: farming. Third, the rich will always need the poor, if for nothing else as a source of cheap manual labor. Nothing says “rich” like having a bunch of human servants, especially when it would be cheaper to use a machine.


They must have been delicious


Robots will never be able to do that type of programming. Human whims are far too subjective for computers to figure out with enough accuracy and reliability to displace humans in any significant way. Humans will always be necessary to take things to the next level.

–The curve reaches $1 million (a 40 inch high stack of $100 bills) one foot from the goal line.

–From there it keeps going up…it goes up 50 km (~30 miles) on this scale
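The scale above can be sanity-checked with a quick bit of arithmetic. A sketch: the ~0.0043-inch thickness of a US bill is my assumption, not something stated in the post.

```python
# Sanity-check the stack-of-$100-bills scale described above.
# Assumption: a US bill is roughly 0.0043 inches thick.
BILL_THICKNESS_IN = 0.0043

# $1 million in $100 bills is 10,000 bills.
bills_per_million = 1_000_000 // 100
stack_in = bills_per_million * BILL_THICKNESS_IN
print(f"$1M stack: about {stack_in:.0f} inches")  # ~43 in, close to the 40 in quoted

# At the quoted ~40 inches per $1M, how much money is a 50 km stack?
inches_per_km = 100_000 / 2.54          # 1 km in inches
dollars = 50 * inches_per_km / 40 * 1_000_000
print(f"50 km stack: about ${dollars / 1e9:.0f} billion")
```

So the 50 km figure corresponds to a fortune on the order of $50 billion, which is roughly the scale of the very largest individual fortunes, consistent with the curve described.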


There are only about 500 billionaires in the US. How many former orthodontists will 500 people need to sweep the helipad?
