I rest my case.
Right, except we are talking about a hypothetical future time when robots can do those tasks, including programming. At least I am talking about that scenario.
So you acknowledge this isn't a new phenomenon. "What's gonna happen when technology replaces the majority of current jobs?" has been something humans have been struggling with for centuries, but we always seem to find new tasks to keep us busy.
The question is, why would that happen? That has always happened in the past because the poor are part of the rich world. The scenario posits a technology that actually removes the poor from the world of the rich. I don't know how realistic that is, because the rich probably won't invent this technology entirely by themselves, people need to understand it, and someone is just going to say, "Well, hell, we might as well share," because it will only take one person to do so with replicators. But the basic question of "what happens if the rich don't need the poor anymore?" is what makes people nervous (even if they don't put it that way). "It hasn't happened yet" isn't quite the same as "it could never happen."
Wait until AI starts kicking in. Assuming it does.
AI kicked in years ago; it's just that machine intelligence is different from human intelligence.
If you measure "intelligence" based on how quickly a man or machine can solve a math problem or find a list of relevant search results or play chess, then computers have been whooping our asses for a long time. As such, they've replaced or augmented careers in mathematics and archiving and countless other fields. But if you measure intelligence in terms of things like "designing a house someone actually wants to live in" or "educating children" or even "navigating unpredictable road conditions," then humans are still solidly in the lead.
To correct myself: there are a huge number of responses, and I think quite a few of them have good points, but yeah, half are just magicthink about what thinking is. Also, a lot of the time when people talk about AI they should be talking about AE, artificial ego, because that is what they really mean. I think we take it on faith that consciousness is the definition of, or at least necessary to, intelligence. So the ego thinks it's the most important part, eh? What a surprise that is!
This is a good example for me. People have driven cars while unconscious. We need to separate our idea of "thinking" from our idea of "being the big shot of the brain." Consciousness, in the best case for consciousness, is the CEO. It's not the one who does the actual work. The work (i.e., "thinking") can be done by our brains, computers, slime mold, or Rube Goldberg machines, and the CEO will pick between the thoughts and take all the credit.
I think AI is defined as "that thing we can do that computers aren't better at (yet)".
The world I'm worried about, which is the one I'm talking about, is the hypothetical future world defined by the characteristic that humans are not solidly in the lead over AI when it comes to things like "designing a house someone wants to live in."
[quote="digitalArtform, post:29, topic:50380, full:true"]The world I'm worried about, which is the one I'm talking about, is the hypothetical future world defined by the characteristic that humans are not solidly in the lead over AI when it comes to things like "designing a house someone wants to live in."[/quote]But what does that mean, exactly? That humans would be more willing to trust an imperfectly designed model rather than considering their own intuition? Because we lost that race a few centuries ago. I think my favorite example will probably remain that thing two years ago, when people decided to design sweeping austerity policies around a spreadsheet error.
And so it goes with "thinking machines". Decades from now we may very well have more advanced computers capable of doing many more things than they do right now, and some people will still be fretting over whether they are thinking or not. Sometimes I suspect that a lot of these "very big thinkers" would benefit from sitting down and taking a few programming courses.
It means that if you are rich you might live in a house designed by a human, or be diagnosed by a human, or be legally represented by a human, but most people won't be able to afford that, because they won't be employed as things like doctors and lawyers anymore, unless they happen to be one of the few human doctors employed by the 0.01%.
Well, then the 0.01% just has to convince everyone else they don't need a human doctor or human house designer, and make sure the illusion is good enough that no one notices. They could probably get away with that with even pretty terrible "AI," as long as the marketers do a sufficiently good job to make up for the deficiencies.
It will obviously be substandard, but for most people it will be machine-based or nothing. No trickery involved.
You want human service? No problem. All you have to do is be able to afford it. Good luck with that.
I thought we were talking about a future where machines were empirically better at these things than humans. Why would the rich pay more for worse outcomes? If the rich are paying doctors then the doctors will have money, which means they will want houses and TVs and cars, which means someone will build those, which is the society we have now.
The future I'm thinking of is one where medical advice would be dirt cheap if only the doctors weren't spending all their time subsistence farming on the lawns of the former suburbs.
I'm talking about a world in which the human lead on these things could not be described as solid.
First, there are plenty of people between "rich" and "poor," so it's not quite the perfect dichotomy you're making it out to be. Second, it sounds like you're saying that if the rich no longer need the poor, the poor will somehow starve to death or mysteriously disappear. That simply won't happen as long as it's possible to obtain food the old-fashioned way: farming. Third, the rich will always need the poor, if for nothing else as a source of cheap manual labor. Nothing says "rich" like having a bunch of human servants, especially when it would be cheaper to use a machine.
They must have been delicious.
Robots will never be able to do that type of programming. Human whims are way too subjective for computers to figure them out with enough accuracy and reliability to displace humans significantly. Humans will always be necessary to bring things to the next level.
"The curve reaches $1 million (a 40 inch high stack of $100 bills) one foot from the goal line."
"From there it keeps going up… it goes up 50 km (~30 miles) on this scale."
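For what it's worth, here's a quick back-of-envelope check of those quoted figures. This is just a sketch: the ~0.0043-inch bill thickness and the exact unit conversions are my own assumptions, not numbers from the thread or the original graphic.

```python
# Rough sanity check of the quoted "stack of $100 bills" scale.
# Assumption (mine, not from the thread): one US banknote is ~0.0043 inches thick.

BILL_THICKNESS_IN = 0.0043   # assumed thickness of a single $100 bill, in inches
BILL_VALUE = 100

# Height of $1 million stacked in $100 bills
bills_per_million = 1_000_000 // BILL_VALUE                 # 10,000 bills
stack_height_in = bills_per_million * BILL_THICKNESS_IN
print(f"$1M stack: ~{stack_height_in:.0f} inches")           # ~43 in, close to the quoted 40 in

# If 40 inches of height represents $1M, what does a 50 km spike represent?
inches_per_km = 39_370
spike_height_in = 50 * inches_per_km
dollars_at_spike = spike_height_in / 40 * 1_000_000
print(f"50 km on that scale: ~${dollars_at_spike / 1e9:.0f} billion")  # ~$49 billion

# And the parenthetical: 50 km is about 31 miles, i.e. "~30 miles" as quoted.
print(f"50 km = {50 / 1.609:.0f} miles")
```

So the quoted numbers hang together internally: roughly 40 inches per million dollars, and a 50 km spike on that scale works out to the tens-of-billions range.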
There are only about 500 billionaires in the US. How many former orthodontists will 500 people need to sweep the helipad?