So… Any sufficiently fed cow is indistinguishable from AI?
Assuming it’s spherical.
It’s good to be reminded of that!
Good points. I guess I was using it merely in the sense that no one understands society, it’s beyond our control, and it may destroy us. Indeed, our models of it are lacking. Why is anyone worried about an AI that might take off, surpass our understanding, and do bad things, when that is already happening on a vast scale in a much bigger system, one that could itself be seen as an AI?
I just worry that we might no longer be here to gather the data for the new models you suggest.
FTFY good sir.
While I definitely think it’s a good thing to hear opposing viewpoints, I squarely disagree with this one.
Not being able to make sense of something that has never played out before is not a sign that you’re irrelevant in your field. In fact, the concern that AI’s potential problems be uncovered and understood both acknowledges the possibility that our collective judgment might be lacking and, if one is to take responsibility for that, necessitates some postulation about the subject, if only to frame a conversation that can become constructive.

It takes both dismissive pieces by sci-fi authors like this one and thoughtful, if sometimes scary, analysis by the so-called “smart people” (who also tend to be the most knowledgeable in the subject matter) for humanity to understand what we want our relationship with new technologies like AI to be. Dismissing a knowledgeable segment of the population for a belief you disagree with, rather than acknowledging its part in the greater conversation, is like shoving your way through the “alarmists” trying to reason with you, right to the front of the first atom bomb test. It makes for great clickbait, or more book sales perhaps, but it isn’t thought leadership.
I think the author is mistaking AI “Alarmists” for AI proponents, or other futurists. It is the alarmists who are urging caution, not bowing at the altar of some new “All-In” invented religion.
PS. A self-cleaning toilet has nothing to do with AI.
I guess you discount any notion of embodied cognition and don’t regularly clean your toilet.
True, but…
- Who has the most incentive and ability to put serious resources into artificial general intelligence? The incentives of a large corporation, or a military, are not aligned with humanity as a whole.

- If the system is smarter than you, it can trick and deceive you. If it learns, you cannot assume its behavior in a testing environment will match its behavior in the real world.
Also, from the linked talk: Premises 4 and 5 are unnecessary, 1 and 3 should be uncontroversial, and 2 should seem at least very likely to most of us. 6 is definitely unclear, and it’s that uncertainty that causes the worry.
I think it’s interesting that the speaker is complaining about the many assumptions “AI alarmists” make when the unfriendly superintelligence argument boils down to, “Our assumptions about minds are based on humans and won’t necessarily hold for AI, and the potential downside might be really bad and possibly unrecoverable so let’s be careful.” And by “really bad” I don’t mean human extinction. I mean human extinction + AI controls the whole Earth forever preventing evolution of other intelligent species here + it begins expanding outward into space and also prevents evolution of intelligent life elsewhere.
I kind of like how this thread has devolved into competing comic strips…
This topic was automatically closed after 5 days. New replies are no longer allowed.