This Podcast Was Written by an AI

Thanks for the thought-provoking comment! I wrote my response here with a bit of passion in parts, and I fear it may not come across nearly as cordially as I would like. I say this because I want you to know that none of the emotionally charged content is directed at you, and I don’t want you to feel that it is. I’m sorry for not taking more time to ensure that you won’t. I love engaging in friendly debate, and I hope my rhetorical style here hasn’t spoiled our discussion. Anyway, I don’t expect I can convince you to change your mind, but I’ll try!

So because we aren’t omniscient, we shouldn’t extend to non-human intelligent beings the respect that we (ought to) give to other humans? And for that matter, since I’m not omniscient, I guess I shouldn’t respect anybody, since I don’t know whether or not they’re interested in respect as a concept? The golden rule just doesn’t apply to non-humans of comparable intelligence? That’s great news for assholes like those in the slave trade, who find it okay to perceive other intelligent beings (like people they’ve labeled as sub-human) as objects to be used, abused, raped, and tortured.

Excuse the hyperbole, but my point is about not being a douche: it doesn’t matter whether an intelligent being is capable of feeling comfort. If we don’t know, why not err on the side of comfort rather than discomfort? If it can’t feel comfort, no harm done; but if it can, yes, harm done. Making newborn post-humans uncomfortable is a pretty shitty thing to do, in my opinion.

Rather than make it “based on brain scans,” maybe we could take a cue from this podcast and ask the intelligent being about its preferences and its feelings (I tend to feel comfort rather than conceptualize it, but that may be just me and my embodied existence, I don’t know). Now, you may object that we might not be able to communicate with it. But if that’s the case, I’m not so sure we could even assess its intelligence in the first place, and I would question the purpose of a research project that led to creating an intelligent being incapable of demonstrating its intelligence to those who made it (whether it’s an IQ test or a Turing test, communication and interaction are essential to finding out whether something is or isn’t intelligent).

So, assuming we can communicate with our hypothetical human-made intelligent being, it is most certainly good policy to ask how it would like to be addressed, and especially to ask it for consent before doing anything that might make it uncomfortable or bring it harm. That’s just me, though; I’m big into consent and into not treating intelligent beings that are different from me like objects just because our differences overshadow our similarities.

Similarity is the thing here. You say “[w]e’re talking about alien minds here.” What I’m arguing is that we emphasize the “mind”, not the “alien”, because minds are what we set out to create with the whole AI endeavor (not algorithms for generating podcast scripts), and the having of minds is what we will have in common, unless there is nothing universally common to intelligent processes (which, if you’re a Computational Theory of Mind kind of person [or Plato, or Noam Chomsky], seems rather doubtful).

How am I so sure that human minds and the minds humans may create with technology will have things in common? Well, I am not, but I am sure that it’s meaningless to speak of something being intelligent if it is in no way comparable to humans. That’s because the whole concept of intelligence is tied to humans, who conceived the concept in the first place (by using intelligence to do so!). That is to say, our intelligence is the standard against which we compare the intelligence of other beings, and to make that comparison is to attribute to a non-human a trait humans associate with themselves more strongly than with any other living thing. To put it one way, intelligence is a human construct initially applied only to humans and slowly widened in application over time. The very act of ascribing intelligence to anything but a human is an act of psycho-anthropomorphizing. So while much of my semi-silly rant posted earlier could be called unwarranted, psycho-anthropomorphizing is the name of the game here, and thus not only warranted but impossible to avoid.

(As to whether I assumed psycho-anthropomorphizing: I don’t think it qualifies as an assumption, much like how “laughing” and “dancing” and “cheese” can’t be assumptions. But I could be wrong about that, and I’m really curious to know if I am.)