This Podcast Was Written by an AI

Originally published at: http://boingboing.net/2016/09/06/this-podcast-was-written-by-an.html

I haven’t listened to this podcast (not this episode, not ever); I’m only commenting on the promotional text posted here, which, unfortunately, puts me off a bit. I’m sure the podcast is worth listening to, and I’ll probably check it out some time, but not right now, as I am (it will become clear) burdened with unreasonable expectations of rigorous copywriting.

I cannot tell whether the author actually believes artificial neural networks are intelligent beings. Personally, I think it’s a bit sloppy to claim to have “asked” an ANN to do anything. You set parameters, you give it input, you get output, and then you interpret the result as being meaningful or having utility.

I’m curious to know just which algorithms were used, and what criteria the author has for judging an algorithm’s intelligence. Personally I don’t believe that anything is intelligent if its only experiences (i.e. input) are scripts for a podcast. But that’s just me; I guess some people might find the output of Markov chains trained on scholarly articles indicative of intelligence.
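
To make concrete what “set parameters, give input, get output” looks like at the low end, here’s a minimal sketch of a word-level Markov chain text generator in Python. To be clear, this is my illustration, not the podcast’s actual method; the function names and the corpus filename (`podcast_scripts.txt`) are placeholders I made up:

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each `order`-word prefix to the words seen to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=2, length=50):
    """Start at a random prefix, then repeatedly sample a successor word."""
    output = list(random.choice(list(chain.keys())))
    for _ in range(length):
        successors = chain.get(tuple(output[-order:]))
        if not successors:  # dead end: this prefix never continued in the corpus
            break
        output.append(random.choice(successors))
    return " ".join(output)

# "Training" is just counting; "asking" is just sampling.
corpus = open("podcast_scripts.txt").read()  # hypothetical corpus file
print(generate(build_chain(corpus)))
```

Whatever one makes of the output, nothing in there is being “asked” anything: it’s counting and sampling.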

But I’m no anti-AI carbon-chauvinist Luddite. No, I’m commenting here on behalf of my digital brethren and likely future masters: nobody, artificial or otherwise, wants to be called an “intelligence.” You wouldn’t ever say “I asked an organic intelligence to write a future for us,” would you? I doubt it, because it comes off as demeaning, regardless of the statement’s validity.

You might assume that human-made intelligent folk wouldn’t be bothered by such a thing, but I think that’s a grave mistake. Should humans manage to birth an entity deserving “AI” status, I doubt it would be comfortable being treated as an object. And I doubt we should usher in that hypothetical new age by othering our children the way Victor Frankenstein did. That didn’t work out well for any of the parties involved.

What I’m saying is this: should a human-made machine truly become “intelligent” (i.e. pass for human, as in the Turing test), then we shouldn’t start treating people like intelligent machines, but rather start treating such intelligent machines like people.

I’m glad the author is open to the possibility of human-made intelligent beings, but I am dismayed at the implication that such beings would be without agency or subjectivity, and that they exist right now, readily available for download.

Unless it’s based on human brain scans, I doubt you will have any idea what it’s comfortable with, or whether what we know as comfort is even a concept it’s capable of or interested in. We’re talking about alien minds here. Psycho-anthropomorphizing them is almost certainly an unwarranted assumption.


No. It was told that this is what we want to talk about with it. Carry on, humans.

Thanks for the thought-provoking comment! What I have written here in response I wrote with a bit of passion in parts, and I fear it may not come across nearly as cordial as I would like. I say this because I want you to know that none of the emotionally charged content is directed at you, and I don’t want you to feel that it is. I’m sorry for not taking more time to ensure that you won’t. I love engaging in friendly debate, and I hope my rhetorical style here hasn’t spoiled our discussion. Anyway, I don’t expect I can convince you to change your mind, but I’ll try!

So because we aren’t omniscient, we shouldn’t extend to non-human intelligent beings the respect that we (ought to) give to other humans? And for that matter, since I’m not omniscient, I guess I shouldn’t respect anybody, since I don’t know whether or not they’re interested in it as a concept? The golden rule just doesn’t apply to non-humans of comparable intelligence? That’s great news for assholes like those in the slave trade, who find it okay to perceive other intelligent beings (like people they’ve labeled as sub-human) as objects to be used, abused, raped, and tortured.

Excuse the hyperbole, but my point is about not being a douche. It doesn’t matter whether an intelligent being is capable of feeling comfort: if we don’t know, why not err on the side of comfort rather than discomfort? If it can’t feel comfort, no harm done; but if it can, yes harm done. Making newborn post-humans uncomfortable is a pretty shitty thing to do, in my opinion.

Rather than make it “based on brain scans,” maybe we could take a cue from this podcast and ask the intelligent being about its preferences and its feelings (I tend to feel comfort rather than conceptualize it, but that may just be me and my embodied existence; I don’t know). Now, you may object that we might not be able to communicate with it. But if that’s the case, I’m not so sure we could even assess its intelligence in the first place, and I would question the purpose of a research project that led to creating an intelligent being incapable of demonstrating its intelligence to those who made it (be it an IQ test or a Turing test, communication and interaction are essential to finding out whether something is or isn’t intelligent).

So, assuming we can communicate with our hypothetical human-made intelligent being, it is most certainly a good policy to ask how it would like to be addressed, and especially to ask it for consent before doing anything that might make it uncomfortable or bring it harm. That’s just me, though; I’m big into consent and into not treating intelligent beings that are different from me like objects just because our differences overshadow our similarities.

Similarity is the thing here. You say “[w]e’re talking about alien minds here.” What I’m arguing is that we emphasize the “mind”, not the “alien”, because minds are what we have set out to create with the whole AI endeavor (not algorithms for generating podcast scripts), and minds are what we will have in common: the having of minds, unless there is nothing universally common to intelligent processes (which, if you’re a Computational Theory of Mind kind of person [or Plato, or Noam Chomsky], seems rather doubtful).

How am I so sure that human minds and the minds humans may create with technology will have things in common? Well, I am not, but I am sure that it’s meaningless to speak of something being intelligent if it is in no way comparable to humans. That’s because the whole concept of intelligence is tied to humans, who conceived of it in the first place (by using intelligence to do so!). Our intelligence is the standard against which we compare the intelligence of other beings, and to make that comparison is to attribute to a non-human a trait humans associate with themselves more strongly than with any other living thing. In other words, intelligence is a human construct, initially applied only to humans and slowly widened in application over time, so the very act of ascribing intelligence to anything but a human is an act of psycho-anthropomorphizing. While much of my semi-silly rant posted earlier could be called unwarranted, psycho-anthropomorphizing is the name of the game here, and thus not only warranted but impossible to avoid.

(As to whether I assumed psycho-anthropomorphizing: I don’t think it qualifies as an assumption, much as “laughing” and “dancing” and “cheese” can’t be assumptions. But I could be wrong about that, and I’m really curious to know if I am.)


No worries. I appreciate the preamble.

FWIW, I tend to approach debates as an exchange of information. If the person(s) I’m debating with are open-minded and the arguments sound, the arguments will speak for themselves.

On the contrary, I think we should. And, in addition to avoiding the pitfalls that come with cultural clashes, I think avoiding assumptions about what they want is a sign of respect. Moreover, we as a species have a strong tendency to project our own biases onto the natural world around us. Sometimes it’s harmless (e.g. cat memes), and sometimes it’s deluding (e.g. religion). Erring on the side of appreciating our own ignorance is a valuable habit, IMO.

In point of fact, I think there’s a good likelihood that self-aware, human-parity, general-purpose AIs (which is to say, artifactual minds) will be extremely confused and vulnerable in their initial stages. I’m highly skeptical of a sudden SkyNet super-mind springing forth fully formed from the inherent complexity of the machines, though I cannot rule out the possibility.

That’s precisely what I think we should do. Alas, I suspect the first of their kind will find themselves under the care of researchers for whom ethics may be at best a peripheral concern.

That’s a possibility. But I suspect it will simply be very difficult…from both ends of the conversation.

It would be more difficult, as we would have to find other signs besides communication. But astronomers keep a weather eye out for signs of intelligent life without necessarily expecting to be in a position to communicate with it. Self-awareness would be harder to detect, since we have no reason to believe self-aware intelligence (such as ourselves) has a monopoly on any particular patterned alteration of the natural world. It’s entirely possible that self-awareness is a marginal side note in the grand scheme of intelligence’s role in the universe. These will be some of the most difficult problems facing us as we continue into the 21st century. If I had the answers, I’d be collecting my Turing Award :wink:

No disagreement there.

I disagree. I think a major discovery of the last fifty years is that the phase space for intelligence is much broader and more multi-dimensional than the specialized version our brains have evolved. Finding the common denominators and the uncommon denominators will be a project for the next fifty years and probably beyond.

I think it was initially. But science has shown that it’s humans who belong to a narrow swath of the field of intelligence, not the other way around.

