doctorow at June 9th, 2014 18:00 — #1
ratel at June 9th, 2014 18:12 — #2
No, A 'Supercomputer' Did NOT Pass The Turing Test For The First Time And Everyone Should Know Better
Oh, and the biggest red flag of all. The event was organized by Kevin Warwick at Reading University. If you've spent any time at all in the tech world, you should automatically have red flags raised around that name. Warwick is somewhat infamous for his ridiculous claims to the press, which gullible reporters repeat without question. He's been doing it for decades. All the way back in 2000, we were writing about all the ridiculous press he got for claiming to be the world's first "cyborg" for implanting a chip in his arm. There was even a -- since taken down -- Kevin Warwick Watch website that mocked and categorized all of his media appearances in which gullible reporters simply repeated all of his nutty claims. Warwick had gone quiet for a while, but back in 2010, we wrote about how his lab was getting bogus press for claiming to have "the first human infected with a computer virus." The Register has rightly referred to Warwick as both "Captain Cyborg" and a "media strumpet" and has long been chronicling his escapades in exaggerating bogus stories about the intersection of humans and computers for many, many years.
daneel at June 9th, 2014 18:13 — #3
Take everything associated with Kevin Warwick with a pinch of salt (Hi Kevin!).
I'm sure I've heard of software 'passing' the Turing Test before, and I'm not convinced that creating a decent chatbot really has all that much to do with AI.
Cool software, though.
skeptic at June 9th, 2014 18:18 — #4
so you could think of this as kind of a cheat,
Or you could think of it as an actual cheat. New headline:
Publicity Hound Creates Fake Turing Test - Fools a Couple of Celebrities.
jonathanpeterso at June 9th, 2014 18:28 — #5
The transcripts I found in a couple articles today were unimpressive at best.
blendergasket at June 9th, 2014 18:42 — #6
Cory, your link to the bot appears to be broken. This one works: http://default-environment-sdqm3mrmp4.elasticbeanstalk.com/
I tried using it and I seem to have angered it. I was being very polite, but it misunderstood what I said and got super aggressive about it. When I said something that was just an acknowledgement, or "huh?" when what he said didn't make sense, he couldn't figure it out and spat out random stuff, some of it hostile. He also couldn't respond properly if I answered a question with more than one emotion in it, like "I like my job but dealing with clients is a pain in the ass."
So, all in all, I think he's got a ways to go before he can parse human communication properly.
david_guilbeaul at June 9th, 2014 19:06 — #7
Thanks for the fixed link. It reminded me of "Racter", in that it quoted back my own input and used evasion and misdirection. Not much progress for forty years.
winkybber at June 9th, 2014 19:16 — #8
I just chatted with it. Not a chance that it fooled anyone. Repetitive and, as blendergasket said, it gets quite aggressive.
On the other hand, it would perhaps make a good (or at least typical) BB poster.
shuck at June 9th, 2014 19:34 — #9
That's a kind way of putting it. It seems to me that the human was going out of their way to avoid unmasking the chatbot as a chatbot. Softball questions, ignoring obvious chatbot responses, etc. I'm not sure if they were deliberately throwing the test or just weren't serious about it, as it wasn't even a particularly good or convincing chatbot and shouldn't have fooled anyone.
Not to mention, deliberately creating impediments to communication goes against the point of the Turing test, which is about what you learn from communicating with someone (or something). If you can't communicate, you can't tell if they're intelligent. This was "cheating" the test in the sense that the bot was designed to distract people from noticing it was engaging in the test in the first place.
daneel at June 9th, 2014 19:39 — #10
This is how one of the judges (Robert Llewellyn, of Red Dwarf) saw it. I think the online version is an old one?
shuck at June 9th, 2014 19:42 — #11
The aggressiveness seems to be a means of distracting people from the fact that it can't parse what you said, in the hopes that you'll be so thrown off that you won't notice. It doesn't seem to have a coherent response to almost anything I write.
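That deflection strategy is essentially the classic ELIZA-style trick: match a keyword if you can, and fall back to canned (often evasive or hostile) replies when parsing fails, so the human blames themselves for the non sequitur. A minimal sketch of the pattern, with all keywords and replies invented for illustration (none of this is Goostman's actual code):

```python
import random

# Hypothetical keyword -> canned-reply table; a real bot's would be far larger.
KEYWORD_REPLIES = {
    "job": "Work, work, work. Let's talk about something more interesting.",
    "car": "There are many cars here, you know.",
}

# Deflections used when nothing matches: the "aggressive" fallback commenters
# describe, which papers over the fact that the input wasn't parsed at all.
FALLBACKS = [
    "Why do you keep asking me silly things?",
    "I asked you first. What do YOU think?",
    "You are trying to confuse me, and I don't like it.",
]

def reply(user_input: str, rng: random.Random = random) -> str:
    """Return a canned reply for a matched keyword, else a random deflection."""
    words = user_input.lower().split()
    for keyword, canned in KEYWORD_REPLIES.items():
        if keyword in words:
            return canned
    return rng.choice(FALLBACKS)
```

Note that the bot never models what was said; any sentence with no keyword, however simple, lands in `FALLBACKS`, which is exactly the behavior commenters here report.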
chgoliz at June 9th, 2014 20:46 — #12
I tried it out a day or two ago, and NONE of my questions were answered in a way that suggested anything other than a computer program. It's actually easier to tell human from machine when the machine is pretending to be a 13-year-old boy. I know they thought the persona would make any errors seem like immaturity rather than machine limitations, but as a parent I had no trouble asking questions that were clearly not being answered by an actual 13-year-old. Things like "what's your favorite car?" returned an answer along the lines of "there are many cars here". Yeah, no.
Maybe it works for testers who have never spent time around 13-year-olds.
archvillain at June 9th, 2014 20:52 — #13
You're right, but the Turing Test itself doesn't have much to do with AI either, and everything to do with chatbots. (I guess that's your point: the test doesn't test for thinking, it tests for human experience and human psychology.) Regardless of how intelligent a computer is, it doesn't have lips, so it can't experience a first kiss (or anything else), which makes it easy to distinguish from a human. It can never pass the test unless it takes human descriptions of human experiences and fobs them off as its own. In other words, a machine can't pass the test unless it's a chatbot.
echolocatechoco at June 9th, 2014 20:56 — #14
Ian Bogost wrote an interesting piece about Turing and what the Turing Test means.
I think the test is more interesting as a philosophical exercise in what "intelligence" actually means than actually as a real test.
daneel at June 9th, 2014 21:02 — #15
Yeah, I don't think much of the Turing Test. I think it was a thought exercise more than anything.
It's kind of like using the Bechdel Test as a measure of feminism.
knappa at June 9th, 2014 21:24 — #16
To be fair, real people do that too.
nungesser at June 9th, 2014 21:44 — #17
I really wish this wasn't on BoingBoing.
No, it didn't "pass the Turing Test". It's a really, really crappy chatbot that wouldn't fool anyone, but the creators knew that, and told the judges to evaluate it on the basis of it being a 13-year-old Ukrainian child with a poor command of English.
Give it a try. It'll ask you again and again what you do for a living.
A lot of news orgs are all excited to announce that a "computer just passed the Turing Test and fooled judges into thinking it's a human, and that's a huge milestone in AI, etc. etc.", but I guarantee that Skynet/Weyland-Yutani doesn't start with a barely coherent fake Ukrainian teenager.
bolamig at June 9th, 2014 21:53 — #18
Infinite monkey redux: the more often the test is held, the higher the likelihood you end up with a judge who is a monkey.
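The point above can be made quantitative: if each judge-session independently mistakes the bot for a human with probability p, then over n sessions the chance of at least one "pass" is 1 − (1 − p)^n, so repetition alone inflates the headline result. A quick sketch (the per-judge probability here is purely illustrative, not the event's actual figure):

```python
def prob_at_least_one_fooled(p_per_judge: float, n_judges: int) -> float:
    """Probability that at least one of n independent judges is fooled,
    given each is fooled with probability p_per_judge."""
    return 1.0 - (1.0 - p_per_judge) ** n_judges

# Even a weak bot that fools any given judge only 10% of the time
# is more likely than not to fool someone across 10 sessions.
print(round(prob_at_least_one_fooled(0.10, 10), 3))  # 0.651
```

This is the same reason a "30% of judges were fooled" criterion gets easier to hit the more judges and rounds an event runs.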
plainsman at June 9th, 2014 22:45 — #19
No, not so much. I would reference Paul Myers' take on this:
avunculoid at June 9th, 2014 23:14 — #20
Extremely disappointing that an SF author and technologist would be taken in by such a transparent sham. I don't expect any better from the Daily Mail, but c'mon Cory, this is embarrassing. What this chatbot did was exceedingly mediocre and wouldn't fool anyone actually trying to probe it seriously. Unlike other posters, I believe the Turing test is meaningful: if I can have an open-ended, free-wheeling conversation with a software program in which I try to unmask it for as long as I want, and fail, then that is actually impressive. This, on the other hand...