Why the FBI would be nuts to try to use chatbots to flush out terrorists online

Originally published at: http://boingboing.net/2016/12/08/why-the-fbi-would-be-nuts-to-t.html


The only proper use of chatbots is to encourage masturbation for lonely men online.


Because the number of people saying crazy shit is much higher than the number of them actually doing any of it?


Also: encouraging crazy shit, which is what chatbots would inevitably do, is not a good plan.


These bots would be reduced to identifying “terrorism-like activity” (whatever that is).

So, pretty much the same as any law enforcement effort, but with less actual intelligence?


It runs up against the basic problem that all of these schemes do. Let’s say (generously) that one person in 1,000 is a terrorist. Assuming your AI chatbots are an amazing 99% accurate at identifying terrorists (that is, much better than any person could ever be), you still have ~10 false positives for every successful ID. In reality, those numbers would be much more skewed. This is another case of shady grifters trying to sell snake oil to people who aren’t capable of thinking quantitatively, and thus shouldn’t be making important decisions.
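A minimal sketch of that base-rate arithmetic, with the 1/1000 and 99% figures treated as assumptions (99% is taken to mean both the true-positive and the true-negative rate):

```python
# Base-rate problem: a rare condition plus an imperfect detector
# yields far more false positives than true positives.
population = 1_000_000
base_rate = 1 / 1000          # assumed fraction of actual terrorists
accuracy = 0.99               # assumed true-positive and true-negative rate

actual = population * base_rate
true_positives = actual * accuracy
false_positives = (population - actual) * (1 - accuracy)

print(f"true positives:  {true_positives:,.0f}")
print(f"false positives: {false_positives:,.0f}")
print(f"false alarms per real hit: {false_positives / true_positives:.1f}")
```

With these numbers, roughly ten innocent people are flagged for every actual terrorist; drive the base rate down and the ratio only gets worse.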


Aside from the reasons already mentioned for why this is a terrible plan, it strikes me that trying to use chatbots to catch people is putting yourself on the wrong side of an asymmetric struggle.

If you induce a person to suspend disbelief, some surprisingly rudimentary systems can keep them busy for quite a while (e.g. Tamagotchis, Furbies, ELIZA, whatever is behind that attractive stock photo that allegedly wants to hook up with me and lives conveniently nearby, etc.). Sometimes they just don’t care that the bot isn’t human; sometimes a bit of wishful thinking, or interests confined to a very specific subject, keeps them from probing too hard and allows a relatively small library of well-tuned responses to do the job.

If, however, someone is inclined to be a bit suspicious (as is reasonable, if you are chatting with an unknown party about doing something notably illegal), the problem is vastly harder. At a minimum, you’d need to be able to pass a somewhat adversarial Turing test quite reliably, and/or build rapport so effectively that the person you are talking to will like/trust/otherwise affectively glom on to you hard enough to let their guard down. Unless somebody is keeping really quiet about it, we don’t have anything nearly that sophisticated on the table. Team Computational Linguistics is having a good day if they can handle my use of a vague pronoun reference, or about a zillion other sloppy and ambiguous things that somehow mostly don’t keep us from understanding each other, much less do all that and be my cyber-jihad buddy.

So, your ‘terrorist detector’ bot would need downright uncanny abilities just to get to the point where we can start talking about the dubious wisdom of mass-scale incitement and entrapment. On the other hand, y’know what would be amenable to a much simpler bot (and a vastly more plausible one, quite possibly even ‘off the shelf’ with a bit of retraining for something that isn’t advertising or customer ‘service’)? A ‘terrorist’ bot.

Dubious grammar? Sorry, English not his first language. Inability to talk plausibly about general-knowledge topics/random human chitchat? He’s just here because he wants to learn more about jihad; and he’s cautious about talking about his personal life and not interested in talking about the weather.

That seems like starting a fight you really, really aren’t going to have a good time winning. Emulating a wannabe terrorist looking to do a little self-radicalizing, at least well enough to waste a nonzero amount of some relatively expensive spook’s time, looks like a problem amenable to the sorts of ‘agents’ that we are already seeing in commercial use for selling stuff, spying on consumers, and providing bad customer service. Emulating the mentor/co-conspirator that a wannabe terrorist is looking for, on the other hand, is going to need something considerably more impressive even to reach the point where the “Are you sure this isn’t a terrible idea?” level of criticism becomes relevant.


I heard on a podcast yesterday that the FBI runs around 50% of all child pornography sites on the dark net as honeypots. I don’t have another source on it, but I would love to know if it is true; absolutely mind-warping stuff.

Edit: Wowza. Here we go:


Ah, but wouldn’t it be hand-wringingly devious and wonderful if the chatbots wound up incriminating each other?


It’s all fun and games until they actually mastermind a successful attack…


Next on the docket “United States vs. One of its own damn servers”? Get out of my courtroom…


To be fair, that’s no mean feat in and of itself.

At 99% accuracy and 1/1000 terrorists in a population of 330,000,000 you’d wind up with

  • 330,000 actual terrorists (which is a clue straight away that 1/1000 really is very generous)
  • 3,300 actual terrorists would NOT be identified by the 'bot
  • 3,296,700 innocent people would be caught in the dragnet
    As a first pass filter, though, you will have dropped the ratio of terrorists in the “interesting” pool from 1/1000 to 1/10

Dialing that down a little, still assuming 99% accuracy but only 1/100,000 terrorists in a population of 330,000,000 you’d wind up with:

  • 3,300 actual terrorists
  • 33 of them would NOT be identified by the 'bot
  • 3,299,967 innocent people would be caught in the dragnet
    As a first pass filter, you have decreased the ratio of terrorists in the “interesting” pool from 1/100,000 to 1/1010
    Note also that as the number of actual terrorists declines, more innocent people will get caught in the dragnet.

Let’s drop the accuracy to a more plausible 90%, still with only 1/100,000 terrorists in a population of 330,000,000. Now you’d wind up with:

  • 3,300 actual terrorists (again)
  • 330 of them would NOT be identified by the 'bot
  • 32,999,670 innocent people would be caught in the dragnet
    That’s fuck-all wheat in a shitload of chaff. There’s still a large number of terrorists roaming around, but you have decreased the ratio of terrorists in the “interesting” pool from 1/100,000 to 1/11,111.

All this effort and money would be better spent in so many other ways. Making sure we are safe from terrorism on the back end is an expensive game of whack-a-mole.
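The three scenarios above can be reproduced with one small function (a sketch; as in the figures above, “accuracy” is treated as both the true-positive and true-negative rate, and the “interesting” ratio is terrorists caught per innocent flagged):

```python
def dragnet(population, terrorist_rate, accuracy):
    """Return (terrorists missed, innocents flagged, terrorists per innocent flagged)."""
    terrorists = population * terrorist_rate
    innocents = population - terrorists
    caught = terrorists * accuracy
    missed = terrorists - caught
    innocents_flagged = innocents * (1 - accuracy)
    return missed, innocents_flagged, caught / innocents_flagged

# The three scenarios from the comment above, in order.
for rate, acc in [(1 / 1000, 0.99), (1 / 100_000, 0.99), (1 / 100_000, 0.90)]:
    missed, flagged, ratio = dragnet(330_000_000, rate, acc)
    print(f"missed: {missed:,.0f}  innocents flagged: {flagged:,.0f}  "
          f"ratio in pool: 1/{1 / ratio:,.0f}")
```

Running it reproduces the 1/10, 1/1010, and 1/11,111 ratios quoted above, give or take rounding.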


That’s something only a terrorist would say. And you can trust me because I’m a chatbot.


I imagine a future where the FBI chatbots are turning in the NSA chatbots, which are turning in the CIA chatbots, which are turning in the FBI chatbots…

And around and around it goes…


What ends up saving it from being a fully virtual stack of futility will be the fact that, if improperly sandboxed, a chatbot could actually be capable of perpetrating electronic attacks (and even a properly confined one could do fundraising, organization, or recruiting). Yes, the properly designed ones would be restricted to ingesting and emitting strings; but it’s not exactly news when complex software develops an excursion from its intended behavior.

You would need a pretty advanced system to get that sort of emergent behavior; but you would need a pretty advanced system to do this job in the first place, and it would need to crawl the web pretty frequently to stay up to date on news, trivia, memes, etc. So you’d be talking about a very complex system interacting with a large body of information and trying to convince assorted malefactors that it is one of them. It wouldn’t be entirely unbelievable if one ended up reading up on SQL injection and doing a bit of defacing in order to show its sincerity.

For the security people, those numbers are just great. They’ll never get fired for catching too many terrorists, and if Guantanamo has taught us anything, it’s that actual guilt is not the number one question.


There has been some criticism of cases where the FBI has nabbed people before they did bad things, because the agents THEMSELVES were the ones in contact with that person and ended up persuading/pushing them to do said bad things.


That seems overly generous. With 3,300 terrorists, even if each only made 1 single attack in a 20-year career of terrorism, we would see an average of 165 terrorist attacks per year. Given that the number we actually see per year, including those ‘thwarted’ (where people are arrested for plotting or planning), is closer to 3, something in the realm of 66 terrorists would seem a more likely number (2/10,000,000).

Assuming 90+% accuracy for the software seems quite generous too. We don’t yet have hyperspace quantum supercomputers powered by unobtanium and mind-melded to empaths. What if we considered 30% accuracy (still probably generous)?

At 30% accuracy, with 2/10,000,000 terrorists in a population of 330,000,000, you’d wind up with:

  • 66 actual terrorists
    • 20 identified
    • 46 not identified
  • 329,999,934 innocent people, of whom:
    • 230,999,954 would be caught in the dragnet and subjected to interrogation without reason

So you’d actually be worsening the ratio, to about 1/11,550,000 in the flagged pool, while falsely accusing over 2/3 of the population and leaving over 2/3 of the terrorists unidentified. (Unless I’ve got my math wrong.) At that point, they’d literally have better odds checking all the people who weren’t identified by the system.
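A quick check of that last claim, with the comment’s figures taken as assumptions (66 terrorists in 330 million, a 30% hit rate on terrorists, and, implicitly, a 70% false-positive rate on innocents):

```python
population, terrorists, hit_rate = 330_000_000, 66, 0.30
innocents = population - terrorists

# Everyone the detector flags: true hits plus false positives.
flagged = terrorists * hit_rate + innocents * (1 - hit_rate)
unflagged = population - flagged

# Odds that any given person in each pool is actually a terrorist.
odds_flagged = flagged / (terrorists * hit_rate)
odds_unflagged = unflagged / (terrorists * (1 - hit_rate))

print(f"flagged pool:   1 in {odds_flagged:,.0f}")
print(f"unflagged pool: 1 in {odds_unflagged:,.0f}")
```

The unflagged pool comes out several times “richer” in terrorists than the flagged one, so yes: ignoring the detector and checking the people it didn’t flag really would give better odds.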

Of course, that presupposes that it’s surreptitious enough to not get noticed by masses of online trolls, who could bring it down to 0% accuracy within a couple of hours.

Thoughtcrime is the word you’re looking for.