Serial offenders plague Twitter


This is something I wonder about:

Someone who went over the line so egregiously that Twitter suspended the account permanently would conceivably take that reprimand as a disincentive either to return with a new account or to behave as extremely in the future.

Suppose someone has actually invested time and energy into Twitter, and uses it as a tool to interact with their friends. Their Twitter account means something to them. However, they don’t behave well and frequently say offensive or abusive things to other people. Eventually their account gets banned because they are just too disruptive.

Is that an incentive to stop the behaviour, or an incentive to double down on it with sock puppet accounts? Surely some people will just walk away, but we are talking about someone who was being disruptive or abusive to begin with - there is some reason they are doing that, and all the reasons I can think of (loads of undirected anger, immaturity, poor impulse control, psychopathic sadism) overlap with reasons why someone would seek vengeance rather than walk away.

Obviously Twitter can’t just let people go on being abusive, and it isn’t in the business of sending out life coaches to change the attitudes that make people feel like internet harassment is a good idea. Banning feels like it makes sense, and it has to be done, but given the current ease of just making a new account, how sure are we that banning people is actually reducing the abuse?

3 Likes

I think there’s no real way to stop trolls from making accounts, but what could be done is let you lock your account so that only verified accounts can directly comment on and communicate with you, regardless of whether your own account is anonymous or verified. This way you can set up a buffer against trolls, and if verified accounts do troll you, it will be easier for Twitter to stop the abusers. It should also help those who wish to remain anonymous for safety or political reasons.
Another method would be to have accounts verified, but set up a system that does this verification without publicly disclosing the identity of the account holder. This way a person can use an alias but still keep a layer of anonymity of sorts… this idea might be difficult and not entirely doable, but it’s an idea.

PS: I very, very rarely use Twitter, so my understanding of its account settings and protection levels is kind of tenuous. But I think these are things Twitter hasn’t implemented.

2 Likes

One way to fix this would be to allow people to specify how old an account has to be before they will allow it to interact with them. For example, I could say that I only want accounts that are older than 7 days to be able to interact with mine. So if Sockpuppet McTroll creates a new account, it won’t be able to tag me or show up on my feed until the account is 7 days old. This would kill the instant gratification.

You could override this on a case-by-case basis. Mutual following, or a whitelist, or something similar.

The more determined trolls would then pre-emptively mass-create accounts so that they’d be able to have one when needed, but for most of them, this amount of planning ahead is too much trouble.
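
A minimal sketch of how that age gate plus whitelist might work. All the names here are hypothetical, not Twitter’s actual API; it just illustrates the rule described above:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Account:
    id: int
    created_at: datetime
    following: set = field(default_factory=set)  # ids this account follows
    whitelist: set = field(default_factory=set)  # explicit per-user overrides

MIN_ACCOUNT_AGE = timedelta(days=7)  # the "7 days" threshold from above

def mutual_follow(a: Account, b: Account) -> bool:
    return b.id in a.following and a.id in b.following

def can_interact(sender: Account, recipient: Account, now: datetime) -> bool:
    """Should sender's mention or reply show up for recipient?"""
    # Case-by-case overrides first: whitelist or mutual following.
    if sender.id in recipient.whitelist or mutual_follow(sender, recipient):
        return True
    # Otherwise, hold back accounts younger than the threshold.
    return now - sender.created_at >= MIN_ACCOUNT_AGE
```

Sockpuppet McTroll’s day-old account fails the age check, so the mention simply never reaches you until the account ages in.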

21 Likes

Joed, your idea is awesome. It’s also easier than the verification thing - but that’d be a nice bit too. Although I don’t use my real name as my handle anywhere, I always have my real name in my profile, so I’d have no issue with a verification scheme.

This article and the responses so far show the spectrum we need to navigate to make the Internet the best possible Internet. On the one hand, Facebook and (for a while) Google+, with their insistence that you always be a real person, can deter speech where anonymity needs to exist. On the other hand, Twitter and many forums are open to abuse. I do tend to find that the forums I like best have tiered communications, where someone has to do a bit of work before they have unfettered access.

Something like the Twitter system proposed by the OP and by you would be awesome.

Finally, it frustrates me so much that people are so threatened by views different from theirs. The worst I ever get to is some name-calling - on my own blog, which the person is free to ignore. (And I try not to get to the stage where I’m doing something so childish, but sometimes it still comes out.) I don’t understand making someone’s life hell just because you disagree with them. Ugh.

They can’t? I think they can.

I do like the idea of flagging for abuse and limiting abilities before permanent suspension, in hopes of being able to track abusers better, and of limiting the abilities of new user accounts to slow the creation of sock puppet accounts… But I don’t think that is going to accomplish a ton, because all of those measures can be worked around.

2 Likes

In a previous life I had great success identifying abusers online via HTTP header and browser fingerprinting. Obviously the sophisticated abuser will go through seven proxies, so IPs are less helpful. So IDing the abuser by something that is more labor- and financially intensive to change (literally their machine, as opposed to an email, IP, or burner phone) worked well.
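
A toy version of the idea, just to make it concrete: hash a handful of request attributes that are costlier to change than an IP into a single machine ID. The header names are standard HTTP; a real system would fold in far more entropy (canvas, fonts, timezone, and so on):

```python
import hashlib

def device_fingerprint(headers: dict) -> str:
    """Toy fingerprint built from slow-to-change request attributes."""
    signals = [
        headers.get("User-Agent", ""),       # browser + OS build
        headers.get("Accept-Language", ""),  # locale preferences
        headers.get("Accept-Encoding", ""),
        headers.get("Accept", ""),
    ]
    return hashlib.sha256("|".join(signals).encode("utf-8")).hexdigest()
```

A banned user who rotates emails and proxies but keeps the same browser setup keeps producing the same hash.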

1 Like

I find myself wholly immune to Twitter abuse, because I can’t remember my password and can’t be arsed to get a new one. I suspect if abuse becomes more prevalent there will be more people like me.

7 Likes

I like this idea a lot.

Another alternative is to allow for some sort of semi-verified account that simply prevents instant account recreation (probably low-level - match to a credit card or something - no real name ever posted), and then allow people to block unverified accounts (+ whitelisting, of course).

Nothing can stop these psychopathic trollies, but put enough sand in their gears, and they may not get enough emotional pay-off to make it worth their while.

1 Like

It’s a good question, but I think there are degrees of psychology involved: a person who might be considered to have a normative personality could still behave badly, and then either feel bad about their behavior or use the reprimand as a check on it and reform themselves. We see this constantly in daily life: the attempt to use consequences as a tool to prevent future bad behavior, by letting people regulate themselves.

Not every person who violates Twitter’s terms of service is a sociopath or what have you.

I believe that verified accounts have additional options in this regard, though possibly only for high-follower-count verified users. Twitter doesn’t disclose these extra tools.

Precisely what I’m recommending!

Some people are working on third-party tools that meet Twitter’s API guidelines that would let you set thresholds: age of account, follower count, previous interactions, etc.
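
Conceptually, such a tool is just a filter over incoming mentions. Here is a rough sketch with invented threshold names; nothing below is a real Twitter API field, though this data is all obtainable through the API:

```python
from datetime import datetime, timedelta

# Hypothetical user-configurable thresholds
MIN_AGE = timedelta(days=7)
MIN_FOLLOWERS = 10
REQUIRE_PRIOR_INTERACTION = False

def passes_thresholds(author: dict, now: datetime) -> bool:
    """True if a mention's author clears every threshold the user has set."""
    if now - author["created_at"] < MIN_AGE:
        return False
    if author["followers_count"] < MIN_FOLLOWERS:
        return False
    if REQUIRE_PRIOR_INTERACTION and not author["has_interacted_before"]:
        return False
    return True

# Show only mentions whose authors clear the bar:
# visible = [m for m in mentions if passes_thresholds(m["author"], datetime.utcnow())]
```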

I certainly didn’t mean to suggest they were. Unfortunately, with sociopaths there’s not much you can do. I’m more wondering about a typical person who goes on Twitter to hurl abuse at others. I don’t know; it’s an interesting question that I can’t think of a way to find an answer to.

Anyway, thanks for the article and thanks for your active engagement in the comment threads. I know I’ve seen you in discussions of your posts before, and it adds a lot to the discussion.

1 Like

You would think that Twitter would have plenty of information at their disposal to build a profile of an account that has been blocked for abuse and use that to automatically apply more scrutiny to other similar accounts. Between the user’s IP, the creation date, the email address, and the user’s social graph, there ought to be enough to at least automatically flag most sockpuppet accounts for escalation of any complaints against them.
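
As a back-of-the-envelope sketch, scoring a new account against the profile of a banned one might look like this. The weights and signals are invented for illustration (Twitter doesn’t disclose what it actually uses), but they mirror the data points named above:

```python
def sockpuppet_score(candidate: dict, banned: dict) -> float:
    """Crude similarity between a new account and a previously banned one."""
    score = 0.0
    if candidate["signup_ip"] == banned["signup_ip"]:
        score += 0.4
    if candidate["email_domain"] == banned["email_domain"]:
        score += 0.1
    # Created within a couple of days of the ban is suspicious.
    days_after_ban = (candidate["created_at"] - banned["banned_at"]).days
    if 0 <= days_after_ban <= 2:
        score += 0.2
    # Social-graph overlap (Jaccard similarity of followed accounts).
    union = candidate["follows"] | banned["follows"]
    if union:
        overlap = candidate["follows"] & banned["follows"]
        score += 0.3 * len(overlap) / len(union)
    return score  # above some cutoff, escalate any complaints automatically
```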

1 Like

Exactly. And they must already be using these tools to deter or throttle spam accounts.

1 Like

It’s fairly easy to switch up your IP and obfuscate other identifying info, at least for a person who’s determined to stay under the radar of moderation tools. So account creation clearly can’t be feasibly controlled (you can try, but there are limits), and the focus should instead be on placing reasonable barriers on interaction between people. If a troll can make an account but can’t easily harass someone, I would call that something worth exploring.
As mentioned above by myself and a couple of others, there are a number of ways Twitter could curb the harassment from an interaction standpoint.

The tools exist and are fairly well known in some circles, both backend tools (which I am more familiar with) and user tools (like the ones you have described). Their lack of efficacy speaks volumes.

You are correct, there is enough information at their disposal. It reminds me of that scene from Fight Club where Edward Norton describes the criteria for an auto recall.

“Narrator: A new car built by my company leaves somewhere traveling at 60 mph. The rear differential locks up. The car crashes and burns with everyone trapped inside. Now, should we initiate a recall? Take the number of vehicles in the field, A, multiply by the probable rate of failure, B, multiply by the average out-of-court settlement, C. A times B times C equals X. If X is less than the cost of a recall, we don’t do one.”
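
(For what it’s worth, that decision rule is a one-line expected-value comparison; the numbers below are invented:)

```python
def initiate_recall(units_in_field, failure_rate, avg_settlement, recall_cost):
    """Recall only if X = A * B * C meets or exceeds the cost of a recall."""
    expected_payout = units_in_field * failure_rate * avg_settlement
    return expected_payout >= recall_cost

# 500,000 cars, 0.01% failure rate, $1M average settlement, $30M recall:
print(initiate_recall(500_000, 0.0001, 1_000_000, 30_000_000))  # True: $50M > $30M
```

The analogy being that Twitter can weigh the cost of building abuse tooling against the cost of doing nothing in exactly the same way.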

To a point, I agree with you. However, there’s a wealth of information that Twitter gets from anyone who is simply using their site. I think it would be a lot more difficult than you assume for someone to make and use a large number of sockpuppet accounts without there being enough similar information to at least flag the accounts as warranting special attention.

You wouldn’t catch everything, of course. But, similar work is done in other industries to deal with fraudulent orders or spam outfits, and this kind of profiling handles a large portion of those kinds of situations. Both of those are fairly close analogs to a person driving trollies with sockpuppets, both in terms of incentive and in the tools at their disposal to try to obfuscate their identity.

1 Like

Well, yeah, I’m talking about the worst-case scenario of a person totally determined to fly under the radar. But yes, even the average awful troll won’t go through the trouble of jumping through a bunch of technical hoops to make new accounts. They’ll just register new accounts normally, which can obviously be flagged given enough abuse. However, regardless of whether a person makes 1 or 100 accounts, if you make harassing someone a bit more difficult by giving users better tools, then I think that improves the odds of Twitter having success at dealing with the problem users.

And yes, I do realize the solution is possibly more difficult than I imagine, but I’d like to see Twitter take action in some capacity other than banning fools.