Originally published at: https://boingboing.net/2019/02/04/instagram-to-blur-self-harm-im.html
Molly Russell, 14, took her life in November 2017.
"Instagram plans to introduce 'sensitivity screens' to hide such images, which sounds like a euphemism for a blur 'layer' the user has to click through to get to the provocative content."
Will this keep anyone from clicking through?
Me.
I'm not the target user for this, but I've noticed I just don't click through anything with a blur filter on it.
Even on Boing Boing? I get the sentiment, but you're missing out.
Uh, yeah, fine.
I hope these steps make everybody feel better and stuff.
By identifying people inclined to self-harm, it may be possible to get them help.
But⌠(I think that everyone can see where Iâm going with this).
Mmm - I don't think this will solve their problem, but the problem may be helped by limiting children's unfettered access to the internet. At least up to a point… Granted, 14-ish is about the time more freedom should be allowed, and kids are generally more mature at that age than they were just a few years earlier. But it sounds like in this story, the access started sooner.
There have been studies suggesting that the smartphone boom has led to a dramatic increase in self-harm and suicide among girls ages 10-14. Though a lot of that is thought to have to do with social media and the bullying, sniping, and complex social politics associated with it, I could see sites centered around self-harm and suicide encouraging those acts as well.
I remember reading a pre-internet-era study about the social aspect of teen suicides. Apparently there is one, and it's been broached before as an issue with how the media handles suicide. Now, whether this will do a damned thing… eh… meh.
Well, sure, now I clicked.
But images are also very different.
Absolutely. If you wouldn't let your child wander the streets alone, don't let them wander the Internet alone. The Internet is not a toy. The Internet is the world.
How much of this is like blaming violent video games, or naughty rock music, or movable type, for God's sake?
I suspect the images just made things worse, rather than being the cause.
When I was fourteen, I saw someone immolate himself. I was walking downtown, and suddenly there were flames in the sky. I figured it was an accident; there was construction up ahead. But half a block later, there was someone in flames on the ground, the workers trying to put the fire out. He was right there on the ground. I could have reached down, but was afraid it would make things worse.
The news that night said he set himself on fire. The news two weeks later, the end of July 1974, was that he died from severe burns. You gotta make your own kind of music.
It was a very public suicide, intended in part for the store he did it in front of, but I think for some others too. Probably few ever noticed, except the people who were there.
How did things change from wanting to hurt someone emotionally to going into places with guns to actually hurt them, before killing yourself?
Of course it gives other people ideas. You imagine doing it yourself, and think about the reaction. But you'd be dead, so it doesn't really matter what others think.
So news of suicide can affect others, but even if you don't get it from the internet, you can see it happen just by walking along the street.
But suicide isn't just fantasy: either you risk merely damaging yourself and landing in a worse situation, or everything ends and whatever happens afterward has no relevance to you.
A blur option for self-harm, while pictures of consensual and artistic nudes are just outright banned.
Sorry about the whataboutism, I just think Instagram (like most social media) has their priorities so completely out of whack.
Extremely rarely. Whereas on the internet there are easily found groups dedicated to it - and even more to self-harm - and even more to bullying/encouragement - for fuck's sake.
Kids/teenagers are vulnerable, and a "social" platform really needs to take some responsibility, have some values and morals, and stop the "we are not publishers, we are just a platform" bullshit.
What do you suggest they do that would be technologically feasible, actually effective (unlike this PR effort), and would not, if universally required, essentially change the Internet from a participatory platform into The Sunday Times letters page on your laptop?
Don't allow anonymous posting, and kick people who break the rules off the platform.
It's different. I think it's a little more akin to how hate groups recruit online. Self-harmers form self-reinforcing self-harm communities online. Making those groups harder to access will reduce their ability to "recruit" (recruit is not the right word).
And they are staggeringly easy to find. If you type "self-harm" into a search, do you find resources to help people who need help? Research on causes? Or groups that promote it? If no one is intentionally shaping those results, it might be the latter.
Well, for a start, Facebook (the owner) could scale up its spending on addressing the problem. Hear about the Snopes fact-checking thing they were so enthusiastic about? A spend of $100k per annum. Not even peanuts. Maybe when self-harm images are reported, just pull them down rather than consider whether or not they contravene some arcanely defined ToS? Maybe spend some real money on partnering with organisations with some expertise in the area, rather than trying to protect their user numbers and ad revenue by slapping band-aids on it. But you know, I am not the "technical feasibility" expert here. I do recognise values and morals that prioritise revenue and data access (access to users' data) over actually even trying to do something about what most sane people recognise is a problem whose solution must start, in part, with the platform owner.
Plus what @Headache said
Deserves many more likes!
Sure, you could get rid of anonymous posting (said pelicancounselor to Headache). And I see no problem with kicking people off who break the rules, when they are reported to you.
But most of what people seem to want is a technological system that identifies content that breaks the rules, automatically. They usually either think this can be done without a mountain of false positives and false negatives, or they just don't care about the false positives. (If you try to automatically filter out smut, for example, you're likely to filter out people's beach selfies, lots of health content, etc. If you're applying a textual filter, a hell of a lot of sexual health content is going to get caught too. At the same time, a mountain of bad stuff will still get through.)
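As a toy sketch of why that's hard (the keyword list, function name, and example posts below are purely hypothetical, not anything Instagram or Facebook actually runs): a naive textual filter catches support and health content just as readily as the content it's meant to block, while harmful content that's merely reworded slips straight through.

```python
# Purely hypothetical illustration of a naive keyword filter,
# not any platform's actual moderation system.
BLOCKED_TERMS = {"self-harm", "suicide", "cutting"}

def naive_filter(post_text: str) -> bool:
    """Return True if a simple keyword match would hide this post."""
    text = post_text.lower()
    return any(term in text for term in BLOCKED_TERMS)

# False positive: a crisis-support post gets blurred or removed.
print(naive_filter("Suicide prevention resources: you are not alone"))  # True
# False negative: harmful content phrased differently is never flagged.
print(naive_filter("ways to hurt yourself without anyone noticing"))    # False
```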
So even if social media companies start doing more, parents still need to start realising that the Internet is not a toy, and acting accordingly. We have had more than 20 years of household Internet access to get used to this thing. The level of parental naivete is astounding, and at this point, inexcusable.
But donât you understand? The CEO was deeply moved! Something had to be done!