No, it won’t. It reduces options for people who are searching for help, makes them feel ostracized and ashamed, and leaves them less willing to seek the help they need.
Me as well.
If a picture or video involves a person or animal getting harmed, then I have no desire to see it.
Actually, I thought twice before clicking on this BB post, but I trusted Xeni’s judgement.
As the father of a 14-year-old daughter who has self-harmed, I WANT TO KNOW: WHY THE F_CK ARE ANY OF THESE IMAGES ALLOWED?
Why should someone who’s been bullied to that point hurt themselves? I’m not saying it’s ok to be a spree killer, but I definitely understand the impulse to go after a bully.
Better to keep the hashtags and searches untouched but have the search results link to helpful resources.
Yeah, but how often do you go to a site for subject matter you’re interested in, only to have the site announce that said subject matter will henceforth be blurred?
As part of our commitment to serving everyone, self-harm images will from here on be automatically added to your Facebook feed.
I work in an ER and regularly speak to people who have hurt themselves, threatened to hurt themselves, or spoken of the desire to hurt themselves. I also frequently meet people who cut, but NOT in an attempt to hurt themselves. Cutting is often not ‘self-harm’ in the “I want to hurt myself or die” modality. For many teens it seems to be a stress reliever. Is it safe and healthy? Probably not, but it is a lot better than many of the other behaviors I witness the effects of.
“Instagram now also blocks images of cutting from showing up in search, hashtags or account suggestions.”
This is a difficult subject. For one, simply ignoring or pretending something doesn’t exist (No cutting pictures here, folks. Nope. Nothing to see) surely doesn’t solve the problem.
And will making people double-click really stop them from seeing a photo?
But if the associated dialogue, especially when accessible to fragile populations, trends away from help and support (like the woman who told her boyfriend to kill himself?), how do we focus that speech in a helpful way?
Am I rambling?
I don’t believe that happy, untroubled people can get triggered into self-harm by seeing pictures of it, nor would they seek out groups about it. The people in danger are already unhappy and troubled.
They probably won’t be deterred from seeking out such material, but I suppose it’s worth a try.
The UK’s limit on buying paracetamol – you can’t buy more than two packs of 16 pills in one transaction – led to a measurable decline in suicide by paracetamol, just by making it a bit harder to get hold of the amount of pills needed for a lethal dose.
The ‘self-harm communities’ being referred to there were those who encourage it. The ones who say “it’s not so bad” or “I do it too” or “look at what I’ve (proudly) done”. The ones who “recruit” (encourage). Not the ones that say “hang on, this is not good, there are better ways and here is how you can change and here is where you get help”. So I do not see how reducing access to those who encourage it will reduce options for those seeking help, given that there are sufficient options for those seeking help without going near the places that encourage it.
Exactly!
Have there been any studies into the effects of restricting access to pro-ana sites?
No, that’s what Facebook’s investors want. There’s no natural law or expression of consumer preference that demands content filtering be done algorithmically; it’s just that if it were done manually it would transform the business model of Instagram in ways their investors wouldn’t like. It might also affect the platform in ways we as consumers and citizens do or don’t like, but fundamentally there is no problem of content moderation that can’t be solved by throwing money at it.
Whose values? Whose morals? Mike Pence might well say the same thing, but have a slightly different agenda.
If Instagram (and Facebag, and Titter) hold themselves out as publishers, then they are legally liable for everything that appears on their platforms, including the two percent of “we love cutting, it’s fun” posts which make it through the filter. If your real goal is to litigate these companies out of existence and/or turn them into walled gardens, this is a good way to do it.
You are treading close to what some here would term victim blaming.
Parents, consumers … having responsibilities?? Pardon me, I must go clutch my pearls.
I don’t think that is accurate.
What you describe is what Facebook et al. want. They also spend a lot of time and effort to get us to want it too.
They do not want to have to hire actual humans to moderate. They do not want parents to actually control/limit their offspring’s social media use.
Content moderation works pretty well on the level of boingboing, with a few hundred regular posters who pretty much agree on most hot button issues. Can you point to an example of a content moderation system that has worked on the hundred-million-member scale?
Excellent. It needs to happen. They are voracious data gatherers making millions of dollars out of aggregating data about human behaviour, care only that their ‘users’ use the service and care not one whit about the health or welfare of those ‘users’, other than to the extent it may result in bad PR.
Until they adapt their business models to be more user/human-friendly, rather than customer/corporate-friendly, and until they acknowledge that the value users receive from their services is vastly less than the value they themselves receive from all that data, they deserve to be litigated out of existence. If only that were possible…
Cue bleating about how people find them so useful and valuable and couldn’t do without them, and how that would be so unfair… Frankly, I don’t give a damn. We did well enough without them, and much of the persuasion that we need to use these services is propaganda from the services themselves about how they serve a useful social function, while conveniently ignoring the huge social damage they do that far outweighs any value, and blaming that damage entirely on the users. Until the balance is reset, they can go to hell AFAIAC.
And one 2017 survey of British schoolchildren found that 63% would be happy if social media had never been invented.
(Please will nobody bother replying to me to say how useful you personally find these services and that they do have social value. I am no longer interested in debating that issue. Other non-toxic methods of communication are available.)
I was going to mention those sites. Pro-Ana (anorexia) sites are where anorexics group to support each other in their anorexia. Specifically, encouraging the body dysmorphia, sharing ways to lie to people around them about their eating (put some oregano on your tongue and tell people you ate a slice of pizza), sharing their pictures of emaciation as a goal…
You could compare them to MRA sites, or white supremacist sites, in that highly dysfunctional behavior is normalized and encouraged.
I’d be happy to read a study, but I’m going to take the cognitive leap that restricting access to a site that glorifies and encourages self-destructive behavior, discourages seeking treatment, and disseminates methods of concealment to decrease the likelihood of intervention is a Good Thing.
https://eatingdisordersreview.com/twitters-link-pro-ana-sites/
This study is more about redirecting searches… which is analogous to what some of the other commenters here are suggesting.
I also noticed that pro-ana is still a Thing, and seems to be widely searchable and hashtagged.
Seeing as there’s a well-documented 10% mortality rate for anorexia, you’d think that this shit would be shut down…
Sure, the Great Firewall of China. I’m actually being very serious. Of course Instagram wouldn’t work anything like it does now and Instagram’s investors would hate it. My point isn’t that it would be feasible just to staff up and solve the issue for a token sum, but rather to counter the notion that content moderation is simply impossible so we must accept x or y social harm. We may choose to accept it but it’s not a natural law.
They do not want to have to hire actual humans to moderate.
That’s exactly what I was just thinking. The flip side is that when you hire people to do those jobs, it’s terrible, terrible work. I read a story about somebody in the Philippines who did that for Facebook; after seeing torture photos, videos of killings, and all kinds of abuse, they ended up with PTSD.