Instagram bans graphic images of self-harm after teenager’s suicide

The Instagram application is seen on a phone screen, Aug 3, 2017. (Photo: Reuters/Thomas White)

Instagram announced Thursday (Feb 14) that it would no longer allow graphic images of self-harm, such as cutting, on its platform. The change appears to be in response to public attention to how the social network might have influenced a 14-year-old’s suicide.

In a statement explaining the change, Adam Mosseri, the head of Instagram, made a distinction between graphic images about self-harm and non-graphic images, such as photos of healed scars. Those types of images will still be allowed, but Instagram will make them more difficult to find by excluding them from search results, hashtags and recommended content.

(Photo: Unsplash/Josh Felise)

Facebook, which acquired Instagram in 2012 and is applying the changes to its own site, suggested in a separate statement that the changes were in direct response to the story of Molly Russell, a British teenager who killed herself in 2017.

Molly’s father, Ian Russell, has said publicly in recent weeks that he believes that content on Instagram related to self-harm, depression and suicide contributed to his daughter’s death.

The father has said in interviews with the British news media that after Molly’s death, he discovered she followed accounts that posted this sort of “fatalistic” messaging.

“She had quite a lot of such content,” Russell told the BBC. “Some of that content seemed to be quite positive. Perhaps groups of people who were trying to help each other out, find ways to remain positive. But some of that content is shocking in that it encourages self-harm, it links self-harm to suicide,” he said.

Mosseri said in the statement that the company consulted suicide experts from around the world in making the decision. In doing so, he said, the company concluded that while graphic content about self-harm could unintentionally promote it, removing non-graphic content could “stigmatise or isolate people who are in distress”.

“I might have an image of a scar, where I say, ‘I’m 30 days clean’ and that’s an important way for me to share my story,” he said in an interview with the BBC. “That kind of content can still live on the site.”

The changes will “take some time” to put in place, he added.

(Photo: Pixabay/sasint)

Daniel Reidenberg, the executive director of the suicide prevention group Save.org, said that he helped advise Facebook on its decision over the past week or so and that he applauded the company for taking the problem seriously.

Reidenberg said that because the company was now making a nuanced distinction between graphic and non-graphic content, there would need to be plenty of moderation around what sort of image crosses the line. Because the topic is so sensitive, artificial intelligence probably will not suffice, Reidenberg said.

“You might have someone who has 150 scars that are healed up – it still gets to be pretty graphic,” he said in an interview. “This is all going to take humans.”

In Instagram’s statement, Mosseri said the site would continue to consult experts on other strategies for minimising the potentially harmful effects of such content, including the use of a “sensitivity screen” that would blur non-graphic images related to self-harm.

He said Instagram was also exploring ways to direct users who are searching for and posting about self-harm to organisations that can provide help.

This is not the first time Facebook has had to grapple with how to handle threats of suicide on its site. In early 2017, several people livestreamed their suicides on Facebook, prompting the social network to ramp up its suicide prevention programme. More recently, Facebook has utilised algorithms and user reports to flag possible suicide threats to local police agencies.

(Photo: Pixabay/Erik Lucatero)

April Foreman, a psychologist and a member of the American Association of Suicidology’s board, said in an interview that there was not a large body of research indicating that barring graphic images of self-harm would be effective in alleviating suicide risk.

Suicide is the second-leading cause of death among people ages 15 to 29 worldwide, according to the World Health Organization. And it was a problem among young people even before the rise of social media, Foreman said.

While Foreman appreciates Facebook’s work on the issue, she said that Thursday’s decision seemed to be an attempt to provide a simple answer in the middle of a “moral panic” around social media contributing to youth suicide.

“We’re doing things that feel good and look good instead of doing things that are effective,” she said. “It’s more about making a statement about suicide than doing something that we know will help the rates.”

Source: NYT
