With a second brutal murder broadcast live on Facebook in as many weeks, the social media giant faces scrutiny. But with over 1.9 billion users, there is little the company can do to better police the content users upload, according to social media experts.
LOS ANGELES, CALIFORNIA, UNITED STATES (APRIL 25, 2017) (REUTERS) – For the second time in two weeks, a murder was posted on Facebook, this time showing a Thai man filming himself killing his 11-month-old daughter before taking his own life. The murder-suicide, filmed in two parts, was online roughly 24 hours and viewed more than 360,000 times on the father’s Facebook page.
Last week, Facebook said it was reviewing how it monitored violent footage and other objectionable material after a video of the fatal shooting of a man in Cleveland, Ohio, remained visible for two hours before being taken down.
Videos of murders, suicides and sexual assaults have plagued Facebook, despite making up a small percentage of the content uploaded to the platform.
“Facebook just like and especially YouTube and all of the others could not possibly monitor every single thing that’s uploaded. And so the burden is on us (users),” Karen North, a social media professor at USC’s Annenberg School for Communication, told Reuters.
After the company faced a backlash for showing the video of the Cleveland killing, CEO Mark Zuckerberg said Facebook would do all it could to prevent such content in the future.
North says Facebook relies largely on reports from its users to find objectionable material. Flagged items are forwarded to thousands of Facebook workers who judge whether they should be taken down.
“Legally speaking and in terms of business practices, they (Facebook) are immune from being responsible, or being held responsible for content because they don’t put up the content, they provide a broadcast network for us to put up our content,” she said.
The California company declined to answer questions about the latest incident or make employees available for interviews.
Facebook has said it is working to improve the software that automatically flags objectionable videos. North says identifying violence in a newly uploaded video would be very difficult, given the volume of videos and the limits of a computer algorithm’s ability to distinguish between real and staged acts of violence.