The Wall Street Journal reported that Meta is failing to stop people from using its platforms to promote child abuse content. The report uncovered numerous examples of child exploitation on Facebook and Instagram, prompting scrutiny from European Union regulators. In tests conducted with the Canadian Centre for Child Protection, the Journal showed how Meta's recommendation systems can surface groups, hashtags, and accounts used to share child exploitation material. Meta was slow to respond to reports about such content, and its algorithms often made it easier for people to find abuse content and connect with others interested in it.
Meta says it has improved its internal systems to restrict potentially suspicious adults and prevent them from connecting with one another, including in Facebook Groups. It is also disabling individual accounts that score above a certain threshold of suspicious behavior. Even so, the company faces mounting backlash over its handling of child safety. Dozens of states recently sued Meta for allegedly harming the mental health of its youngest users, and Mark Zuckerberg is set to appear at a Senate Judiciary Committee hearing on child safety online, where he will face intense questioning about these allegations. Meanwhile, following the Journal's report, European Union officials are using a new law to investigate the company's handling of child abuse material; Meta must turn over data to the bloc by December 22.