Facebook has profited from searches and ads for white supremacist groups on its social media platform, according to a report released Wednesday from the Tech Transparency Project (TTP).
The TTP investigation found ads monetizing search results for some white supremacist groups; more than 80 such groups use Facebook as a hub, according to the report. The ads weren’t for the white supremacist groups themselves, but they generated revenue for the social media giant when they appeared alongside search results.
A Facebook spokesperson said on Wednesday that the platform has corrected the issue.
The more than 80 groups identified in the report amount to over a third of the 226 white supremacist groups labeled as hate groups by the Southern Poverty Law Center, the Anti-Defamation League and Facebook.
Meta says it has designated 270 groups under that label and has banned them all from the platform.
Even with clear flags of white supremacy like “Ku Klux Klan” in the name, searches for some groups showed users ads for Black churches, which the TTP flagged as possibly “highlighting potential targets for extremists.”
The platform has often come under fire for facilitating extremist groups and spreading misinformation.
Facebook announced in 2019 that it would ban white nationalist and white separatist content, adding to its existing ban on white supremacist content, noting in a release that the company had “confirmed that white nationalism and white separatism cannot be meaningfully separated from white supremacy and organized hate groups.”
In 2020, Facebook updated its policies “to more specifically account for certain kinds of implicit hate speech, such as content depicting blackface, or stereotypes about Jewish people” and banned a number of white supremacist groups.
The new TTP report, though, suggests hate groups still find footing on the site.
The investigation found that 24 of the 119 Facebook pages identified had been auto-generated by Facebook, likely after a user listed one such group as an interest or employer.
“We immediately resolved an issue where ads were appearing in searches for terms related to banned organizations and we are also working to fix an auto generation issue, which incorrectly impacted a small number of pages,” Facebook spokeswoman Dani Lever said in a statement.
“We will continue to work with outside experts and organizations in an effort to stay ahead of violent, hateful, and terrorism-related content and remove such content from our platforms.”