(Bloomberg) -- Meta Platforms Inc.’s Oversight Board is investigating the company’s handling of AI-generated deepfakes of an “American public figure,” following a surge of such images depicting singer Taylor Swift earlier this year.

The board will evaluate whether two separate AI-generated images depicting nude women posted on Facebook should have been removed under Meta’s content policies, according to a statement Tuesday. The organization, which is funded by Meta but designed to operate independently, reviews content decisions made on Facebook and Instagram.

Sexually explicit deepfake images of Swift began flooding Facebook, Instagram, X and other social media platforms in January, prompting outcry from the White House, AI experts and advocacy groups. One of the images now under review matches the description of a deepfake of Swift that went viral, but the Oversight Board declined to name the public figure in the case.

The board will review that case over the next several weeks, along with a similar incident involving a female Indian public figure.

Meta has policies against sexual content and AI-generated images that can mislead people. In the case of the “American public figure,” the deepfake was initially removed, but one of the users who posted the image has appealed to the Oversight Board to have it restored.

“Deepfake pornography is a growing cause of gender-based harassment online and is increasingly used to target, silence and intimidate women — both on and offline,” said Oversight Board Co-Chair Helle Thorning-Schmidt. “It’s critical that this matter is addressed, and the board believes it’s important to explore whether Meta’s policies and enforcement practices are effective at addressing this problem.” 

Meta and other social media companies have struggled to moderate the influx of fake and misleading AI-generated content on their sites in recent months. The barrage has included manipulated audio mimicking President Joe Biden’s voice and hoax videos of dead children and teenagers — with the posts amassing millions of views across social media.

The Oversight Board — founded in 2020 and made up of academics and other legal and human rights experts — can make recommendations about which posts should stay up on Facebook and Instagram, but Meta isn’t obligated to take them. Its advice can also guide how the company crafts its policies on AI and other contentious topics. 

The board has reviewed previous high-profile content moderation decisions, including Meta’s decision to ban then-US President Donald Trump. His accounts on Facebook and Instagram were restored last year.

Meta, based in Menlo Park, California, has already changed its policies about AI-generated content this year in response to a previous Oversight Board review. In that case, the company decided to allow more AI-generated content to stay up on its sites, even if that content was misleading. 

While Meta isn’t required to take the board’s recommendations, the company must respond within 60 days.

©2024 Bloomberg L.P.