Meta’s oversight board has urged the company to update its rules on deepfakes to keep pace with advances in artificial intelligence. The board, often described as a top court for Meta’s content moderation decisions, made the recommendation after reviewing two cases involving deepfake images of prominent women in India and the United States.
In one case, a deepfake posted to Instagram was not removed despite a user complaint; in the other, Meta took the fake image down. Both decisions were appealed to the board, which found that the deepfakes violated Meta’s rule against “derogatory sexualized photoshop”, a rule the board said needs to be clearer for users to understand.
The board explained that Meta defines this rule as covering manipulated images that sexualize individuals in ways they would not want. While the term “photoshop” has been commonly used to refer to image editing since the software’s release in 1990, the board believes it is too narrow to address deepfakes created with generative AI.
The oversight board therefore recommended that Meta explicitly state that it does not allow non-consensual AI-generated or manipulated content. While Meta has agreed to follow the board’s decisions on specific content moderation cases, it treats policy suggestions as recommendations it may choose to adopt.