Twitter drafts deepfake policy that would flag manipulated content

The draft policy says manipulated content won't be removed unless it could cause physical harm

Twitter has shared a draft of its new policy that aims to address the problem of deepfakes on its platform and is asking the public for input.

As part of the new policy, Twitter will place a notice next to tweets that share manipulated or synthetic media. It will also warn users before they share or like tweets that contain manipulated media.

Twitter also plans to add a link providing context on why various sources believe the media is synthetic or manipulated. Additionally, the company says it may remove a tweet outright if its manipulated media could threaten someone’s physical safety.

The social media giant is looking for input from its users before it implements the new policy and is accepting feedback through a survey. One of the survey questions asks users whether altered photos should be removed entirely, carry a warning label, or be left up.

The survey will close on November 27th, after which the platform will make adjustments to the policy before it goes live.

Although Twitter is taking steps to monitor manipulated content, it has not specified how it plans to detect deepfakes on its platform.

Amazon, Facebook, and Microsoft are also working to develop ways to prevent the spread of manipulated media through the Deepfake Detection Challenge (DFDC).

Source: Twitter