Twitter will begin to label and in some cases remove doctored or manipulated photos, audio and videos that are designed to mislead people.
The company said on Tuesday that the new rules prohibit sharing synthetic or manipulated material likely to cause harm. Material that is manipulated but isn’t necessarily harmful may get a warning label.
Under the new guidelines, the slowed-down video of House Speaker Nancy Pelosi in which she appeared to slur her words could get the label if someone tweets it out after the rules go into effect March 5.
But what counts as harm could be difficult to define, and some material may fall into a gray area.
“This will be a challenge and we will make errors along the way – we appreciate the patience,” Twitter said in a blog post. “However, we’re committed to doing this right.”
Google, Facebook, Twitter and other technology services are under intense pressure to prevent interference in the 2020 U.S. elections after they were manipulated four years ago by Russia-connected actors. On Monday, Google’s YouTube clarified its policy around political manipulation, reiterating that it bans election-related “deepfake” videos. Facebook has also been ramping up its election security efforts.
As with many of Twitter’s policies, including those banning hate speech or abuse, success will be measured by how well the company can enforce the new rules. This is likely to be especially true for misinformation, which can spread quickly on social media even with safeguards in place.
Facebook, for instance, has been using third-party fact-checkers to debunk false stories on its site for three years. While the efforts are paying off, the battle against misinformation is far from over.