Twitter will begin warning users when a tweet contains disputed or misleading information about the coronavirus.
The new rule is the latest in a wave of stricter policies tech companies are rolling out to confront an outbreak of virus-related misinformation.
Twitter will take a case-by-case approach to how it decides which tweets are labelled and will only remove posts that are harmful, company leaders said on Monday.
Some tweets will run with a label underneath that directs users to a link with additional information about COVID-19.
Other tweets might be covered entirely by a warning label alerting users that “some or all of the content shared in this tweet conflicts with guidance from public health experts regarding COVID-19”.
The new labels will be available in roughly 40 languages and should begin appearing on tweets as soon as today. The warnings may also be applied retroactively to past tweets.
Twitter won’t directly fact-check tweets or call them false on the site, the company’s global senior strategist for public policy, Nick Pickles, said.
“People don’t want us to play the role of deciding for them what’s true and what’s not true, but they do want people to play a much stronger role providing context,” Pickles said.
It is a fine line similar to the one walked by tech rival Facebook, which has said it doesn’t want to be an “arbiter of the truth” but has arranged for third-party fact-checkers to review falsehoods on its site.
One example of a disputed claim that might draw a warning label is speculation about the origin of COVID-19, which remains unknown. Conspiracy theories about how the virus started, and whether it is man-made, have swirled around social media for months.