Parody and satire are a legitimate part of public debate. Doctored videos are all over the place. Some are funny. Some are meant to make a point.
Arguments of this kind are being made to support Facebook’s decision not to take down the doctored video of House Speaker Nancy Pelosi, in which she was made to appear drunk or otherwise impaired.
One version, posted on the Facebook page Politics WatchDog, was seen over two million times in just a few days. There is no question that many viewers thought that the video was real.
Acknowledging that it was fake, Facebook said, “We don’t have a policy that stipulates that the information you post on Facebook must be true.”
Rather than deleting the Pelosi video, Facebook said that it would append a small informational box to it, reading, in part, “before sharing this content, you might want to know that there is additional reporting on this,” and linking to two fact-checking sites that found it to be fake. Facebook also said that it would “heavily reduce” the appearance of the video in people’s news feeds.
Facebook, Twitter, YouTube and others should not allow their platforms to be used to display and spread doctored videos or deepfakes that portray people in a false and negative light and so are injurious to their reputations — unless reasonable observers would be able to tell that those videos are satires or parodies, or are not real.
The proposal grows directly out of libel law, which (simply stated) imposes liability on people who make false statements of fact that are injurious to people’s reputations. Those who make a doctored video or a deepfake might well produce something libelous, or very close to it.
It is true that in New York Times Co. v. Sullivan, decided in 1964, the Supreme Court ruled that the First Amendment imposes restrictions on libel actions brought by public figures, who must show that the speaker knew that the statement was false, or acted with “reckless disregard” of the question of truth or falsity.
But with doctored videos or deepfakes, the court’s standard is met. Those who design such material certainly know that what they are producing is not real.
To be sure, many doctored videos are legitimate exercises in creativity and political commentary, including satire, humor and ridicule. The same is true of the coming deepfakes.
If videos show the Beatles playing Taylor Swift songs, or Joe Biden and Bernie Sanders dressed in Nazi uniforms and looking like Adolf Hitler, let freedom ring. In such cases, reasonable viewers would not think that they are watching something real.
It is true that the key terms in my proposal — “false and negative light” and “injurious to their reputations” — are vague. In some cases, their application would require an exercise of judgment. We could easily imagine serious disputes.
In that light, some social-media platforms, including Facebook, might reject the whole idea and conclude that it’s better to inform people that a video isn’t real, rather than to take it down altogether.
But if a newspaper prints a libelous statement, it’s not enough for it to append a note saying, “This isn’t true.” Many readers will accept the libel, not the note. If the newspaper wants to avoid liability, it will not publish the statement in the first place.
It is fair to object that Facebook is a social-media provider, not a newspaper. But it is also a private company, not a government. Because the First Amendment applies only to government, Facebook, like all private providers, has a lot of room to maneuver. It is free to provide safeguards against unambiguously harmful speech.
To their credit, Facebook and other social-media providers have proved willing to rethink their practices. With respect to doctored videos and deepfakes meant to undermine or destroy people’s reputations, it’s crucial to get ahead of what is bound to become a serious threat to individuals and self-government alike.