Cass R. Sunstein

Parody and satire are a legitimate part of public debate. Doctored videos are all over the place. Some are funny. Some are meant to make a point.

Arguments of this kind are being made to support Facebook’s decision not to take down the doctored video of House Speaker Nancy Pelosi, in which she was made to appear drunk or otherwise impaired.

One version, posted on the Facebook page Politics WatchDog, was viewed more than two million times in just a few days. There is no question that many viewers thought the video was real.

Acknowledging that it was fake, Facebook said, “We don’t have a policy that stipulates that the information you post on Facebook must be true.”

Rather than deleting the Pelosi video, Facebook said that it would append a small informational box to it, reading, in part, “before sharing this content, you might want to know that there is additional reporting on this,” and linking to two fact-checking sites that found it to be fake. Facebook also said that it would “heavily reduce” the appearance of the video in people’s news feeds.

Facebook is admittedly in a tough position in striking the right balance, but these steps are an inadequate response to a growing danger.

Also last week, the responsibilities of Facebook, Twitter and other social-media providers were put in sharp relief by an announcement, both creepy and amazing, from engineers at Samsung. They have made major progress in the production of “deepfakes”: faked videos of human beings, alive or dead, that appear real. The new approach can take very few images, potentially just one, and produce “highly realistic and personalized talking head models.”

Before long, any public figure, and indeed anyone who has ever been photographed, could be shown saying and doing anything at all. If Russia continues to seek to undermine the electoral process in the U.S., it will have (and may already have) a powerful tool.

To respond to the evident risks, here is a proposal, meant tentatively and as an invitation for discussion:

Facebook, Twitter, YouTube and others should not allow their platforms to be used to display and spread doctored videos or deepfakes that portray people in a false and negative light and so are injurious to their reputations — unless reasonable observers would be able to tell that those videos are satires or parodies, or are not real.

The proposal grows directly out of libel law, which (simply stated) imposes liability on people who make false statements of fact that are injurious to people’s reputations. Those who make a doctored video or a deepfake might well produce something libelous, or very close to it.

It is true that in New York Times Co. v. Sullivan, decided in 1964, the Supreme Court ruled that the First Amendment imposes restrictions on libel actions brought by public figures, who must show that the speaker knew that the statement was false, or acted with “reckless disregard” of whether it was false.

But with doctored videos or deepfakes, the court’s standard is met. Those who design such material certainly know that what they are producing is not real.

To be sure, many doctored videos are legitimate exercises in creativity and political commentary, including satire, humor and ridicule. The same will be true of the coming deepfakes. That is why the proposal would exempt videos that reasonable observers could tell are parodies, satires or otherwise not real.

In a sense, doctored videos and deepfakes are even worse than purely verbal libels. Even after seeing a correction, viewers of convincing images might continue to think, in some part of their minds, that what they saw captured reality.

Because the First Amendment applies only to government, Facebook, like all private providers, has a lot of room to maneuver. It is free to provide safeguards against unambiguously harmful speech.

It’s crucial to get ahead of what is bound to become a serious threat to individuals and self-government alike.

Cass R. Sunstein is a Bloomberg Opinion columnist.  
