Last week, when a doctored video of the Speaker of the House, Nancy Pelosi, began circulating on Facebook, it seemed as if it would be only a matter of time before it was removed. After all, just one day before, Facebook had proudly announced that it had removed 2.2 billion fake accounts between January and March as part of its expanded efforts to curb the platform’s circulation of misinformation. The video, which was manipulated to make Pelosi seem drunk and confused, is not a particularly sophisticated fake. But it was convincing enough that countless commenters believed it to be true and sent it spinning through cyberspace. At one point, there were seventeen versions of the video online; various iterations had jumped to Twitter and YouTube, and one was picked up by Fox News. The Fox News clip was then posted on Twitter by President Trump. Within hours, a version of the doctored video had been viewed more than two million times.

The Pelosi video was reminiscent of one broadcast on Facebook, last June, just after Alexandria Ocasio-Cortez beat Joe Crowley in the Democratic primary for New York’s Fourteenth Congressional District. What appeared to be an interview between Ocasio-Cortez and Allie Stuckey, the host of the online political channel Conservative Review, which made the young, self-declared democratic socialist look foolish and uninformed, was, in fact, a cut-and-paste job: answers from a legitimate public-television interview with Ocasio-Cortez were paired with new questions posed by Stuckey which were designed for maximum humiliation. In less than twenty-four hours, the video was viewed more than a million times. When Facebook was asked to take it down, the company demurred, saying that the bogus interview did not violate its “community standards”: Stuckey had told the company that the video was meant as satire, and Facebook does not police humor. Similarly, Facebook refused to remove the Pelosi video because, according to Monika Bickert, the company’s head of global policy management, it does not violate those standards, even though it is demonstrably false.

Facebook’s community standards are not regulations. They are not laws. They are arbitrary and fuzzy guidelines developed by employees of a private company that are then open to interpretation by people paid by that company, and enforced—or not—by other employees of that company. Such solipsism accounts for Bickert’s inability to give CNN’s Anderson Cooper a straight answer when he asked her if the company would take down a similarly doctored video of Donald Trump that made the President, a known teetotaler, appear to be inebriated. The correct response, if the company were to follow its reasoning for not removing the Pelosi video, should have been no. But because Facebook’s community standards are interpreted subjectively and applied inconsistently, Bickert did not, and could not, answer.

How Facebook developed its roster of community standards is instructive. As Tim Sparapani, the company’s former director of public policy, told the “Frontline” journalist James Jacoby, “We took a very libertarian perspective here. We allowed people to speak. And we said, If you’re going to incite violence, that’s clearly out of bounds. We’re going to kick you off immediately. But we’re going to allow people to go right up to the edge and we’re going to allow other people to respond. We had to set up some ground rules. Basic decency, no nudity, and no violent or hateful speech. And after that, we felt some reluctance to interpose our value system on this worldwide community that was growing.”

Choosing to allow false information to circulate on Facebook is not just “interposing” the company’s value system on those who use its platform; it is imposing that value system on the culture at large. In Bickert’s conversation with Cooper, she continually fell back on her company’s commitment to keeping its users “safe,” by which she meant free from the threat of physical harm. For a company whose products have been used to incite genocide and other kinds of violence, this is a crucial, if not always successful, aim. But safety is not one-dimensional; when a company that operates a platform with more than two billion users takes a value-free position on propaganda, ancillary perils and threats will follow.

Bickert also repeatedly pointed out to Cooper that the Pelosi video now carries a warning stating that fact-checkers have determined it to be untrue. But that claim, too, is untrue. Rather, the video comes with a notice that says, “Before you share this content, you might want to know that there is additional reporting on this from PolitiFact, 20 Minutes, Factcheck.org, Lead Stories and Associated Press.” Bickert told Cooper, “We think it’s up to people to make an informed choice what to believe.” What she seems to mean is that viewers can decide for themselves whether the fact-checkers are right, or whether the determination that this is fake news is itself fake news. This suggests a fundamental misunderstanding of what a fact is, or of the point of having fact-checkers in the first place.

Facebook is continually falling back on the premise that it is a social-media company and not a media company, meaning that its allegiance is to free expression rather than to the truth. This is disingenuous. Facebook calls a user’s main page a “news feed.” And forty-three per cent of American adults say that they get their news from Facebook, more than from any other social-media site. Indeed, last year, in its response to a lawsuit brought by Six4Three, a bikini-app startup, for breach of contract, the company cast itself as a publisher, with editorial discretion to publish what it wants. As Eric Goldman, a professor at Santa Clara University School of Law, told the Guardian at the time, “It’s politically expedient to deflect responsibility for making editorial judgements by claiming to be a platform. . . . But it makes editorial decisions all the time, and it’s making them more frequently.” Choosing not to remove the Pelosi video, for example, is an editorial decision.

A few months before the 2018 midterm elections, I asked Zac Moffatt, Mitt Romney’s digital director in 2012 and the C.E.O. of Targeted Victory, a Republican strategy firm that sometimes works with Facebook, what he expected to see in future elections. “I worry all the time about the ability of people to create video content that is not true in origin, and the distinction is almost impossible to make,” he said. “These are really very scary trends. That’s what I think about the most: How will you determine what’s real and what is not real? I mean it’s hard now, but when video can be faked to that degree, I find that very scary going into 2020.”

It should be no consolation that the Pelosi video did not achieve the level of sophistication anticipated by Moffatt. What Facebook and Twitter (which has also refused to remove it) have done is to make it clear to anyone with malign intent that it’s fine to distort the truth on these platforms. They are sanctioning the creation of misinformation and injecting it into the public consciousness. Bickert told Cooper that, after Facebook learned that the Pelosi video was a fake, it had “slowed down the virality.” In other words, it will continue to infect the culture.