Social Media and the Limits of Free Speech

by Simon Adell


This past week most of the major social media and public tech platforms took significant steps to limit the reach of Infowars, a content provider that could reasonably be described as ground zero in the fake news ecosystem. Infowars specializes in espousing particularly heinous theories that virtually every recent major American violent tragedy (including 9/11, the Sandy Hook and Parkland school shootings, and many others) was in fact perpetrated by the US government on its own people. In the past few years Infowars has managed to greatly amplify its reach through the use of social media. Along with coarsening social-media discourse, this content has had offline impact: followers of Infowars have harassed and threatened the survivors of these tragedies and their families to a frightening degree. To many observers this week’s crackdown amounted to too little, far too late. This Lift Letter will explore the business model of social media platforms, the conflicts, limits and dangers inherent in that model, and the specific case of Infowars.

Social media employs a different business model than do traditional publishers such as the New York Times (“NYT”). NYT maintains a large staff of journalists, writers and editors to produce the content (current affairs, commentary, sports updates, etc.) that it then publishes to its readers. NYT operates a “one-sided” business: it produces the product (the content) and attempts to sell it to consumers. Consumer demand is the only piece of the business puzzle that NYT doesn’t control.

In contrast, social media sites such as Facebook and YouTube are considered “platforms” because they don’t provide the content itself, but rather the tools for other people to create and consume the end product (which is generally published content). Content on Facebook is created by the site’s own users, both consumers and businesses. The same is true for all the videos on YouTube -- Google didn’t produce those, YouTube users did. Social media platforms are thus true technology companies, since they focus almost exclusively on building tools for content creation, sharing and discovery, rather than on creating the content itself.

Platforms are therefore multi-sided businesses, in that they need to attract both product producers and product consumers. As such, platforms are generally considered higher-risk propositions than traditional one-sided businesses, since the business controls neither supply nor demand and thus there’s a chicken-and-egg problem to solve. Interestingly, TripleLift is a platform and a great illustration of this liquidity challenge. Imagine the difficulty our early colleagues had convincing advertisers to spend effort bidding into an ecosystem with very few publishers, and conversely, convincing publishers to bother making inventory available when there were hardly any buyers bidding.

And with high risk comes the potential for high reward. Platforms that are able to break through the initial liquidity challenge and achieve critical mass are among the most valuable companies in existence, because these successful entities don’t have to spend the resources to produce the end product -- other people do the creation. (In TripleLift’s case our publisher partners create the digital inventory.) So it’s no coincidence that the successful social media platforms (including Facebook, Google, Pinterest, Twitter, Snapchat and others) are so highly valued. Google and Facebook in particular are each among the ten highest-valued public companies in the world, more valuable than any bank or consumer products company.

Digging into this inherent efficiency a little further, consider Facebook versus NYT. The numbers tell the story:

[Table: Facebook vs. NYT -- employees, visitors, revenue, profit and market capitalization]

Facebook is a veritable money-making efficiency machine; it has seven times as many employees as NYT but churns out about 25 times the visitors and revenue, a hundred times the profit, and as such is rewarded with 138 times the market capitalization. (And note that NYT is one of the most successful traditional publishers, currently faring far better than most of its peers.) Imagine if Facebook were required to create its own content. Producing content for its audience of two billion people would take an awful lot of creators, no doubt tens of thousands of extra people. Instead those very same two billion customers create their own content. Facebook is having its cake and eating it too. Facebook has already added 7,500 content reviewers and has committed to adding even more, but even that number is extremely efficient from a resourcing perspective compared with a content-creation model. (It’s also worth noting that ad tech has been at least partially responsible for enabling this incredible wealth creation, as it’s hard to imagine Facebook having had nearly this level of success with a subscription or other non-ad-supported business model.)
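
To make the scale of that efficiency gap concrete, here is a rough back-of-the-envelope sketch in Python that uses only the approximate multiples cited above (illustrative ratios, not the exact figures from the table). Normalizing by headcount shows roughly how much more output each Facebook employee represents relative to an NYT employee under the user-generated-content model.

```python
# Back-of-the-envelope sketch using the approximate multiples cited above.
# These are illustrative ratios, not the exact figures from the table.
fb_vs_nyt = {
    "employees": 7,     # Facebook has roughly 7x the headcount of NYT
    "revenue": 25,      # roughly 25x the revenue
    "profit": 100,      # roughly 100x the profit
    "market_cap": 138,  # roughly 138x the market capitalization
}

# Normalize by headcount to see the per-employee efficiency gap.
revenue_per_employee = fb_vs_nyt["revenue"] / fb_vs_nyt["employees"]
profit_per_employee = fb_vs_nyt["profit"] / fb_vs_nyt["employees"]

print(f"Revenue per employee: ~{revenue_per_employee:.1f}x NYT")  # ~3.6x
print(f"Profit per employee:  ~{profit_per_employee:.1f}x NYT")   # ~14.3x
```

The point of the exercise is simply that when users create the product, revenue and profit scale far faster than headcount.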

So what is the problem, and what does this have to do with Infowars? Shouldn’t Facebook just kick off those troublemakers, be done with it and go back to making scads of money? It’s not quite that simple, for a few reasons.

First there is the issue of free speech. Notably, the US Constitution does not prevent private companies from regulating speech on their own platforms, but rather protects against government regulation of speech. The senior leaders at many modern tech companies (particularly those immersed in Silicon Valley culture) tend towards a libertarian viewpoint that frequently overlaps with the optimistic perspective that technology is, in and of itself, a good thing. This fits neatly with the classical philosophical case for free speech, by which enabling a robust “marketplace of ideas” is the best way to ensure that good wins out. It’s probably safe to say that not a single US tech titan would ever want to think of her/himself as being in any way in favor of censorship. This worldview is strongly entrenched among the tech leadership community and usually extends even to an entity as noxious as Infowars. And the impact of Infowars pales in comparison to the Facebook-fueled damage wreaked by other bad actors in some developing countries with less robust legal and police protections.

Second, and related, there is an ongoing concern among tech leaders about being seen as biased against certain political leanings. For years conservatives have harbored a lingering grievance that the social media platforms are biased against their political outlook. Broadly speaking there are very few examples of this being true, but the sentiment persists, and the leaders of public tech platforms have often bent over backwards to prove the opposite, even to the extent of erring on the side of allowing free rein to an Infowars (which, while hard to peg on the spectrum, is generally considered to be in the conservative camp). Indeed, following this week’s actions against Infowars, Nigel Farage of Brexit fame claimed “collusion by the big tech giants”.

Finally, there are the inherent limitations of the platform model. The platform scalability advantage is Facebook’s strength but also its greatest weakness. Since the platform doesn’t produce its own content, it gets no visibility into what tens of millions of people are creating until after the content is published. In his appearance before Congress this year Mark Zuckerberg acknowledged this “reactive” nature of Facebook’s approach to content moderation. The platform’s human content review team generally only learns of objectionable content because users have flagged it (so Facebook is essentially outsourcing the first line of content moderation back out to its users). Bad actors are able to take advantage of this timing asymmetry, as well as the sharing and targeting tools built into Facebook, to amplify their messages far beyond the fringes of public discourse to which they have traditionally been relegated.
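
To illustrate the structural issue (rather than any actual Facebook system), here is a minimal, hypothetical sketch of a reactive, flag-driven moderation pipeline; every name and threshold below is invented for illustration. The key point is the ordering: content goes live first, and human review happens only after users have done the initial detection work.

```python
from collections import defaultdict

# Hypothetical sketch of a reactive, flag-driven moderation queue.
# Content is published immediately; the platform only "sees" a post once
# enough users have flagged it, at which point it enters a human review
# queue. All names and thresholds are illustrative, not any real system.

FLAG_THRESHOLD = 5  # invented threshold for escalation to human review

flags = defaultdict(int)  # post_id -> number of user flags
review_queue = []         # posts awaiting human review

def publish(post_id, content, feed):
    """Posts go live instantly -- there is no pre-publication review."""
    feed[post_id] = content

def flag(post_id):
    """Users, not the platform, provide the first line of moderation."""
    flags[post_id] += 1
    if flags[post_id] == FLAG_THRESHOLD:
        review_queue.append(post_id)  # only now does a human see it

def review(feed, is_objectionable):
    """Human reviewers work the queue after the fact."""
    while review_queue:
        post_id = review_queue.pop(0)
        if is_objectionable(feed[post_id]):
            del feed[post_id]  # removed only after it has already spread

# Tiny demo: a post circulates freely until enough users flag it.
feed = {}
publish("post-1", "some objectionable claim", feed)
for _ in range(FLAG_THRESHOLD):
    flag("post-1")
review(feed, is_objectionable=lambda content: "objectionable" in content)
assert "post-1" not in feed  # taken down only after publication and flags
```

Under this ordering, objectionable content can accumulate reach during the window between publication and review -- exactly the timing asymmetry the bad actors exploit.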

Unsurprisingly, the social media platforms’ standard answer to the problems exacerbated by a lack of human supervision is more technology, generally in the form of enhanced algorithms, computer vision, and other machine learning and/or artificial intelligence techniques. Thus far, however, these scalable but non-human approaches have shown only partial success.

The situation is thus a perfect confluence of factors. Noble desires to promote free speech and to resist bias combine with the realities of the user-generated content model to discourage the social media platforms from taking strong action against bad actors. And so an Infowars runs amok for years.

This past week, however, the dam finally broke, in a delayed reaction to a late-July Infowars broadcast in which the site’s founder alluded to shooting Robert Mueller over the ongoing Russia probe. A slow drip suddenly turned into a relative flood as platform after platform -- no doubt hoping neither to be seen as the first to act nor as a laggard -- imposed restrictions or outright bans on Infowars content. Twitter, notably, is still making the argument that the most effective response to Infowars is to encourage debate and thereby expose its lies. However, this approach overlooks the reality that for purveyors of fake news and extremist content such as Infowars, any engagement or notoriety represents valuable publicity.

Time will tell whether the Infowars situation fundamentally changes how social media platforms manage offensive content, or whether this was a one-off, an isolated case so obvious and egregious that the reputational risk of inaction simply became too great.