Fake news is a particularly thorny subject in ad tech. Fake news is not new; it has a long and terrible history. Joseph Pulitzer's and William Hearst's newspapers fabricated stories about why the USS Maine exploded in 1898 to serve their agenda of a war with Spain; blood libel tales about Jews killing people and drinking their blood formed a basis for mainstream antisemitism as far back as the 12th century; and Fox News played a notable role in the election of George W. Bush. Indeed, the National Enquirer once had the largest circulation of any publication in the US.
So why is fake news especially an issue in 2016 and 2017? First, the extraordinary vitriol of this election - featuring the two least popular major-party nominees of all time, and a new president with the lowest incoming approval rating ever - has heightened partisan tensions in the country. Fake news could have played just as big a part in a less contentious election; people simply would have cared less. And indeed, fake news has been around in elections for as long as this country has held them.
Certain things have changed recently. While the cost of publishing content on the internet has always been fairly low, the cost of getting distribution of that content has decreased markedly - particularly with the rise of social channels like Facebook and Twitter. In the newspaper era, information - or misinformation - could only go as far as the number of copies printed. Similarly, in the cable news era, fake news only ever reached the people that chose to watch. And while the reach was greater, it was often self-selecting (the right watched Fox, the left watched MSNBC).
Audiences online are orders of magnitude larger. Stories can grow quickly and do not need a factual basis to be shared millions of times. While word-of-mouth could always have the same effect, the speed and compounding effect of social distribution has created an unparalleled situation.
There are certain easy fixes. Some sites exist solely to peddle fake news, employing authors to invent stories from whole cloth. TripleLift and all ad tech players should refuse to work with these publishers. Similarly, sites that promote racism or hate speech violate our terms of service. We have a direct relationship with nearly every publisher we work with and generally get a sense of the nature of a site when we onboard it. To the extent we learn of any such violations, we will terminate our relationship with these publishers. TripleLift's business model and standing generally make these decisions easy and clear.
For sites like Facebook, however, the question is much more difficult. It is precisely because of Facebook's ability to distribute content so quickly that many have pointed to it (and, to a lesser degree, Twitter) as having primary responsibility to ensure fake news does not spread. This is where things get exceptionally complicated.
One could point to the Pizzagate conspiracy, an exceptionally strange situation in which Hillary Clinton was accused of participating in a child trafficking ring run out of DC-area pizza restaurants. This was almost certainly not true, but because it was so strange and scintillating - and was based on extrapolations from actual, albeit unrelated, facts (John Podesta's leaked emails) - it spread quickly. But how would Facebook know that it wasn't factually true? Similarly, there was a rumor that Hillary Clinton had been fed debate questions in advance, given how she dominated Trump in the second debate. This seemed untrue at the time. But then Donna Brazile's emails were leaked, and it turned out to be true. How would Facebook have known, in either case, which rumor was correct and which was incorrect? Until Donna Brazile's emails were leaked a month later, the accusation was "incorrect." Had Facebook marked it as fake news, a legitimate story would have been suppressed.
So if Facebook or Google were to get into the business of marking news as fake, where would they get their authority or knowledge? Blocking sites that simply peddle nonsense is easy, but beyond that, news stories are based on underlying facts that are often not known to Google or Facebook. This means some entity - be it Snopes or another fact checker - would define "truth." And they have all been wrong in the past. It would also mean that only news sticking to approved narratives, even where those narratives are wrong, would be eligible for broad distribution. Investigative journalism that draws on unique sources or relies on confidential information may be marked inaccurate simply because the fact checker lacks the same information. It is not hard, especially given the current executive branch, to see this devolving into authoritarianism. All of which is to say: broadly speaking, the problem of fake news is not easy.