In the wake of the fake news scandals of the 2016 U.S. presidential election, social media platforms such as Facebook and Snapchat are increasingly being held to the standards expected of media, rather than tech, companies. Fact-checkers and editors are entering the scene, raising the question of whether social media platforms will remain passive Internet service providers, become content providers, or settle into something of a hybrid.

The U.S. election catapulted fake news to the center stage of public debate. Which social media platforms spread the most fake news? Who believed the most fake news? How did fake news contribute to Donald Trump becoming President-elect?

The more pressing question at this point is how social media is going to correct the fake news problem.

Following strong criticism after the election, Facebook announced on December 15, 2016, new measures it is taking to address the issue of hoaxes and fake news.

Facebook reiterated its commitment to giving people a voice and its belief that it cannot become an arbiter of truth itself. Instead, Facebook has announced that it will partner with third-party fact-checking organizations. As a precondition to partnering with Facebook, a fact-checking organization must be a signatory of Poynter’s International Fact-Checking Code of Principles. This Code is the result of international consultations among fact-checkers and sets out principles for fact-checkers to aspire to in their everyday work. Within a year of signing, and once a year thereafter, signatories must produce a public report indicating how they have lived up to each of the five principles.

If a fact-checker identifies a story as fake, it will get flagged as “Disputed by 3rd Party Fact-Checkers”, with a link to an article explaining why. Although it will still be possible to share flagged stories, a warning that the story has been disputed will be displayed upon sharing.

Facebook is also tackling the financial incentives that drive fake news. Hoaxers posing as news organizations drive people to websites that often consist of little more than advertisements. Facebook is eliminating the ability to spoof domain names and will analyze publisher sites to detect where policy enforcement action is necessary. The platform will also ensure that once a story is flagged, it is no longer possible to turn it into an ad or promote it.

Is this approach the correct one? The answer turns on how one perceives Facebook. Mark Zuckerberg has consistently described his platform as a “tech company”, not a “media company”, maintaining that it is up to users to decide who to follow. In an update posted after the election, Zuckerberg reiterated: “We believe in giving people a voice, which means erring on the side of letting people share what they want whenever possible. We need to be careful not to discourage sharing of opinions or to mistakenly restrict accurate content.”

On the one hand, Facebook is primarily a social platform, not a news organization. Its users should be expected to exercise a minimum amount of good judgment when assessing the content that appears in their News Feed. We cannot, as an ever-growing online community, completely absolve ourselves from responsibility either.

On the other hand, with its 1.79bn users, Facebook wields incredible power. Until now, Facebook has relied mostly on algorithms, keeping human editorial judgment to a minimum – an approach that has not always worked for Zuckerberg’s data empire. The platform has been repeatedly criticized for taking down socially important content (e.g. its removal in October 2016 of a Swedish breast cancer awareness video and its recent censoring of a Dakota Access pipeline protest livestream) and for the lack of transparency in its take-down process. Speaking at the Future Today Summit in New York City on December 6, 2016, Judith Miller, an American journalist and commentator, shared her frustration at Facebook’s censorship of one of her articles on the war in Iraq. Miller questioned whether Facebook would “become our censors”, a status quo people should be “outraged” about as it is “not even possible to get someone on the phone to explain to you why your article was removed”. Meredith Broussard, an assistant professor at NYU’s Arthur L. Carter Journalism Institute, also called for greater editorial control, criticizing Facebook’s algorithm for optimizing for what is “popular”, not what is “good”. Broussard was also skeptical that an algorithm is more neutral than editorial control: algorithms are made by people, and people’s biases can be replicated in the algorithms they build.

Snapchat, by way of comparison, exercises greater editorial control over news. Its news section, Discover, was introduced in 2015. Unlike social media companies that present users with content that is recent or popular, Discover relies on editors and artists, not clicks and shares, to determine what is important. Snapchat’s intention to rely on human editing and curation was made clear with its hiring in 2015 of Peter Hamby, a national political reporter for CNN, to head its news division. The benefits of these developments have not gone unnoticed, and are elegantly summarized by the title of Farhad Manjoo’s November 2016 New York Times article, “While We Weren’t Looking, Snapchat Revolutionized Social Media”.

Facebook certainly has been looking, and appears to be drawing back its algorithm “shield”. Its new feature, Collections, will highlight news stories submitted by “handpicked media partners”, according to Business Insider. Unlike stories that currently reach the News Feed on the strength of likes or as paid content, publishers’ content will be inserted directly into the News Feed as well as into Collections.

The combination of Collections and Facebook’s partnership with third party fact-checkers should herald an improvement in the quality and accuracy of the news on the platform. Whether one views Facebook as a tech giant or a media company, the move away from pure algorithms is a positive development. Hiding behind algorithms has, correctly, been described as increasingly untenable. After all, “algorithms are made by humans; choosing which story appears in your Facebook feed is the responsibility of Facebook whether they choose it explicitly or implicitly via an algorithm.”

This increased editorial control also raises the issue of Section 230 of the Communications Decency Act of 1996, which provides immunity for providers and users of interactive computer services with respect to user generated content (“UGC”) (“no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”). Notwithstanding this immunity, traditional media companies initially continued to check facts and take responsibility for UGC to preserve the integrity of information in their publications. Now, 20 years after the enactment of Section 230, we are witnessing the intended beneficiaries of Section 230 immunity do the very same thing.

It is open to question whether curation and human editing will be sufficient to forfeit immunity under Section 230. A service provider does not lose Section 230 immunity for exercising a publisher’s traditional editorial functions, such as deciding whether to publish, withdraw, postpone or alter content. See Zeran v. Am. Online, Inc., 129 F.3d 327 (4th Cir. 1997) [pdf]. But where a service provider materially contributes to the alleged illegality of the conduct – as in Fair Housing Council v. Roommates.com, LLC, 521 F.3d 1157 (9th Cir. 2008) [pdf], where Roommates.com’s connection to the discriminatory filtering process was direct and palpable – immunity is lost.

Courts have yet to address Section 230 and fake news on social media. This September, however, the Second Circuit denied Section 230 immunity to LeadClick, the now defunct operator of an affiliate marketing network, for its use of fake news sites. Federal Trade Commission v. LeadClick Media, LLC (2d Cir. Sept. 27, 2016) [pdf]. The Federal Trade Commission was entitled to hold LeadClick liable not as the publisher of third-party content but for “its own deceptive acts or practices—for directly participating in the deceptive scheme by providing edits to affiliate webpages, for purchasing media space on real news sites with the intent to resell that space to its affiliates using fake news sites, and because it had the authority to control those affiliates and allowed them to publish deceptive statements.” As applied to social media platforms, it would run counter to the purpose of Section 230 – to encourage voluntary monitoring for offensive or obscene material – to hold them liable for their efforts to combat fake news.

For now, one thing is certain: being “reliable” is just as pressing an issue on the Internet as being “liable”.

Ed Klaris is the founding partner of Klaris Law PLLC. Alexia Bedat is an Associate at the firm.