Every second, on average, 6,000 tweets are published on Twitter – that’s 500 million tweets per day. Of these millions of tweeters, how many are considering defamation law when they retweet? Probably not many. And yet, one click of a button could land you in legal trouble.

In some countries, you could be liable for retweeting a defamatory tweet authored by somebody else; in others, maybe not. Put simply, defamation comes into play when one person publishes a statement about another person that is false, damaging to reputation, and baseless. For example, falsely calling someone a racist or a criminal could be defamatory if you haven’t checked the story adequately.

Lawsuits for defamation by retweeting (i.e. simply passing on a tweet to your followers without adding more) have already been brought in countries outside the United States. A court in India, for example, held in 2017 that Twitter users can be liable for retweeting defamatory content (a particularly troubling finding in India, where defamation remains a crime punishable by imprisonment of up to two years and/or a fine).

In the United Kingdom, in 2013, Alan Davies paid £15,000 in settlement after retweeting Sally Bercow’s tweet suggesting that Lord McAlpine, a leading Conservative politician from the Thatcher years, had committed child abuse. McAlpine planned to sue at least 10,000 other people who had retweeted Bercow’s tweet or other tweets naming McAlpine as the alleged child abuser. He eventually dropped the defamation claims against all retweeters with fewer than 500 followers.

Courts in two countries have even found “liking” and “tagging” on Facebook to be sufficient grounds for a defamation claim. In Switzerland, the Zurich District Court in May 2017 fined a defendant CHF 4,000 for “liking” a defamatory comment on Facebook that accused the plaintiff, an animal rights activist, of racism and antisemitism. According to the Zurich District Court, even if the comments had not been authored by the defendant, by “liking” the comment the “defendant clearly endorsed the unseemly content and made it his own” and had “thus made it [the content] accessible to a large number of people”.

In South Africa, the first defendant posted a number of defamatory comments on her Facebook wall and merely tagged the second defendant in them; the North Gauteng High Court in Pretoria nonetheless found the second defendant liable, explaining its 2013 decision with only the following sentence: “The second defendant is not the author of the postings. However, he knew about them and allowed his name to be coupled with that of the first defendant. He is as liable as the first defendant”.

These decisions are noteworthy and concerning, as they may reflect a trend of judges attaching increasing responsibility to fairly passive behavior on the Internet.

What if the subject of your retweet is a US-based individual?

US courts have not yet addressed whether retweeting defamatory content can form the basis of a defamation claim. The issue was set to be considered by the Eastern District of New York in a lawsuit brought by Roslyn La Liberte, a Trump supporter, against Joy Reid, a well-known MSNBC host, in relation to a photograph in which La Liberte, wearing a MAGA hat, appeared to be shouting at a high school student during a City Council meeting. On November 13, however, La Liberte announced that she would be amending her complaint to remove the claim relating to the retweet, limiting her lawsuit to Reid’s Instagram and Facebook posts in which Reid falsely accused La Liberte of shouting racial slurs at the boy.

While La Liberte has dropped the defamation-by-retweet claim, the coverage of the incident raised the question of whether Section 230 of the Communications Decency Act (“CDA”) would bar such a claim.

Section 230 was enacted by Congress in 1996 to address the threat that tort-based lawsuits might pose to freedom of speech in the (then) new and burgeoning Internet medium. It immunized online platforms and internet service providers from liability for the content posted by users of such platforms or websites (“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”, 47 U.S.C. § 230). Section 230 is what, absent behavior that would place them outside the statutory immunity, prevents providers such as Twitter, YouTube or Facebook from being liable for content posted on their platforms.

In addition to protecting Twitter, Section 230 could also immunize a retweeter from liability if the words “user of an interactive computer service” were construed as applying to such an individual.

So who is a “user” under Section 230?

The term “user” is not defined in Section 230 and the limited legislative record does not indicate why Congress included the word “user” as well as “provider”. Case law, however, suggests that retweeters could indeed be “users” for Section 230 purposes.

In Barrett v. Rosenthal (2006), which involved the liability of an individual rather than a service provider – Rosenthal had posted an allegedly defamatory article authored by her co-defendant Bolen on the websites of two newsgroups – the California Supreme Court asked the parties to address the definition of the statutory term “user”. Like Reid on Twitter, Rosenthal had no supervisory role in the operation of the newsgroup websites where the allegedly defamatory material appeared and was clearly not an “interactive computer service”. The Supreme Court concluded that the word “user” plainly referred to someone who uses something and found that the statutory context made it clear that Congress simply meant “someone who uses an interactive computer service”. Rosenthal had used the Internet to gain access to newsgroups where she posted Bolen’s article, and was therefore a “user” under the CDA.

The California Supreme Court also made it clear that the word “user” extended to “active” as well as “passive” users. Citing the Ninth Circuit in Batzel v. Smith, the court agreed that for Section 230 purposes, no logical distinction could be drawn between a defendant who actively selects information for publication and one who screens submitted material and removes offensive content. Though the California Supreme Court noted that it “shared the concerns” of those who had expressed reservations over the broad interpretation of Section 230 immunity, it concluded that, by declaring that no “user” may be treated as a “publisher” of third-party content, Congress had “comprehensively immunized republication by individual Internet users”.

Applying the reasoning in Barrett, a retweeter “uses” the internet to gain access to Twitter, where he or she merely retweets an original tweet, and as such, could also be considered a “user” under Section 230.

This raises the issue of a clear gap between the literal interpretation of Section 230 and its practical consequences.

While retweeters may be literal users of Twitter, Reid or other individuals retweeting original content would do well not to count on being deemed a “user” for Section 230 purposes. Courts have indicated some doubts over a very broad interpretation of the statute, and Congress recently took away immunity where providers or users post online sex trafficking advertisements. See the Fight Online Sex Trafficking Act.

One could argue that the words “provider or user of an interactive computer service” were necessary to cover the situation where a website operator doesn’t just “provide” a website but also “uses” it, insofar as the website operator exercises editorial control over the website’s content. For example, in Donato v. Moldow (2005), the operator of an electronic community bulletin board website devoted to discussion of local government activities, who also controlled the content of the discussion forum by banning users or deleting messages he deemed offensive, was found to be both a “provider” and “user” of an interactive computer service within the meaning of Section 230. Without the word “user”, Section 230 immunity would be lost as soon as any website operator took on any sort of active role.

If Reid, for example, is a “user” for Section 230 purposes, what is the limit to the literal interpretation of Section 230? Taken to its logical extreme, the reasoning in Barrett would mean that any “user” of Twitter, or of any other website, could copy and paste an entire defamatory article written and published on another website by a third party (i.e. “information provided by another information content provider”), publish it via a number of tweets and escape liability for defamation, even though he or she is the original tweeter of that content. By the same rationale, a news reporter who obtained a quote containing defamatory content from a source could freely republish that quote in her online news story because the quote was “information provided by another information content provider.” Judge Gould forewarned of the uncomfortable result of a strict interpretation of Section 230 in 2003, arguing in his minority opinion in Batzel that the majority’s opinion:

licensed professional rumor-mongers and gossip-hounds to spread false and hurtful information with impunity. So long as the defamatory information was written by a person who wanted the information to be spread on the Internet (in other words, a person with an axe to grind), the rumormonger’s injurious conduct is beyond legal redress. 

Courts have consistently refused to bridge the gap between the specific wrongs Congress intended to right in enacting Section 230 immunity and the broad statutory language it used to achieve that immunity. La Liberte v. Reid would have provided an opportunity to do so, or at the very least, to re-examine Section 230 and its application to online providers, such as Twitter, that did not exist at the time the statutory language was drafted, and to highlight the troubling results of an excessively literal interpretation of Section 230.

In the event Section 230 does not immunize retweeters from libel claims, an individual in the United States who republishes a defamatory allegation is as liable as the third party who originally made the statement. No U.S. court, however, has considered whether a retweet is a “republication” of defamatory content. Would U.S. courts take a position similar to that of countries such as India, Switzerland and South Africa and consider a retweet, like a “like” on Facebook, to be both an act of “endorsement” and an act of “dissemination”? Such a result would arguably be at odds with the reality that people usually don’t think twice about “liking” or “retweeting” something. These tools are often used as a way of flagging or sharing content and are not necessarily intended as an endorsement of third-party content. On the other hand, retweeting does place the defamatory content before the retweeter’s followers, who are most likely different from, and possibly more numerous than, the original tweeter’s followers. In La Liberte’s case, for example, Reid’s retweet increased the tweet’s potential viewership from approximately fourteen thousand viewers (the followers of Vargas, the original tweeter) to 1.24 million viewers (Reid’s followers), or even more depending on subsequent retweets by Reid’s followers.

The second question, namely how much care you need to exercise before publishing a defamatory statement, arises in the United States because a higher constitutional standard of fault is applied to defamation claims against public figures, who must prove that the defendant acted with actual malice (i.e., that the defendant had knowledge of falsity or reckless disregard for the truth). Private figures generally need only show that the defendant acted negligently, although some states, like New York, apply a higher fault standard to private individuals where the issue is one of public interest. Where the subject of the retweet is a public figure, this raises the interesting question of how, if at all, the nature of Twitter influences the “actual malice” analysis.

Twitter makes it incredibly easy for users to retweet a post, with a built-in retweet button. How do the ease and lack of substantive thought that go into a retweet fit into the actual malice analysis? While anger or hostility toward a subject or political bias alone may not suffice for a finding of actual malice, a combination of these factors may suffice to show the requisite state of mind. See e.g. Reader’s Digest Assn. v. Superior Court (1984) (where the court held that factors such as failure to investigate, anger and hostility toward the plaintiff, or reliance upon sources known to be unreliable or known to be biased against the plaintiff could, in appropriate cases, indicate that the publisher had serious doubts regarding the truth of the publication). Twitter is a medium that both lends itself to rash and hostile exchanges and preserves a digital trail of a defendant’s bias and/or animosity toward a specific cause or individual. Plaintiff’s counsel will invariably refer to the defendant’s Twitter history, and you may wish to consider the circumstantial evidence that could be used against you before clicking “retweet”.

In short, think about where the person is from before retweeting, or you may get drawn into court somewhere in the world where the law protects reputations more than the United States does. And if you are merely a “user”, watch out for the judge who decides that Section 230 immunity does not apply to you.

Ed Klaris is the founding partner of Klaris Law PLLC. Alexia Bedat is an Associate at the firm.

This post originally appeared on the Klaris Law PLLC website and is reproduced with permission and thanks.