
Possible Effects of President Trump’s CDA Section 230 Executive Order

June 11, 2020 | Article

On May 28, 2020, President Trump signed the Executive Order on Preventing Online Censorship. The catalyst for this executive order was Twitter marking one of the President’s tweets as violating the website’s terms of service, and the stated goal of the executive order was to protect free speech online by stopping censorship from social media platforms. There is some question whether the executive order is constitutional, not because the First Amendment protects political speech, but because the targeted statute, Section 230 of the Communications Decency Act, was enacted by Congress and cannot be amended or repealed by an executive order. Even if the executive order is not constitutional, though, it could cause a massive shift in internet regulation and, therefore, in the functionality of the internet itself.

CDA § 230: A History

To understand CDA § 230, it helps to understand the world from which it came. In the early 1990s, when online services were first reaching a mass audience, two cases addressed libel on the internet. The first was Cubby, Inc. v. CompuServe, Inc.,[1] in which Cubby sued CompuServe for libel based on statements posted on one of CompuServe’s forums. The Southern District of New York ruled that CompuServe was not liable because it neither knew nor had reason to know what had been posted on its forums.

Four years later, the New York Supreme Court in Nassau County ruled very differently in a very similar case. In Stratton Oakmont v. Prodigy Servs. Co.,[2] Prodigy hosted forums much as CompuServe did. The main difference was that Prodigy monitored its forums and removed some messages it found inappropriate. The court found that Prodigy was a publisher because (1) it had content guidelines asking users to refrain from harassment, (2) it used software to screen for offensive language, (3) it employed board leaders to monitor its forums, and (4) those board leaders had an emergency delete function to remove offensive content. Because Prodigy was considered a publisher, the court found it could be held liable for defamatory statements posted by a third party.

While Stratton Oakmont was being argued in court, Congress was arguing over the Telecommunications Act of 1996,[3] which included what’s commonly known as the Communications Decency Act.[4] Included in the Communications Decency Act was the Cox-Wyden Amendment, now known as CDA § 230. Representatives Cox and Wyden introduced their amendment in direct response to Stratton Oakmont.[5] In his remarks on August 4, 1995, Cox said, “[T]he existing legal system provides a massive disincentive for the people who might best help us control the Internet to do so.”[6] Cox held up Cubby as an example of how the courts should treat forums and Stratton Oakmont as “backward[s]” for disincentivizing online services from attempting to police their own boards.[7] Cox listed two purposes for § 230: (1) to “protect computer Good Samaritans, online service providers, anyone who provides a front end to the Internet . . . who takes steps to screen indecency and offensive material for their customers” and (2) to avoid content regulation by the federal government.[8] As written, CDA § 230 has two important components: (1) websites are not publishers of third-party content, and (2) a website does not become a publisher when it voluntarily and in good faith restricts access to content it finds objectionable, even if that material is constitutionally protected.[9]

The courts have generally interpreted CDA § 230 broadly to completely bar liability for third-party posts on websites.[10] However, some courts have narrowed the interpretation of the statute and will find liability under certain circumstances. The circumstances in which liability has been found fall into four categories: (1) liability for actions outside the scope of § 230,[11] (2) liability for provoking third parties to post actionable content,[12] (3) liability for repackaging actionable content,[13] and (4) liability for profiting from actionable content.[14]

President Trump’s Executive Order

The executive order declares that it is the policy of the United States that CDA § 230 should be limited to the statute’s text and purpose. Judging from the legislative history, that purpose is presumably to protect websites from being held liable for third-party speech when they police speech they find offensive. According to the executive order, however, the purpose of CDA § 230 is to protect websites from liability when they shield minors from harmful content and to promote the internet as a forum for diverse political discourse. Notably, neither interpretation of CDA § 230’s purpose would have prevented Twitter from fact-checking the President’s tweets.

The executive order goes on to say that the policy of the United States is to ensure that CDA § 230’s liability shield for websites is limited to those acting in “good faith” rather than engaging in “deceptive or pretextual actions (often contrary to their stated terms of service) to stifle viewpoints with which they disagree.”[15] To advance this policy, the executive order requires the Secretary of Commerce to file a petition for rulemaking with the FCC that (1) would treat websites as the publisher of third-party speech if that website does not comply with a strict interpretation of CDA § 230(c)(2)(A)’s requirement of “good faith” when restricting access to offensive speech and (2) would define “good faith” as excluding actions that are deceptive or inconsistent with the website’s terms of service and that are taken without providing notice or a meaningful opportunity to be heard. It’s difficult to see how the FCC would overcome decades of case law interpreting CDA § 230 broadly, but at least a few of the FCC commissioners have indicated that they are eager to take on the challenge.

Even though the FCC has a significant hurdle of case law to overcome, that case law could change. The U.S. Supreme Court has never ruled on CDA § 230, and the circuit courts do not all agree on the extent of the statute’s protections. The executive order could encourage lawsuits against social media companies for censorship, whether real or merely perceived. As of May 1, 2020, President Trump had appointed 192 Article III federal judges, including two Supreme Court justices, 51 federal appeals court judges, and 139 district court judges. That means a quarter of federal appeals court judges have been appointed by President Trump. At least some of those judges likely share his views on social media. If a lawsuit is appealed to the Supreme Court, then five justices could change the way the internet works.

The executive order also acts as a signal to the President’s allies in Congress that he is anxious to change the law. The Senate may be willing to amend CDA § 230 with language limiting the statute’s protections to fit the President’s image of it, but that bill would have a difficult time passing in the House. Even if the executive order is found to be unconstitutional, it will serve to polarize the debate on CDA § 230. The law had its critics before, but even those critics understood that changes required a deft hand to avoid disrupting an internet built on the protection provided by that law. There is no room for that kind of subtlety in a national political debate in the modern era. For that reason, the executive order actually makes legislative changes to CDA § 230 less likely.


Author: C. William Knapp (Associate, Charleston)

Editor: S. Christopher Collier (Senior Partner, Atlanta)


[1] 776 F. Supp. 135 (S.D.N.Y. 1991).

[2] 1995 N.Y. Misc. LEXIS 229 (N.Y. Sup. Ct. 1995).

[3] Telecommunications Act of 1996, Pub. L. No. 104-104, 110 Stat. 56 (1996).

[4] Id. §§ 501-61.

[5] H.R. Rep. No. 104-458, at 194 (1996).

[6] 141 Cong. Rec. H8469 (daily ed. Aug. 4, 1995) (statement of Rep. Cox).

[7] Id. at H8470.

[8] Id. (emphasis added).

[9] CDA § 230(c).

[10] See, e.g., Zeran v. Am. Online, Inc., 129 F.3d 327 (4th Cir. 1997) (finding AOL not liable even though it did not remove postings it knew were defamatory); Carafano v. Metrosplash.com, 339 F.3d 1119 (9th Cir. 2003) (finding that a dating service was not an information content provider because its questionnaire merely provided a structure for third-party content, not the content itself); Jones v. Dirty World Entm’t Recordings LLC, 755 F.3d 398 (6th Cir. 2014) (finding that a website is not liable for defamatory third-party content unless it materially contributes to that content).

[11] See Song Fi Inc. v. Google, Inc., 108 F. Supp. 3d 876 (N.D. Cal. 2015) (holding that CDA § 230 did not protect YouTube against breach of contract and tortious interference claims arising from its removal of a video for allegedly inflating view counts, because artificially inflated view counts are not “otherwise objectionable” material within the meaning of the statute).

[12] See Fair Housing Council of San Fernando Valley v. Roommates.com, 521 F.3d 1157 (9th Cir. 2008) (a website could be responsible for fair housing violations in its users’ profile pages because it required users to answer discriminatory questions, making “every such [user] page . . . a collaborative effort between [the website] and the subscriber”); Vision Sec., LLC v. Xcentric Ventures, LLC, 2015 U.S. Dist. LEXIS 184007 (D. Utah 2015) (Ripoff Report is not protected by CDA § 230 at the discovery phase when its taglines are “By consumers, for consumers” and “Don’t let them get away with it. Let the truth be known”; when it allows competitors to post comments; when it refuses to remove offensive content that the author tells it is false; when positive posts are not allowed; and when it profits from businesses with negative reviews paying for a corporate advocacy program to improve their search results); Huon v. Denton, 841 F.3d 733 (7th Cir. 2016) (CDA § 230 does not protect a website when it (1) encourages and invites users to defame the plaintiff by selecting and urging the most defamation-prone commenters to post more comments and escalate the dialogue; (2) edits, shapes, and choreographs the content of the comments it receives; (3) selects for publication every comment that appears beneath the article; and (4) employs individuals who author at least some of the comments themselves).

[13] See Fraley v. Facebook, Inc., 830 F. Supp. 2d 785 (N.D. Cal. 2011) (Facebook is not protected by CDA § 230 when it publishes that one of its members “liked” a business’s page, showing the member’s name and picture, and advertises it as a “promoted story”); General Steel Domestic Sales, LLC v. Chumley, 2015 U.S. Dist. LEXIS 108789 (D. Colo. 2015) (choosing “certain summaries and quotations describing the referenced court proceedings, fail[ing] to accurately describe the proceedings as a whole, and post[ing] those quotations and summaries on the [website]” goes beyond editorial functions when the material is taken out of context to place the plaintiff in an unflattering light while ignoring the nature and outcome of the court proceedings); Diamond Ranch Acad., Inc. v. Filer, 2016 U.S. Dist. LEXIS 19210 (D. Utah 2016) (a website is not protected by CDA § 230 when it uses the statements of others to create its own comments or when it uses surveys with leading questions designed to elicit specific responses).

[14] See Tanisha Sys. v. Chandra, 2015 U.S. Dist. LEXIS 177164 (N.D. Ga. 2015) (a blog is not protected by CDA § 230 when it acts as a team with the allegedly libelous party to create content while only appearing to be an independent commenter); Congoo, LLC v. Revcontent LLC, 2016 U.S. Dist. LEXIS 51051 (D.N.J. 2016) (CDA § 230 does not protect an online advertising company when the ads it publishes from third parties employ false and misleading advertising and the company uses those ads to generate greater income for itself).

[15] Exec. Order No. 13,925, § 2(a), 85 Fed. Reg. 34,079 (May 28, 2020).