

SUPREME COURT OF THE UNITED STATES

Syllabus

MOODY, ATTORNEY GENERAL OF FLORIDA, et al. v. NETCHOICE, LLC, dba NETCHOICE, et al.

CERTIORARI TO THE UNITED STATES COURT OF APPEALS FOR THE ELEVENTH CIRCUIT

No. 22–277. Argued February 26, 2024 — Decided July 1, 2024 [1]

In 2021, Florida and Texas enacted statutes regulating large social-media companies and other internet platforms. The States’ laws differ in the entities they cover and the activities they limit. But both curtail the platforms’ capacity to engage in content moderation — to filter, prioritize, and label the varied third-party messages, videos, and other content their users wish to post. Both laws also include individualized-explanation provisions, requiring a platform to give reasons to a user if it removes or alters her posts.

NetChoice, LLC, and the Computer & Communications Industry Association (collectively, NetChoice) — trade associations whose members include Facebook and YouTube — brought facial First Amendment challenges against the two laws. District courts in both States entered preliminary injunctions.

The Eleventh Circuit upheld the injunction of Florida’s law, as to all provisions relevant here. The court held that the State’s restrictions on content moderation trigger First Amendment scrutiny under this Court’s cases protecting “editorial discretion.” 34 F. 4th 1196, 1209, 1216. The court then concluded that the content-moderation provisions are unlikely to survive heightened scrutiny. Id., at 1227–1228. Similarly, the Eleventh Circuit thought the statute’s individualized-explanation requirements likely to fall. Relying on Zauderer v. Office of Disciplinary Counsel of Supreme Court of Ohio, 471 U.S. 626, the court held that the obligation to explain “millions of [decisions] per day” is “unduly burdensome and likely to chill platforms’ protected speech.” 34 F. 4th, at 1230.

The Fifth Circuit disagreed across the board, and so reversed the preliminary injunction of the Texas law. In that court’s view, the platforms’ content-moderation activities are “not speech” at all, and so do not implicate the First Amendment. 49 F. 4th 439, 466, 494. But even if those activities were expressive, the court determined the State could regulate them to advance its interest in “protecting a diversity of ideas.” Id., at 482. The court further held that the statute’s individualized-explanation provisions would likely survive, even assuming the platforms were engaged in speech. It found no undue burden under Zauderer because the platforms needed only to “scale up” a “complaint-and-appeal process” they already used. 49 F. 4th, at 487.

Held: The judgments are vacated, and the cases are remanded, because neither the Eleventh Circuit nor the Fifth Circuit conducted a proper analysis of the facial First Amendment challenges to Florida and Texas laws regulating large internet platforms. Pp. 9–31.

(a) NetChoice’s decision to litigate these cases as facial challenges comes at a cost. The Court has made facial challenges hard to win. In the First Amendment context, a plaintiff must show that “a substantial number of [the law’s] applications are unconstitutional, judged in relation to the statute’s plainly legitimate sweep.” Americans for Prosperity Foundation v. Bonta, 594 U.S. 595, 615.

So far in these cases, no one has paid much attention to that issue. Analysis and arguments below focused mainly on how the laws applied to the content-moderation practices that giant social-media platforms use on their best-known services to filter, alter, or label their users’ posts, i.e., on how the laws applied to the likes of Facebook’s News Feed and YouTube’s homepage. They did not address the full range of activities the laws cover, nor measure the constitutional applications against the unconstitutional ones.

The proper analysis begins with an assessment of the state laws’ scope. The laws appear to apply beyond Facebook’s News Feed and its ilk. But it is not clear to what extent, if at all, they affect social-media giants’ other services, like direct messaging, or what they have to say about other platforms and functions. And before a court can do anything else with these facial challenges, it must “determine what [the law] covers.” United States v. Hansen, 599 U.S. 762, 770.

The next order of business is to decide which of the laws’ applications violate the First Amendment, and to measure them against the rest. For the content-moderation provisions, that means asking, as to every covered platform or function, whether there is an intrusion on protected editorial discretion. And for the individualized-explanation provisions, it means asking, again as to each thing covered, whether the required disclosures unduly burden expression. See Zauderer, 471 U. S., at 651.

Because this is “a court of review, not of first view,” Cutter v. Wilkinson, 544 U.S. 709, 718, n. 7, this Court cannot undertake the needed inquiries. And because neither the Eleventh nor the Fifth Circuit performed the facial analysis in the way described above, their decisions must be vacated and the cases remanded. Pp. 9–12.

(b) It is necessary to say more about how the First Amendment relates to the laws’ content-moderation provisions, to ensure that the facial analysis proceeds on the right path in the courts below. That need is especially stark for the Fifth Circuit, whose decision rested on a serious misunderstanding of First Amendment precedent and principle. Pp. 12–29.

(1) The Court has repeatedly held that ordering a party to provide a forum for someone else’s views implicates the First Amendment if, though only if, the regulated party is engaged in its own expressive activity, which the mandated access would alter or disrupt.

First, in Miami Herald Publishing Co. v. Tornillo, 418 U.S. 241, the Court held that a Florida law requiring a newspaper to give a political candidate a right to reply to critical coverage interfered with the newspaper’s “exercise of editorial control and judgment.” Id., at 243, 258. Florida could not, the Court explained, override the newspaper’s decisions about the “content of the paper” and “[t]he choice of material to go into” it, because that would substitute “governmental regulation” for the “crucial process” of editorial choice. Id., at 258.

The next case, Pacific Gas & Elec. Co. v. Public Util. Comm’n of Cal., 475 U.S. 1, involved California’s attempt to force a private utility to include material from a certain consumer-advocacy group in its regular newsletter to consumers. The Court held that an interest in “offer[ing] the public a greater variety of views” could not justify compelling the utility “to carry speech with which it disagreed” and thus to “alter its own message.” Id., at 11, n. 7, 12, 16.

Then in Turner Broadcasting System, Inc. v. FCC, 512 U.S. 622, the Court considered federal “must-carry” rules, which required cable operators to allocate certain channels to local broadcast stations. The Court had no doubt the First Amendment was implicated, because the rules “interfere[d]” with the cable operators’ “editorial discretion over which stations or programs to include in [their] repertoire.” Id., at 636, 643–644.

The capstone of this line of precedents, Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, Inc., 515 U.S. 557, held that the First Amendment prevented Massachusetts from compelling parade organizers to admit as a participant a gay and lesbian group seeking to convey a message of “pride.” Id., at 561. Ordering the group’s admittance, the Court explained, would “alter the expressive content of the[ ] parade,” and the decision to exclude the group’s message was the organizers’ alone. Id., at 572–574.

From that slew of individual cases, three general points emerge. First, the First Amendment offers protection when an entity engaged in compiling and curating others’ speech into an expressive product of its own is directed to accommodate messages it would prefer to exclude. Second, none of that changes just because a compiler includes most items and excludes just a few. It “is enough” for the compiler to exclude the handful of messages it most “disfavor[s].” Hurley, 515 U. S., at 574. Third, the government cannot get its way just by asserting an interest in better balancing the marketplace of ideas. In case after case, the Court has barred the government from forcing a private speaker to present views it wished to spurn in order to rejigger the expressive realm. Pp. 13–19.

(2) “[W]hatever the challenges of applying the Constitution to ever-advancing technology, the basic principles” of the First Amendment “do not vary.” Brown v. Entertainment Merchants Assn., 564 U.S. 786, 790. And the principles elaborated in the above-summarized decisions establish that Texas is not likely to succeed in enforcing its law against the platforms’ application of their content-moderation policies to their main feeds.

Facebook’s News Feed and YouTube’s homepage present users with a continually updating, personalized stream of other users’ posts. The key to the scheme is prioritization of content, achieved through algorithms. The selection and ranking is most often based on a user’s expressed interests and past activities, but it may also be based on other factors, including the platform’s preferences. Facebook’s Community Standards and YouTube’s Community Guidelines detail the messages and videos that the platforms disfavor. The platforms write algorithms to implement those standards — for example, to prefer content deemed particularly trustworthy or to suppress content viewed as deceptive. Beyond ranking content, platforms may add labels, to give users additional context. And they also entirely remove posts that contain prohibited subjects or messages, such as pornography, hate speech, and misinformation on certain topics. The platforms thus unabashedly control the content that will appear to users.

Texas’s law, though, limits their power to do so. Its central provision prohibits covered platforms from “censor[ing]” a “user’s expression” based on the “viewpoint” it contains. Tex. Civ. Prac. & Rem. Code Ann. §143A.002(a)(2). The platforms thus cannot do any of the things they typically do (on their main feeds) to posts they disapprove — cannot demote, label, or remove them — whenever the action is based on the post’s viewpoint. That limitation profoundly alters the platforms’ choices about the views they convey.

The Court has repeatedly held that type of regulation to interfere with protected speech. Like the editors, cable operators, and parade organizers this Court has previously considered, the major social-media platforms curate their feeds by combining “multifarious voices” to create a distinctive expressive offering. Hurley, 515 U. S., at 569. Their choices about which messages are appropriate give the feed a particular expressive quality and “constitute the exercise” of protected “editorial control.” Tornillo, 418 U. S., at 258. And the Texas law targets those expressive choices by forcing the platforms to present and promote content on their feeds that they regard as objectionable.

That those platforms happily convey the lion’s share of posts submitted to them makes no significant First Amendment difference. In Hurley, the Court held that the parade organizers’ “lenient” admissions policy did “not forfeit” their right to reject the few messages they found harmful or offensive. 515 U. S., at 569. Similarly here, that Facebook and YouTube convey a mass of messages does not license Texas to prohibit them from deleting posts they disfavor. Pp. 19–26.

(3) The interest Texas relies on cannot sustain its law. In the usual First Amendment case, the Court must decide whether to apply strict or intermediate scrutiny. But here, Texas’s law does not pass even the less stringent form of review. Under that standard, a law must further a “substantial governmental interest” that is “unrelated to the suppression of free expression.” United States v. O’Brien, 391 U.S. 367, 377. Many possible interests relating to social media can meet that test. But Texas’s asserted interest relates to the suppression of free expression, and it is not valid, let alone substantial.

Texas has never been shy, and always been consistent, about its interest: The objective is to correct the mix of viewpoints that major platforms present. But a State may not interfere with private actors’ speech to advance its own vision of ideological balance. States (and their citizens) are of course right to want an expressive realm in which the public has access to a wide range of views. But the way the First Amendment achieves that goal is by preventing the government from “tilt[ing] public debate in a preferred direction,” Sorrell v. IMS Health Inc., 564 U.S. 552, 578–579, not by licensing the government to stop private actors from speaking as they wish and preferring some views over others. A State cannot prohibit speech to rebalance the speech market. That unadorned interest is not “unrelated to the suppression of free expression.” And Texas may not pursue it consistent with the First Amendment. Pp. 26–29.

No. 22–277, 34 F. 4th 1196; and No. 22–555, 49 F. 4th 439, vacated and remanded.

Kagan, J., delivered the opinion of the Court, in which Roberts, C. J., and Sotomayor, Kavanaugh, and Barrett, JJ., joined in full, and in which Jackson, J., joined as to Parts I, II, and III–A. Barrett, J., filed a concurring opinion. Jackson, J., filed an opinion concurring in part and concurring in the judgment. Thomas, J., filed an opinion concurring in the judgment. Alito, J., filed an opinion concurring in the judgment, in which Thomas and Gorsuch, JJ., joined.

Notes

[1] Together with No. 22–555, NetChoice, LLC, dba NetChoice, et al. v. Paxton, Attorney General of Texas, on certiorari to the United States Court of Appeals for the Fifth Circuit.
