Supreme Court Endorses Neutrality Triangulation Approach to Constitutionality of Platform Regulation


 

Matthew B. Lawrence

 

On July 27 the Senate passed the Kids Online Safety Act.  The bill, a major federal public health measure regulating social media platforms, now moves to the House, but it is dogged by opponents’ questions about its constitutionality. 

 

For years, uncertainty has surrounded state and federal efforts to regulate social media platforms.  Last month’s decision in Moody v. NetChoice endorsed a framework for assessing the constitutionality of laws regulating platforms that substantially clarifies the law in this space, although major questions remain.  As described below, what might be called the neutrality triangulation framework endorsed in Moody embeds Balkin’s free speech triangle, looking not only to how a law treats platform conduct but also to how the regulated platform conduct treats user content.


Readers are no doubt aware of concerns that platforms have used their power to construct and oversee digital spaces in ways that produce unnecessary harm.  For example, plaintiffs in a growing number of addictive design cases allege that platforms have used slot machine design tricks to foster compulsion in kids—that the platforms have designed the spaces in which we interact to be more like a casino than a public square, complete with flashy machines dispensing unpredictable rewards on schedules structured to exploit human psychology and get people “hooked.”  The bipartisan sponsors of the Kids Online Safety Act highlight youth mental health concerns, as does an important Surgeon General’s advisory.  Jonathan Haidt’s “The Anxious Generation” is a #1 New York Times bestseller.  And, of course, concerns about social media harms go far beyond these direct public health impacts to include worries that social media supercharges the spread of misinformation about elections and epidemics, among other harms.


These concerns have fueled regulatory efforts aiming to mitigate social media harms, but—as Gaia Bernstein’s book Unwired describes—these efforts have been clouded by unresolved First Amendment questions.  Due to the novelty of the technology, it has not been clear how courts would or should apply the First Amendment to laws that regulate platforms.  For example, would courts view TikTok as more akin to a slot machine (itself usually assumed to be beyond the coverage of the freedom of speech), to a “matchmaker” (perhaps subject to distinctive rules), or to a novel (subject to strict constitutional protections)?

 

The Supreme Court went a long way toward resolving these issues in Moody v. NetChoice.  The case focused on the constitutionality of Florida and Texas laws restricting discrimination by platforms in their censorship and content prioritization choices (what they recommended to users or put at the top of their “feeds”) against certain political content (for more on the case see Teachout’s summary in the Nation).  But observers and a host of amici (including a brief by a group of law and history scholars and the American Economic Liberties Project that I joined) watched the case closely in the expectation that the Court might use it to develop a broader framework for evaluating the constitutionality of platform regulation, and that expectation proved correct.

 

In Moody, the Court endorsed a neutrality triangulation approach to resolving questions about the applicability of the First Amendment to content moderation.  Several years ago, Balkin pointed out that in the platform era, “Free Speech is a Triangle” because speech governance is not simply a bilateral relationship between the state and the speaker; it is a trilateral relationship among the state (which can regulate the platform or the speaker), the platform (which regulates the speaker), and the speaker (who is governed by both the state and the platform).

 

This triangle concept is embedded in Moody.  Justice Kagan’s majority opinion explains that the “expression” entailed in content moderation that may be protected by the First Amendment is a “particular edited compilation of third-party speech.”  Op. at 11, 14 (emphasis added).  In other words, “expressive activity[] includ[es] compiling and curating others’ speech.”  Op. at 17 (emphasis added).  And the opinion elaborates on what it means by the key words “compiling” and “curating,” namely, “choices about whether—and if so, how—to convey posts having a certain content or viewpoint.”  Op. at 24.

 

Note how the key coverage question under Moody is not only about the relationship between the government regulation and the platform conduct; it is about the interaction between the platform regulation and user content—as indicated in Figure 1 below, the other vertex of Balkin’s triangle.  The Court indicates that a platform function is protected by the First Amendment against state regulation if that function itself discriminates (entails “choices about whether—and if so, how—to convey”) among user speech based on its “content or viewpoint.”
 

 

Figure 1: Neutrality triangulation



 

This is a function-by-function, choice-by-choice determination.  It is not platforms that might be covered by the First Amendment (or not), nor is it content moderation (loosely defined) that might be covered.  Rather, it is individual functions of particular platforms that might or might not be covered (with the key question being whether those functions entail discriminating among user content).  As Justice Kagan puts it, courts must “ask[] as to every covered platform or function, whether there is an intrusion on protected editorial discretion.”  Op. at 11; see also Op. at 18 n. 4 (“an entity engaged in expressive activity when performing one function may not be when carrying out another”).  Technical details matter.

 

In other words, what Moody seems to be saying is that assessing the coverage of the First Amendment when it comes to the broad set of activities understood as “content moderation” is a question of neutrality triangulation: Whether a platform activity is itself protected by the First Amendment depends on whether that platform activity is content neutral vis-à-vis user speech.  When a platform operator is actively discriminating among user speech based on its content—making choices to prioritize some content and censor other content—it is protected by the First Amendment.  When a platform operator is not actively discriminating among user speech based on its content, it is not covered (unless it is otherwise expressive).  Just as states have more discretion to regulate speech when they do so in a content-neutral way (because constitutional tests for content-neutral regulation are more forgiving), states have more discretion under Moody to regulate platform activity when the activity they regulate is itself content neutral vis-à-vis user speech.
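For readers who want the inquiry’s conditional structure laid bare, the short Python sketch below models the coverage test as I read Moody.  It is a minimal illustration only; the class, attribute names, and example values are my own assumptions for exposition, not anything drawn from the opinion.

from dataclasses import dataclass

@dataclass
class PlatformFunction:
    # Does this function choose among user posts based on their content or viewpoint?
    discriminates_by_content: bool
    # Is the function expressive for some independent reason?
    otherwise_expressive: bool = False

def first_amendment_covered(fn: PlatformFunction) -> bool:
    # Covered when the function itself discriminates among user speech
    # based on its content or viewpoint.
    if fn.discriminates_by_content:
        return True
    # Otherwise not covered, unless the function is independently expressive.
    return fn.otherwise_expressive

# Examples drawn from this post: infinite scroll is content neutral;
# prioritizing particular content in a feed is content based.
infinite_scroll = PlatformFunction(discriminates_by_content=False)
feed_prioritization = PlatformFunction(discriminates_by_content=True)

assert not first_amendment_covered(infinite_scroll)
assert first_amendment_covered(feed_prioritization)

On this schematic, regulation of the first function lies outside First Amendment coverage, while regulation of the second must be supported by a sufficient and sufficiently tailored state interest, as discussed below.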

 

As I explain in a recent article describing this neutrality triangulation approach, this way of understanding the First Amendment’s applicability to particular platform content moderation choices has been percolating in the lower courts.  In the Social Media Cases pending in California, for example, Facebook, Instagram, Snapchat, TikTok, and YouTube moved to dismiss all plaintiffs’ addictive design cases on First Amendment grounds.  Judge Kuhl granted that motion in part and denied it in part last October in an opinion that focuses on the content neutrality vel non of the regulated platform conduct vis-à-vis user content.  Judge Kuhl thus found many of the claims to lie outside the coverage of the First Amendment because “[t]he allegedly addictive and harmful features of Defendants’ platforms are alleged to work regardless of the third-party content viewed by the users.”  Op. at 38.

 

An advantage of the neutrality triangulation approach is that it produces workable lines.  Those lines are illustrated by Judge Kuhl’s opinion, as well as by a hot-off-the-presses Ninth Circuit opinion in NetChoice v. Bonta applying Moody (and so neutrality triangulation) to affirm in part and vacate in part a lower court’s ruling enjoining in toto the California Age-Appropriate Design Code Act.  Proponents of regulation allege that infinite scroll encourages compulsive use by eliminating natural stopping points and thus opportunities for self-governance; that design feature regulates the manner in which users consume content, so it would not be covered by the First Amendment.  Proponents of regulation also allege that platforms prioritize content encouraging eating disorders or self-harm in adolescents’ feeds; any active such choices would discriminate among user content and so would be protected by the First Amendment.  Under the neutrality triangulation approach, states can regulate the former as their democratically elected or empowered policymakers think best, but any regulation of the latter must be supported by a sufficient (and sufficiently tailored) state interest.

 

Frischmann and Benesch helpfully analogize neutrality triangulation to the distinction between “content-based” and “time, place, and manner” restrictions in traditional free speech law.  It has long been understood that states may more readily regulate the “time, place, and manner” of speech than they may regulate its content directly.  Neutrality triangulation draws a similar line along the free speech triangle’s other vertex: It permits states to regulate platforms’ choices about the “time, place, and manner” of user speech more readily than platforms’ choices to discriminate among user speech based on its content or viewpoint.

 

To be sure, much remains unresolved.  The majority’s endorsement of the neutrality triangulation approach in Moody is technically dicta because its holding was premised on the lower courts’ failure to assess the appropriateness of the platforms’ facial challenges in the case.  Of course, I am here offering my own understanding of the Court’s ruling in Moody; triangles do not actually appear in Justice Kagan’s opinion (though, in light of Moody, we may well see more triangles in future opinions).  Moreover, the majority made clear that states may well have interests sufficient to regulate even platform design and operation choices that are covered by the First Amendment.  As Justice Kagan put it, “[m]any possible interests relating to social media” could justify regulation even of protected speech.  Op. at 4.  She hinted that public health concerns around youth mental health may be one potential source of such interests.  Op. at 19 (“Today’s social media pose dangers not seen earlier.  No one ever feared the effects of newspaper opinion pages on adolescents’ mental health.”).  There is also the question, teed up by Justice Barrett, of whether platform choices must be actual choices made by an actual human being who is entitled to the First Amendment’s protections in order to trigger coverage.  Op. at 22 n. 5.

 

While much remains unresolved, the Supreme Court’s endorsement of the neutrality triangulation approach provides guidance that legislators can consider in crafting laws regulating social media, that courts can consider in adjudicating challenges to such laws, and that researchers can consider in developing the evidence base addressing the benefits and costs of such regulation.  After Moody, all these groups should remain mindful of the potentially determinative question of whether regulated platform design and operation choices discriminate among user expression based on its content.

Matthew B. Lawrence is Associate Dean of Faculty and Associate Professor at Emory University School of Law. You can reach him by e-mail at matthew.lawrence@emory.edu.
