An FAQ on Murthy v. Missouri Ahead of Oral Arguments Tomorrow in the Supreme Court
The jawboning case is one of the many crazy internet speech cases taken by the Supreme Court. I asked Dean Jackson of Tech Policy Press to walk us through the basics of the case.
After almost 25 years of silence, the Supreme Court has seen an absolute explosion of activity around online speech in the last two years.
In just two terms, the Court has taken up issues ranging from the Section 230 and tort liability questions in last year's Gonzalez and Taamneh, to the NetChoice cases argued three weeks ago, to Friday's decision on when a public official's personal Facebook page counts as state action for First Amendment purposes, to Murthy v. Missouri, the jawboning case being argued tomorrow, Monday, March 18, in the Supreme Court (live audio of oral arguments here).
It's hard to keep up with all of it, and so when I read Dean Jackson's excellent write-up of the issues and politics around the case in Tech Policy Press, I thought he'd be a great person to catch me up, and all of my readers too.

Murthy v. Missouri: 7 Questions with Dean Jackson
Could you briefly set the stage for us? What has happened in the Murthy v. Missouri case since the Supreme Court took it up on October 20, 2023? What is this "jawboning" term everyone's yammering on about, and why is it a difficult normative and legal question?
“Jawboning” refers to government efforts to coerce speech intermediaries into removing “disfavored” expression, lest the intermediary face some punishment. The classic case from the 1960s, Bantam Books v. Sullivan, involves Rhode Island sending police officers to bookstores and warning that certain materials might be considered obscene by a state board, which could then refer the bookseller to prosecutors.
Of course, hearing from police that something "might be considered obscene" is an end run around the judiciary. No court or jury had yet determined whether the materials violated obscenity laws. If someone in the government wanted those materials removed, it couldn't just strong-arm the booksellers into removing them; there's a process, and the burden is on the government to demonstrate that the materials are obscene. Which is why, in Bantam, the Court found that sending police to raise the prospect of prosecution was government coercion to censor possibly protected speech.
Where this gets tricky is that *some* persuasion by the government is allowed, and many would argue more than allowed: that the use of persuasion by government officials to achieve policy objectives and influence public opinion is a key element of democracy. But the line between persuasion and coercion is often fuzzy.
Which is where we are today with Murthy v. Missouri, being heard by the Supreme Court tomorrow, Monday, March 18 at 10 am ET. In Murthy, plaintiffs allege the government ‘jawboned’ social media platforms into removing user-generated content during the COVID-19 pandemic.
Last July, a district court in the Western District of Louisiana issued a surprise injunction that sided with the plaintiffs and forbade several federal agencies and offices, including the White House, the FBI, the State Department, and the CDC, from communicating with social media companies in all but a few vaguely defined circumstances. The chilling effect was immediate: even anodyne meetings about cybersecurity were paused. Academic researchers at Stanford, the University of Washington, and elsewhere were also implicated in the injunction.
[KK: I wrote an opinion piece for the New York Times shortly after this decision came out arguing that the District Court decision was a terrible way to set global online speech policy]
A later ruling by the Fifth Circuit Court of Appeals vacated parts of that injunction, in particular the section affecting researchers, who have their own First Amendment rights to communicate with the government. The injunction was further stayed on October 20, when the Supreme Court agreed to hear the case.
But what exactly would count as jawboning still felt like an open question, and so the chilling effect was not dispelled. The government continues to avoid meeting with platforms on topics related to national security and election integrity, and the researchers named in the initial injunction have been targeted by separate lawsuits.
Moreover, I think it's important to note that in the months since the District Court decision, some serious questions about the framing of the case have emerged. The government's reply brief, for example, criticizes the earlier rulings for uncritically accepting several factually inaccurate statements and inaccurate quotations. Others and I have also written that the evidence, as presented in those rulings, is cherry-picked and stripped of crucial context, reflects basic misunderstandings about social media content moderation, and is generally misleading.
In your piece summarizing the issues in the case for Tech Policy Press, you wrote,
“Murthy v. Missouri is best understood not as an academic exercise in identifying government coercion, but as part of a political campaign to end content moderation as we have known it. In a way, this campaign has succeeded no matter how Murthy v. Missouri is decided…. it has generated a chilling effect that is already felt across the country.”
Tell us the bad and the worst of how this case could go, then. Would a ruling in favor of Missouri have any practical implications for the ability of the government and social media platforms to keep the American public safe? Or is it already sort of a done deal?
It’s already had practical consequences. Meta’s quarterly threat report in November, for instance, said that tips from the government about malicious actors have been paused since July. I’ve talked to a lot of people in this field who are worried that grant funding for counter-disinformation work will dry up. Some big-name election integrity initiatives that operated in 2020 will probably be scaled back or discontinued this year. To paraphrase what one researcher told me, the objective of this case and related efforts is to make the field of counter-disinformation “radioactive,” and no matter how the Court rules, that’s been achieved.
But things can always be worse. Right now the work of protecting elections is stymied out of an abundance of political caution, but if the Court upholds the injunction it could be precluded by law. That would be a more durable, lasting challenge.
There’s also this question of the state action doctrine. The Fifth Circuit’s injunction argued that through jawboning, the government transformed platforms and researchers into state actors. State actors do not enjoy the same speech rights as private actors, and so platforms and researchers could see their own speech rights curtailed just through contact with the government.
And what's most mind-boggling and scary about this is that in some of those cases of alleged coercion, the platforms reached out proactively to government agencies like the CDC to help inform their own private judgment about how to handle the pandemic. If the Court were to rule that this mere act of consulting government experts transformed speech intermediaries into state actors, the implications could be sweeping.
Beyond the policy and politics of all of this, you made a descriptive observation about online speech that I enjoyed: that “the private and public sectors rely on one another for information and operational capacity.”
Can you spell it out a little more for those who might not know what you mean?
This goes back to that Meta threat report, where the authors say that leads from government agencies are a significant source of support for all kinds of desirable activities: stopping cybercrime, for example, or disrupting terrorist or criminal groups operating on Facebook and related platforms. Similarly, Yoel Roth, who used to be head of integrity at Twitter, has written about how he used to interact with the FBI in this collegial, non-coercive way to share information about threats to national security and public safety.
The information exchanged through those channels is really useful for both sides. Roth is a great example of how this benefits everyone, and his work in particular spells out how the government often has intelligence that platforms do not; likewise, platforms have information about their services that the government does not. Platforms also have the operational capacity to disrupt activity on their services that presents a risk of harm to the public. So the two can work hand in glove for all kinds of legitimate reasons.
The risk here is that we throw out those legitimate, beneficial use cases because of a conspiracy theory alleging censorship. As the government's reply brief to the Supreme Court says, the plaintiffs have not yet demonstrated a content moderation decision traceable to government coercion. What we're dealing with here are snippets of emails arranged to present a very selective version of events. And on that basis, the Court could throw a wrench into very necessary public-private efforts to stop some very bad actors from organizing and operating online.
So in light of the soft-meets-hard politics and law of jawboning, we're in the middle of a timely news cycle. Trump has flipped his position on banning TikTok, and the House has passed a bill to do it, though it's unlikely to pass the Senate. This is way beyond threats or jawboning! We're directly trying to regulate an information application! Does this change the Murthy case at all?
What a great question. I don’t really know if the two are similar in a legal sense. TikTok isn’t at risk of a ban because it refused any kind of request about content moderation from the government. In fact, in some ways the company has bent over backward to appear eager to respond to all kinds of government concerns. So I am not sure there is a “jawboning” claim here.
There are of course concerns about content on TikTok: whether it's suppressing content critical of the Chinese government, for example, or boosting anti-American content. I'm not sure those concerns have been substantiated. But I also don't think there's anything the company could do on the content front to assuage concerns about its Chinese parent company. In fact, I'm not sure it's possible to address the lingering fear of interference by Beijing at all, even if there were zero evidence (and there isn't zero evidence, even if there's no smoking gun). It will always be in the back of policymakers' minds, especially the more hawkish ones, so it's probably an unsolvable problem for TikTok.
I think TikTok and the Murthy case do both touch on this issue, though, of the role intermediaries play in our society and what it means to lose access to them. Part of the reason the Court is eager to hear this case is because of the outsize role a few companies are playing in our public square. But that role actually seems to be shrinking as new competitors emerge. The social media space is fragmenting. That means being banned from Twitter or Facebook is not the sort of total exile it might have felt like a few years ago. By banning TikTok, you might actually increase Meta’s market share and by extension its control over public discourse. Doesn’t that seem counter to the agenda of both parties, which seem to take exception to the concentrated power of tech companies?
What bothers me about a TikTok ban isn’t that American citizens lose speech rights by having to use a different platform. I mean, a writer doesn’t lose speech rights if they have to place an op-ed in the Post instead of the Times.
I’m more bothered that we seem to be writing legislation to single out one company. This is sort of vibes-based policymaking. I’d rather see data protection legislation that applies to all platforms, and some kind of robust national security audit regime, and data access for researchers so we could assess risks on an ongoing basis.
Similarly but separately: This is obviously different from the 'jawboning' at hand in the Murthy case, but do you think we can reasonably have presidential candidates running campaign accounts on TikTok while also preventing federal government officials from accessing such platforms on their work devices?
I wonder if both parties will claim their candidate is being suppressed by Beijing.
It isn't just that this feels weird or hypocritical. If the data security implications are so severe that TikTok can't be on federal devices, should it be on a phone belonging to the next President's campaign advisor, whoever that might be? We saw in 2016, with the Russian hack of the DNC, how much damage can come from a successful cyber intrusion against a political campaign. That was probably at least as impactful as any ads the IRA ran that election, if not more so.
I don't know whether TikTok represents a grave threat to information security on personal devices, but if it's worth protecting against in one setting, it would seem to me to be worth protecting against in both. Of course, I suppose the campaigns could insulate themselves from risk by using isolated devices solely for posting to TikTok. But I highly doubt they're doing that.
Of course the Murthy case isn't happening in a vacuum. Just last month we heard arguments in the huge NetChoice Supreme Court cases on whether the Florida and Texas laws amount to a First Amendment restriction, in the sense that the government is limiting the 'speech' or 'curatorial power' (depending on who you ask) of social media platforms.
How would a NetChoice ruling in favor of the Florida and Texas laws possibly jibe with a Murthy outcome? Could it expand government power over social media content moderation in a way that could move the government's communications at issue in Murthy v. Missouri from 'persuasion' to 'threat'?
These cases absolutely are not happening in a vacuum. I think it’s more useful from a big picture perspective to look at them together.
I don’t necessarily agree with their desired ruling, but I think the amicus brief from the Foundation for Individual Rights and Expression (FIRE) does a good job connecting the dots between these cases. On the one hand, you have a group of people in Murthy saying the government cannot tell platforms how to moderate content on their platform—that “you have to take this down or else.”
On the other hand, you have many of the same people telling the Court, "Hey, the government insists that platforms leave this up on their platform under penalty of law." The Texas bill, for example, says platforms cannot moderate content on the basis of viewpoint. I don't think Texas state legislators really considered that Holocaust denial is a viewpoint. But here you have the state telling platforms what they can or cannot do.
Do platforms have speech rights or don’t they? The principled stance of the plaintiffs in these cases frays when you look at them in tandem. It dissolves completely when you look more broadly and combine it with attacks on speech in other parts of society, like bills in Texas and Florida targeting higher education, or local legislation pulling books from library shelves.
When you combine those things with the cherry-picked presentation of evidence in Murthy, it becomes clear pretty quickly that the goal here isn't to preserve speech rights from government infringement. It's about making the Internet a place where lawful but awful speech can proliferate, from election denial to all kinds of other harmful content.
You wrote:
“Congress and the Executive Branch should assuage free expression concerns by considering guidelines and transparency requirements for government communications with platforms about online content.”
This sounds like a great way to solve the problem at hand in Murthy.
Do you have any further thoughts on what these transparency requirements could look like, and how they could garner the credibility they’d be designed to establish?
A reasonable place to start the debate would be to look at this draft bill proposed by Will Duffield, published by the Knight First Amendment Institute. I am not sure I endorse every provision, but Will has thought through some of the basic questions involved in establishing a transparency regime. What would you want to know about even a permissible government request for a platform to take some sort of content moderation action? What would need to be included in such a regime? How would that information be preserved and accessed? Who would preserve it, and who would be able to access it?
If you have answers to those questions, you can shed a little sunshine on alleged incidents of jawboning. That would inspire self-discipline within the government, which would think twice about how it speaks if it knows those conversations will be public. And if all of this is mostly permissible, there shouldn’t be harm in allowing journalists and civil society to scrutinize it so they can sound alarms if things do become censorial.
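To make those questions concrete, here is a minimal, purely illustrative sketch in Python of the kind of record such a regime might log for each request. Every field name is a hypothetical of mine, not something drawn from Duffield's draft bill or any actual proposal.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Illustrative only: a hypothetical schema for one logged government request
# to a platform. Field names are invented for this sketch, not taken from
# Will Duffield's draft bill or any real transparency regime.

@dataclass
class GovernmentRequestRecord:
    requesting_agency: str                       # e.g. a named federal agency or office
    request_date: date                           # when the request was made
    platform: str                                # which intermediary received it
    content_description: str                     # what content the request concerned
    requested_action: str                        # e.g. "review", "label", "remove"
    stated_basis: str                            # the government's stated rationale
    platform_response: Optional[str] = None      # what the platform actually did, if anything
    public_release_date: Optional[date] = None   # when the record becomes publicly accessible

# A hypothetical entry, purely for illustration:
example = GovernmentRequestRecord(
    requesting_agency="Example Agency",
    request_date=date(2021, 7, 1),
    platform="Example Platform",
    content_description="Posts flagged as possible health misinformation",
    requested_action="review",
    stated_basis="Stated public health concern",
)
```

Even a record this simple would let journalists and researchers see whether requests cluster around particular topics, agencies, or actions.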
(Of course, real government coercion could just as easily take place during private phone calls between the President and tech CEOs, but that’s a different, harder problem.)
I’m not kidding myself that Congress would pick up a bill like this and pass it. It would probably fall victim to the same political gladiatorial combat over speech that spawned Murthy. But it would be a good thing if Congress did pass something, or if the White House enacted its own transparency regime via executive order.
Tech companies could also do this themselves, though they have little incentive to do so. But it's pretty similar to the approach they took with government requests for access to personal data. It would be reasonable to have something like that here, too, especially for countries where jawboning really is a problem.
More takes on Murthy from smart people:
Renée DiResta on the politics and flimsy facts in Murthy.
Center for American Progress on the misinformation machine that created the Murthy case.
Brennan Center on the threat Murthy poses to efforts against election disinformation.
Knight First Amendment Institute’s series of papers on jawboning.
Tech Policy Podcast on what's at stake in Murthy v. Missouri.
Thank you to Margo Williams for help compiling these FAQs and editing.