Week 3 - What is Standing?
March 20, 2026
Welcome back, Free Speakers!
Last week I set up the hypothetical we’ll be thinking about for the next few weeks. I’ll begin answering the question by reading through the arguments for and against dismissal on First Amendment grounds posed by the defendants and plaintiffs. In addition, I’ll consider the judge’s ruling to inform how the analysis under a slightly different legal scheme should be performed.
The first question the Court asks itself is whether Character.AI can even claim First Amendment protections. In legalese, this is a question of standing: whether a party has the authority to bring a claim before the Court. The Court questions Character.AI's standing to invoke First Amendment protections because the company wants to assert the First Amendment rights of its users. You might hear that and say, "hey, the company is the one being sued, not the users! They can't be allowed to assert the users' rights if the users aren't parties to the suit, right?" That line of reasoning was argued by the plaintiffs, but in the end, the judge ruled against them on that prong. Let's step back to see why.
In our hypothetical, the government seeks to limit C.AI's outputs by instituting guidelines for how the LLMs should be trained and how the company should market its service. Character.AI retorts that doing so would change the speech, and therefore implicate the First Amendment. Now, the company isn't really comfortable claiming the AI outputs as its own quite yet (since that would likely result in it being liable for Sewell's death), so it argues that the users of its service have a right to listen to that speech. To understand this argument, we have to reframe the way we think about the First Amendment. In the landmark Supreme Court case Citizens United v. FEC (commonly known as the 'money = speech' decision), Justice Antonin Scalia wrote in a concurring opinion that "[t]he [First] Amendment is written in terms of 'speech,' not speakers." This opinion, while not binding, has shifted courts' First Amendment jurisprudence with its persuasive authority. The First Amendment doesn't simply protect citizens' rights to speak their minds, but to hear all ideas that aren't exempt from the Amendment's protections. This distinction is critical; if speech comes from a foreign citizen, that fact alone shouldn't afford the speech less protection. In Kleindienst v. Mandel, the Supreme Court recognized under this same doctrine that American academics had a First Amendment interest in hearing Ernest Mandel, a Belgian Marxist theorist (though it ultimately upheld the government's denial of his visa). Further, in Lamont v. Postmaster General, the Court ruled that American citizens had a right to consume communist political propaganda from foreign countries. The right of listeners to consume speech is what the First Amendment is meant to protect, and, in the case at bar, that's what Character.AI seeks to protect.
We've established that the First Amendment protects the rights of listeners, but there's still a problem: the listeners aren't the ones being sued; here, it's Character.AI, the one producing the content (or the 'speaker'). So we ask ourselves: does the company have the authority to go out on a limb and protect listeners' rights on their behalf? The Court, in its order, cites Kowalski v. Tesmer, stating "a litigant may assert the rights of a non-party when the litigant has 'a close relationship with the person who possesses the right' and 'there is a hindrance to the possessor's ability to protect his own interests'" (citation and internal quotation marks omitted).[1] Plaintiffs argued here that Character.AI doesn't have a close relationship with "the public," the people it claimed to be protecting.[2] However, courts are "quite forgiving with these criteria in certain circumstances" (internal citation omitted). In Warth v. Seldin, the Court allowed legal action to be taken on behalf of third parties if the "challenged restriction" violated the "third parties' rights" (citation omitted).[3] Further, in Craig v. Boren, "a licensed beer vendor had standing to raise equal protection challenges" to a law discriminating against its buyers.[4] Although the language in Kowalski suggests the standard for litigating on behalf of a third party is high, the case law says otherwise. For these reasons, Character.AI is able to assert the First Amendment rights of its users.
Now that we’ve got the question of standing out of the way, we have to ask if First Amendment rights are actually implicated in this case. The question the Court is forced to wrestle with is whether or not AI outputs are in fact protected speech.
You might be wondering why this is a question the Court has to answer in the first place. You may also say to yourself, "the First Amendment protects the rights of listeners to consume all that they can in the marketplace of ideas. AI just adds to that marketplace; so, shouldn't AI outputs be protected under the First Amendment?" Well, Character.AI wants its LLMs' outputs to have core First Amendment protections, meaning the outputs would be considered pure speech.
Next week, I’ll further outline the distinctions between pure speech, partially protected speech, and unprotected speech, as well as the different levels of scrutiny they trigger.
See you Free Speakers next week!
[1] Garcia v. Character Technologies, Inc., No. 6:24-cv-01903 (M.D. Fla. May 21, 2025), ECF No. 115 at 25, https://www.courtlistener.com/docket/69300919/115/garcia-v-character-technologies-inc/
[2] Garcia v. Character Technologies, Inc., No. 6:24-cv-01903 (M.D. Fla. Mar. 21, 2025), ECF No. 85, https://www.courtlistener.com/docket/69300919/85/garcia-v-character-technologies-inc/
[3] Id. at 26
[4] Id. at 26
Comments

Hey Nathan, really appreciate how you've laid out the effects of treating AI as pure speech. I just find Character AI's stance on this really contradictory. You mentioned that they are not "comfortable" taking the AI outputs as their own, yet at the same time, they want to establish the speech that the AI produces with core First Amendment protections. If they want to distance themselves from the speech so badly, then don't you think it's a little interesting that they would want it to be treated as pure speech? In my eyes, if I want to see something as pure speech, I would at least want an origin of that speech, and if it's not Character AI (and its resultant LLMs that are associated with the company), then who is taking ownership of this "pure" speech? Seems like this whole thing is them trying to play two sides of an argument depending on which one is more self-serving, but I just wanted to get your thoughts, or what the Court had to say about it. Once again, really well done!
Thanks for the question Diyaan! You're 100% right to question the lack of responsibility over LLM outputs alongside the simultaneous demand for the highest First Amendment protection and scrutiny. The plaintiffs make the same argument as you, adding that there can't be any expression attached to the "speech" without a named speaker for the outputs. They argue the whole point of the First Amendment is to promote the 'dissemination of free thought in the marketplace of ideas,' and all that jazz I talked about in Week 2.
However, Character.AI argues that all that is irrelevant. Because the First Amendment is meant to be understood in terms of "speech," and not "speakers," the Court should only concern itself with the speaker when the expressive intent is unclear. Expressive intent has traditionally been in question when conduct has "significant 'non-speech elements.'" (In this context, 'speech' vs. 'non-speech' refers to the literal meaning of 'words' vs. 'non-words.') I personally believe that because the conversations in question are in written words, there shouldn't be a question of expressive intent. However, the Court triggered the expressive conduct test anyway.
The Court cites Holloman v. Harland as its primary reasoning. In that case, a high school student was punished for "silently raising his fist during the daily flag salute," and he sued for relief. In the Circuit Court's analysis, it stated the act seemed "similar to the wearing of a black armband," akin to the pure speech in Tinker v. Des Moines. The Court then said that it didn't matter if the acts were "'pure speech' or 'expressive conduct' because the same test [is applied] in assessing school restrictions on either kind of expression." The Circuit Court basically said, 'it doesn't matter if this is expressive conduct or pure speech, because we'll apply the expressive conduct test anyway.'
I personally believe the Garcia Court bent over backwards to trigger this test. The case above only questioned expression because the student did not use spoken words. In cases where spoken words are the conduct under scrutiny, expressive intent is never questioned or speculated about. That's because we can read the words. Here, the words are present for us to read, and, in turn, so is the intent. For that reason, it rubs me the wrong way that the Court goes down this avenue of analysis, says there's a "split in persuasive authority," and refuses to answer the question at hand. But that's just my humble opinion as a high school senior. I'm sure the honorable Judge Conway knows more about the law than I do.
Looking forward to more of your questions! This was fun to answer 🙂
Hi Nathan, I love the way you broke down the complicated matter of First Amendment protections through a variety of court cases. I've frankly never heard of the listeners' side of the First Amendment argument, and I am quite skeptical of the logistics when applied to AI. Although these companies attempt to distance themselves from their chatbot programs, reminiscent of social media companies distancing themselves from user speech, I feel that artificial intelligence companies hold more control and are deeply intertwined with the content their models produce. Do you think previous examples of artificial intelligence companies implementing biases or censoring certain content will provide grounds to hold companies accountable for the content their LLMs generate? Very informative post all around, and I thoroughly enjoyed reading it!
Thanks for the comment Kingston, and yes, I definitely do! To be honest, we don't need to look at the variance in responses from different AI models to know that the way you train a model can greatly change the content it generates. But examples of Grok producing obscene content on Twitter, when ChatGPT wouldn't, do illustrate that difference pretty well.
I don’t want to spoil my analysis (because I may end up being wrong as I read more), but it’s currently my position that AI companies won’t be able to get around accepting their AI’s outputs as their own. The companies make the deliberate decision to create the AI. They are theoretically well versed in the technology, and understand the extent of its capabilities. During the deep-learning phase, companies have a heavy hand on the scale when determining what an AI can and cannot say. It follows that it should be the companies’ job to create enough guardrails during the learning process to ensure the AI doesn’t engage in any nefarious action. However, to what extent the action in the case at bar is nefarious is a different question.
Hope my future posts are able to remain up to standard for you!