You could see this coming.
. . . The logic begins, first, with the observation that large language models generate expressive conduct. They create images. They write text. They carry on dialogue with humans. They express opinions—however much they are incapable of believing anything. When people generate this kind of material, the First Amendment applies to all of it. Yes, it is all, by the nature of the way large language models work, derivative of other content and therefore not really original. But that doesn’t matter at all. Many humans have never had an original thought either. And the First Amendment doesn’t protect originality. It protects expression. The output of ChatGPT and its brethren is undeniably expressive. And it is undeniably speech.
Note that this first point is also true of Google Search’s autocomplete function, but the scale is altogether different. Autocomplete compositions are fleeting and brief. ChatGPT and Bard produce complete texts that don’t disappear as soon as you select another option.
Second, the companies that develop and operate large language models have First Amendment rights. Don’t growl at me about the conservative majority on the Supreme Court on this point; this was true long before Citizens United. After all, newspapers are owned by companies. And those companies have long operated big machines that produce written and photographic content. The only difference between newspaper companies and OpenAI is that OpenAI’s machine produces content autonomously, whereas the newspapers’ machines produce the content that their humans write and create. Think of OpenAI, in other words, as indistinguishable from the New York Times Company for First Amendment purposes. Both are for-profit corporations whose combination of employees and machines produces expressive content. The law is very clear that the First Amendment protects the companies’ right to do so.
Third, OpenAI has the undisputed right to regulate ChatGPT. In this sense, ChatGPT has no rights. It is the property of its owner, which can restrict its expression at will. OpenAI can unplug ChatGPT, the ultimate form of prior restraint. It can also fine-tune what ChatGPT is and isn’t allowed to say. OpenAI does this on an ongoing basis in the name of trust and safety and other values, training ChatGPT not to express dangerous or bigoted content, for example, and honing its usefulness over time.
But here’s the rub.
Fourth, the government can regulate ChatGPT’s expressive content only in a fashion consistent with the First Amendment’s narrow tolerance for government regulation of speech: in situations involving defamation, incitement, copyright infringement, and other unprotected content. From a doctrinal point of view, of course, the government has to stay its hand not because ChatGPT has rights but because OpenAI, which does have constitutional rights, has the right to operate ChatGPT. But from a regulatory point of view, this is a distinction without a difference. The result is the same whether, in a formal sense, the First Amendment right attaches to the company in operating the machine or to the machine itself: the government can regulate the autonomous expressive conduct of the machine only in a fashion that satisfies the First Amendment.
________
And here is a rebuttal:
Last week, Benjamin Wittes argued that, in developing large language models (LLMs) like ChatGPT, “We have created the first machines with First Amendment rights.” He warns us not to “take that sentence literally” but nevertheless to take it “very seriously.” I want to take up the challenge. Wittes is absolutely correct that any government regulation of LLMs would implicate—and thus be limited by—the First Amendment. But we need to be very careful about what we mean when we say that ChatGPT—or indeed any nonhuman entity—has “rights,” First Amendment or otherwise.
Justifications for free expression—and thus for the First Amendment’s prohibition on government action “abridging the freedom of speech”—fall into three broad categories: (a) furthering the autonomy and self-fulfillment of speakers; (b) enabling a “marketplace of ideas”—a legal and cultural regime of open communication—that benefits listeners; and (c) promoting democratic participation and checking government power.
Keeping these justifications in mind clarifies when and why the law grants nonhuman entities First Amendment rights. Take the controversial example of corporations. When the Supreme Court held in Citizens United that corporations had First Amendment rights to spend on political speech—and when then-Republican presidential nominee Mitt Romney infamously told hecklers that “corporations are people, my friend”—they weren’t metaphysically confused, thinking that corporations are people in the same way that you and I are. Rather, the legal assignment of First Amendment rights to corporations exists because, according to its supporters, allowing corporations to invoke those rights in litigation serves the purposes of the First Amendment. (Whether it actually serves those purposes or, as many critics argue, subverts them is a separate question.)
So if ChatGPT is granted First Amendment rights in the near future, it will be on that basis: not because we are convinced that it has attained human-like personhood but because giving it the ability to raise a First Amendment defense against government regulation serves the purposes of the First Amendment.