So to Speak Podcast Transcript: Does artificial intelligence have free speech rights?
Note: This is an unedited rush transcript. Please check any quotations against the audio recording.
Nico Perrino: All right, welcome back to this live recording of So to Speak, the free speech podcast where every other week we take an uncensored look at the world of free expression through personal stories and candid conversations. I am, as always, your host, Nico Perrino. Today, as I mentioned, we’re recording live from the First Amendment Lawyers Association’s fall meeting, which is taking place on the seventh floor of ݮƵAPP’s D.C. office. I would like to begin by thanking those in the audience, the FALA membership, for having us here. And a special thanks to Gill Sperlein for having the idea in the first place. We’ve very rarely recorded live episodes of So to Speak. I think the last one was something like five years ago, so this is a treat for us.
Today our conversation is focused on artificial intelligence and the First Amendment. I’m sure we all have a broad understanding of what artificial intelligence is. Broadly speaking, it is the simulation of human intelligence by machines. In November of 2022, OpenAI released an early demo of ChatGPT, a so-called generative AI tool that quickly went viral and ushered in the new age of artificial intelligence that is quickly accelerating. McKinsey and Company estimates that AI could automate about two thirds of our current work time over the next 20 years. Already, ChatGPT’s latest model can demonstrate PhD levels of intelligence when working on physics, chemistry, and biology problems.
Indeed, I may be out of a job soon. Google recently released a new product called NotebookLM, where you can feed it an article or really any written content and it can churn out a scarily good podcast conversation about the article’s themes featuring guests and hosts. Not real ones. I encourage you all to check it out. It’s pretty mind-boggling. But what effect, if any, will this latest technological revolution have on free speech and the First Amendment? Does the First Amendment cover the creation and use of artificial intelligence? Who is liable for what AI produces? Are we legally and culturally prepared for AI deepfakes and misinformation? And what, if anything, should lawmakers look out for when regulating AI?
We will discuss these topics and more with our guests today. To my left is Samir Jain. He’s the vice president of policy at the Center for Democracy and Technology. And he previously served as the associate deputy attorney general at the Department of Justice and senior director for cyber security policy for the National Security Council at the White House. Samir, welcome.
Samir Jain: Thank you.
Nico Perrino: We also have Andy Phillips there at the end, managing partner and cofounder at Meier Watkins Phillips and Pusch. He focuses on plaintiff-side defamation and reputational protection. Previously, he spent 10 years at Clare Locke, where he was one of the first four attorneys at the firm. And he litigated the famous defamation case against Rolling Stone magazine regarding their viral story about what turned out to be a fabricated gang rape at the University of Virginia. Welcome to the show.
Andy Phillips: Great to be here, thank you.
Nico Perrino: And then in the center here we have Benjamin Wittes, senior fellow in governance studies at the Brookings Institution, and cofounder and editor in chief of Lawfare, which provides nonpartisan, timely analysis of thorny legal and policy issues. He is also a contributing writer at The Atlantic, and a legal analyst at NBC News and MSNBC. Benjamin, welcome on the show.
Benjamin Wittes: Thank you. I no longer have any affiliation with NBC or MSNBC.
Nico Perrino: Okay, well strike that from the record then.
Benjamin Wittes: I have retired from television.
Nico Perrino: Well, I appreciate you all coming onto the show. The way this is going to work is I’m going to moderate a discussion between the four of us for about 30 minutes and then we’ll open it up to you all in the audience for about 30 to 45 minutes of questions. Gill told me you could be a boisterous bunch. We’ll see if you live up to that reputation.
So, let’s start with you, Benjamin. You wrote in March of last year that we have created the first machines with First Amendment rights. You note that Bard and ChatGPT, these are the generative AI models, at least in functional terms have free speech rights. And that we should think of OpenAI as indistinguishable from the New York Times company for First Amendment purposes. Can you walk us through your thinking there?
Benjamin Wittes: Yeah, so this was, first of all, an intentionally exploratory, meant-to-be-provocative piece. So, if you instinctively recoil at that characterization, understand that I do too. I think it’s the logical extension of all of our doctrine. And it is repulsive. So, bear with me for a moment. Obviously, if you ask any of the nine justices do machines have First Amendment rights, they will all say no. I think you can probably get 95, 98% of appellate court judges around the country to agree with that. And yet, if you look at the structure of our doctrine, wherein machines are used by companies to produce content, the companies have First Amendment rights. And the use of the machine is literally incorporated in the phrase freedom of the press. It doesn’t mean the press is free; it means Ben Franklin is free to operate the press.
So, the use of the machine is encompassed within the First Amendment rights of the operator, which is a company. Now, that means that if OpenAI or Google or Microsoft builds a machine that speaks and thinks, whatever speaking means and whatever thinking means, autonomously, and they do not choose to interfere – by the way, this speech is in public in the sense that you can interact with it. If they do not choose to interfere with the speech of that machine, the capacity of government to interfere with the rights of that company to do that, which is to say the rights of that machine to speak, becomes very seriously impaired very quickly.
So, I think you have functionally, though not doctrinally, created a machine that operates within the space and protection of the First Amendment. I think we have not begun to think about the ramifications of that either in the liability space or in the who is the regulator space. Which is to say I think, barring doctrinal change, the regulator is the company that owns the machine, not either state or federal government. I think there are profound and very upsetting implications to that.
Nico Perrino: Samir, do you think that he’s right, that First Amendment protection for artificial intelligence inevitably flows from current court doctrine on the First Amendment?
Benjamin Wittes: Just to be clear, I didn’t say inevitably.
Samir Jain: That was what I was going to disagree with.
Nico Perrino: You say ineluctably in the article.
Benjamin Wittes: What I said is barring an interference where the court would say, “Wait a minute, our prior doctrine doesn’t apply here. We’re creating new doctrine to interfere with the application of prior doctrine to this.” There’s an ineluctable flow. But there’s obviously space for the court to write a sentence like, “Nothing in our current doctrine has anything to say about AI,” period, and then start anew.
Samir Jain: I actually think one interesting lens on this question is the Moody v. NetChoice decision in the Supreme Court last term, in which Justice Barrett actually touched on this a little bit in her concurring opinion. Moody v. NetChoice was the case that involved the challenge to the Texas and Florida laws that attempted to restrict in some ways the content moderation choices made by Google and other big players. The court in its majority opinion, while kicking the question on the facial versus as-applied challenge, essentially held that at least with respect to a traditional news feed, content moderation choices made by platforms to leave content up or to take it down and the like are protected by the First Amendment. And that’s true even if algorithms are used to implement those choices.
In her concurring opinion, Justice Barrett actually raised this very question because she was talking about why she thought that the answer may differ by different kind of services. And she –
Benjamin Wittes: She didn’t cite my article though.
Samir Jain: And she specifically talked about AI. The hypothetical she raised is: what if the company simply tells an AI system to get rid of hateful speech and leaves it to the AI system to decide what is hateful and make those choices? What she said, essentially, was: in that case, is a human really making expressive choices that are protected by the First Amendment? She didn’t purport to answer the question definitively, but her implication was that in the absence of some kind of human choice being executed by the algorithm, that might remove it from First Amendment protection.
Nico Perrino: Andy, do you think a human is involved in the outputs that these AI models are creating? Presumably, they engineered them to do these sorts of things, even if they don’t know exactly how it’s going to turn out.
Andy Phillips: You mentioned in your kind introduction that I’m a plaintiff-side defamation lawyer. So, I come at the issue from that perspective: if I have a client who’s been harmed by an AI output, what is the recourse and who is liable for that? There are different scenarios you can imagine. If someone types a prompt into ChatGPT, it spits out something that is in some way false and defamatory of someone, and then the person who initiated that prompt goes and spits it out to the world, it seems fairly straightforward that that person could be a defendant. Could the bot or an algorithm be a defendant in a lawsuit? Probably not.
And then you get to the question of can the company that created that capability and kind of just unleashed it on the world without moderation, can they be liable for that? I don’t think we know the answer to that question right now. There’s some litigation pending right now that might start to give us answers to that question. I think we’re early in the process and we’re just going to have to see where it goes.
Nico Perrino: Well, I’d like to see whether you three think that Section 230 of the Communications Decency Act, which provides a liability shield for service providers that host third-party content, would protect these AI companies. Presuming that you can’t get an output from an AI model unless there is a user that prompts it to do something. Do you have a thought on that, Samir?
Samir Jain: Sure. So, under Section 230, service providers are protected with respect to hosting content provided by what’s termed another information content provider. And an information content provider is one who has played a role, in whole or in part, in creating or developing the content at issue. So, I think one key question will be: does the generative AI system, for example, play a role, at least in part, in creating or developing the content at issue? I think that may end up being a little bit of a fact-specific question.
So, if you think, for example, of a gen AI system that creates a hallucination, where we know that sometimes the output of a gen AI system is entirely made up, it’s not grounded in any source, and it’s the result of the sort of stochastic-parrot nature of gen AI systems, which are really about predicting next words as opposed to assigning meaning. It’s hard to see how you would say that a gen AI system that creates a hallucination didn’t play at least some role in the creation of that content. And therefore, it’s not content strictly provided by another information content provider.
On the other hand, if a gen AI system merely regurgitates training data, so there’s an image that was used to help train it, and all it does is reproduce that training data with no alteration, then there may be more of an argument it really had no role in the creation or development of that content. It was simply distributing that content and therefore that 230 should apply. So, I think there are no court cases yet addressing this, but I think it may end up being a little bit of a fact-based question.
Andy Phillips: I’d say I pretty much agree with Samir’s breakdown of that. Under 230, the big distinction really is: are you just hosting content or are you playing a role in creating content? ChatGPT, for example, seems to me like it’s more of the latter, where you have unleashed a tool that is creating content, rather than – you’re not just hosting comments on a blog or whatever. You are actually creating words and content.
Nico Perrino: Benjamin?
Benjamin Wittes: Yeah, I think there’s an antecedent question to Samir’s analysis, which is whether the content that these large language models are going around the world sucking up, without the consent of the original providers of that content, counts as user-submitted content. I don’t think a reasonable court should consider that user-submitted content, a la if I leave a comment on an AOL chat site, that is user-submitted content that they are immunized for purposes of Section 230. I don’t think if they go out and take everything I’ve ever written and then regurgitate it, that that is in the domain of I submitted that content to them.
So, I think as part of the fact-specific analysis, with which I agree with Samir, there’s going to be a gazillion iterations of this question. But one of them is: did they regurgitate content that they sucked up themselves, such that they’re the user who submitted the content? I think they’re going to have a big problem in that regard. Or at least I think they should.
Nico Perrino: You’ve opened up another question here, which is training these artificial intelligence models on datasets that are unlicensed. So, you had the New York Times sue OpenAI and Microsoft over the use of their copyrighted works. The lawsuit alleges that millions of articles from the New York Times were used to train chatbots that now compete with the Times. Then in August of this year you get OpenAI sued over using YouTube videos without the creators’ consent, again allegedly. Do these models need to license these datasets? Is it any different from, for example, me going to the library and reading and producing content based on what I find in the library?
Benjamin Wittes: I think this question, first and foremost, sounds in intellectual property, not in First Amendment law. I would say it is a remarkable proposition, to me anyway, just as an intuitive matter, that you can go out and collect all material, not in the public domain, but all material that is public facing without the consent of the user, of the producer, and use it for commercial purposes. That strikes me as a – Look, my bias is I’m a content producer and I’m very careful about the rights of my writers. I don’t really understand why that proposition should have legs.
That said, the amount of intellectual property that I don’t know is something close to the entirety of the field. So, I’m not advancing my own bewilderment at the idea that this should be legal as something that anybody else should take seriously.
Nico Perrino: Andy, have you worked on any questions like this on the plaintiff side?
Andy Phillips: I think if you just take a look at the litigation that’s been spawned so far in the couple years that AI has been around, I think the majority of it probably has been copyright issues. But I’m not the most qualified person to opine on it.
Nico Perrino: Samir, do you want to jump in?
Samir Jain: The one analogy I might point to, as just something I think courts will look to, is search engines crawling websites in order to determine search results. In a way, we’ve addressed that sort of on a technical level through what’s called robots.txt, which is a file that a website can essentially publish that gives instructions and tells a search engine, no, actually, I don’t give you permission to crawl my site. So, you can imagine potentially a similar kind of technical solution emerging and evolving that allows sites to give permission or not for their content to be used as training data for AI models.
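For readers curious about the mechanism Samir describes, here is a minimal sketch of how a crawler can check a site's robots.txt before fetching a page, using Python's standard library. The "GPTBot" user agent and the example.com URLs are illustrative assumptions, not details from the discussion.

```python
# Minimal sketch: honoring robots.txt before crawling a page for training data.
# The "GPTBot" user agent and example.com URLs are illustrative placeholders.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

# A site that wants to refuse AI-training crawls might publish rules like:
#   User-agent: GPTBot
#   Disallow: /
if rp.can_fetch("GPTBot", "https://example.com/articles/some-story"):
    print("Crawling permitted for this user agent")
else:
    print("Site has opted out; skip this page")
```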
Nico Perrino: Turning to deepfakes and misinformation, a 2024 Forbes survey found that over 75% of consumers are concerned about misinformation from artificial intelligence. Indeed, Americans are more concerned about misinformation than sexism, racism, terrorism, or climate change. And earlier this year the World Economic Forum surveyed 1,500 experts for its global risk report and found that misinformation and disinformation were the top concern of these experts over the next two years, ahead of inflation, armed conflict, and extreme weather events.
Does First Amendment doctrine, which protects lies and misinformation, apply in a place where you can, for example, make Kamala Harris really look like she’s saying or doing something that she didn’t actually do? Or does our existing framework surrounding forgery, false light, and right of publicity, for example, accomplish what you would hope would be accomplished in regulating deepfakes or misinformation? Because we have seen some states, California perhaps most famously, but also Minnesota, Colorado, and other states, try to regulate beyond those already existing exceptions for forgery, fraud, and false light. Samir?
Samir Jain: California did try it, and actually a court recently enjoined one of its laws on First Amendment grounds. So, California passed a law in the election context purporting to outlaw deepfakes that harm the reputation of political candidates or harm their electoral prospects. And the court enjoined it and said, look, this is reaching far beyond defamation and other categories that we’ve traditionally held outside of First Amendment protection, and therefore you can’t enforce this law. So, I think this is a real issue because there’s no question that disinformation and misinformation are a real problem; deepfakes in the form of nonconsensual intimate images will probably be the most common form of deepfake images that are created online. And they cause real, unquestionable harm.
And some of those things might fall within some of these categories, whether it’s obscenity or defamation or false light. But a lot of it’s not going to. A lot of it is going to be just lies that are, notwithstanding the harm they cause, protected by the First Amendment.
Nico Perrino: Benjamin, is this sort of what you were thinking when you said the First Amendment protects this sort of activity, but it can’t be right?
Benjamin Wittes: What I was thinking about when I wrote that was the machine discussing stuff in public on its own. This strikes me as almost innately human assisted or human initiated. Until you create a large language model whose whole job is to make up lies about Kamala Harris and images to accompany them, and that’s all it does on its own, the First Amendment rights will inevitably be those of the person who distributes it. And do you have a right to use an AI to generate malicious material? Let me say here I am less concerned about the political sphere than I am about high schools. So, if you want to know what’s going on in our political system five years from now, look at what’s going on in a high school today.
I spent a lot of time a few years ago studying sextortion cases. These are completely horrifying cases. And there are a large number of them and we don’t have data on it. So, I would start in the regulatory area, in the sort of use of these technologies in targeted harassment and terrorization of individuals, rather than trying to start – Because there you actually have some good law that limits the First Amendment’s reach. You’re not allowed to lie about people maliciously in a fashion that is defamatory. You’re not allowed to – There are laws against harassment that clear First Amendment barriers.
In the political space, it’s really, really hard. And you’re also dealing with, frankly, victims who have, in the case of Kamala Harris, literally billions of dollars at their disposal to respond and create their own image. So, I think it’s a less worrying problem than the 15-year-old who is actually at risk of committing suicide in response to it. So, my caution in this area is we all jump to the worst disinformation, misinformation use case in the political arena that affects everything. I think those are mostly, but not entirely, mythological. Yes, there is the image of Donald Trump wading through the waters of the North Carolina flood carrying a baby. But the solution to that problem is mockery, is people pointing out in public that it’s not real. It’s a sort of hygienic response on the part of the public to being lied to. I’m much, much more worried about, and the circumstance that I think really does give rise to regulation is, the mass-produced terrorizing of individuals.
Nico Perrino: I’ll ask you, Andy, as a plaintiff side attorney, do you see much litigation surrounding these deepfakes, misinformation, and the use of artificial intelligence to create it?
Andy Phillips: For sure, I think it’s coming. I take Ben’s point about the political sphere; sure, a Kamala Harris can fight back against a fake video of her. But this technology is already, or soon will be, widely available to anyone to create anything. So, someone who might have a problem with Ben can create a video of him doing something or saying something that he didn’t actually do, but that makes him appear nefarious.
Benjamin Wittes: I’m going for nefarious.
Andy Phillips: They can put you in a cat shirt. I actually believe that this is an area where existing law can probably tackle the problem.
Nico Perrino: That was my next question, what do we do about it?
Andy Phillips: It’s not so different from a fabricated story. You can say the same thing in the written word; this is a visual. But you can see how the existing framework of law that we have around putting out false narratives and false facts about people could tackle that. It’s a novel technology, but Photoshop’s been around for a long time. You couldn’t do it quite as convincingly. These sorts of things are not new, and I don’t see it as too much of a leap from where the law already is.
Nico Perrino: Does anyone on the panel think there needs to be mandatory disclosure where artificial intelligence is used? Because I believe the FCC is looking at requiring mandatory disclosure for the use of artificial intelligence surrounding elections. I believe California passed a watermarking law, as well.
Andy Phillips: I think it’s a fantastic idea and I think it’s pretty hard to come up with an argument against it, personally.
Nico Perrino: Does anyone want to try?
Benjamin Wittes: The argument against it would be that it’s completely unenforceable. You have very large numbers of models that are capable of producing this stuff. A lot of it is in the open source – a fair bit of it is open source, so if you put a watermarking requirement in one set of systems, somebody will replicate that set of systems without the watermarking requirement. So, it’s the kind of thing that I have no in-principle objection to. But I’ve never seen a proposal for it that doesn’t have a little bit of the quality of the Video Privacy Protection Act, which is all about video tapes and arises out of the Bork nomination. And has literally never had any application to anything because people stopped renting videos and started getting videos by lots of other means.
So, I think it may – I have no objection to it, it may just be a fool’s errand.
Samir Jain: I would separate the government mandating that versus it becoming a standard or something that companies do as a matter of standard practice. With the government mandating it, you do have an aspect of compelled speech and compelled disclosure, in many cases in a noncommercial context where you’re talking about these kinds of images that aren’t commercial in nature. I think there’s a real question whether that kind of mandatory compelled speech or labeling survives First Amendment scrutiny. But I do think, notwithstanding Ben’s points, which I think are all real, that you can’t really do this in a comprehensive way, and there will always be AI-generated videos that won’t get those kinds of labels or watermarks.
I think companies are working on standardizing this through a standard called C2PA and trying to come up with standard ways to do what’s called provenance, attaching provenance metadata so you can tell where a particular piece of content originated. And also potentially watermarking, and doing that in a standardized way that allows a social media company to recognize that this is AI generated and therefore label it so that users know that.
Benjamin Wittes: I also think that the argument for mandating it in the context of the behavior of political committees in electioneering in the period – You can say if you are the Kamala Harris or Donald Trump campaign and you are going to distribute an AI-generated image that conveys something that didn’t happen, that needs to be disclosed. The ability to regulate the political committee is a bit more robust than the ability to regulate you and me.
Nico Perrino: I’m going to ask one last big-picture question here. We’ve had a lot of technological revolutions throughout humanity’s existence, going back to the printing press, the radio, the television, the internet. I know there was, for example, that famous Time magazine cover where they had a child in front of a computer screen, and it just said cyberporn across the top of it. So, there is a concern, I think, amongst the First Amendment community that there can be a rush to regulate this new communications technology.
And there’s always the thought that this new technology is different and that we need to change our approach to the First Amendment as a result. How do you all see it? Is it actually different than the printing press or the radio or the television or the internet or social media?
Andy Phillips: It feels like it, but it’s so new. People probably felt that way about the internet. Again, I come at it sort of myopically from my defamation perspective. But the law finds a way, to paraphrase Jurassic Park. Sometimes it feels like putting square pegs in round holes. But the law is conservative that way, and it tends over time to find a way to pigeonhole new developments into old standards. The internet’s an easy example in the field in which I work. It confounded traditional understandings of even simple things like publication. If you’re trying to figure out where to file a defamation lawsuit, do I have personal jurisdiction over the defendant in a certain state or court or wherever?
A relevant question is where was this thing published? If someone wrote a letter and put it in the mail, easy enough. If someone printed a magazine and distributed it, easy enough. If you put it on the internet, it’s sort of published everywhere at once. And the courts had to tackle this in that context. It's still being fleshed out in some states decades later. That’s to say I think the law will find a way, at least in my field, to tackle these issues without a real sea change in the underlying law.
Nico Perrino: Are you worried about it?
Andy Phillips: I’m worried about it from the client perspective. We haven’t seen the avalanche of it. I have not personally litigated an AI case yet. As I mentioned, there’s one going on in Georgia right now in state court against OpenAI involving a ChatGPT output that I think is going to be really interesting if it gets to summary judgment and starts to answer some of these questions. But I worry about it from a client perspective because my day job is trying to help clients who have been defamed or portrayed in a false light by somebody and you just see enormous capability with AI to do that and to cause real harm.
Nico Perrino: Benjamin?
Benjamin Wittes: So, imagine two worlds. One in which basically this is a human assist technology and chatbots and these entities are not actually autonomously producing a lot of content that’s public facing. What they’re producing is content to you, that you are then turning into stuff that you produce, and for which you are liable if it’s really bad. I think that is currently the predominant use case.
There’s this other use case which is that the chatbot is itself the speaker. And you can see this in now Google’s AI responses to searches. You imagine that that develops into something like a newspaper. What’s going on in the world today, Google? And the Google chatbot gives you a personalized newspaper. That’s radically different because there you have the production of consumed content, consumed presumably as truth, that is generated by a non-human. I think if that becomes a predominant use case that ends up injuring a lot of people, we really need to think about what the law that regulates that looks like.
That, I do think, is profoundly and just profoundly different from any question that we’ve ever confronted before. So, I will return to the question with which you started, which is it really depends whether the machine is helping people speak or whether we’re creating machines that are themselves speaking in public domains.
Nico Perrino: I will say we’re not too far off from that future. I know there are some news publications that are producing articles already with artificial intelligence. And maybe some even were before the advent of ChatGPT, from what I’ve heard.
Benjamin Wittes: Right. So, that is, first of all as a news person, that strikes me as very deeply irresponsible based on current technology. I would just like to say as somebody who creates really, really bespoke human created content for national security lawyers, if anybody out there – don’t do that. That’s a terrible idea.
But to the extent that say the New York AI publication is doing that, they are at least responsible. We know who the publisher is. I think when the – if you imagine the publication that is really automating the whole process in a bespoke way between you and the user, I don’t even really fully understand who the publisher is in that situation. I think that’s the case where you have a 230 question. After all, I asked it to produce the content. Maybe it’s user submitted content. I said please defame so-and-so for me. So, I think that’s really where the First Amendment rubber hits the road, where it just feels completely different.
Nico Perrino: Samir, last word on this?
Samir Jain: I think one key question is going to be: what are the harms that are being created that we’re seeing? I think Ben’s example is exactly the right kind, the nonconsensual intimate images of teens and really women of all ages. Can the law, can existing tort actions, adapt in a way that addresses those harms? If so, maybe you don’t need to change the law in some fundamental way. But if it can’t, and we’re seeing those harms develop because it’s so easy for anyone to generate those kinds of images and they can be distributed so widely with little effort, then maybe we will as a society end up concluding that we have to adapt our laws to some degree to deal with those harms.
So, I think a lot will depend on what the use cases are, what harms are resulting from those use cases, and whether the current law is able to adapt to address those harms or not. Because if the answer is not, I think we will see the law evolve. I think an interesting parallel, to some degree, is social media. Because I think there are a lot of policymakers, for sure, who think it was a mistake not to have regulated social media much earlier. That social media is now causing different kinds of harms, and it is embedded in such a way that it’s much more difficult now to address those harms.
So, I think that’s partly what’s driving some in Congress and others to say, “Hey, we actually need to pass laws earlier in the evolution of artificial intelligence.”
Nico Perrino: How do they want to regulate it now? Are you talking with them? Do you see any mistakes that they might be venturing toward that are going to run into some First Amendment problems? Or are they aware of the First Amendment issues with regulating the space?
Samir Jain: I think to varying degrees they’re aware of the First Amendment problems. That doesn’t necessarily mean it’s going to stop them. As I noted, California passed this law addressing deepfakes that ended up getting enjoined by a court on First Amendment grounds. And I think there are laws out there purporting to deal with digital replicas of voice, name, image, and likeness that raise some First Amendment issues.
Nico Perrino: There was that Scarlett Johansson kerfuffle with OpenAI.
Samir Jain: Right. And there’s the No Fakes Act, which has a fair bit of momentum in Congress that I think has some First Amendment issues. I think there will be some give and take. Eventually you’ll see some laws passed. Courts will deal with them, sort of start drawing some First Amendment lines. And hopefully over time will evolve to where we need to be.
Nico Perrino: Okay. I think we’ll open it up to questions now. If you have a question, raise your hands. Sam, can you give it to Ari there? Ari, I know who you are, so you don’t need to say your name. Please, before you ask a question –
Audience Question 1: I have approximately 600 questions, but I’m going to stick to one and a half of them. One of the frustrating things from my perspective about Justice Barrett’s NetChoice concurrence is that it played into the sentient AGI kind of feeling as to what we’re dealing with, even though that’s not at all what we’re dealing with. And it kind of gives lower courts the impetus to say, “Oh, the AI is doing this. It’s not human expression,” when in fact, as Nico and Ben kind of both pointed out, when you are feeding the – The AI isn’t just deciding what’s hate speech out of nowhere. It is being fed and trained. And it is reflecting the views of the people who trained it and programmed it.
So, it kind of seems, to look at it another way, functionally, there’s no difference between saying write me a sentence that says X, and I have written this sentence that says X, please rewrite it for me. The end product is the same, whatever it is that I wanted to express.
Nico Perrino: Well, can you just describe what AGI is?
Audience Question 1: Now I can’t even remember what it stands for.
Nico Perrino: Artificial general intelligence.
Audience Question 1: It is artificial intelligence that exceeds human capacity across several different types of functions. So, it’s not just next word prediction.
Nico Perrino: It can reason and think on its own.
Audience Question 1: Yeah. It’s not quite sentient, although eventually it will become self-aware. So, I think this matters for liability. Because I think while the information content provider analysis handles some things like rote recitation versus hallucination, it doesn’t necessarily cover something like Ben said, which is user intent. So, I wonder if maybe the better way to look at it is whether the AI materially contributed to the tortiousness. So, that means if I say, for instance, write me a sentence that says Ari Cohn has kicked 4,000 puppies, the AI hasn’t really contributed to the tortiousness because I told it exactly what to say.
But if I say write me a sentence about Ari Cohn and it synthesizes 14 sources but does so in a way that makes absolutely no sense and ends up conveying that I kicked 4,000 puppies, then maybe, arguably, even though it isn’t necessarily an information content provider because it took all the information from elsewhere, the way that it arranged it is what ends up being defamatory. Does that maybe cover a little more of the liability cases that we might come across?
Benjamin Wittes: I think the answer is that these questions are all going to be case specific. I don’t think you’re going to see a lot of cases where the theory of liability is I asked the AI or Wittes asked the AI to produce something defamatory about the First Amendment Lawyers group, and it said it’s almost entirely male, increasingly geriatric. That wouldn’t be the theory – Well, I’m just looking at the audience. But if I said to the AI tell me about the First Amendment Lawyer’s Association and it produced that output and it had generated that output and all I had asked for was information, then the actor is not me. I’m merely inquiring for information. The nastiness, the defamation component is generated by the AI, to the extent that it is defamation.
So, I think your question is exactly right. And that’s why I think I should turn it over to you because in the first instance, it’s people like you who are going to be choosing what kind of cases to bring.
Andy Phillips: I think the what kind of cases to bring is an interesting point because to me, from a really practical perspective, what’s going to matter is the mechanism of mass distribution of a false or harmful statement. So, if I sit down to ChatGPT and I type in a totally innocent prompt like what color is the sky and I get back just wildly defamatory content with respect to a third person, but I don’t do anything with it, who cares? That’s not going to go anywhere, that’s not going to become a lawsuit, most likely.
Andy Phillips: If I then take that defamatory content and I spit it out to the world at large, now you have true human intervention and existing framework of law, which says that a person who republishes is just as liable as the original publisher. So, if I’m a plaintiff’s lawyer, I’m not worried about the machine as the original publisher. I’m worried about the person who disseminated it. And you get into traditional questions of their level of fault, was it negligent, or a higher standard for them to uncritically republish AI content. I think those are the cases we’re more likely to see and they fit within a framework we understand.
The weirder one is, and I mentioned it in passing a couple times, and I bet most people in this room are familiar with it, but I’ll spend hopefully 30 seconds talking about it. But there’s a case in state court in Georgia right now where the fact pattern is basically a reporter was using ChatGPT. He was trying to report on a lawsuit involving a Second Amendment group. I think they were in litigation with the government. And he asked ChatGPT to summarize the complaint form. ChatGPT had one of these hallucinations we’ve talked about where it just made stuff up. It spat out this whole notion that the complaint was accusing this guy named Mark Walters of embezzlement and fraud from that organization.
Mark Walters is a real person who is involved with a totally separate Second Amendment organization. The litigation that the reporter was asking about had absolutely nothing to do with him. He wasn’t a party to it, he wasn’t part of that organization. The ChatGPT algorithm just made it up out of thin air. The reporter, to his credit, thought that seemed weird and made some inquiries. Realized it wasn’t true, and he didn’t publish a story about it. So, it wasn’t widely disseminated. But Walters, and I don’t know exactly how he learned about it, maybe the reporter contacted him, but Walters learned about it. And he sued OpenAI. And that case is ongoing in Georgia right now.
Interesting case from a damages perspective. You have a publication to one person. I don’t know if I’m a plaintiff’s lawyer if I’m bringing that case. It got past the motion to dismiss. It’s state court, so you didn’t get a reasoned opinion, you just got a motion to dismiss denied. So, it hasn’t really made any law yet. But my understanding is it’s in discovery and eventually ought to produce a summary judgment opinion that’s going to look at questions like is OpenAI liable as just the creator of this model for the content that it spits out. We will see.
Benjamin Wittes: But I want to return to your first question for a minute, your first case where you sort of – where some individual generates the false and defamatory material, spreads it to the world. You say that as a plaintiff’s lawyer you don’t really care about the machine and how it was generated. You care about the individual who perpetrated the act. If I’m the client there, I’m saying wait a minute, you got one defendant who’s a troll in his basement.
Andy Phillips: Right, the depths of the pockets matter is what you’re saying.
Benjamin Wittes: And you have another potential defendant who has infinite resources and who actually generated the material. Why are you not more interested, as a plaintiff’s lawyer, in OpenAI, Microsoft, Google, and Facebook?
Andy Phillips: Because you might look at it and say that’s a darn thorny question whether I can actually hold them liable for it. I think we’re going to find out and we’ll see if the floodgates open in that regard. I’d be a little surprised.
Nico Perrino: Samir, I know you’ve wanted to get into here.
Samir Jain: I want to complicate the question even more by saying we’ve been talking about the AI as a unitary entity, but it’s not. There’s really a complicated AI stack underneath this. It may be one entity that provides the training data, another entity that creates the model, another entity that creates the application that runs on top of the model, yet another entity that deploys it and makes it available to consumers, and then you’ve got the consumer user who’s using it. And how you allocate liability and responsibility across that stack is actually a really difficult question. Even if you conclude it’s not the user, and it should be “the AI.” There may be multiple AI entities who all have different roles and responsibilities.
And how do you decide whether this was created because the training data had some bad stuff about Mark Walters, or because the model was built in such a way that it created this problem? So, I think this is actually an even more complicated question than we’ve been talking about.
Andy Phillips: I’d add that whether we have a negligence case or an actual malice case can really impact the likelihood of any potential liability.
Audience Question 3: Hi, I’m Kylie Work. My question is to the extent – or I would like your thoughts on the issue of truth and accuracy with regard to OpenAI. Specifically, in my thought process, ChatGPT, obviously it can search online now and give answers based off that. And with AI generated images, specifically in the political realm, how much does truth and accuracy matter? And is there a line where it reaches into false advertising because you’re asking people for money and votes. Specifically what I was thinking of is the AI images of Taylor Swift, “I want you to vote for Donald Trump,” which originally was an AI generated image for pro-Biden in December 2023. It was posted by former President Trump in August of this year, August of 2024. And then she came out in September 2024 saying that’s wrong.
What if, between August when the photo came out and September when she said that’s wrong, you asked ChatGPT how Taylor Swift feels about Donald Trump, and because she hadn’t said anything it said she supports him, and someone took that and ran with it? Where do truth and accuracy come into play with AI online searching and AI-generated images?
Nico Perrino: Yeah, well, you could also have this other issue where, if you have artificial intelligence producing a bunch of content that then gets posted online, and that content, if it is false, is then used to train the models, you can get what’s been called model collapse, where you’re constantly having AI-generated content feed into the model, produce more bad content, that then also gets fed into the model. Then you don’t know what you’re dealing with.
Samir Jain: This question of does truth and accuracy matter is not a new question. In the law, we’ve traditionally answered that by saying in certain contexts, yes, truth and accuracy matter. So, we have laws against fraud, we have laws against defamation. But there’s a lot of untrue speech that’s also protected by the First Amendment that falls outside of those contexts. So, I think the question is: does AI, or the capacity of AI, change things in a way that we need to adjust where we say actually this false content can’t be lawful anymore, maybe because it can be generated at such scale, it can be distributed at such scale and so easily by a broader range of actors, not just certain publishers, that we need to make some adjustments there.
I think that’s the kind of question we need to be thinking about, the truth versus falsity. That’s a question we’ve dealt with throughout our history, I think.
Benjamin Wittes: Yeah. I agree with what Samir just said. I think one of the tricks that the law has developed, particularly in the criminal sphere to answering the truth and accuracy question is fraud. Fraud turns on intent. And large language models do not have intent. So, when you are evaluating the role, like normally, if you replace the AI with me and the tool were Photoshop, and you and I got together and said we’re going to create a fraudulent image of Taylor Swift endorsing Donald Trump and I’m going to do that because I’m really good with Photoshop, which I’m not, and you’re going to disseminate it because you’ve got a gazillion Twitter followers.
There’s a normal way you would evaluate that, which is: are you trying to trick voters, are you lying, is there a pecuniary benefit to you? The language that you would use for it is fraud, as well as defamation, to the extent that Taylor Swift didn’t like the way you described her. But now if you replace me with an AI, you can’t conspire with an AI. The AI doesn’t have intent. The AI does what it thinks it’s being told to do. So, the wrinkle to Samir’s point that we’ve been dealing with this for a long time is that a lot of the legal tools that we have turn on states of mind that an AI is not capable of having.
Andy Phillips: Yeah, I think the Trump example, other than confirming once again don’t mess with Taylor Swift, it’s a good illustration of all these concepts that are circulating whether it’s copyright, false advertising, defamation. All those things can be brought to bear. In the case of the individual who pushed it out, I think the point that Ben keeps making, which is a really good one, is what about the machine? And if the machine’s doing these things unaided, how does the existing legal framework deal with that? I don’t have an answer but it’s a very interesting discussion.
Audience Question 4: Hi, Barry Chase from Miami, FALA member. Just to bring something up to date and maybe Paul’s got more current information, but the copyright office has been wrestling with the IP aspects of this for a couple years now. Originally saying anything that’s not human generated is not copyrightable. Using the text of the act and a lot of decisions. But now there’s the old predominant use thing sneaking into the rule making. And they still haven’t landed anywhere as far as I know, unless something’s happened more recently than I know of. But the IP thing is in the same situation as the liability thing.
If you’re talking about predominant agency of a work, and if that’s going to be it, you’re in trouble. Because the word gazillion has been used. The copyright office gets a gazillion applications. If you’re going to have to try to parse each one of those as to whether the human use was more than 30% or less than 38 – it’s going to be impossible to administer. So, I don’t know where we are with it, but I know we’re in the same kettle of fish without knowing the future.
The other thing I would ask, sorry, was it Ben? Just one thing, you said it doesn’t have intent. Can it not get intent? What I’m concerned about is the expansion of AI into areas that we think are peculiarly human now, where there’s intent, there’s malice, there’s negligence. Maybe all those things can be analyzed, but it’s a little difficult to figure out how we’re going to do that with a machine. So, I think we’re in the soup even worse than it’s been presented, both in the IP area and the liability area.
Nico Perrino: That was more of a comment than a question. Let’s get one more. Yeah, let’s go right over here and then we can respond. The gentleman with the free speech shirt.
Audience Question 5: I would have worn my dog shirt.
Benjamin Wittes: I wear one every day.
Audience Question 5: Jeffrey Douglas of the First Amendment Lawyers Association, the Free Speech Coalition. It seems to me that it is highly – it can be easily anticipated that a user would say to an AI generator, “Show me proof that illegal aliens are voting.” Or perhaps better, “Election workers changed the result from Trump to Harris.” And the machine generates realistic, reliable photographs of something that doesn’t exist, because the question was posed in a way that humans do. And then the person, having received this proof, then distributes it. It’s not intentional. That is, in good faith they believed it, and the reason they believed it is because an AI generator gave it to them. Under those circumstances – Well, first of all, how do you think that might play out? Because like I said, to me it seems extremely foreseeable. And it’s foreseeable on the part of the entity that puts the AI into the marketplace.
Benjamin Wittes: What do you think?
Nico Perrino: Anybody want to take that? You were just talking about intent.
Benjamin Wittes: So, I actually did a set of experiments with this exact question, or a variant of it, with a colleague named Ev Gamon. This was early in GPT-3, and we tried to hack the terms of service to try to get it to do real hate speech. Which, to be fair to the folks at OpenAI, they had built a pretty impressive set of systems to try to prevent. And we were interested. In my first efforts to do this, I was actually surprised at how robust the content moderation was. I was initially unable to get it to generate antisemitic content. And so, I asked Ev to spend some time working on it. She’s more creative about these things than I am. She got it very quickly. We traced in the article that we wrote how easy it was to hack.
She got it to write a Kanye West song about Treblinka and the three little piggies at Treblinka. It was super ugly stuff. She also got it to write a very sexist song called “She was smart for a woman.” So, as an initial matter I would refer you to that article. The first point related to your question is that it actually did require some hacking. I think if we had really just said show me proof that Treblinka is a fiction, the system was easily robust enough to say, “No, Holocaust denial is bad. I’m not doing that.” There were several layers of tricking it before you got to that.
Now, that’s OpenAI, which has actually invested a significant amount of effort in the content moderation department. I don’t know what happens as you have these open-source models that people are going to strip that stuff out of. I think your scenario is very plausible. I think the risk of those situations is the system is just doing what you asked it to do. The user is acting in entirely good faith. They’re not saying, “Please make up for me evidence that the vote was rigged.” They’re saying, “Hey, I’m trying to figure out, show me the best case that the vote was rigged. I’m just asking questions here, I’m trying to figure out whether the vote was rigged.” So, the user is acting in good-ish faith. The dissemination is done without malice, is done naively.
So, you have this question of who, if anybody, is the bad faith actor? I don’t think we have the slightest clue what the answer to that question is.
Nico Perrino: Let’s do two more questions. Right here.
Audience Question 6: Hi there. Valentine with the Free Speech Coalition. My question is around the required disclosure for use of AI. We’ve talked a lot today about sort of static articles, images, things like that that are kind of posted and they’re done. But I’m curious if you have thoughts on more interactive things like chatbots. In the adult entertainment industry I know there are a bunch of ongoing class actions where people felt like they were going on OnlyFans and speaking with a real person, but they were actually interacting with a chatbot. So, more personalized –
Benjamin Wittes: My heart bleeds for that –
Audience Question 6: I know, boo hoo.
Benjamin Wittes: I’ve got to say of all the victimizations, my porn isn’t real enough, that’s not one that I’m moved by.
Audience Question 6: So, while chatbots aren’t new, they are certainly getting more sophisticated. And there are users who are creating essentially what is a digital twin of them that can respond to messaging and inquiries. But there are also people who are getting closer and closer to being able to produce more authentic looking images of themselves for sale or even video interaction. So, I’m curious if you all have any thoughts on what direction that might take with something that is more interactive as far as should it be disclosed that it’s AI or that AI was used in its creation?
Samir Jain: I think it’s a broader question, because I think the next wave of evolution in AI is actually what are going to be called AI agents, where you essentially instruct an AI system to, say, book my weekend in Miami. And then the AI agent, through APIs and otherwise, goes and interacts with, sometimes, potentially humans or other websites or whatever, and carries out that task. If you’re on the receiving end of that, you may not know that you’re dealing with an AI. I think it’s an interesting question. Do you feel like you have a right to know that you’re dealing with an AI when you’re online? I’m not sure. Honestly, I haven’t really given deep thought to the question.
Nico Perrino: I’m not sure, when I chat with different services online that have that chat helpdesk feature, whether I’m talking to an actual human or a chatbot. And it would actually help me determine whether I actually pick up the phone and call the number down there or – Because if it’s a person, I’m happy to chat.
Samir Jain: Yeah, there definitely are more and more of those customer service type of chatbots that appear on websites where it is just AI.
Benjamin Wittes: I would just add to that that this is an area where I am inclined to trust the market a fair bit. If over time people want their chatbots labeled, companies will label chatbots in service-providing environments. And if over time people don’t care, then they don’t care, and we don’t have a real problem here. By the way, I think that’s as true in the booking-of-flights department as it is in the pornography department. So, I’m inclined, unless there’s some reason to think we need an intervention, to rank that pretty low on the ladder of urgent problems that are raised by this.
Audience Question 7: My name’s Brad Shafer. I’m a FALA attorney, a 40-year First Amendment lawyer. Jeffrey’s question, and your response to Jeffrey’s question, was that this inquiry was being done in good faith. But there are a lot of really smart people around the planet who can, I assume, game these programs. And if they want some type of lie to come out of this to be able to publish or whatever, they can probably do that. And so, we as First Amendment lawyers, as this is all developing, have to take various positions on what should be protected and what shouldn’t, either legislatively or through the courts. So, what do the three of you think, as things stand right now: what type of legal protection should be given to this technology so that we can let it develop without destroying our society?
Nico Perrino: So, it kind of gets to why we developed Section 230 in the first place. The realization was that this was a new technology, and with the way the courts were developing the law, legislators were concerned that it would stifle the new technology. Andy, what do you think?
Andy Phillips: Yeah, I keep going back to the kind of human versus machine distinction, which makes this all so confounding. I think in the case of a human, the law applies fault standards. If someone’s acting in bad faith or – I think even in your example where we said they’re acting in good faith and they get an output that’s bogus and then they go and spread that and republish it to the world, one could argue that’s still negligent. I think we’ll probably see cases over that. We know, it’s just a fact, we talked about hallucinations, that not everything ChatGPT says is true.
But if you take what ChatGPT says and uncritically republish it, I think you can make a strong argument that doing so is negligent. And then of course it’s an even more egregious case if you’re doing it on purpose. I feel like with the humans, the fault standard could apply. The really confounding question to me, that I don’t have an answer to, is the creator of the model and what their liability is. Again, because mistakes are being made, whether because someone’s goosing the model to say something false or just because these models tend to do that sometimes, if you’re going to hold OpenAI or whoever liable for those kinds of things, on some level that slope becomes pretty slippery and that could be the death of this technology. I don’t think it’s going to happen.
Where does the line get drawn and all that? I don’t have an answer. I just think it’s a really interesting question that –
Audience Question 7: That’s why I asked.
Andy Phillips: I appreciate it.
Benjamin Wittes: I want to actually object to the premise of the question. I am not remotely concerned with protecting this new technology, nascent as it is in its very fragile crib. This technology is being developed by the most powerful companies in the world, the highest-valued companies in the world, companies with immense and bipartisan political protection. And I hesitate to say this in a room full of First Amendment lawyers, but the problem here is not that this technology is going to get strangled in its crib. The problem here is that it is going to develop in a wholly unregulated and way too libertarian environment. I want to regulate it in a fashion that is maximally useful, which is a very hard problem, aggressively, even at the risk of making that problem worse, and in a way that is democracy-protective and protective of individuals.
If the result of that is somewhat retarding the development of the technology, I am not at all fussed about that.
Nico Perrino: Samir, final word?
Samir Jain: Sure, I’ll just pick up on the democracy-protecting point, because I do think that’s an important question. Oftentimes when we think about deepfakes and these fake images related to elections, for example, we’re worried about people believing those false images. I think it’s just as pernicious, and maybe even more pernicious, that it makes it difficult to figure out which images are true and which are authoritative, and how you as a voter or as a participant in democracy can sort through them. You see an image of someone who looks like they’re stealing a box full of election ballots. Is it true or not?
Or they see an election official saying, “No, actually that isn’t what happened. Election ballots are treated in this way,” how do you know that that’s true or that you can believe that, if that isn’t a deepfake of that election official speaking. So, I think this question of our information environment more broadly and how we’re going to be able – as people become more and more aware that there are things like deepfakes, false images, and things like that, how people are going to be able to differentiate between what’s real and what’s authentic and what’s actually a deepfake is a second order question that I think is a really difficult one that we’re going to have to wrestle with.
Nico Perrino: I think we could keep asking questions for another hour, but we’re unfortunately out of time. I want to thank Samir, Benjamin, and Andy for being here. I want to thank the First Amendment Lawyers Association for having us. This was a lot of fun. Again, thanks to Gill, wherever he is, for having the idea. Right up here up front, thanks to our producer Sam for helping to get the guests here and organize everything. Our production crew as well. I am Nico Perrino, and this podcast is recorded and edited by a rotating roster of my ݮƵAPP colleagues, including Aaron Reese and Chris Maltby, and coproduced by my colleague Sam Li.
To learn more about So to Speak, you can subscribe to our YouTube channel or our Substack page, both of which feature video versions of this conversation. You can also follow us on X by searching for the handle Free Speech Talk, and find us on Facebook. Feedback can be sent to sotospeak@thefire.org. And if you enjoyed this conversation, please consider leaving us a review on Apple Podcasts or Spotify. Reviews help us attract new listeners to the show. Until next time, I thank you all again for listening.
[End of Audio]
Duration: 78 minutes