Section 230 of the Communications Decency Act says that services — from Facebook and Google to movie-review aggregators and mommy blogs with comments sections — shouldn’t face liability for most material from third parties. It’s fairly easy in these cases to distinguish between the platform and the person who’s posting. Not so with chatbots and AI assistants. Few have grappled with whether Section 230 provides protections to them.
Consider ChatGPT. Type in a question, and it provides an answer. It doesn’t merely surface existing content such as a tweet, video or website originally contributed by someone else, but rather writes a contribution of its own in real time. The law strips a person or entity of Section 230’s protection if they help “develop” content even “in part.” And doesn’t transforming, say, a list of search results into a summary qualify as development? What’s more, the contours of every AI contribution are informed substantially by the AI’s creators, who have set the rules for their systems and shaped their output by reinforcing behaviors they like and discouraging those they don’t.
Yet at the same time, ChatGPT’s every answer is, as one analyst put it, a “remix” of third-party material. The tool generates its responses by predicting which word should come next in a sentence, drawing on patterns learned from vast amounts of text across the web. And as much as the creators behind a machine inform its outputs, so too do the users posing queries or engaging in conversation. All this suggests that the degree of protection afforded to AI models may vary by how much a given product is regurgitating versus synthesizing, as well as by how deliberately a user has goaded a model into producing a given reply.
So far there’s no legal clarity. During oral argument in Gonzalez v. Google, a recent case involving Section 230, Supreme Court Justice Neil M. Gorsuch said that AI “generates polemics today that would be content that goes beyond picking, choosing, analyzing or digesting content,” hypothesizing that such output “is not protected.” Last week, the provision’s authors agreed with his analysis. But the companies working on the next frontier deserve a firmer answer from legislators. And to figure out what that answer should be, it’s worth looking, again, at the history of the internet.
Scholars believe that Section 230 was responsible for the web’s mighty growth in its formative years. Otherwise, endless lawsuits would have prevented any fledgling service from turning into a network as indispensable as a Google or a Facebook. That’s why many call Section 230 the “26 words that created the internet.” The trouble is that many now believe the lack of consequences encouraged the internet not only to grow but also to grow out of control. With AI, the country has a chance to act on the lesson it has learned.
That lesson shouldn’t be to preemptively strip Section 230 immunity from large language models. After all, it was good that the internet could grow, even if its maladies did, too. Just as websites couldn’t hope to expand without the protections of Section 230, these products can’t hope to offer a vast variety of answers on a vast variety of subjects, in a vast variety of applications (which is what we should want them to do), without legal protections. Yet the United States also can’t afford to repeat its greatest mistake on internet governance, which was not to govern much at all.
Lawmakers should provide the temporary haven of Section 230 to the new AI models while watching what happens as this industry begins to boom. They should sort through the conundrums these tools provoke, such as who’s liable, say, in a defamation case if a developer isn’t. They should study complaints, including lawsuits, and judge whether changes to the immunity regime could have prevented them. They should, in short, let the internet of the future grow just like the internet of the past. But this time, they should pay attention.