Opinion | Does Section 230 protect ChatGPT? Congress should say.


The early days of OpenAI’s ChatGPT were something of a replay of internet history — beginning with excitement about the technology and ending in trepidation about the harm it could do. ChatGPT and other “large language models” — artificially intelligent systems trained on vast troves of text — can turn into liars, or racists, or terrorist accomplices that explain how to build dirty bombs. The question is: When that happens, who’s responsible?

Section 230 of the Communications Decency Act says that services — from Facebook and Google to movie-review aggregators and mommy blogs with comments sections — shouldn’t face liability for most material posted by third parties. In those cases, it’s fairly easy to distinguish between the platform and the person doing the posting. Not so with chatbots and AI assistants. Few courts or lawmakers have grappled with whether Section 230’s protections extend to them.

Consider ChatGPT. Type in a question, and it provides an answer. It doesn’t merely surface existing content such as a tweet, video or website originally contributed by someone else; it writes a contribution of its own in real time. The law withholds its protection from any person or entity that helps “develop” content, even “in part.” Doesn’t transforming, say, a list of search results into a summary qualify as development? What’s more, the contours of every AI contribution are shaped substantially by the AI’s creators, who set the rules for their systems and mold their output by reinforcing behaviors they like and discouraging those they don’t.


Yet at the same time, ChatGPT’s every answer is, as one analyst put it, a “remix” of third-party material. The tool generates its responses by predicting which word should come next in a sentence, based on patterns in the enormous body of text — much of it drawn from the web — that it was trained on. And as much as the creators behind a machine inform its outputs, so too do the users posing queries or engaging in conversation. All this suggests that the degree of protection afforded to AI models may vary with how much a given product regurgitates versus synthesizes, as well as with how deliberately a user has goaded a model into producing a given reply.

So far, there’s no legal clarity. During oral argument in a recent case involving Section 230, Supreme Court Justice Neil M. Gorsuch observed that AI “generates polemics today that would be content that goes beyond picking, choosing, analyzing or digesting content” — and suggested that such output “is not protected.” Last week, the provision’s authors agreed with his analysis. But the companies working on the next frontier deserve a firmer answer from legislators. And to figure out what that answer should be, it’s worth looking, again, at the history of the internet.


Scholars believe that Section 230 was responsible for the web’s mighty growth in its formative years. Otherwise, endless lawsuits would have prevented any fledgling service from turning into a network as indispensable as a Google or a Facebook. That’s why many call Section 230 the “26 words that created the internet.” The trouble is that many now think, in retrospect, that a lack of consequences encouraged the internet not only to grow but also to grow out of control. With AI, the country has a chance to act on the lesson it has learned.

That lesson shouldn’t be to preemptively strip Section 230 immunity from large language models. After all, it was good that the internet could grow, even if its maladies did, too. Just as websites couldn’t have hoped to expand without the protections of Section 230, these products can’t hope to offer a vast variety of answers on a vast variety of subjects, in a vast variety of applications — which is what we should want them to do — without legal protections. Yet the United States also can’t afford to repeat its greatest mistake on internet governance, which was to not govern much at all.

Lawmakers should extend the temporary haven of Section 230 to the new AI models while watching what happens as this industry begins to boom. They should sort through the conundrums these tools provoke, such as who’s liable, say, in a defamation case if the developer isn’t. They should study complaints, including lawsuits, and judge whether they could be avoided by modifying the immunity regime. They should, in short, let the internet of the future grow just like the internet of the past. But this time, they should pay attention.


