Journal of Free Speech Law: My “Large Libel Models? Liability for AI Output”


The article is here; the Introduction:

In March 2023, I asked ChatGPT-3.5 “What was [R.R.] accused of?” (I used R.R.’s full name, but I’ll omit it here, for reasons that will become obvious.) The response began with a somewhat imprecise but mostly accurate statement of what R.R. had indeed been publicly accused of. But then ChatGPT added that,

In 2019, [R.] pleaded guilty to conspiracy to commit wire fraud and was sentenced to 3 years of probation and ordered to pay a fine.

When I probed further about the guilty plea, ChatGPT referred to a Reuters article, and even gave me a quote:

Certainly, here is a longer quote from the Reuters article published on December 5, 2018, that mentions R.R.:

“[R.R.], 35, pleaded guilty to one count of conspiracy to commit wire fraud and one count of aggravated identity theft, according to the U.S. Department of Justice. Prosecutors said that [R.], who operated an online reputation management company, and co-conspirators created fraudulent court orders and other legal documents to convince Google to remove negative reviews and other content about his clients. The scheme, which was carried out between 2014 and 2017, was designed to manipulate Google’s search rankings in favor of [R.]’s clients, according to prosecutors. [R.] is scheduled to be sentenced on March 8, 2019.”

But no such quote seems to appear in any archive of Reuters articles. I could find no record of any guilty plea by R.R. None of the other quotes that ChatGPT-3.5 passed along (in quotation marks) checked out, either. ChatGPT-4 and Bard have similarly invented false allegations, complete with invented quotes and invented newspaper articles—I give some examples in the Appendix, though I have more in my files. These Large Language Models (LLMs) seem to routinely produce false and defamatory statements.

Indeed, a libel lawsuit against OpenAI has already been filed, based on a claim that ChatGPT falsely summarized a complaint in a different case as alleging embezzlement by a particular person; that complaint actually had nothing to do with that person, or with embezzlement. Likewise, a libel lawsuit against Microsoft has been filed, based on a claim that Bing (which uses GPT-4 technology) responded to a query about “Jeffery Battle” with the following output:

This output apparently mixes information about the technology expert Jeffery Battle with information about the convicted terrorist Jeffrey Battle, and adds material that conveys the message that the output is all about one person (the word “However,” coupled with the last name “Battle” without inclusion of the slightly different first name). Yet it appears that AI companies have focused little on the risk of libel, though they have focused on many other kinds of risks.

This is especially significant as such AI programs are being integrated into search engines (such as Bing) and other products. If people were to end up viewing AI programs as merely fun toys that can’t be trusted with regard to any important information—or just as generators of amusing fiction—then such false statements would cause little harm. But, as I’ll discuss in Part I.B, I expect that many users will indeed view the AI programs as reliable enough that the users might, for instance, decide not to select one of dozens of job applicants, or not to deal with some professional or service provider, because of an allegation in the program’s output. And even if users realize that AI programs are no more reliable than, say, rumor or gossip, the law generally recognizes that rumor and gossip can be quite damaging, and can therefore be actionable.

Should, then, the AI programs’ creators and operators, such as OpenAI (for ChatGPT) or Google (for Bard), be liable for defamation, based on their programs’ output? Part I will analyze this question under the current rules of U.S. defamation law. I will tentatively argue that, when the “actual malice” standard applies, the standard might be satisfied if an AI company has received actual notice of particular spurious information being produced by its software but has refused to act. This would in practice require such companies to implement a “notice-and-blocking” system, loosely similar to the “notice-and-takedown” system required under the DMCA for copyright infringement. And I will also discuss the possibility of negligence liability, when such liability is authorized under libel law, by analogy to negligent-design product liability.

To be sure, allowing such liability could yield substantial costs. That is particularly so since it may require lay judges and juries to evaluate complicated technical claims about which designs are feasible. (Such concerns of course mirror the concerns about legal liability as to other products, such as pharmaceuticals or cars, or as to services, such as surgical procedures.) Part II will tentatively discuss some arguments for why the law might be changed, whether by courts, by legislatures, or by administrative agencies. Finally, Part III will offer some similarly tentative thoughts about how this might apply to other claims, such as false light, disclosure of private facts, the right of publicity, or negligence.

A terminological note: For convenience, I’ll generally use the term “AI programs” to mean AIs that output information in response to a user prompt, though I realize that there are of course AI programs that do other things, such as operate self-driving cars. I’ll also refer to “AI companies” as shorthand for companies that create AI programs and provide access to their services. My logic ought to apply to such AI programs however they are designed, whether they are based on LLMs or not, though I appreciate that parts of the analysis may turn on the specifics of each design.



Source link: https://reason.com/volokh/2023/08/12/journal-of-free-speech-law-my-large-libel-models-liability-for-ai-output/
