Can ChatGPT be accused of defamation if it says something false about you?
Cross-analysis of American case law and French press law on AI defamation.
The rise of large language models (LLMs) has opened up an unprecedented range of uses… and risks. Among these risks, “hallucinations”, those responses that invent facts, raise a central question: who is legally responsible when a tool like ChatGPT wrongly attributes criminal conduct to an identified person?
A decision rendered in May 2025 by the Superior Court of Gwinnett County (Georgia) in the case of Mark Walters v. OpenAI, L.L.C. provides concrete insight from the American side: the court granted “summary judgment” to OpenAI, meaning that it ended the litigation in OpenAI’s favor, without trial, on the basis of the evidence and the applicable law.
We propose here a structured analysis of this judgment, followed by a comparison with French defamation law (Law of July 29, 1881), whose balances are quite different.
On May 3, 2023, Frederick Riehl, editor-in-chief of Ammoland (a news/advocacy website on the Second Amendment), received a press release and the complaint from the Second Amendment Foundation (SAF) against the Attorney General of Washington State (SAF v. Ferguson). Wishing to write an article, he asked ChatGPT to summarize passages from the complaint; for the pasted sections, the summary was accurate.
At other times, the tool provides inaccurate responses, a phenomenon that technical literature now refers to as “hallucinations.”
The judge expressly notes that, by their generative nature, LLMs can produce information contradicting their sources, and adopts this technical vocabulary.
Crucial point: Riehl was aware of these limitations (he had already observed “flat-out fictional” responses) and, on May 3, he had accepted ChatGPT’s terms of use, which warned him: probabilistic output, potentially inaccurate results, necessity of human review. He also saw several on-screen warnings stating that “_ChatGPT may produce inaccurate information_.”
The person targeted by these erroneous responses, Mark Walters, is a well-known media figure in the “Second Amendment” sphere: daily radio host, author, and public personality within pro-gun organizations. These elements form the basis of the court’s analysis of his “public figure” status (at least as a limited-purpose public figure).
The court grants _summary judgment_ to OpenAI; in other words, no genuine issue of material fact warranted a trial.
The “reasonable reader” filter is decisive: Riehl was not a layperson; he knew ChatGPT’s limitations and had explicit warnings in front of him urging him to verify responses before any editorial use.
In this context, the Court considers that the outputs at issue did not, as a matter of law, convey a defamatory meaning capable of engaging OpenAI’s liability: for a reasonable user in these circumstances, the output had limited and conditional informational value, requiring verification. The materiality of these warnings is documented in the record (Terms of Use, interface messages, etc.).
Even assuming potential reputational harm, the action fails on two levels: (i) negligence (the ordinary standard of a “reasonable publisher” under Georgia law) and (ii) the aggravated standard of _actual malice_ required for a public figure.
Regarding negligence, the Court finds that OpenAI deployed reasonable organizational and technical measures: design and training aimed at reducing hallucinations, clear and repeated warnings for the user, recommendation of human review before editorial use. These elements defuse the allegation of a failure of due diligence in the provision of the tool and its user interface.
Regarding _actual malice_, the plaintiff had to prove, with “clear and convincing” evidence, that the publisher knew the assertions were false or acted with reckless disregard for the truth. The Court, on the contrary, notes the absence of evidence that anyone at OpenAI was aware of a specific hallucination concerning Mr. Walters, or intended to ignore identified risks in this particular case. The record rather shows proactive steps to signal uncertainty and frame sensitive uses.
In French law, defamation is defined by Article 29, para. 1 of the Law of July 29, 1881: “_any allegation or imputation of a fact that harms the honor or reputation_” of a person (natural or legal). The imputed fact must be precise and verifiable, and the identification of the person must be possible.
Doctrine and jurisprudence emphasize the precise nature of the imputed fact: it must have content capable of proof and adversarial debate. Failing this (vague statements, opinions), the qualification shifts to insult. Courts assess the meaning of statements taking into account the intrinsic and extrinsic elements of the message (context, references, associated media).
Defamation is an intentional offense. However, the “intent to defame” in the subjective sense, that is, the desire to harm, is not a constitutive element that the victim must prove: the moral element results from the mere will to utter the incriminating statements, and it is not required to demonstrate “malice” in the American sense. Once the offense is constituted, the burden shifts to the author, who may invoke the available grounds of exoneration.
Practical consequence: unlike the American standard of _actual malice_, the victim does not have to prove that the author knew the statement was false or was indifferent to the truth. The mere act of publicly alleging a precise fact harming honor is sufficient; it is then up to the defendant to raise the available defenses.
Where American law sometimes conditions the very existence of liability on the victim’s proof of actual malice, French law constitutes the offense more easily, then opens avenues of exoneration for the author.
Ultimately, the Walters v. OpenAI judgment creates neither general immunity for generative AI, nor an exceptional regime: the court simply recalls that, for a public figure, defamation is not characterized if, when placed in its context of use, the output of an LLM does not convey a defamatory meaning for a reasonable reader, that no fault -and _a fortiori_ no _actual malice_- is demonstrated, and that no compensable damage is established.
The visible warnings and the design oriented towards human verification weighed heavily in this assessment, as did the user’s profile and the absence of evidence of internal knowledge of falsity.
About the author
Partner
Admitted to the Paris Bar, Maître Raphaël MOLINA is a co-founding partner of INFLUXIO and has specialized in intellectual property law and digital law for several years.