By: John Baek
United States statute defines defamation as a false statement that harms a person’s reputation. In Australia, defamation can occur through any form of publication to a third party. ChatGPT now stands to be one such publisher, as it generates responses to user prompts.
Now, a mayor in Australia may bring the first defamation claim against OpenAI and ChatGPT, following allegations raised by members of the public after ChatGPT made statements regarding the mayor’s supposed involvement in a bribery scandal. ChatGPT generates its answers by synthesizing information drawn from the Internet, and it continues to impress with its versatility and ability even as it retains a tendency to make errors. The falsity required for a defamation claim may arise when users assume that ChatGPT’s information is correct, even though the software openly admits its unreliability and cannot easily access information from after September 2021. The concern is especially potent because ChatGPT can provide information about specific individuals. Hence, even a user aware that ChatGPT is no gospel may ask, when it supplies false information about persons in the public eye, why and how it does so, and whether a claim of defamation is applicable.
Defamation originated in Ancient Rome and England, where it took the form of spoken words by individuals. ChatGPT, while drawing on information created and requested by human beings, is a separate entity, and it invites consideration beyond the United States Supreme Court’s reconsideration of Section 230 for algorithmic publishers such as YouTube. The Australian mayor would not be suing the individuals who asked ChatGPT about him, but ChatGPT and its creator OpenAI, or Microsoft, which licenses the software within Bing. The question as applied to AI-generated or algorithmic content is new, and the Internet freedoms that foster user-interactive AI services like ChatGPT now face legal challenges; ChatGPT may not claim to be one hundred percent accurate, but it certainly appears so. As society makes further use of Internet-based platforms to generate and circulate information, the AI horizon may hold not silver humanoids come to threaten us but a much nearer question of what constitutes speech, especially as defamation statutes make no mention of specifically human speech or words.
The elements of defamation in Australia generally make no mention of the source of the false content, only its harmful effect on the person in question, its presentation under the guise of fact, and its publication. These elements are distinct from United States law, where defamation of a public official requires a showing of actual malice. While similar language appears in Australian criminal defamation, the mayor must decide under which body of law to bring suit. In Australia, neither the means nor the intent of publication matters for a defamation claim to prevail, and neither does knowledge of the information’s falsity, which fully allows a platform such as ChatGPT to produce defamatory statements just as a newspaper or other written work might.
Ultimately, the mayor must decide whether to sue if ChatGPT does not correct its statements. If he does bring suit, ChatGPT will likely evade consequences, as the monitoring of specific information or algorithms appears poised to withstand similar challenges in Gonzalez v. Google, and as use of the tool grows increasingly widespread, even in the legal sphere. However, ChatGPT and similar services may face backlash, at least in Australia, where they may be as liable for defamation as any other means of publication known to produce false information. With the field of artificial intelligence law still nascent, ChatGPT’s potential defamation may soon affect those who use it and other information-generating AI services, the AI-tech-business community’s interests in AI going forward, and, more broadly, the law’s relationship with AI. With ChatGPT, the torts of language may be exploring far more than the land down under, as the tool continues to impact business law and beyond and heralds higher standards of accuracy for its developers.
 See 28 U.S.C.S. § 4101(1).
 See Defamation Law, Arts L. Ctr. Austl., https://www.artslaw.com.au/information-sheet/defamation-law/ (last visited Apr. 7, 2023).
 See ChatGPT, supra note 1; see OpenAI, Introducing ChatGPT (Nov. 30, 2022), https://openai.com/blog/chatgpt.
 Byron Kaye, Australian mayor readies world’s first defamation lawsuit over ChatGPT content, Reuters (Apr. 5, 2023, 2:52 PM), https://www.reuters.com/technology/australian-mayor-readies-worlds-first-defamation-lawsuit-over-chatgpt-content-2023-04-05/; see also ChatGPT, supra note 1.
 See, e.g., Samantha Murphy Kelly, ChatGPT passes exams from law and business schools, CNN Bus., https://www.cnn.com/2023/01/26/tech/chatgpt-passes-exams/index.html (last updated Jan. 26, 2023, 1:35 PM) (noting ChatGPT’s ability to pass graduate-level exams and its tendency to make basic mistakes).
 See ChatGPT, supra note 1 (providing the above when asked for a measure of its own accuracy); see also ChatGPT, Britannica, https://www.britannica.com/technology/ChatGPT (last updated Apr. 11, 2023).
 See ChatGPT, supra note 1 (last visited Apr. 14, 2023) (providing three paragraphs each for “Putin,” “Rowling,” “Adam Sandler,” and “Frodo”).
 See Kaye, supra note 5.
 See defamation, Britannica, https://www.britannica.com/topic/defamation (last updated Apr. 13, 2023) (detailing punishments that included removal of the tongue, death for abusive chants, and so forth).
 See Austin Fitzgerald, Symposium at Reynolds Journalism Institute asks: What happens when AI creates defamatory content?, Reynolds J. Inst. (Mar. 17, 2023), https://rjionline.org/news/symposium-at-reynolds-journalism-institute-asks-what-happens-when-ai-creates-defamatory-content/; e.g., Wall St. J., ISIS, YouTube and Section 230 at the Supreme Court, https://www.wsj.com/articles/isis-youtube-and-section-230-supreme-court-google-internet-platforms-facebook-twitter-moderators-ai-recommendation-55aa7509 (last updated Feb. 20, 2023, 5:47 PM).
 Kaye, supra note 5; see also Devin Coldewey, Can AI commit libel? We’re about to find out, TechCrunch (Apr. 6, 2023, 3:24 PM), https://techcrunch.com/2023/04/06/can-ai-commit-libel-were-about-to-find-out/.
 See Coldewey, supra note 12. Cf. TechFreedom, Don’t Gut Section 230, TechFreedom Urges Supreme Court (Jan. 18, 2023), https://techfreedom.org/dont-gut-section-230-techfreedom-urges-supreme-court/ (last visited Apr. 7, 2023).
 See generally Terminator 2: Judgment Day (Carolco Pictures 1991); see also Defamation, Wikipedia, https://en.wikipedia.org/wiki/Defamation (last visited Apr. 13, 2023) (providing overviews of defamation law in other countries and noting the emergence of online defamation law in South Korea). Cf., e.g., Billy Perrigo, Elon Musk Signs Open Letter Urging AI Labs to Pump the Brakes, Time, https://time.com/6266679/musk-ai-open-letter/ (last updated Mar. 29, 2023, 12:25 PM) (detailing Musk and other AI executives’ warning of AI’s threat to the future of humanity).
 See, e.g., Legal Info. Inst., Defamation, https://www.law.cornell.edu/wex/defamation#:~:text=To%20prove%20prima%20facie%20defamation,entity%20who%20is%20the%20subject (last visited Apr. 11, 2023); Tort Law in Australia, Wikipedia, https://en.wikipedia.org/wiki/Tort_law_in_Australia (last visited Apr. 9, 2023).
 New York Times Co. v. Sullivan, 376 U.S. 254, 298 (1964).
 See Criminal Code 1899 – Sect 365 Criminal Defamation, Queensland Consolidated Acts, http://www5.austlii.edu.au/au/legis/qld/consol_act/cc189994/s365.html#:~:text=(b)%20intending%20to%20cause%20serious,commits%20a%20misdemeanour.&text=Maximum%20penalty%E2%80%943%20years%20imprisonment (last visited Apr. 9, 2023).
 Defamation Law, supra note 3 (noting that the defamatory content must only be “capable” of harming reputation).
 Phil Mercer, Mayor in Australia Ready to Sue over Alleged AI Chatbot Defamation, VOANews (Apr. 9, 2023, 10:16 AM), https://www.voanews.com/a/mayor-in-australia-ready-to-sue-over-alleged-ai-chatbot-defamation/7042792.html (providing such statements on Friday, April 7, 2023).
 143 S. Ct. 762 (2023).
 Id.; see Lauren Feiner, Supreme Court justices in Google case express hesitation about upending Section 230, CNBC, https://www.cnbc.com/2023/02/21/supreme-court-justices-in-google-case-hesitate-to-upend-section-230.html#:~:text=Supreme%20Court%20justices%20in%20Google%20case%20express%20hesitation%20about%20upending%20Section%20230,-Published%20Tue%2C%20Feb&text=Supreme%20Court%20Justices%20voiced%20hesitation,Google%20on%20Tuesday (last updated Feb. 22, 2023, 1:56 AM).
 See Mercer, supra note 19 (noting competitors Bard, Blenderbot, and Ernie by Google, Meta, and Baidu, respectively).
 See Fitzgerald, supra note 11.
 See, e.g., Gianna Maria Balli, AI and the Law: Exploring Applications and Challenges of ChatGPT, 31 Miami Bus. L. Rev. 2 (Mar. 29, 2023), https://business-law-review.law.miami.edu/ai-and-the-law-exploring-applications-and-challenges-of-chatgpt/; see also Down Under, Wikipedia, https://en.wikipedia.org/wiki/Down_Under (last visited Apr. 9, 2023).