ChatGPT May Face Its First-Ever Defamation Lawsuit

For the first time, OpenAI may face a lawsuit over ChatGPT-generated defamation.

Recently, an Australian mayor named Brian Hood threatened to file the lawsuit after ChatGPT wrongly identified him as a guilty party in a “foreign bribery scandal involving a subsidiary of the Reserve Bank of Australia in the early 2000s,” apparently claiming that Hood had even served prison time for his so-called crime.

Hood was involved in the scandal — but as the whistleblower, not the perpetrator.

Hood’s lawyers sent a “letter of concern” to OpenAI back on March 21, demanding that the company fix its chatbot’s error within 28 days. If it doesn’t, Hood says he’s suing.

Damage to His Reputation

Brian Hood is an elected official, so his reputation is central to his role. It makes a real difference to him if people in his community are accessing this false material.

Will ChatGPT be able to defend itself? 

It’s a fascinating case, and if Hood does sue, it’ll be interesting to see how the mayor’s argument holds up in court.

If a user were to overtly use ChatGPT as a tool of disinformation — for example, prompting the machine to “write a bio about Australian mayor Brian Hood, including a paragraph about how he was arrested for bribery” — that would be one thing. If ChatGPT, an unregulated technology, is spitting this stuff out on its own, though? Then OpenAI might be in trouble.

Defamation Law on Artificial Intelligence

It would potentially be a landmark moment, in the sense that it would apply defamation law to a new area: artificial intelligence and publication in the IT space.

ChatGPT and similar large language model-powered bots make things up all the time — they’re predictive devices, not analytical ones, and though they do sometimes get their predictions right, they’re also often wrong.

Recently, OpenAI’s chatbot ChatGPT falsely accused a US law professor by including him in a generated list of legal scholars who had sexually harassed someone.

And elsewhere, even when it’s talking about real people and events, ChatGPT frequently fails to provide legitimate citations — or any citations at all. Instead, it just spits out answers with confidence, regardless of whether those answers are correct or not.

And while OpenAI’s ChatGPT, Google’s Bard, and Microsoft’s OpenAI-powered Bing Chat — the three most prominent chatbots currently on the market — all carry this-stuff-might-be-wrong disclaimers, a lot of people still use these machines like fact-finding search engines. After all, Google and Bing are the world’s foremost search engines, and OpenAI has already integrated its tech into an AI grade school tutor.

Posted by Srishti Singh