Who's Liable for AI Misinformation With Chatbots Like ChatGPT? | WSJ Tech News Briefing

Generative artificial-intelligence chatbots like ChatGPT are known to get things wrong sometimes, a phenomenon known as “hallucinating.” But can anyone be held liable if those incorrect responses are damaging in some way?

Host Zoe Thomas talks to a legal expert and an AI ethicist to explore the legal landscape for generative AI technology, and the tactics companies are employing to improve their products. This is the fourth episode of Tech News Briefing’s special series on generative AI, “Artificially Minded.”

0:00 Why Australian mayor Brian Hood is considering suing OpenAI for defamation over ChatGPT's responses
4:44 Why generative AI programs like OpenAI’s ChatGPT can get facts wrong
6:26 What the 1996 Communications Decency Act could tell us about laws around generative AI
10:20 How generative AI blurs the line between creator and platform
12:56 How lawmakers around the world are handling AI regulation
14:13 Why AI hallucinations happen
17:16 How Google is taking steps to create a more factual chatbot with Bard
18:34 How tech companies work with AI ethicists: what is red teaming?

Tech News Briefing
WSJ’s tech podcast featuring breaking news, scoops and tips on tech innovations and policy debates, plus exclusive interviews with movers and shakers in the industry.

#AI #Regulation #WSJ
Рекомендации по теме
Comments

Determining liability for AI misinformation in chatbots like ChatGPT is a complex issue, as it involves multiple stakeholders such as developers, platform providers, and end-users. Legislation and guidelines are needed to clarify responsibilities and implement accountability measures, ensuring ethical AI practices and mitigating the spread of misinformation.

jonneye

Thank you for shining a light on this important issue. Sadly, not many people are talking about this yet.

RainerBrunotte

Why are people going to AI for information? That's not what this is for! As far as I'm aware, or at least how I use it, the software helps with writing and similar tasks; it isn't trained for factual accuracy. I may be wrong, and OpenAI may be advertising it as an information source, but I don't know if it's fair to blame these companies for their product being misused.

bulgna

People have to realize that AI is not 100% accurate, and it can also be quite delusional. I know because my team works on Tammy AI, and on many occasions the AI just pretends it has the right answer. There is no way to know unless you fact-check all the time. We do think the accuracy will improve over time, though.

SteveMoore-nv

Blaming AI creators for AI-driven fake news/info isn't the best idea since they can't fully control how their tech is used. Take ChatGPT, for example. It learns from loads of good and bad examples to get better, but it can't catch all the false info out there. Luckily, there's stuff like fine-tuning that helps AI act right in specific situations. It's like a mini crash-course to make 'em more accurate and lower the chance of spreading bogus info.
Plus, AI peeps are already tackling the fake news problem. Lots of tech companies are using fact-checkers or teaming up with outside fact-checking groups to make their platforms more legit.

So, let's not blame AI creators for everything. Instead, let's focus on making AI smarter and working with fact-checkers to cut down on all the false info flying around.
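For anyone curious what that "mini crash-course" looks like in practice, here's a minimal sketch using the OpenAI Python SDK's fine-tuning API. The file name and training example are made up for illustration, and a real job needs many more examples:

```python
# Minimal sketch: fine-tuning a chat model on curated examples that teach it
# to admit uncertainty instead of inventing facts. Assumes the official
# `openai` Python SDK; the file name and the example below are hypothetical.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Each training example is a short chat showing the desired behavior.
# A real fine-tuning job requires many more examples than this.
examples = [
    {"messages": [
        {"role": "user", "content": "What crime was the mayor convicted of?"},
        {"role": "assistant", "content": "I'm not sure, and I don't want to guess. Please check an official source."},
    ]},
]

# Training data is uploaded as a JSONL file, one example per line.
with open("truthfulness.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

upload = client.files.create(file=open("truthfulness.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=upload.id, model="gpt-3.5-turbo")
print("Started fine-tuning job:", job.id)
```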

Toam

ChatGPT has given me loads of incorrect information, so this doesn't surprise me at all.

arjaygee

I don’t see how the AI creators would be liable unless there was a piece of code that clearly and explicitly altered the AI's results in a harmful way.

alexander

Misinformation should be illegal, but the people who would regulate such things can’t be trusted.

e.thomas

I suppose one can try suing for anything, but realistically anyone doing so probably does not have a firm understanding of what AI is and/or how it works. A better way to understand it is that ChatGPT is not intended to be a fact-based software tool but a creative one, whose outputs are more opinion than fact. As such, there's really no more reason to hold a ChatGPT response liable than to hold liable the response of a crazy homeless person on the street; most people would not consider suing the street person.

karlisern

Until they can fix the wildly incorrect information, it is nothing more than a toy.

shadowofpain

The Data Protection Act is there to prevent breaches of confidentiality.

official_ashhh

What about AI firms being held liable for recommendations their bots make? For instance, say somebody wronged me, I ask ChatGPT what my response should be, it says I should murder them, and I go out and do it. I don't think anyone would hold a social media platform to blame if a user responded to me and suggested this.
This is an extreme example given just to make a point, but milder ones apply as well. Has this been discussed anywhere?

marcusbarry

Interestingly, we're so concerned about potential misinformation spread by AI when, at the same time, we don't seem capable of holding accountable the politicians and lawmakers who lie to the people they are supposed to serve. Currently, lying to the American people is not a crime. Wouldn't it make sense to hold public servants liable for the misinformation they spread?

romandevivo

Why would you assume that anything ChatGPT says is factually correct?

falconJB

Yourself: don't believe everything you hear and read.
You don't blame Google when you find misinformation on an indexed page.

CHMichael

Who’s liable for misinformation? What about who’s liable for being gullible and not doing their own due diligence?

Viviko

Zoe, may I ask what law Section 230 is part of, and what it means? The big question for you, Zoe: do you think all the countries around the world will adopt it or pass a similar act? The US legal system is very different from Indonesia's; even though Indonesian law has adopted jurisprudence as a source of law, Indonesian judges are not absolutely bound to follow earlier judges' decisions in similar cases.

SorminaESar

The software is designed to make things up. For a human, that would be called lying, or fabrication, or deceit.

importantname

Americans always need to know who they can sue!

eanerickson

I'm impressed by the technology behind ChatGPT-4! It's fascinating how artificial intelligence has evolved in recent years, allowing machines to understand and answer our questions in an almost human way. However, it's important to remember that, like any technology, AI can also be dangerous. The ability to learn and evolve quickly means that machines can become unpredictable and make decisions that could be harmful to humans. It's crucial that AI developers take safety and ethics into account in their creations, so that we can enjoy the benefits of the technology without compromising our safety. (Text created by AI)

L_Christ_BR