Do Large Language Models have a Duty to Tell the Truth? with Brent Mittelstadt, PhD

In this thought-provoking talk, Professor Brent Mittelstadt, PhD, explores the emerging issue of "careless speech" created by large language models (LLMs). He examines how LLMs, despite their helpfulness, can generate responses filled with factual inaccuracies, misleading references, and biased information, posing long-term risks to science, education, and democratic societies. Mittelstadt proposes the need for a legal duty for LLM providers to ensure their models “tell the truth” and outlines potential pathways to mitigate these harms.

As Director of Research at the Oxford Internet Institute, Professor Mittelstadt is an authority on AI ethics, algorithmic fairness, and technology law. He discusses the implications of careless speech, hallucinations, and misinformation within LLMs, while offering solutions grounded in EU human rights law.

#AI #MachineLearning #DeepLearning #NaturalLanguageProcessing #LLM #ArtificialIntelligence #DataScience #EthicsInAI #AlgorithmicFairness #NLP #TechLaw #ReinforcementLearning #DataEngineering #DataViz #DataScienceCareers #AIethics #BrentMittelstadt #LLMRisks #EmergingTechnologies #ODSC

Timecodes:
0:00 - Introduction
1:14 - The Problem of Hallucinations
5:44 - Combatting Hallucinations
8:23 - Careless Speech
20:17 - Legal Duties to Tell the Truth
25:40 - Conclusion

You can also follow ODSC on:
Comments

Thank you for sharing. An example I ran into the other day: an LLM brought up Raynaud's syndrome (or disease). I'm not by any means knowledgeable about it, but the LLM was adamant that it was an autoimmune disease. I know my grandmother had this, and it just meant that her fingers didn't get enough circulation. She never had any autoimmune issues. Had I not known this, the claim would have seemed completely plausible and believable to me. Because I knew better, I searched for further information.

andrew.sandler