Risks of Large Language Models (LLMs)

With all the excitement around ChatGPT, it's easy to lose sight of the unique risks of generative AI. Large language models (LLMs) -- a form of generative AI -- are very good at producing prose that sounds like a native speaker wrote it. But because they are so good at it, LLMs can give the false impression that they possess actual understanding. They don't! In this video, Phaedra Boinodiris explains the risks that large language models pose to your business, your brand, and even society, and presents strategies for mitigating those risks.

#watsonx #llm #llms
Comments

00:31 Risks of large language models (LLMs) include spreading misinformation and false narratives, potentially harming brands, businesses, individuals, and society.
01:03 Four areas of risk mitigation for LLMs are hallucinations, bias, consent, and security.
01:34 Large language models may generate false narratives or factually incorrect answers due to their ability to predict the next syntactically correct word without true understanding.
03:00 Mitigating the risk of falsehoods involves explainability: providing real data and data lineage so the model's reasoning can be traced (see the sketch after this list).
03:59 Bias can be present in LLM outputs, and addressing this risk requires cultural awareness, diverse teams, and regular audits.
05:06 Consent-related risks can be mitigated through auditing and accountability, ensuring representative and ethically sourced data.
06:01 Security risks of LLMs include potential misuse for malicious tasks, such as leaking private information or endorsing illegal activities.
07:01 Education is crucial in understanding the strengths, weaknesses, and responsible curation of AI, including the environmental impact and the need for safeguards.
07:32 The relationship with AI should be carefully considered, and education should be accessible and inclusive to ensure responsible use and augmentation of human intelligence.
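
Below is a minimal illustrative sketch of the data-lineage idea from the 03:00 point: each retrieved passage carries a record of its origin, so any claim built on it can be traced back to a source. The Passage structure, the toy keyword-overlap retrieval, and the file names are all hypothetical simplifications, not any particular production system.

```python
# Illustrative sketch only: tag retrieved passages with provenance so a
# model's grounding material -- and thus its answer -- can be traced back
# to a source ("data lineage"). All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    source: str  # origin of the text, e.g. a document path or URL

def retrieve(question: str, corpus: list[Passage], k: int = 2) -> list[Passage]:
    """Rank passages by naive keyword overlap with the question."""
    words = set(question.lower().split())
    ranked = sorted(
        corpus,
        key=lambda p: len(words & set(p.text.lower().split())),
        reverse=True,
    )
    return ranked[:k]

corpus = [
    Passage("LLMs predict the next likely token; they do not verify facts.",
            "notes/llm-basics.txt"),
    Passage("Regular audits can surface biased outputs before deployment.",
            "notes/governance.txt"),
]

for p in retrieve("Why do LLMs produce factually incorrect answers?", corpus):
    # Every passage keeps its origin, so claims built on it stay traceable.
    print(f"[{p.source}] {p.text}")
```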

Zale

Great insight into the risks and mitigation strategies of LLMs. Thank you.

juliusgodslove

Great explanation! I think transparency and fair use of training data will be crucial for foundation models.

Jexep

Excellent explanation. However, in terms of bias and audits as a mitigation, you did not say who would be doing the audits. The assumption is that it is easy to find unbiased auditors, and you immediately run into the problem of "quis custodiet ipsos custodes?" To my mind this is a much greater risk, as the potential for misuse and harm is huge.

conorpodonoghue

Glad you added the three dots via After Effects. It was a game changer.

FreehuntX

I hope IBM acknowledges that these risks apply to IBM Watson. If not, please go into great detail about how you mitigated such risks.

How does IBM Watson differ from and compare to an LLM?

nelsonmacy

Well done! Remarkable content here, thank you.

justinpermar

Great video and high-quality content, thank you.

asamirid

IBM stopped being a computer company decades ago. This is a perfect reflection of what IBM has become. It is a great legal and financial company.

logan

Very nicely explained risks and mitigations!! It couldn't be simpler than this.

vycnyrd

This video raises some very valid points. My thoughts are that technology will ultimately be empowering when it is open source and decentralized, and ultimately authoritarian when it is proprietary and centrally controlled.

chillonfunsmart

Love the energy! "Educate" is the best way to end this presentation, as it is really an invitation to press on and learn more. AI is not going away, so we need to learn how to use it properly and responsibly. This is no different than any other major advancement humankind has accomplished in the past.

rmm

Quick poll: if companies making LLMs were going to buy IBM mainframe hardware to train them on and run them on in inference mode, how quickly do you think IBM would pull this video down?

logan

In all this hype around generative AI, it seems like we are running before we can even crawl. The new tech comes roaring in like a lion. Great work on the achievement, but why did Watson not do the same, considering it won Jeopardy! more than a decade ago? And Project Debater, wow, that was revolutionary. The transparency of all these models comes down to the datasets we choose. Maybe ensure that all models meet strict criteria; hence auditing, I guess. I have heard a lot of concern from people and tend to agree with these legitimate concerns. A model should be able to do what Watson did and not produce an answer until it is confident enough to give one. Watching the Jeopardy! challenge was an eye-opener: an answer was given based on a confidence percentage, or not given at all. That was a good solution. Keep it up and open, folks; we all need to have this talk. This is new, and what we lack is experience. Sad, but aging seems that way too, just the way of the world, I tend to observe. It's time that will tell this story. I hope we can get it right. Great job, folks, as always.

toenytv

I can save you all money by telling you to download Ollama, then offload LLMs onto local systems. There's your 100% lineage overview capability that you usually don't get with the wider net of training data.
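
For anyone who wants to try this, here is a minimal sketch of querying a locally hosted model through Ollama's REST API, which listens on localhost:11434 by default. The model name "llama3" is an assumption; substitute whatever model you have already pulled with `ollama pull`.

```python
# A minimal sketch: query a locally hosted model via Ollama's REST API.
# Assumes an Ollama server is running on the default port (11434) and a
# model (here "llama3", an assumption) has already been pulled.
import json
import urllib.request

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the local Ollama server and return its answer."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_local_model("List three risks of large language models."))
```

Running locally does not remove hallucination or bias risks, but it does keep the full prompt and response trail on your own system.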

XiangYu

I asked Bing Chat a tax return question, and it gave me the wrong answer; even the sources it cited disagreed with it 🤷‍♂.

radfaraf

It's nice to be cautious about new innovations. However, her tone seems largely pessimistic instead of celebrating the cumulative achievements of the many scientists that led to this point. While LLMs are not the endpoint, the combination of giving GPT models access to a myriad of external APIs, coupled with AutoGPT variations, is a technology that is here to stay, not one that is "going nowhere".

XShollaj

I think "positive and negative abstractions" is a better way to describe hallucination in this regard.

DJWESG

Pretty much. "Use with care".

Seadancer

We need to revisit the meaning of "Proof"-- philosophically, semantically, and in everyday usage. Greater attention needs to be paid to the history of the methods and of the data -- the equivalent of a "digital genealogy" but without the "genes." So much of what I see written about AI today reminds me of a quote in Shakespeare's Troilus and Cressida -- "And in such indexes, through but small pricks to their subsequent volumes, lies the giant shape of things to come." Finally, the process of recycling data in and out of these systems describes the "Ouroboros." More thought needs to be given to the meanings of the Ouroboros.
