Stanford CS25: V3 I Retrieval Augmented Language Models

December 5, 2023
Douwe Kiela, Contextual AI

Language models have led to amazing progress, but they also have important shortcomings. One solution for many of these shortcomings is retrieval augmentation. I will introduce the topic, survey recent literature on retrieval augmented language models and finish with some of the main open questions.

Comments

I love that this content is freely accessible to everyone. Lots of helpful information is being shared here.

erniea

The amount of research work on retrieval augmented generation for large language models has exploded in recent times. Thanks to the speaker for directing attention to the most significant bits.

nintishia

Nicely explained with just the right level of technical detail. Thank you!

capucinnolover

This is what I needed, thank you so much!

loopaal

Just the thing I wanted, thank you so much.

ronitakhariya

So many great ideas here! Fantastic resource, thank you.

Arvolve

Excellent content, thanks for the references!

velociraptor

Good lecture, offers a good summary of the literature on RAG.

My lecturers at university never explain these things; thanks for this free lecture.

kingki

It is good to see the whole spectrum of options, but what would be a practical way to get started on this?
His old colleagues at HF actually have an excellent book that goes through many of these ideas, including RAG in chapter 7, where they do an excellent job of explaining the context and giving you options for implementation.
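
For readers who want a concrete starting point, here is a minimal sketch of the "frozen" retrieve-then-prompt loop. The library and model choices (sentence-transformers, FAISS, all-MiniLM-L6-v2) are illustrative assumptions on my part, not recommendations from the lecture or the book.

```python
# A minimal "frozen RAG" starter -- purely illustrative; the embedder, index,
# and model name below are my own assumptions, not prescribed anywhere above.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

documents = [
    "Atlas trains the retriever jointly with the language model.",
    "Frozen RAG chains an off-the-shelf retriever with a frozen LLM.",
    "Dense retrievers embed queries and passages into the same vector space.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = encoder.encode(documents, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vectors.shape[1])  # inner product == cosine on normalized vectors
index.add(np.asarray(doc_vectors, dtype="float32"))

def retrieve(query, k=2):
    q = encoder.encode([query], normalize_embeddings=True)
    _, ids = index.search(np.asarray(q, dtype="float32"), k)
    return [documents[i] for i in ids[0]]

query = "How does a frozen RAG pipeline work?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
print(prompt)  # this prompt would then be sent to any generator LLM of your choice
```

Swapping the embedder, the index, or the generator does not change the overall retrieve-then-prompt pattern.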

muhannadobeidat

Hi, a question about Atlas. You said that we can update the retriever. At 42:00 there is some retriever loss, but what about labeled pairs (question, positive paragraph, negative paragraphs) as in a normal retrieval model - do they contribute to the retriever loss? I read the Atlas codebase and did not find that kind of loss.
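
As I understand the Atlas paper, the retriever is indeed updated without any (question, positive, negative) pair labels: the supervision is distilled from the language model itself. In the perplexity-distillation variant, for example, the target distribution over the K retrieved passages is proportional to how well the LM predicts the output with each passage in context, and the retriever's score distribution is pulled toward it. A rough sketch of that idea (my own simplification, not the Atlas codebase):

```python
# "Perplexity distillation"-style retriever loss: the LM acts as the teacher,
# so no (question, positive, negative) labels are needed. Simplified sketch.
import torch
import torch.nn.functional as F

def retriever_distillation_loss(retriever_scores, lm_log_likelihoods, temperature=1.0):
    """retriever_scores:   (K,) similarity scores s(q, d_k) for the K retrieved passages.
    lm_log_likelihoods: (K,) log p_LM(answer | question, d_k), i.e. how well the LM
                        predicts the target output with each passage in context."""
    log_p_retriever = F.log_softmax(retriever_scores / temperature, dim=-1)
    p_target = F.softmax(lm_log_likelihoods, dim=-1)  # passages the LM "prefers"
    # KL(p_target || p_retriever): push retriever scores toward the LM's preferences.
    return F.kl_div(log_p_retriever, p_target, reduction="sum")

# Toy usage with 4 retrieved passages.
scores = torch.randn(4, requires_grad=True)               # would come from the retriever
log_likelihoods = torch.tensor([-2.1, -0.5, -3.0, -1.2])  # would come from the LM
loss = retriever_distillation_loss(scores, log_likelihoods)
loss.backward()  # gradients flow back into the retriever
```

As far as I can tell, the other retriever objectives discussed in the paper (attention distillation, EMDR2, leave-one-out) follow the same pattern of using the LM as the teacher rather than labeled pairs.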

duongkstn

Pros: Gives a brief overview of many RAG methods.
Cons: No intuition is given for why the different methods work, which is the key point.

Would have preferred more insight rather than just descriptions of the papers, but overall thanks for the video!

johntanchongmin

3:34 What ChatGPT was really about - fixing the user interface to the LM
12:00 Frozen RAG

ijrmylt

Does anyone have a list of the research papers mentioned in this video?

ritikdua

Actually, we can see a first sign of language models in Shannon's 1948 paper, "A Mathematical Theory of Communication".

samferrer

Why is there no framework to make this process easy?

kingki

The bit about where this came from was funny, but your search just gave a less silly answer. Go reread Shannon's 1948 paper that invented information theory. Yes, he did not talk about using neural nets (which did not exist), but he did talk about probability.

deconcoder

It's not entirely clear what "frozen RAG" and "retrieve RAG" refer to.
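
For what it's worth, my reading of the talk's terminology is that "frozen RAG" chains an off-the-shelf retriever and an off-the-shelf language model with no gradient updates to either component, so retrieval only changes the prompt, whereas in trainable setups such as RAG and Atlas the generator and/or the retriever are actually updated. A toy contrast of the two regimes, under that reading:

```python
# Toy contrast between the two regimes; all names, shapes, and the stand-in
# loss below are made up purely for illustration.
import torch
import torch.nn.functional as F

doc_emb = torch.randn(5, 8)    # 5 passage embeddings, 8-dim
query_emb = torch.randn(8)

# "Frozen" RAG: embeddings are fixed tensors, retrieval is a similarity lookup,
# and nothing is ever updated -- the retrieved text only changes the LM's prompt.
top1 = torch.argmax(doc_emb @ query_emb)

# Trainable retriever (RAG/Atlas-style): the same embeddings become parameters
# and receive gradients from a downstream loss supplied by the generator.
doc_emb_trainable = torch.nn.Parameter(doc_emb.clone())
scores = doc_emb_trainable @ query_emb
loss = -F.log_softmax(scores, dim=-1)[2]   # pretend passage 2 is the one the LM preferred
loss.backward()
print(top1.item(), doc_emb_trainable.grad.shape)
```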

Arthurlvaz

Worth dozens of the "shiny", "trendy", "TikTok-y" videos out there, and without asking you to subscribe and leave comments.

srujpcq

I love how everyone tries to hide the fact that OpenAI is 100% the reason everyone is watching this video

jstello