Learning to Retrieve In-Context Examples for Large Language Models

This paper proposes a framework for training dense retrievers that identify high-quality in-context examples for large language models (LLMs), improving their in-context learning performance. Experimental results show significant performance gains and strong generalization to unseen tasks. A rough sketch of the retrieval step appears below.
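For a concrete picture of the setting, here is a minimal Python sketch of retrieval-augmented in-context learning at inference time: a dense encoder scores candidate training examples against the test input, and the top-scoring ones are prepended to the prompt. The encoder name, candidate pool, and similarity scoring are illustrative assumptions, not the paper's trained LLM-R retriever or its reward-model-based training procedure.

```python
# Minimal sketch (assumptions only, not the paper's exact method): selecting
# in-context examples with an off-the-shelf dense encoder.
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical candidate pool of (input, output) training examples.
candidate_pool = [
    ("Translate 'bonjour' to English.", "hello"),
    ("Translate 'gracias' to English.", "thank you"),
    ("What is 2 + 2?", "4"),
]

# Stand-in dense encoder; the paper trains its own retriever instead.
retriever = SentenceTransformer("all-MiniLM-L6-v2")

def retrieve_examples(test_input: str, k: int = 2):
    """Embed the test input and all candidates, return the top-k most similar."""
    query_emb = retriever.encode([test_input], normalize_embeddings=True)
    cand_emb = retriever.encode(
        [inp for inp, _ in candidate_pool], normalize_embeddings=True
    )
    scores = (cand_emb @ query_emb.T).ravel()  # cosine similarity via dot product
    top_idx = np.argsort(-scores)[:k]
    return [candidate_pool[i] for i in top_idx]

def build_prompt(test_input: str, k: int = 2) -> str:
    """Prepend the retrieved examples to the test input as an in-context prompt."""
    demos = retrieve_examples(test_input, k)
    lines = [f"Input: {inp}\nOutput: {out}" for inp, out in demos]
    lines.append(f"Input: {test_input}\nOutput:")
    return "\n\n".join(lines)

print(build_prompt("Translate 'danke' to English."))
```

The resulting prompt is then passed to the LLM; the paper's contribution is learning the retriever itself so that the selected demonstrations maximize the LLM's downstream performance.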

00:00 Section: 1 Introduction
03:21 Section: 2 Related Work
06:07 Section: 3 Preliminaries
09:16 Section: 4.2 Reward Modeling
11:54 Section: 4.4 Evaluation of LLM Retrievers
15:04 Section: 5.2 Main Results
18:47 Section: 6.3 When does LLM-R Work and When Does it Not?
21:39 Section: 6.5 Scaling the Number of In-Context Examples and Retriever Size
