Large Language Models As Optimizers - OPRO by Google DeepMind

OPRO (Optimization by PROmpting) is a simple and effective approach to leveraging large language models as optimizers, presented by Google DeepMind in a research paper titled "Large Language Models As Optimizers".
In this video, we dive into the research paper to understand how the OPRO framework works, focusing on prompt optimization as the optimization problem.
We show how an LLM can be used to discover a strong prompt that outperforms human-designed prompts such as chain-of-thought prompting.
We then review interesting results from the paper that demonstrate the effectiveness of this method.
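The core loop described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the optimizer LLM and the scorer are mocked out (a real system would call an actual LLM and evaluate each candidate instruction on a training set), and the scores are illustrative stand-ins loosely inspired by figures the paper reports. The key idea it shows is OPRO's meta-prompt: past instructions are listed together with their scores, sorted in ascending order, and the optimizer is asked to propose a better one.

```python
import random

# Mock optimizer LLM: in OPRO this would be a real LLM that reads the
# meta-prompt and generates a new candidate instruction.
def optimizer_llm(meta_prompt):
    return random.choice([
        "Let's think step by step.",
        "Take a deep breath and work on this problem step-by-step.",
        "Solve the problem carefully.",
    ])

# Mock scorer: in OPRO this would run the task LLM with the candidate
# instruction on a training set and return its accuracy.
def score_prompt(prompt):
    return {
        "Let's think step by step.": 71.8,
        "Take a deep breath and work on this problem step-by-step.": 80.2,
        "Solve the problem carefully.": 65.0,
    }.get(prompt, 50.0)

def build_meta_prompt(history):
    # OPRO shows previous solutions sorted by ascending score, so the
    # best instructions appear last (closest to the generation request).
    lines = ["Here are some previous instructions with their scores:"]
    for prompt, score in sorted(history, key=lambda pair: pair[1]):
        lines.append(f"text: {prompt}\nscore: {score}")
    lines.append("Write a new instruction that achieves a higher score.")
    return "\n".join(lines)

def opro(steps=5):
    seed = "Solve the problem carefully."
    history = [(seed, score_prompt(seed))]
    for _ in range(steps):
        meta_prompt = build_meta_prompt(history)
        candidate = optimizer_llm(meta_prompt)
        history.append((candidate, score_prompt(candidate)))
    # Return the best instruction found across all steps.
    return max(history, key=lambda pair: pair[1])

best_prompt, best_score = opro()
print(best_prompt, best_score)
```

In the paper, this loop runs for many steps with multiple candidates per step, and the discovered instructions (such as "Take a deep breath and work on this problem step-by-step") end up outperforming hand-written prompts on benchmarks like GSM8K.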

👍 Please like & subscribe if you enjoy this content


Chapters:
0:00 Introduction
0:53 Prompt Optimization with OPRO
2:37 Meta-prompt Example
3:51 OPRO Framework Overview
5:18 Results
Comments:

I appreciate your work, sir! But could you explain simply, in layman's terms, how to prompt LLMs to get the desired results according to the paper, with a real-life example?

That would be great 😃

ak

Hey, can you make a video on "Language Modelling is Compression"?

TommyJefferson