Prompt Optimization and Parameter-Efficient Fine-Tuning
As the generalizability of large language models (LLMs) rapidly expands, prompting and prompt design have become an increasingly important field of study. For expressive LLMs, well-constructed prompts can elicit remarkable performance on a wide variety of downstream tasks. However, performance varies significantly with prompt structure, and manual optimization of prompts is often quite challenging. In this talk, we'll discuss several state-of-the-art prompt optimization techniques, including both discrete and continuous approaches. Continuous prompt optimization falls under the more general category of parameter-efficient fine-tuning (PEFT) methods. We'll briefly consider two such approaches, Adapters and LoRA, the latter of which achieves performance similar to or better than full-model fine-tuning on many tasks.
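To make the continuous approach concrete, here is a minimal sketch of soft prompt tuning, assuming a PyTorch setting; the class name `SoftPrompt` and its hyperparameters are illustrative, not taken from the talk. Trainable continuous prompt vectors are prepended to the input token embeddings while the language model itself stays frozen:

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Prepends trainable continuous prompt vectors to a frozen LM's input embeddings."""

    def __init__(self, num_prompt_tokens: int, embed_dim: int):
        super().__init__()
        # The only trainable parameters: one learned vector per virtual prompt token.
        self.prompt = nn.Parameter(torch.randn(num_prompt_tokens, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, embed_dim) from the frozen embedding layer.
        batch_size = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch_size, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)
```

LoRA takes a different route: rather than learning prompt vectors, it adds a trainable low-rank update to frozen weight matrices. The sketch below, also a hypothetical PyTorch illustration rather than the speaker's implementation, wraps a frozen `nn.Linear` with the low-rank factors B and A so the layer computes Wx + (alpha/r)·BAx:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update (the LoRA technique)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        # A projects down to rank r; B projects back up. B is zero-initialized
        # so the update starts at zero and the wrapped layer matches the original.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)
```

In both sketches, only a small fraction of parameters is optimized, which is what makes these methods parameter-efficient relative to full-model fine-tuning.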
David Emerson, Applied Machine Learning Scientist, Vector Institute