Build Prompt Tuning & Prefix Tuning for LLMs: Soft Prompt Engineering Beats Fine Tuning
Learn to apply Prompt Tuning and Prefix Tuning to automate Prompt Engineering and improve your LLMs.
I show you how advanced soft prompt engineering techniques, namely Prompt Tuning and Prefix Tuning, automate prompt engineering and can match or even beat fine-tuning LLMs (large language models).
This is the 2nd video in my LLM series. Watch this 9-minute video to elevate your LLM models! I show you how to implement both techniques in PyTorch so you can try them for your own LLM use case.
I'll also explain the LADDER of LLM models, from off-the-shelf options to models you pre-train yourself, and the various ways of interacting with LLMs, from hard prompt engineering to Retrieval-Augmented Generation (RAG). By the end of this tutorial, you'll understand what prompt engineering is, why Prompt Tuning and Prefix Tuning are superior to hard prompts (prompt design) and to fine-tuning, and how to implement these methods in your own data science projects.
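The core idea of Prompt Tuning from the video can be sketched in a few lines of PyTorch. This is a minimal illustrative sketch, not the video's code: a toy embedding-plus-head model stands in for the frozen pretrained LLM, and all sizes and names are made up for the example. The only trainable parameters are the soft prompt's virtual-token embeddings, which are prepended to the input embeddings.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab, d_model, n_virtual = 100, 32, 8

# Toy stand-in for a pretrained LM: an embedding layer and an output head.
# In practice this would be a real LLM; here every parameter is frozen.
embed = nn.Embedding(vocab, d_model)
head = nn.Linear(d_model, vocab)
for p in list(embed.parameters()) + list(head.parameters()):
    p.requires_grad = False

# The ONLY trainable parameters: the soft prompt (learnable virtual tokens).
soft_prompt = nn.Parameter(torch.randn(n_virtual, d_model) * 0.02)
opt = torch.optim.Adam([soft_prompt], lr=1e-2)

def forward(input_ids):
    tok = embed(input_ids)                                   # (batch, seq, d_model)
    prompt = soft_prompt.unsqueeze(0).expand(input_ids.size(0), -1, -1)
    x = torch.cat([prompt, tok], dim=1)                      # prepend virtual tokens
    return head(x)                                           # (batch, n_virtual + seq, vocab)

# A few dummy training steps: gradients flow only into the soft prompt.
ids = torch.randint(0, vocab, (4, 10))
targets = torch.randint(0, vocab, (4, n_virtual + 10))
for _ in range(5):
    logits = forward(ids)
    loss = nn.functional.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the base model stays frozen, only `n_virtual * d_model` parameters are trained, which is what makes prompt tuning so much cheaper than full fine-tuning.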
︾︾︾︾︾︾︾︾︾︾︾︾︾︾︾︾︾︾︾︾
︽︽︽︽︽︽︽︽︽︽︽︽︽︽︽︽︽︽︽︽
⏰ Timecodes ⏰
0:00 Intro
0:27 LLMs Ladder - Model Source & Interaction
1:41 When to Use Fine Tuning or Prompt Engineering or RAG
2:23 Prompt Engineering - Hard Prompts (many-shot prompting & meta prompting)
3:18 Prompt Drift
3:40 Prompt Tuning (Soft Prompting) and Comparison with Fine Tuning
4:33 Virtual Tokens
4:48 Pytorch Implementation of Prompt Tuning
5:47 FREE Data Science Guide with 100 Python Libraries
7:07 Prefix Tuning
7:47 Pytorch implementation of Prefix Tuning
8:43 DSPy, Langchain, Langgraph, Langsmith
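The Prefix Tuning segment above can be sketched similarly. This is a simplified, hypothetical illustration: the learnable prefixes here are injected into a single frozen attention layer in input space (so they pass through the frozen key/value projections), whereas the original method inserts learned key/value vectors at every layer after projection.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d, heads, n_prefix = 32, 4, 6

# Frozen stand-in for one attention layer of a pretrained transformer.
attn = nn.MultiheadAttention(d, heads, batch_first=True)
for p in attn.parameters():
    p.requires_grad = False

# Trainable prefix vectors, concatenated to keys and values (not queries).
prefix_k = nn.Parameter(torch.randn(1, n_prefix, d) * 0.02)
prefix_v = nn.Parameter(torch.randn(1, n_prefix, d) * 0.02)

def prefixed_attention(x):
    b = x.size(0)
    k = torch.cat([prefix_k.expand(b, -1, -1), x], dim=1)   # (b, n_prefix + seq, d)
    v = torch.cat([prefix_v.expand(b, -1, -1), x], dim=1)
    out, _ = attn(x, k, v)                                  # queries attend to prefixes too
    return out

# Gradients reach only the prefixes; the attention weights stay frozen.
out = prefixed_attention(torch.randn(2, 5, d))
out.pow(2).mean().backward()
```

Unlike prompt tuning, which only prepends tokens at the input layer, prefix tuning conditions every attention layer, which is why it tends to work better on harder generation tasks.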
#gpt #openai #ai #pytorch #prompting #huggingface #langchain #ollama #llm #promptengineering #finetuning #rag
#largelanguagemodels #openai