What is Prompt Tuning?


Prompt tuning is an efficient, low-cost way of adapting an AI foundation model to new downstream tasks without retraining the model or updating its weights. In this video, Martin Keen discusses three options for tailoring a pre-trained LLM for specialization — fine-tuning, prompt engineering, and prompt tuning — and contemplates a future career as a prompt engineer.
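The core idea the video describes — freezing every pretrained weight and training only a small block of "soft prompt" embeddings prepended to the input — can be sketched in plain PyTorch. This is a toy model with illustrative dimensions, not the video's implementation; real prompt tuning wraps a pretrained LLM's embedding layer the same way.

```python
import torch
import torch.nn as nn

# Toy stand-in for a pretrained foundation model: embedding + small backbone.
vocab_size, d_model, num_soft_tokens = 100, 32, 4

embedding = nn.Embedding(vocab_size, d_model)
backbone = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                         nn.Linear(d_model, vocab_size))

# Freeze every "pretrained" weight -- prompt tuning never updates them.
for p in list(embedding.parameters()) + list(backbone.parameters()):
    p.requires_grad = False

# The soft prompt: a small set of trainable embedding vectors prepended to
# each input. These are the ONLY parameters the optimizer touches.
soft_prompt = nn.Parameter(torch.randn(num_soft_tokens, d_model) * 0.1)

def forward(input_ids):
    tok = embedding(input_ids)                           # (batch, seq, d_model)
    prompt = soft_prompt.unsqueeze(0).expand(input_ids.size(0), -1, -1)
    x = torch.cat([prompt, tok], dim=1)                  # prepend soft prompt
    return backbone(x).mean(dim=1)                       # (batch, vocab) logits

optimizer = torch.optim.Adam([soft_prompt], lr=0.1)
input_ids = torch.randint(0, vocab_size, (8, 5))
labels = torch.randint(0, vocab_size, (8,))

for _ in range(50):
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(forward(input_ids), labels)
    loss.backward()   # gradients flow only into soft_prompt
    optimizer.step()
```

Note how small the tunable footprint is: here only `num_soft_tokens * d_model` = 128 values are trained, regardless of how large the frozen model is — which is what makes prompt tuning so cheap compared to full fine-tuning.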

#ai #watsonx #llm
Comments

Excellent broad explanation of complex AI topics. One can then dive deeper once a basic understanding is achieved! Thank you

dharamindia

Really like these summarization videos on this channel. While they do not go into depth, I appreciate the overarching concepts being outlined and put into context in a clean way without throwing overly specific stuff in the mix.

Gordin

I'm stunned how Martin is able to write backwards on this board so efficiently

dominikzmudziak

Awesome content. Thanks for uploading.

It's great that the video calls out the differences between soft prompting and hard prompting. While soft prompts offer more opportunities for performance tuning, practitioners often face the following issues:
- Choosing between hard prompting with a more advanced, but closed, LLM versus soft prompting with an open-source LLM that is typically inferior in performance.
- Soft prompting is model dependent, and hard prompting is less so.

WeiweiCheng

Excellent job explaining key AI terms!

XavierPerales-zmxx

A lot to unpack here. Great job explaining.

I have one question about the difference between in-context learning and prompt tuning with hard prompts. Are they synonymous?

johndevan

You should make a guide on FlowGPT / Poe that delves into operators, delimiters, markdown, formatting, and syntax. I've been experimenting on these sites for a while, and the things they can do with prompts are mind-blowing.

SCP-GPT

Could you please outline the advantages and disadvantages of fine-tuning versus prompting in the context of large language models?

azadehesmaeili

What dataset is used for supervised learning in prompt tuning?

apoorvvallabh

More important question, what type of smart/whiteboard are you using?? I love it!

datagovernor

Could you explain the labeling done in fine-tuning and prompt tuning?

scifithoughts

How do you get the AI to generate that tunable soft prompt?

Asgardinho

So that soft prompt is basically a set of trainable parameters, which also undergo backpropagation and have their weights updated? Just like the LoRA method, where you attach new trainable parameters to the model and train only those new parameters.
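The analogy in the comment above holds: both methods freeze the base weights and backpropagate only into a small set of new parameters — a soft prompt adds trainable input embeddings, while LoRA adds a low-rank update to a frozen weight matrix. A minimal LoRA-style layer for comparison (toy dimensions, illustrative only):

```python
import torch
import torch.nn as nn

d_in, d_out, rank = 32, 32, 4

# Frozen "pretrained" linear layer.
base = nn.Linear(d_in, d_out)
base.weight.requires_grad = False
base.bias.requires_grad = False

# LoRA: learn a low-rank correction W + B @ A instead of new input tokens.
# B is zero-initialized so the layer starts out identical to the base model.
lora_A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
lora_B = nn.Parameter(torch.zeros(d_out, rank))

def lora_forward(x):
    return base(x) + x @ lora_A.T @ lora_B.T

x = torch.randn(8, d_in)
out = lora_forward(x)
# Gradients flow only into lora_A and lora_B, exactly as a soft prompt
# receives gradients while the backbone stays fixed.
```

In both cases the optimizer is handed only the new parameters, so the frozen foundation model can be shared across many tasks, each with its own tiny soft prompt or LoRA adapter.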

eck

Could you please explain in a little more detail how those strings of numbers are indexed? Are they some sort of abstraction that we don't fully understand?

This is a very informative lecture... Probably everyone should have a little prompt engineering expertise in the near future.

neail

So how do I get to those "soft prompts"? Do you have to use pre-labeled examples for that?

maxjesch

Why isn't this more popular if it actually works? All I see is LoRAs and RL methods.

pensiveintrovert

How do you discover the correct soft prompts?

mikegioia

Hi, nice talk by the way, but what about some examples of soft tuning? I understand it's human-unreadable, but how exactly do you achieve that? By writing some code? Extra tools? Plugins? Thanks a lot for your reply :)

mnbbgiv

I'm doing a project where I need to categorise transaction details from transactional SMS messages and output them as JSON. Can I use prompt tuning or prompt engineering with hard prompts?

Abishek_B

Very concise and informative, but tell me, what technology do you use to write backwards so fast? Do you flip the video in post-production?

marc-oliviergiguere