LLM (Parameter Efficient) Fine Tuning - Explained!

Parameter-efficient fine-tuning is increasingly important in NLP and genAI. Let's talk about it.
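
A common PEFT method is LoRA: freeze the pretrained weights and train only a small low-rank update. A minimal from-scratch PyTorch sketch (the rank, scaling factor, and wrapped layer are illustrative choices, not anything specific from the video):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a pretrained nn.Linear with a trainable low-rank update (LoRA)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(d_out, r))        # zero init: update starts as a no-op
        self.scale = alpha / r

    def forward(self, x):
        # Frozen path plus scaled low-rank path; only A and B get gradients.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total}")  # ~2% of the layer's parameters
```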

RESOURCES

ABOUT ME

PLAYLISTS FROM MY CHANNEL

CHAPTERS
0:00 Introduction
1:00 Pass 1: What & Why PEFT
6:27 Quiz 1
7:26 Pass 2: Details
16:20 Quiz 2
17:11 Pass 3: Performance Evaluation
20:49 Quiz 3
21:43 Summary

MATH COURSES (7-day free trial)

OTHER RELATED COURSES (7-day free trial)
COMMENTS

One of the best PEFT explanations to date. You deserve more subscribers.

deepakkushwaha

I have a question: the "basic" transformer encoder has multi-head attention followed by a normalization layer. Why do we add a feed-forward layer after the attention (and before the normalization)? It looks like a feed-forward layer is added for each sub-layer of the encoder, but why?

Thank you so much for the content you've made! It helped me a lot!

MARCOMARINO-bbun
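
Regarding the encoder question above: in the standard post-norm block from the original Transformer, the feed-forward network is its own sub-layer after attention, and each sub-layer is wrapped in a residual add plus LayerNorm. The intuition is that attention mixes information across positions, while the position-wise FFN transforms each position's features independently. A minimal PyTorch sketch (sizes are the original paper's defaults):

```python
import torch.nn as nn

class EncoderBlock(nn.Module):
    """Post-norm transformer encoder block: attention, then feed-forward."""
    def __init__(self, d_model: int = 512, n_heads: int = 8, d_ff: int = 2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)
        )
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):
        attn_out, _ = self.attn(x, x, x)  # mixes information across positions
        x = self.norm1(x + attn_out)      # sub-layer 1: attention + add & norm
        x = self.norm2(x + self.ffn(x))   # sub-layer 2: position-wise FFN + add & norm
        return x
```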

I may be missing something, but in the second quiz, why would full fine-tuning increase the number of trainable model parameters by 100%? Wouldn't it just act further on 100% of the original trainable model parameters?

sudlow
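
On the quiz question above: the usual reading is indeed the second one. Full fine-tuning adds no new parameters; it makes 100% of the existing parameters trainable. Adapter-style PEFT instead freezes the backbone and trains a small number of newly added parameters. A toy sketch of that bookkeeping (the layer sizes are made up):

```python
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(768, 768), nn.Linear(768, 768))  # toy "pretrained" model
total = sum(p.numel() for p in backbone.parameters())

# Full fine-tuning: nothing is added; every existing parameter trains.
trainable_full = sum(p.numel() for p in backbone.parameters() if p.requires_grad)

# Adapter-style PEFT: freeze the backbone, train only a small new module.
for p in backbone.parameters():
    p.requires_grad = False
adapter = nn.Sequential(nn.Linear(768, 16), nn.Linear(16, 768))  # newly added parameters
trainable_peft = sum(p.numel() for p in adapter.parameters())

print(f"full fine-tuning: {trainable_full} / {total} trainable (100% of the original)")
print(f"adapter PEFT:     {trainable_peft} trainable ({100 * trainable_peft / total:.1f}% of the original)")
```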

Cool, I didn't know that PEFT also works with adapters, thanks!

paull
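
On adapters: they were among the earliest PEFT methods. A minimal sketch of a bottleneck adapter in the spirit of Houlsby et al. (2019), where a small residual MLP is inserted after each frozen sub-layer (the model and bottleneck widths are illustrative):

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Down-project, apply a nonlinearity, up-project, add a residual."""
    def __init__(self, d_model: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        # Near-zero init keeps the adapter close to the identity at first,
        # so the frozen backbone's behavior is preserved early in training.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))
```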

Yes, good question; there is no real peak. To be honest, the only peak is the human imagination for creating incredible new models and math. Remember, before transformers there was a peak, and before diffusion models the peak was GANs. Right now, yes, we are stuck with transformers and diffusion models, and everybody has adopted them, so we have to wait for someone to work on other concepts.

blancanthony

What are the chances that I was searching for exactly this content and got the notification at the same time?

souravjha

We haven't peaked. Now the technology and hardware will have to get stronger, better, and faster.

jameslucas

Of course not. In the CNN era people were saying AI had peaked; now we see it is still improving. There will be lots of algorithmic and hardware advancements. And lots of...

DrAIScience

Your question of whether AI has peaked is not well formed. What AI are we talking about?
If it's general AI, then no; clearly, today's AI models don't come close to doing everything humans are capable of.
If it's ML and language models, well, there does seem to be some kind of plateau, and there are no clear advantages of one solution over the next. Maybe salvation will come from a new learning algorithm, but that would clearly require a paradigm shift.

Patapom