LoRA - Low-Rank Adaptation of Large Language Models Paper In-depth Explanation | NLP Research Papers

In this video, I have explained in detail the LoRA paper, which proposes adding low-rank weight decomposition matrices to a pre-trained model and updating only these matrices, rather than all of the pre-trained weights, while achieving very competitive performance. This computationally efficient approach was proposed for fine-tuning LLMs.
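As a rough sketch of that idea (illustrative only, not the code from the repository linked below; the class name LoRALinear and the rank/alpha values are assumptions for this example), a low-rank update in PyTorch could look like this:

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer and add a trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the pre-trained weights; only A and B are trained during fine-tuning.
        for p in self.base.parameters():
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        # B starts at zero, so the wrapped layer initially behaves like the original.
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank correction x A^T B^T.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

# Usage: wrap an existing projection and fine-tune only the LoRA parameters.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
print(layer(torch.randn(2, 768)).shape)  # torch.Size([2, 768])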

For any discussions, you can connect with me via the following social links:

Feel free to join the telegram group for discussions using the following link

The code will be available in the following repository:
Comments

Thank you for this wonderful video, best explanation of LoRA on YouTube!!!
Can you please share the Python notebook for us to learn as well, please.

sachin

Hope you all liked the explanation, guys. Next week we will be having another interesting research paper😊

NeuralHackswithVasanth

Great content on fine-tuning LLMs and getting better results in an optimized way.

rohanbagulwar

Explain how a standalone decoder-only model like GPT works.

gokulraja

Vasanth, can we apply the LoRA method to non-LLM models too?

ycquvvj