LoRA: Low-Rank Adaptation

Like 👍. Comment 💬. Subscribe 🟥.

0:00 Whistle Testing…testing!
0:46 Microsoft’s LoRA: Low-Rank Adaptation of Large Language Models
2:22 Abstract
6:40 1. Introduction
18:31 2. Problem Statement
23:04 Bonus: Testing…testing!
24:02 2. Problem Statement (continued)
27:30 3. Aren’t Existing Solutions Good Enough?
33:37 4. Our Method
52:28 Audio Problems
55:19 5. Empirical Experiments
1:15:19 6. Related Works
1:23:45 7. Understanding the Low-Rank Updates
1:24:37 Bonus: Buboo Appearance
1:24:50 7. Understanding the Low-Rank Updates (continued)
1:49:03 8. Conclusion and Future Work
1:51:44 Wrapping up

#llm #machinelearning #ai #finetuning
Comments

The idea behind this paper is quite simple, yet amazing.

aurko-cd

Hi, I'm from the future, and LoRA and transfer learning are cool again after I/O and WWDC 2024!

kingofutopia

To train the LoRA adapters you still need to backprop through the pretrained weights, right? So do we actually save memory by keeping the pretrained weights fixed and training only the LoRA adapters?
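
The short answer is that the backward pass does still flow through the frozen pretrained weights (their matmuls and cached activations are still needed), but no gradient tensors or optimizer states are ever allocated for them, and that is where most of the memory saving comes from. Here is a minimal PyTorch sketch of a LoRA-wrapped linear layer, not the authors' implementation; the class name, shapes, and hyperparameters are made up for illustration:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad = False  # W0 stays frozen: no grad, no Adam moments
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(out_features, r))        # up-projection, init to zero
        self.scaling = alpha / r

    def forward(self, x):
        # h = x W0^T + scaling * x A^T B^T, i.e. the low-rank update B A added to the frozen path
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

layer = LoRALinear(1024, 1024, r=8)
# Only A and B are trainable, so the optimizer keeps state for ~16k params instead of ~1M.
opt = torch.optim.Adam([p for p in layer.parameters() if p.requires_grad], lr=1e-3)

loss = layer(torch.randn(4, 1024)).sum()
loss.backward()                  # gradients still flow *through* the frozen weight...
print(layer.base.weight.grad)    # ...but none is stored for it, so this prints None
```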

chiragtrasikar

Anyone into the Grassmannian want to discuss this aspect of the paper more?
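
Section 7's Grassmannian angle boils down to a normalized subspace-similarity measure, phi(A, B, i, j) = ||U_A(i)^T U_B(j)||_F^2 / min(i, j), built from the top singular directions of the learned A matrices; it is closely related to the projection distance between subspaces on the Grassmannian. Here is a rough NumPy sketch of that measure, my own reading rather than the paper's code, with illustrative matrix shapes:

```python
import numpy as np

def subspace_similarity(A, B, i, j):
    # SVD the transposed LoRA A matrices (d x r) so the singular directions
    # being compared live in the shared model dimension d.
    U_A, _, _ = np.linalg.svd(A.T, full_matrices=False)
    U_B, _, _ = np.linalg.svd(B.T, full_matrices=False)
    M = U_A[:, :i].T @ U_B[:, :j]                        # i x j matrix of pairwise overlaps
    return np.linalg.norm(M, "fro") ** 2 / min(i, j)     # normalized to [0, 1]

rng = np.random.default_rng(0)
A_r8  = rng.standard_normal((8, 1024))    # stand-in for a rank-8 A matrix
A_r64 = rng.standard_normal((64, 1024))   # stand-in for a rank-64 A matrix
print(subspace_similarity(A_r8, A_r64, 4, 4))  # near 0 for unrelated random matrices
```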

amelieschreiber