Mistral 7B

Resources:

0:00 Intro
0:08 Video Overview
0:34 Mistral 7B architecture and design
3:33 Runpod setup
4:24 Mistral 7B Evaluation
6:18 Test 1: Random sequence reversal
7:24 Test 2: Code generation
9:03 Test 3: Passkey retrieval
10:26 Test 4: Fine-tuning
13:06 Evaluation Summary
14:30 EXTRA: Grouped Query Attention
15:31 EXTRA: Sliding Window Attention
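
The timestamps above cover Mistral 7B's grouped-query attention and sliding-window attention. As a quick reference (not from the video), the minimal Python sketch below reads those settings from the model's published Hugging Face config; it assumes the transformers library is installed and that you have access to the mistralai/Mistral-7B-v0.1 repository.

# Minimal sketch (assumption: transformers installed, model repo accessible).
# Values in the comments come from the published Mistral-7B-v0.1 config.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("mistralai/Mistral-7B-v0.1")
print(config.num_attention_heads)   # 32 query heads
print(config.num_key_value_heads)   # 8 key/value heads -> grouped-query attention
print(config.sliding_window)        # 4096 -> sliding-window attention span
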
Comments

saramirabi:
I have a general question about evaluating LLM outputs and would appreciate any comments. Apart from human review or feedback, what is the best method you would suggest for evaluating Large Language Models?

VinMan-qlyu:
The videos are pretty good, but there's something wrong with the microphone/sound quality. I keep hearing noise that makes the experience terrible, and it isn't there when I watch videos from other channels.

saramirabi:
I tried to run the code on my local computer and got: "Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit the quantized model. If you want to dispatch the model on the CPU or the disk while keeping these modules in 32-bit, you need to set and pass a custom `device_map` to `from_pretrained`." I have an 8 GB GPU, and even with 'TinyLlama-1.1B-Chat-v0.1' I get the same error!
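
For context on the comment above: that message appears when a quantized model is loaded with device_map="auto" and some of the weights spill over to the CPU or disk. The sketch below shows one possible workaround, not the video's code; it assumes transformers, accelerate, and bitsandbytes are installed, and the model ID and memory caps are illustrative assumptions.

# Sketch: allow CPU offload for modules that do not fit in 8 GB of GPU RAM.
# The flag and memory caps are illustrative, not taken from the video.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v0.1"  # or "mistralai/Mistral-7B-v0.1"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
    # Keep offloaded modules in fp32 on the CPU instead of raising the dispatch error.
    llm_int8_enable_fp32_cpu_offload=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",                        # let accelerate place layers on GPU/CPU
    max_memory={0: "7GiB", "cpu": "16GiB"},   # cap GPU usage below the 8 GB card
)

Alternatively, passing an explicit device_map dict that pins every module to the GPU avoids offload entirely, provided the quantized weights actually fit in 8 GB.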