Running Alpaca7B in Colab

A quick video and code showing how to run a version of Alpaca 7B in Colab on a T4 in 8-bit.
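For anyone following along, the usual 8-bit Alpaca-LoRA recipe looks roughly like the sketch below. The model/adapter IDs and load kwargs are assumptions about a typical setup, not taken from the video's notebook; the prompt template is the standard Alpaca one.

```python
# Heavy part (needs a GPU runtime with transformers, peft, and bitsandbytes
# installed) -- shown as comments, since it won't run without a T4:
#
#   from transformers import LlamaForCausalLM, LlamaTokenizer
#   from peft import PeftModel
#
#   base = "decapoda-research/llama-7b-hf"        # assumed base checkpoint
#   tokenizer = LlamaTokenizer.from_pretrained(base)
#   model = LlamaForCausalLM.from_pretrained(
#       base,
#       load_in_8bit=True,    # 8-bit weights let a 7B model fit on a 16 GB T4
#       device_map="auto",
#   )
#   model = PeftModel.from_pretrained(model, "tloen/alpaca-lora-7b")

def alpaca_prompt(instruction: str, inp: str = "") -> str:
    """Build the Alpaca-style prompt the LoRA adapters were trained on."""
    if inp:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{inp}\n\n### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

print(alpaca_prompt("Name three planets."))
```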

For more tutorials on using LLMs and building Agents, check out my Patreon:

My Links:

Github:

#deeplearning #llm #largelanguagemodels
Comments

Amazing! Thank you. As a beginner in AI, this was invaluable.

JohnDahleAL

Thank you. It told me how to cook MDMA.

heshumi

Would this LLM utilize 2 GPUs (3060 & 1030)? Is this LLM free to use? Thank you for the amazing videos.

ikjb

Nice video, thanks. How can I make it do "text completion"? Like, I give it half of a sentence and ask it to complete it. I've used different prompts, but most of the time it doesn't continue my sentence; it takes the context and generates a whole different sentence.

erfanshayegani
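On the text-completion question above: one common trick with instruction-tuned models is to wrap the fragment in an explicit "continue this text" instruction, so the model continues rather than answers. A sketch (the instruction wording is illustrative, not from the video):

```python
def completion_prompt(fragment: str) -> str:
    """Wrap a text fragment so an instruction-tuned model continues it.

    Instruction-tuned models tend to answer rather than continue, so the
    instruction explicitly asks for a continuation.
    """
    return (
        "Below is an instruction that describes a task, paired with an input "
        "that provides further context. Write a response that appropriately "
        "completes the request.\n\n"
        "### Instruction:\nContinue the following text from exactly where it "
        "stops. Do not repeat or rephrase it.\n\n"
        f"### Input:\n{fragment}\n\n### Response:\n"
    )

print(completion_prompt("The old lighthouse keeper climbed"))
```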

Hi, Sam. What should I do if I just want to output the response only instead of all the text? Thanks.

christophercai
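On getting the response only: Alpaca-style generations usually echo the whole prompt back, so one simple approach (a sketch, not from the video) is to cut at the final `### Response:` marker, or equivalently to skip the prompt's token count when decoding:

```python
def extract_response(generated: str) -> str:
    """Keep only the text after the last '### Response:' marker.

    If the marker is absent, return the whole string unchanged (stripped).
    """
    return generated.rpartition("### Response:")[2].strip()

full_output = (
    "### Instruction:\nSay hi.\n\n"
    "### Response:\nHello there!"
)
print(extract_response(full_output))  # -> Hello there!
```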

Hello, thanks for sharing. Can I use my own dataset to fine-tune this model? Thanks.

alinemati

This is amazing content, and the Colab is extremely fun to play with. Do you know how to make it produce longer results? I ask for a 10,000-word story and crank up the max tokens, but it tops out around 600 words. Also, is there an easy way to use your larger model in this Jupyter notebook?

fastrocket

Hey, I tried the model with and without the adapters. Without the adapters it's really bad; the LoRA adapters make all the difference. Keep up the good work.

theunknown

Hello Sam, first of all, thanks for all your videos. I've found a lot of tutorials about how to fine-tune the Alpaca model, but... is it possible to fine-tune on Google Colab and save the model so it can be run on a local computer? Do you know how to do that?

javierporronbarahona

I want to use my own data; how can I do that with Alpaca?

DeepakPrajapat-jp

Could you do a tutorial on setting up the model and an API to connect things to it? I made a Discord bot in Python and connected it to the OpenAI API. It was extremely fun to have a group chat with OpenAI. When I ran a query, it would read the last few messages in the chat, combining them or answering two questions at once. We even got rate limited by Discord because so many people were using it. Unfortunately, that got expensive quickly, so I have been looking for an alternative. I ran across Alpaca but haven't gotten it to run locally; no matter what I do, it seems to have a problem with the tokenizer. Your videos have helped some, and I appreciate all the effort you're putting out.

edellenburg

Thank you for introducing us to this new world of possibilities! Is it possible to run and fine-tune a model locally on a PC with an RTX 3090 and 64 GB of RAM?

ysy

Sam, can we run all of this on our own machines with an NVIDIA GeForce RTX 3060?
Or is Colab compulsory?

kartikpodugu

How good is it? Have you done a benchmark test?

DarrenTarmey

Can I use the Alpaca model for zero-shot text classification?

navneetkrc

Could Alpaca support the Chinese language?

xiaojingzhu

What's going on here? You're taking LLaMA weights, then what? You're fine-tuning them into Alpaca using a couple of tools, LoRA/PEFT? From your first Alpaca vid I see it's some kind of fine-tuning using 75(?) human-generated tasks. I can't quite pick up what's going on in this vid.

pi
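On the question above: roughly, the base LLaMA-7B weights stay frozen, and LoRA (via the PEFT library) trains small low-rank matrices B and A so each adapted weight acts as W + B·A; the "Alpaca" part is the instruction-following data used to train those adapters. A back-of-the-envelope sketch of why the adapter checkpoint is tiny (the layer size and rank below are illustrative):

```python
def lora_param_counts(d_in: int, d_out: int, rank: int) -> tuple[int, int]:
    """Trainable parameters: full fine-tune vs. a rank-r LoRA adapter."""
    full = d_in * d_out              # updating the whole weight matrix W
    lora = rank * (d_in + d_out)     # only B (d_out x r) and A (r x d_in)
    return full, lora

# One 4096x4096 attention projection with LoRA rank 8:
full, lora = lora_param_counts(4096, 4096, 8)
print(full, lora, f"{lora / full:.2%}")  # the adapter is a tiny fraction of W
```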