How to run an LLM locally | Run Mistral 7B on a local machine | Generate code using an LLM

Generative AI models are the most talked-about topic these days, and open-source models are rocking the repositories. Hugging Face, the largest model hub, hosts over 3,000 models.
In this video I make an attempt to show which LLMs we can try on our local machines, and how to run them.
I executed the Mistral 7B model to generate a simple Python program.
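The workflow described above can be sketched with the ctransformers library (the library a commenter below asks about). The repo and file names here are assumptions, a typical community GGUF build rather than necessarily the exact one used in the video:

```python
# Hedged sketch: generating Python code with a local Mistral 7B GGUF model.
# The TheBloke repo/file names are assumptions; substitute any GGUF build.

PROMPT_TEMPLATE = "<s>[INST] {task} [/INST]"  # Mistral-Instruct prompt format

def build_prompt(task: str) -> str:
    """Wrap a task description in Mistral's instruction format."""
    return PROMPT_TEMPLATE.format(task=task)

def generate_code(task: str, max_new_tokens: int = 256) -> str:
    """Load the model (downloads several GB on first call) and generate text."""
    from ctransformers import AutoModelForCausalLM  # lazy import keeps the module importable
    llm = AutoModelForCausalLM.from_pretrained(
        "TheBloke/Mistral-7B-Instruct-v0.1-GGUF",
        model_file="mistral-7b-instruct-v0.1.Q4_K_M.gguf",
        model_type="mistral",
    )
    return llm(build_prompt(task), max_new_tokens=max_new_tokens)

# Example usage (commented out to avoid the multi-gigabyte download):
# print(generate_code("Write a Python function that reverses a string."))
```

This runs entirely on CPU by default; ctransformers also accepts a `gpu_layers` argument to offload layers if a supported GPU build is installed.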
Comments

Great job, bro! I was assigned this task and was struggling with it, but your tutorial helped me a lot.

akilsghodi

Great one! Can you make a tutorial on how we can fine-tune a model on a custom dataset locally and use that fine-tuned model to get domain-specific results locally?

nasiksami

Hey, great video; thanks to you I got it working.
Is there a way to change the code so that it doesn't download the first model file? I want to download Q4 Mistral, for example, but the code gives me Q2.
I'm pretty new to this, sorry if this is a silly question.

albrechtfeilcke

Thank you so much, this explanation is great! It really helped me a lot, but I'm stuck at adding my own GGUF models to my project. When I try to add one, my code doesn't detect it and downloads a different version for the model ID instead. Can I download the models manually from Hugging Face rather than through the script? The file the script downloaded isn't even a GGUF file, or anything like it.

wibuyangbaca
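On the question above about manually downloaded models: ctransformers accepts a local file path in place of a repo ID. A sketch, assuming a hand-downloaded GGUF file (the magic-bytes check helps spot the broken downloads the commenter describes, such as HTML error pages or Git LFS pointer files saved with a `.gguf` name):

```python
# Hedged sketch: loading a GGUF file you downloaded yourself from Hugging Face.
from pathlib import Path

def looks_like_gguf(path: str) -> bool:
    """A real GGUF file starts with the magic bytes b'GGUF'; anything else
    is likely an HTML page or an LFS pointer file, not a model."""
    with Path(path).open("rb") as f:
        return f.read(4) == b"GGUF"

def load_local_gguf(path: str):
    """Load a locally stored GGUF model file by path."""
    if not looks_like_gguf(path):
        raise ValueError(f"{path} does not look like a GGUF file")
    from ctransformers import AutoModelForCausalLM  # lazy import
    return AutoModelForCausalLM.from_pretrained(path, model_type="mistral")
```

When downloading from a Hugging Face model page in a browser, use the "download" link next to the individual `.gguf` file in the Files tab so you get the raw binary rather than a pointer page.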

I see you have some good use cases. I've been working on similar projects. How about we have a little chat and exchange notes?

AbhijitRayVideos

Hi Joy, thank you for this video. What is the advantage of using the ctransformers library over other libraries available on GitHub, such as OpenLLM? Is it just a matter of personal preference?

vbridgesruiz-phd