Unlock the Power of AI with Ollama and Hugging Face

Hey there, AI enthusiasts! Ready to supercharge your local machine with cutting-edge language models? This video is your ultimate guide to using Ollama and Hugging Face together, making it easier than ever to run powerful AI models on your Mac, Windows, or Linux computer!

We'll walk you through the simple process of importing GGUF models from Hugging Face into Ollama, showcasing the new streamlined method that's got the AI community buzzing. You'll learn how to:

• Find and select GGUF models on Hugging Face
• Install models with a single command
• Troubleshoot common issues and create custom modelfiles
• Manage your newly installed models effortlessly
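The single-command install covered above boils down to pointing `ollama run` at a Hugging Face repo path, optionally with a quantization tag. A minimal sketch of the command format (the repo name and quant tag below are illustrative examples, not from the video):

```shell
# Build the "ollama run" command for a Hugging Face GGUF repo.
# The repo path and quant tag passed in below are illustrative examples.
hf_to_ollama() {
  repo="$1"
  quant="$2"   # optional quantization tag, e.g. Q4_K_M
  if [ -n "$quant" ]; then
    printf 'ollama run hf.co/%s:%s\n' "$repo" "$quant"
  else
    printf 'ollama run hf.co/%s\n' "$repo"
  fi
}

hf_to_ollama "bartowski/Llama-3.2-1B-Instruct-GGUF"
hf_to_ollama "bartowski/Llama-3.2-1B-Instruct-GGUF" "Q4_K_M"
```

Omitting the tag lets Ollama pick a default quantization; appending one (like `:Q4_K_M`) selects a specific quant from the repo.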

But that's not all! We'll also dive into the technical details, exploring file structures, manifests, and the magic behind this seamless integration. Whether you're a seasoned AI developer or just getting started, this tutorial has something for everyone.

Don't miss out on this game-changing update that's revolutionizing how we access and use AI models. Hit that play button now and join me in exploring the exciting world of local AI with Ollama and Hugging Face!

#AIModelTraining #MachineLearning #OllamaAI #HuggingFace #LocalAI #TechTutorial

Remember to like, subscribe, and share your favorite models in the comments below. Let's build an amazing AI community together! 🤖💡

00:00 - Intro
00:32 - How to import from HF
01:40 - Sometimes it won't work
02:52 - Quants
03:08 - Remove a model
03:23 - Create a new model
03:50 - Nothing special about the CLI
04:14 - What's different
05:41 - Private models
05:57 - The old process
06:34 - Where is the template in the GGUF?
07:13 - How did they do it?
Comments

Matt, for each GGUF model listed on Hugging Face there is a black "Use This Model" button. This opens a drop-down of providers, and Ollama is listed. Clicking that gives the whole "ollama run" command with the URL for the model metadata. Also on the right side of each page are links for the various quant sizes. Each of these also has the "Use This Model" button. Pretty handy!

beachfeet

Whenever there is a command, I would hope to see a terminal with the command on the screen. It is easier to remember if one can see it rather than just hear it.

JJJJ-ru

Thank you, Matt! This is such an amazing way for new people to get into models with Ollama! Thank you for always making the best Ollama content ever! Have a good one!

BORCHLEO

Thanks for recommending the Ollama Chrome extension. It makes life easier. Maybe you can explain how to find great models on Hugging Face. I just downloaded the famous classic models and have no idea how to benefit from this huge database of AI stuff. When I found your video, I first thought it would answer how to find the right models on HF.

Lieblingszuschauer

Fantastic news! Of course, I immediately checked it in Open WebUI and had no problem loading one of my experimental Hugging Face models from the web interface. Very cool.

RasmusRasmussen

Thank you for pointing out the caveats to the setup. I appreciate the time savings and not having to learn some of these lessons the hard way.

Also, love the PSAs to stay hydrated. Reminds me of Bob Barker telling everyone to spay and neuter their pets.

Chris-Nienart

Matt, thank you for your videos and clear explanations! Greetings from Ecuador! I was able to build so much stuff thanks to you!

leluch

Thanks for sharing this breakthrough. Super helpful.

Kk-edgr

Thanks Matt! Another very interesting video

aristotelesfernando

This is a great start! That is the single biggest issue I have with Ollama; it should not be so complicated to add a custom model in GGUF format.

jidun

I learned something new again, so it's another great video. Thank you!

newjoker-ctrl

Learning more, thanks. I like motorcycle repair and maintenance too.

Igbon

Would be great if Ollama had Llama 3.2 11B available. Can you ask your friends for an update on their progress?

dr_harrington

If only Ollama would add support for an MLX backend, text generation performance could roughly double on Macs, though it is already quite good at the moment.

atom_

I hope there will be a feature to support streaming-token models like Kyutai's Moshi (they haven't released any yet...). It would be really cool to have an open-source local model that can do overlapping conversation, just like OpenAI's Advanced Voice Mode does.

NLPprompter

Which is that front end UI for Ollama in the video?

icpart

Thank you, and a question: if the model comes in several parts, does it support that?

QorQar

Hi Matt, thank you so much for such great videos. Is there any way I can use the non-GGUF Hugging Face model in Ollama? I want to use the facebook/mbart model for my translation work, but unfortunately, I can't find a GGUF version of it. Additionally, could you please suggest the best model for translation work with the highest accuracy that I can use in Ollama?

ashutoshanand

Please create a video on changing the context length in Ollama... by default it is only 2K.
Covering other parameter settings would be great too.

vickytube
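On the context-length question above: one common approach is a custom Modelfile, since `num_ctx` is Ollama's context-window parameter. A minimal sketch, assuming a model already pulled locally (the base model name "llama3.2" and the 8192 value are illustrative choices, not from the video):

```shell
# Sketch: raise the context window via a Modelfile.
# "llama3.2" and 8192 are illustrative; num_ctx is the Ollama parameter name.
cat > Modelfile <<'EOF'
FROM llama3.2
PARAMETER num_ctx 8192
EOF

# Then build a new local model from it (requires a running Ollama install):
# ollama create llama3.2-8k -f Modelfile
```

After `ollama create`, running `ollama run llama3.2-8k` would use the larger context window instead of the default.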

Hi Matt, I’ve been trying to understand system prompts. I understand these to essentially be prepended to every user prompt. In this video it seems that some models are trained with particular system prompts. Can you suggest a good site/document to read up on this?

volt