Ollama: Running Hugging Face GGUF models just got easier!

In this video, we're going to learn the new and improved way to run Hugging Face GGUF models on Ollama.
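As a rough sketch of the syntax involved (the `{username}/{repository}` placeholders and the quantization tag are illustrative, not taken from the video):

```shell
# Run a GGUF model directly from a Hugging Face repository;
# Ollama downloads it on first use. The repo path is a placeholder.
ollama run hf.co/{username}/{repository}

# Optionally pin a specific GGUF quantization by appending a tag,
# e.g. a Q4_K_M quant (tag name is an assumption for illustration):
ollama run hf.co/{username}/{repository}:Q4_K_M
```

Without a tag, Ollama picks a default quantization from the repository; with a tag, you select one specific GGUF file.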

Comments
Author

Love your videos, packed with lots of to-the-point information that gets the task done exactly as it's supposed to. Thanks a lot.

sanketss
Author

Thank you for the guidance!

I have a question about the difference between these two commands:

The first command pulls the model's entire project repository, e.g.,

The second command runs a specific GGUF file from within the model's repository.

However, I noticed something strange. When I visit the same author's model page on Hugging Face, the "Use this model" dropdown only shows options like llama.cpp, LM Studio, Jan, and vLLM; there's no option for Ollama. Why is that?

Thanks!

jasonnhri