Ollama on CPU and Private AI models!

This Ollama Crash Course introduces you to:
1. Ollama Intro
2. Ollama Run Local LLMs
3. Local LLMs as API Endpoints (see the sketch after this list)
4. Local LLMs with Characters
5. Local LLMs with GGUF
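
For a taste of items 2 and 3, here is a minimal sketch of calling a locally running Ollama server from Python - assuming Ollama is installed, a model such as llama2 has been pulled (e.g. with "ollama run llama2"), and the server is listening on its default port 11434:

    import requests

    # Ask the local Ollama server for a single, non-streamed completion.
    # Assumes `ollama run llama2` (or `ollama pull llama2`) has been done.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama2",   # any pulled model name works here
            "prompt": "Why is the sky blue?",
            "stream": False,     # one JSON object instead of a token stream
        },
    )
    resp.raise_for_status()
    print(resp.json()["response"])  # the generated text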

❤️ If you want to support the channel ❤️
Support here:

🧭 Follow me on 🧭
Comments

It works! :) - installed llama2 and mistral on Debian, even without a GPU for shorter prompts. Thanks a lot.

davidlepold

Thanks for another great video. I love how you couldn't just exit the program without thanking the LLM and assuring it that you're okay. Love it!

JimMendenhall

Another awesome video. I like all of your videos. The way you explain is awesome. Please keep up the good work!

MuthukumarKB

The era of portable LLMs is here.

sanjay

Now I've gotta try this. I've been impressed by text-generation-webui, so I never got around to trying other interfaces. This definitely looks better for some use cases.

nathanbanks

After watching your video, I ran Ollama on a Windows machine with an i5 processor and 12 GB of RAM, and it runs decently. I used Docker for this.

a_LEGION
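
For anyone following this setup: a minimal sketch of checking a Dockerized Ollama from Python, assuming the container was started from the official ollama/ollama image with the default port published (-p 11434:11434); the /api/tags endpoint lists the models pulled into that container:

    import requests

    # List the models available inside the Dockerized Ollama instance.
    # Assumes the container publishes port 11434 to the host.
    tags = requests.get("http://localhost:11434/api/tags").json()
    for model in tags.get("models", []):
        print(model["name"])  # e.g. "llama2:latest"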

Great video, as always! It would be nice to see a use case of a RAG system based on Ollama and LlamaIndex.

cscarpa
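
In case it helps while waiting for that video: a rough sketch of the Ollama + LlamaIndex RAG pattern, assuming a recent llama-index (0.10+ package layout) with the Ollama LLM and HuggingFace embedding integrations installed, a running Ollama server, and a local ./data folder of documents; exact imports vary by version:

    from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
    from llama_index.llms.ollama import Ollama
    from llama_index.embeddings.huggingface import HuggingFaceEmbedding

    # Route LLM calls to the local Ollama server; keep embeddings local too.
    Settings.llm = Ollama(model="llama2", request_timeout=120.0)
    Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

    # Index the documents in ./data and answer a question over them.
    documents = SimpleDirectoryReader("./data").load_data()
    index = VectorStoreIndex.from_documents(documents)
    print(index.as_query_engine().query("What do these documents cover?"))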

You can run Ollama on Windows with WSL2.

mariuszkreft

I gladly give up conveniences to avoid Macs lol

ArtisanTony

Can LLMs run Doom though? If not, I'm not interested.

JohnMcclaned

Hey, thank you for the video. I want to say that all those critiques you make of yourself aren't shared by me or anyone else watching. We love your content and vibe, so keep going bro and don't be so harsh on yourself.

izzyblast

The GitHub now shows Windows is coming soon!

dkracingfan

What would be the disadvantages of running Ollama? Is it possible for you to do a comparison of models? How deep does OpenAI's moat run, given that open source is innovating at a faster rate?

ghostwhowalks

I wish there was an open-source LLM optimized for interacting with AutoGen!

tech

Windows users can use WSL - easy peasy for techy people...

Aarifshah-A

Thanks. Can you show how to fine-tune Mistral 7B on your own system with a 3090?

ojikutu

For Windows users, can't you achieve the same thing with Pinokio?

mimotron

Great video! Could you please make a video about how to use the new OpenAI TTS via a Google Colab or any other easy way? Help appreciated.

hp
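
Until such a video exists, a minimal sketch with the openai Python package (v1+), assuming an OPENAI_API_KEY is set in the environment; the same cell works in Google Colab:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Generate speech with the tts-1 model and save it as an MP3.
    speech = client.audio.speech.create(
        model="tts-1",
        voice="alloy",  # one of the built-in voices
        input="Hello from a short TTS demo!",
    )
    speech.stream_to_file("speech.mp3")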

Please show it for Windows as well, if possible.

mustafabasrai

How can someone with a Windows machine and without a GPU get an LLM running as an API? Can you make a video on how to do it in Colab or on some easy-to-use cloud server?

vivekpadman
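
One possible answer to that last question, sketched under assumptions: Ollama runs CPU-only, so the same HTTP API works on a modest cloud VM as well as locally. This sketch assumes an Ollama server ("ollama serve") is reachable at the placeholder URL below and that the model has already been pulled:

    import requests

    # OLLAMA_URL is a placeholder - point it at wherever `ollama serve`
    # runs (localhost, a cloud VM, a tunnel, ...). CPU-only works, just slower.
    OLLAMA_URL = "http://localhost:11434"

    resp = requests.post(
        f"{OLLAMA_URL}/api/chat",
        json={
            "model": "mistral",  # any pulled model
            "messages": [{"role": "user", "content": "Hello!"}],
            "stream": False,
        },
    )
    print(resp.json()["message"]["content"])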