Ollama: The Easiest Way to Run LLMs Locally

In this video, I will show you a no-code method for running open-source LLMs locally. Using this approach, we will run Mistral-7B in Ollama and serve it via an API; a minimal request sketch follows the timestamps.

LINKS:

Timestamps:
[00:00] Intro
[00:29] Ollama Setup
[02:22] Ollama - Options
[04:14] Ollama API
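
As a quick reference for the API section at [04:14], here is a minimal sketch of a request against Ollama's local HTTP endpoint. It assumes Ollama is already running on its default port 11434 and that `mistral` has been pulled; the prompt is just a placeholder.

```python
import json
import urllib.request

# Ollama serves a local HTTP API on port 11434 by default.
URL = "http://localhost:11434/api/generate"

# "stream": False returns one complete JSON object instead of a stream of chunks.
payload = {
    "model": "mistral",             # assumes `ollama pull mistral` was run
    "prompt": "Why is the sky blue?",
    "stream": False,
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read())

print(body["response"])  # the generated completion text
```

Disabling streaming keeps the example short at the cost of waiting for the full completion before anything prints.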
Comments

This is fantastic. Congrats to the team who put this together

Shaunmcdonogh-shaunsurfing

Very cool. So few videos talk about the API in Ollama. This is a very important function.

whitneydesignlabs
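
To expand on the comment above: without `"stream": False`, the same endpoint streams newline-delimited JSON chunks, so tokens can be printed as they arrive. A sketch under the same assumptions (local Ollama on port 11434, `mistral` pulled):

```python
import json
import urllib.request

URL = "http://localhost:11434/api/generate"

# Without "stream": False, Ollama streams one JSON object per line.
payload = {"model": "mistral", "prompt": "Write a haiku about local LLMs."}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    for line in resp:                      # each line is one JSON chunk
        chunk = json.loads(line)
        print(chunk.get("response", ""), end="", flush=True)
        if chunk.get("done"):              # the final chunk signals completion
            print()
            break
```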

Mind blown, look how simple that is! I love this channel.

NLPprompter

Great vid! Would love to see how to integrate an Ollama model, through their API, into Flowise.

philipsnowden

Are there plans to integrate LocalGPT with Ollama?

Shogun-C

Another great video! Thanks for sharing! Will wait till they release the Windows version.

positivevibe

Great share, and another great video.

adriantang

I love the simplicity of this setup! Is there any way to import my own documents for it to reference?

TechnicalTerry

Thank you very much. How can I connect it to my own data only?

trobinsun
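
On the two questions above about pointing the model at your own data: the video does not cover this, but one common pattern is retrieval-augmented generation. Below is a rough sketch using Ollama's /api/embeddings endpoint, with placeholder documents and an in-memory similarity search; a real setup would chunk files, use a dedicated embedding model, and store vectors in a proper vector database.

```python
import json
import math
import urllib.request

BASE = "http://localhost:11434"

def post(path, payload):
    """POST JSON to the local Ollama server and decode the JSON reply."""
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def embed(text):
    # /api/embeddings returns {"embedding": [...]} for the given text.
    return post("/api/embeddings", {"model": "mistral", "prompt": text})["embedding"]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Placeholder "documents" -- in practice, chunks of your own files.
docs = [
    "Our refund policy allows returns within 30 days.",
    "Support is available by email on weekdays.",
]
doc_vectors = [embed(d) for d in docs]

question = "How long do I have to return an item?"
q_vec = embed(question)

# Pick the most similar document and stuff it into the prompt as context.
best = max(range(len(docs)), key=lambda i: cosine(q_vec, doc_vectors[i]))
prompt = f"Answer using only this context:\n{docs[best]}\n\nQuestion: {question}"

answer = post("/api/generate", {"model": "mistral", "prompt": prompt, "stream": False})
print(answer["response"])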

Sir, can we run a quantized version and make inference fast with any method, like vLLM?

Bcs-Mohtisham

I was wondering if I could run the Mistral model with 16 GB RAM and 4 GB VRAM. Is it possible?

davidelks

Hey bro, amazing video. Can I integrate it in Swift?

SonnySaint

Hi, I'm just starting to learn about machine learning. What is the lowest spec I can have to run this as a hobby, as a starter?

nufh

Running on WSL 2, only my CPU was used. Is this expected on WSL, or am I missing something? (Drivers etc. are installed.)

MrMoonsilver

What are the specs of your computer in terms of RAM?

MaharshiPandyaJanam

Also, are these tools safe?
Like, absolutely no data leaks? Would you recommend using them while connected to Wi-Fi or not connected?

This is not only about this one; it also applies to Whisper, text-gen WebUI, and many, many other offline AI tools!
I would highly appreciate your input/thoughts!
Thanks

positivevibe

@engineerprompt I tried it with Mistral 7B, but I'm not sure which quantized version, because when I compared it on questions like the pond of lilies and the push/pull door, it failed. So can we download the version we like and link it, or is there an alternative way?

cudaking
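
On this and the earlier question about quantized builds: Ollama models are distributed as quantized GGUF builds, and the model library lists several quantization tags per model, so a specific variant can be requested by tag. A sketch via the /api/pull endpoint; the exact tag name below is an assumption, so check the model's tag list on the Ollama library page.

```python
import json
import urllib.request

URL = "http://localhost:11434/api/pull"

# The tag name is an assumption -- each tag on a model's Ollama library
# page corresponds to a specific quantization (q4_0, q8_0, etc.).
payload = {"name": "mistral:7b-instruct-q4_0", "stream": False}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))  # {"status": "success"} when the pull completes
```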

I wonder if we could run this under WSL 2 on Windows, since it supports Linux? And does it utilize the machine's GPU(s), if present?

MichealAngeloArts

I don't get enough token-generation speed. How can I increase it?

Nallu_Swami

Correct me if I'm wrong, but WSL 2 is supported on Windows 11, isn't it?

picklenickil