Ollama: Run LLMs Locally On Your Computer (Fast and Easy)

With Ollama, you can run local, open-source LLMs on your own computer easily and for free. This tutorial walks through how to install and use Ollama, how to access it via a local REST API, and how to use it in a Python app (using a client library like Langchain).


📚 Chapters
00:00 How To Run LLMs Locally
01:07 Install Ollama
02:45 Ollama Server and API
04:15 Using Ollama Via Langchain
Comments

Definitely would love to see more videos on training the models, fine-tuning, adding documents, etc.

Larimuss

Love this channel, your explanations are clear. Please do a video on fine-tuning an LLM for a specific task, with a real-world use case.

iamfine

You have a great knack for keeping things simple and understandable.

CalsProductions

First, I would like to thank you: as a beginner, you have given me a solid foundation. I also eagerly await your next video on how to train and fine-tune a model locally. Thank you so much once again.

mayankbhadauria

The sound quality is much better in this new setup. I just watched your FastAPI video (if you are wondering why I am commenting on the sound 😂)

malkitsingh

Love this channel, very clear and factual explanation of the topic

kacperwodarczyk

Thanks, the simple code examples you showed helped me a lot!

matheussimonacivieira

Hi,
I really like all your lectures and examples. I just wonder if you could show an example of how to use RAG within a Ruby-based REST API web architecture (using, for instance, Llama, with a PDF of a job posting as input)?
Thanks a lot!

ap-rcpe

It's good. Very nice explanation 😀

mohamedjamaludeen

Diving a bit deeper into embeddings would be nice, and vector databases too. How do you know the quality of your embeddings? What made you go with Bedrock?

lesptitsoiseaux

It would be interesting to train a custom LLM instead of using RAG [2:45]

Screonizma

When he says locally, can anyone explain what that means? For example, say I want to use an LLM that I downloaded to my machine privately. Is that what it means? Not connected to the internet?

zacharycangemi

Can I install AI Town with this? The other method was too complex for me, as I am new to a lot of this.

Thelgren

Can you do a new episode combining "Ollama: Run LLMs Locally On Your Computer (Fast and Easy)" and "Langchain Python Project: Easy AI/Chat For Your Docs", where you just use a local LLM to process the docs?

jimmylin

Super interesting! Do you know what the RAM requirements are to run this locally?

alexandrosanapolitanos-ewox