How To Run Open-Source LLMs with Langflow & Ollama | Fast & Easy

Learn how to run open-source large language models (LLMs) entirely on your local machine using Langflow and Ollama. Follow along as we walk through downloading Ollama, setting up the Llama 3 model, and integrating it with Langflow for local execution. Perfect for anyone who wants the power of LLMs without sending data off their own machine!
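The video drives everything from the terminal (ollama run llama3) and the Langflow UI. If you want to sanity-check the setup programmatically, here is a minimal Python sketch that sends one prompt straight to Ollama's local REST API; it assumes Ollama is installed, the llama3 model has been pulled, and the server is listening on its default port 11434:

    import requests

    # Ollama serves a local REST API on port 11434 by default.
    # /api/generate sends a single prompt to the named model; with
    # "stream": False the full reply comes back as one JSON object.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": "Say hello in one sentence.", "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])

If this prints a reply, Langflow's Ollama component will be able to reach the same local endpoint.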

00:00 Introduction to Running Large Language Models Locally
00:16 Downloading and Installing Ollama
00:27 Setting Up and Running Llama 3 Model
01:19 Integrating Ollama with Langflow
01:31 Creating a Basic Prompting Template
02:19 Running the Model and Generating Responses
03:30 Using Local Embeddings and Vector Stores
03:54 Conclusion: Running a Local RAG App in Langflow
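The last two chapters keep the whole RAG pipeline local by using Ollama for embeddings as well. As an illustration of what the embedding step does under the hood, this sketch asks Ollama's /api/embeddings endpoint for a vector; the model name nomic-embed-text is an assumption here, so substitute whichever embedding model you actually pulled:

    import requests

    # Embedding models are not run interactively; Ollama loads them on
    # demand when a request names them. This returns one vector for the
    # given text, which a vector store would then index.
    resp = requests.post(
        "http://localhost:11434/api/embeddings",
        json={"model": "nomic-embed-text", "prompt": "Langflow can run a RAG app fully locally."},
        timeout=60,
    )
    resp.raise_for_status()
    vector = resp.json()["embedding"]
    print(f"{len(vector)}-dimensional embedding")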
Comments

Thanks for the content. I've only been using Flowise, but I'll give Langflow a try.

RolandoLopezNieto

Thanks for the video. I have a question.
Are you doing "ollama run {instruct model}" and "ollama run {embedding model}" in two different terminals, then running Langflow in a third terminal?
Or do you mean something else by "having the models running"? Is it just them being downloaded?

Edit: embedding models can't be "run", so I'm guessing for embeddings it's enough for them to be present? It's odd because the "ollama embedding" cells don't have a dropdown with the available models like the ollama "model" cell does.

Madaaguu
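
The Edit above has it right: only chat models are kept alive by ollama run, while embedding models just need to be pulled, since Ollama loads them on demand per request. A quick way to confirm which models are present locally, assuming the default server address, is the /api/tags endpoint (the same information the ollama list command shows):

    import requests

    # GET /api/tags lists every model currently downloaded to this machine.
    resp = requests.get("http://localhost:11434/api/tags", timeout=10)
    resp.raise_for_status()
    for model in resp.json()["models"]:
        print(model["name"])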