FREE Local RAG Chatbot with Ollama, Streamlit and LangChain. Built with the open-source Mistral AI model
"Step-by-Step Guide to Building a RAG Chatbot with Ollama, streamlit, Mistral ai and Langchain
In this video we build a Retrieval Augmented Generation (RAG) chatbot that runs entirely on your own machine. The app leverages Ollama, a tool for running large language models (LLMs) locally, together with the open-source Mistral 7B model for retrieval-based question answering. For embeddings we use nomic-embed-text, a high-performing open embedding model with a large token context window that outperforms OpenAI embeddings.
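To make the pipeline concrete, here is a minimal sketch of that retrieval flow using the LangChain community integrations. It assumes a local Ollama server with the mistral and nomic-embed-text models already pulled; the input file name and chunking parameters are illustrative, not the exact values used in the video.

from langchain_community.chat_models import ChatOllama
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains import RetrievalQA

# Split the source document into chunks for embedding (hypothetical input file).
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
docs = splitter.create_documents([open("my_notes.txt").read()])

# Embed the chunks locally with nomic-embed-text and index them in Chroma.
embeddings = OllamaEmbeddings(model="nomic-embed-text")
vectorstore = Chroma.from_documents(docs, embedding=embeddings)

# Answer questions with the local Mistral 7B model, grounded in retrieved chunks.
llm = ChatOllama(model="mistral")
qa = RetrievalQA.from_chain_type(llm=llm, retriever=vectorstore.as_retriever())
print(qa.invoke("What does the document say about embeddings?")["result"])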
In this tutorial you will learn:
- How to run Ollama locally
- How to use Ollama with LangChain
- How to use Ollama embeddings
- How to use the open-source Mistral AI model with Ollama
- How to build a RAG app with Ollama
- How to use the Ollama Python library (see the sketch after this list)
- How to use the Ollama APIs
- How to install and call Ollama from Python
- How to run ollama run mixtral:8x7b
- How to run Ollama on a Mac
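As a taste of the Streamlit piece, here is a minimal chat UI wired to the official ollama Python client. The model name, chat-history handling and page title are illustrative assumptions, not the exact code built in the video.

import ollama          # official Ollama Python client
import streamlit as st

# Before running: `ollama pull mistral` (or `ollama run mixtral:8x7b` for the
# larger mixture-of-experts model) so the model is available locally.

st.title("Local RAG Chatbot")

if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the conversation so far.
for msg in st.session_state.messages:
    st.chat_message(msg["role"]).write(msg["content"])

if prompt := st.chat_input("Ask a question"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    st.chat_message("user").write(prompt)

    # Send the full history to the local model via the Ollama API.
    response = ollama.chat(model="mistral", messages=st.session_state.messages)
    answer = response["message"]["content"]

    st.session_state.messages.append({"role": "assistant", "content": answer})
    st.chat_message("assistant").write(answer)

Run it with: streamlit run app.py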
👨 WHO AM I -
I'm Sri Laxmi, an AI Product Manager living in San Francisco, CA. On this channel, we learn how to build generative AI applications and use AI tools that help us launch the projects that inspire us and, consequently, lead the lives we've always dreamed about.
Linkedin -
/ sri-laxmi