Build a Talking Fully Local RAG with Llama 3, Ollama, LangChain, ChromaDB & ElevenLabs: Nvidia Stock
🔰 Hands-on Tutorial to Build RAG with Ollama, Llama 3, LangChain & ElevenLabs for Nvidia Stock.
3rd video in my LLM series (Fully Local RAG).
MENTIONED IN VIDEO
📚 Link to Python Code ➡︎
I show you how to build a fully local Retrieval Augmented Generation (RAG) pipeline for Nvidia Stock Analysis using Llama 3, Ollama, LangChain, and ChromaDB.
Together, we parse PDFs, split text, and create embeddings stored in ChromaDB, a vector database.
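Here's a minimal sketch of that ingestion step (the PDF path, chunk sizes and persist directory are placeholders, not necessarily the exact values used in the video):

from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Chroma

# 1. Parse the PDF (e.g. an Nvidia report) into page-level documents
docs = PyPDFLoader("nvidia_report.pdf").load()  # placeholder file name

# 2. Split the pages into overlapping chunks so each embedding stays focused
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)

# 3. Embed the chunks locally with Ollama (nomic-embed-text) and store them in ChromaDB
embeddings = OllamaEmbeddings(model="nomic-embed-text")  # requires: ollama pull nomic-embed-text
vectorstore = Chroma.from_documents(chunks, embeddings, persist_directory="./chroma_db")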
You'll learn how to combine RAG with prompt engineering to chat with complex PDF documents and use ElevenLabs to generate audio from text.
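A minimal sketch of that RAG step, reusing the vectorstore built above and llama3 served by Ollama (the prompt wording and the example question are illustrative, not the exact prompt from the video):

from langchain_community.chat_models import ChatOllama
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

llm = ChatOllama(model="llama3")  # requires: ollama pull llama3
retriever = vectorstore.as_retriever()

def format_docs(docs):
    # Join the retrieved chunks into one context string for the prompt
    return "\n\n".join(d.page_content for d in docs)

# Prompt engineering: push the model to answer only from the retrieved chunks
prompt = ChatPromptTemplate.from_template(
    "You are a careful stock analyst. Answer only from the context below.\n"
    "Context:\n{context}\n\nQuestion: {question}"
)

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

answer = rag_chain.invoke("How did Nvidia's data center revenue develop?")
print(answer)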
Perfect for anyone interested in RAG, Ollama, local LLMs like Llama 3, ElevenLabs & Nvidia stock analysis with AI (going beyond OpenAI GPT).
Extra: this is a hands-on tutorial where you learn what RAG and LangChain are by using large language models (LLMs) in practice.
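And a sketch of the talking part, assuming the elevenlabs Python SDK (v1.x) and an ELEVENLABS_API_KEY environment variable; the voice and model names below are just examples:

import os
from elevenlabs.client import ElevenLabs
from elevenlabs import save

client = ElevenLabs(api_key=os.environ["ELEVENLABS_API_KEY"])

# Turn the RAG answer from above into speech and write it to an MP3 file
audio = client.generate(
    text=answer,
    voice="Rachel",                  # example voice name
    model="eleven_multilingual_v2",  # example model id
)
save(audio, "nvidia_answer.mp3")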
⏰ Timecodes ⏰
0:00 Introduction: Build a Talking Fully Local RAG with Llama 3, Ollama, LangChain, ChromaDB & ElevenLabs | Stock Advisor
0:42 Parsing PDFs with LangChain
2:14 Text Splitting with LangChain
3:35 Ollama Python install, Ollama embeddings & Nomic (Ollama tutorial)
6:10 Storing embeddings in the ChromaDB vector database
7:32 FAISS & Qdrant vector databases (LangChain tutorial)
8:29 MultiQueryRetriever with Llama 3 & Ollama (how to run Llama 3 locally) - see the sketch after the timecodes
10:56 RAG + prompt engineering for chatting about Nvidia stock with Llama 3 (local LLM)
12:41 Generating audio with ElevenLabs (how to use ElevenLabs)
14:47 Hugging Face
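As promised at 8:29, a small sketch of the multi-query retrieval step, reusing the vectorstore and llm from the snippets above: LangChain's MultiQueryRetriever has Llama 3 rephrase the question several ways and merges the retrieved chunks (the example question is illustrative):

from langchain.retrievers.multi_query import MultiQueryRetriever

mq_retriever = MultiQueryRetriever.from_llm(
    retriever=vectorstore.as_retriever(),
    llm=llm,
)
docs = mq_retriever.invoke("What are the main risks to Nvidia's growth?")
print(len(docs), "chunks retrieved")

# FAISS (or Qdrant) can be swapped in for Chroma as the vector store, e.g.:
# from langchain_community.vectorstores import FAISS
# vectorstore = FAISS.from_documents(chunks, embeddings)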
#llama3 #llm #ollama #langchain #elevenlabs #vectordatabase #chromadb #ai #nvidiastock #python #genai #embedding #huggingface #languagemodels #largelanguagemodels #openai #gpt #promptengineering