Getting started with llama3.2 running on locally hosted Ollama - GenAI RAG app

Part 2/5. This blog series is for beginners and young entrepreneurs who want to build Gen AI RAG-driven applications.

Hands-on experience building Gen AI RAG-based apps, running 100% locally, self-hosted, or API-based, using the APIs and tools of your choice.

Vector DB: Chroma (trychroma.com), SQLite, Supabase, or any vector DB of your choice
Programming: Python 3.12+
Application: Ollama WebUI, Taipy, or Flutter
IDE: Jupyter Lab, Ollama
LLM: Gemini | Llama 3.2 | OpenAI ChatGPT | Anthropic | Local models
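To see the local setup in action, here is a minimal sketch of calling llama3.2 through a locally hosted Ollama server from Python, using only the standard library. It assumes Ollama is running on its default port (11434) and that the `llama3.2` model has already been pulled; the helper names are illustrative, not from the series.

```python
import json
from urllib import request

# Default endpoint for a locally hosted Ollama server
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt: str, model: str = "llama3.2", stream: bool = False) -> bytes:
    """Build the JSON request body for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": stream}).encode()

def generate(prompt: str, model: str = "llama3.2") -> str:
    """Send a prompt to the local Ollama server and return the generated text."""
    req = request.Request(
        OLLAMA_URL,
        data=build_payload(prompt, model),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        # With stream=False, Ollama returns one JSON object;
        # the generated text is in its "response" field.
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("In one sentence, what is RAG?"))
```

The same request shape works for any model you have pulled locally; swapping the `model` field is all it takes to compare Llama 3.2 against other local models.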
Comments

Love your approach here. I'm definitely using this in my project. I'm curious to know if there is a possibility we can connect on Discord or something?

CrazyIndianGuy