Local GenAI LLMs with Ollama and Docker | DevOps and Docker Talk Ep. 262

Learn how to run your own local ChatGPT clone and GitHub Copilot clone by setting up Ollama and Docker's "GenAI Stack" to build apps on top of open-source LLMs and closed-source SaaS models (GPT-4, etc.). Matt Williams is our guest, walking us through all the parts of this solution and showing how Ollama makes it easier on Mac, Windows, and Linux to set up custom LLM stacks.
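
If you want to try the basic idea before watching, here's a minimal sketch (not from the episode) of calling a local Ollama server's REST API from Python. It assumes Ollama is already installed and serving on its default port 11434, and that you've pulled a model first (e.g. `ollama pull llama2`; the model name here is just an example).

    # Minimal sketch: query a local Ollama server via its /api/generate endpoint.
    # Assumes Ollama is running on localhost:11434 and llama2 has been pulled.
    import json
    import urllib.request

    def generate(prompt, model="llama2"):
        payload = json.dumps({
            "model": model,
            "prompt": prompt,
            "stream": False,  # return one JSON object instead of a token stream
        }).encode("utf-8")
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    print(generate("Explain what an LLM is in one sentence."))

The same endpoint is what app frameworks build on top of; the episode covers running the server itself via Docker and the GenAI Stack.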

Matt Williams
============

Nirmal Mehta
============

Bret Fisher
===========

Join my Community 🤜🤛
================

Chapters
========
00:00 Intro
01:27 Understanding LLMs and Ollama
03:13 Ollama's Elevator Pitch
08:36 Installing and Extending Ollama
17:14 HuggingFace and Other Libraries
19:21 Which Model Should You Use?
26:25 Ollama and Its Applications
28:54 Retrieval Augmented Generation (RAG)
36:41 Deploying Models and API Endpoints
40:35 DockerCon Keynote and LLM Demo
47:41 Getting Started with Ollama