Self-Hosted LLM Chatbot with Ollama and Open WebUI (No GPU Required)

Explore the power of self-hosted language models with us on Easy Self Host! In this video, we demonstrate how to run Ollama with Open WebUI, creating a private server-based environment similar to ChatGPT. We'll guide you through setting up Ollama and Open WebUI using Docker Compose, delve into the configuration specifics, and show how these tools provide enhanced privacy and control over your data. Whether you're using a modest setup or more powerful hardware, see the performance firsthand. Don't miss out on our insights on potential applications beyond chat, like note summarization in Memos. Subscribe for more self-hosted solutions and find the configuration files on our GitHub, linked below!
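
For reference, here is a minimal sketch of what such a Docker Compose file can look like. The image tags, port mapping, and volume names below are illustrative assumptions, not necessarily the exact files used in the video; those are on our GitHub, linked below.

# docker-compose.yml -- minimal sketch, assuming default images and ports
services:
  ollama:
    image: ollama/ollama:latest
    volumes:
      - ollama_data:/root/.ollama            # persists downloaded models
    restart: unless-stopped
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"                          # Open WebUI serves on 8080 inside the container
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434  # reach Ollama over the Compose network
    volumes:
      - open_webui_data:/app/backend/data    # persists chats, users, and settings
    depends_on:
      - ollama
    restart: unless-stopped
volumes:
  ollama_data:
  open_webui_data:

After saving the file, start the stack with "docker compose up -d", pull a model with "docker compose exec ollama ollama pull llama3" (the model name is just an example), and open Open WebUI in your browser at http://<server-ip>:3000.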

00:04 Introduction
01:07 Tutorial to run Ollama and Open WebUI (Docker Compose)
03:18 Running Docker Compose on the Server
03:38 Start Chatting on Open WebUI
06:01 Integration with Memos (experimental)

🔗 Links:
Comments

This is great! Thanks for making it easy. Easy self-hosting!

steve-maheshsingh

Very cool! Please consider more videos where you integrate this into other apps like Home Assistant and others. Thanks!

goodcitizen

Excellent! Especially using them together with the notes.

Edupc

Would you consider doing self-hosted transcription to notes in Memos? 😊 Like faster-whisper and pyannote (or a local alternative), to meeting minutes, to summarized notes?

nexuslux

Can you explain the usage of a SoftEther VPN site-to-site configuration, local to VPS?

MultiMmsh

Nice tutorial! Could you please tell me how I can add or find the tutorial or the Docker image that includes the LLM? Thanks

AFiB

I have created a chatbot on some PDF documents using Mistral, but when I send multiple queries at the same time it takes more than 2 to 5 minutes to respond. Can I deploy Mistral with this method without a GPU and get faster responses?

naveenpandey

Hi, may we know the hardware requirements to self-host Ollama and Open WebUI with Llama 3?

tanyongsheng

Is it possible to set a folder where we place our own PDFs, so the chatbot can answer based on them?

y._.

Great video! Any links on how to utilize a GPU?

manprinsen