Is Open WebUI the Ultimate Ollama Frontend Choice?

On 04/25/2024 I did a livestream where I made this video...and here is the final product. It’s a look at one of the most used frontends for Ollama. It's not perfect, but there is a lot to like.

Someone clarified something that I missed... It seems that you can specify the model to use in the prompt using the @ sign. This is great. They should highlight that in the docs and make it a bit more discoverable.

(They have a pretty URL because they pay at least $100 per month for Discord. If you help get more viewers to this channel, I can afford that too.)

00:00 Introduction
02:47 Getting Started with Open WebUI
04:01 Let's Set Up Open WebUI
04:51 How Often is Open WebUI updated
05:16 The Actual Install Process
06:52 The Parts of the UI
07:17 Setting the Settings
09:59 Connect to Multiple Models
11:19 Working with Prompts
13:04 Talking to a Website
13:36 Talking to Documents
14:54 What do you think?
Comments

Wow... this is the kind of detailed, helpful and to the point app review we should see more of from people. Thanks!

IdPreferNot

Have my subscription Matt. I like your highly clear and structured way of speaking.

jayd

I really appreciated this video. I've only been using this tool for about a week and was really excited to get answers to all of the confounding and non-working features I kept running into...only to find out that they're actually confounding or non-working. 😂

barneymattox

When you set additional hosts in the "Connections" settings, they act as redundancy, assuming you have the same models installed on each host. So if I serve multiple users, all using the same model at the same time, it queues up requests and sends each one to the next unoccupied host, in sequence. I've tested it locally with 3 separate hosts, and it works quite well. BTW thank you for the great video!

matthewbond
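The queuing behavior described above can be modeled as "send each request to the first idle host; if all are busy, queue it." The sketch below is an illustrative model of that policy only, not Open WebUI's actual code; the `Dispatcher` class and host URLs are hypothetical.

```python
# Illustrative model of first-idle-host dispatch with a fallback queue.
# This is NOT Open WebUI's implementation, just the policy the comment describes.
from collections import deque

class Dispatcher:
    def __init__(self, hosts):
        self.hosts = list(hosts)   # ordered list of Ollama host URLs
        self.busy = set()          # hosts currently serving a request
        self.queue = deque()       # requests waiting for a free host

    def submit(self, request):
        """Return the host that takes the request, or None if it was queued."""
        for host in self.hosts:
            if host not in self.busy:
                self.busy.add(host)
                return host
        self.queue.append(request)
        return None

    def complete(self, host):
        """Mark a host idle; if work is queued, it immediately takes the next request."""
        self.busy.discard(host)
        if self.queue:
            self.queue.popleft()
            self.busy.add(host)
            return host
        return None
```

With two hosts, a third simultaneous request waits in the queue until either host finishes, which matches the behavior the commenter observed.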

Thanks for teaching me how to get started. The only downside of Ollama is that it doesn't integrate directly with Hugging Face, but it can import raw GGUF files by manually filling out a Modelfile. It's amazing.

I basically fill out FROM, TEMPLATE, PARAMETER context size and PARAMETER stop words. Then import it. The result is perfect.

I even imported inside a Docker environment. Just place the model folder inside the mounted volume path, then open a shell ("bash") inside the container and do the import there.

MyAmazingUsername
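For readers who want to try the import described above, a minimal Modelfile might look like the sketch below. The file name, template, and parameter values are illustrative; check the model card for the prompt template your model actually expects.

```
# Illustrative Modelfile for importing a local GGUF file into Ollama.
# The template shown is ChatML-style; substitute your model's own format.
FROM ./my-model.Q4_K_M.gguf
TEMPLATE """<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
PARAMETER num_ctx 4096
PARAMETER stop "<|im_end|>"
```

The import itself is then `ollama create my-model -f Modelfile`, after which the model shows up in Open WebUI's model dropdown like any other.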

The new required login doesn't go to any remote site, it stays on the local computer. This way multiple users can store chat history and settings. I agree that it should be optional, but at least it's local.

JoeBrigAI

Good review. I have been using open-webui for a while and still learned a bunch of new stuff, thanks. It seems to get better all the time, which should continue, especially now that you've uncovered areas for improvement. BTW, I like the new chat archive feature.

joeburkeson

Excellent review.
Your voice and mannerisms were made for this.

TheRealTannerThoman

There's a tiny button after the response that gives you data on tokens per second, etc. I love that about this particular UI; it makes it easy to compare speeds.

OliNorwell

I deployed Open WebUI on my Kubernetes cluster and I am pretty happy with it. It makes it easy to test some LLMs and compare their output. I wish one could add LangChain code and select it as a model in the dropdown; then it would be easy to integrate your own RAG/agent pipeline.

Thank you for your videos! Your content is awesome!

wilhelm

The user login default worked well for me - at a company that can’t use cloud based LLMs for security reasons, the default workflow allows you to immediately install this tool and share it with regular users (who don’t know what a command line is). But I agree maybe there ought to be a “dev” switch that turns it off.
Really great video, looking forward to more.

tdorisabc

Thanks for this Matt, very easy to work with this tool!

xJarry

What would make a great addition to this would be a RAG backend for loading documents in bulk. A simple way to do this would be to mount an external volume into the Docker container, then have a file watcher load any new documents added to that directory. All documents would then be available to all users of Open WebUI for RAG use.

aamira
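The watcher half of that suggestion can be sketched with the standard library alone: periodically scan the mounted directory and report files that have appeared since the last scan. This is a hypothetical helper, not an Open WebUI feature; the function name, extensions, and the step of pushing results into a RAG index are all assumptions.

```python
# Sketch of the file-watcher idea: scan a mounted directory and return
# documents not seen before. A real integration would then feed each new
# file into the RAG index; that part is omitted here.
from pathlib import Path

def find_new_documents(directory, seen, extensions=(".pdf", ".txt", ".md")):
    """Return files under `directory` not yet in `seen`, updating `seen` in place."""
    new = []
    for path in sorted(Path(directory).rglob("*")):
        if path.is_file() and path.suffix.lower() in extensions and path not in seen:
            seen.add(path)
            new.append(path)
    return new
```

Calling this in a loop (or from a cron job) against the mounted volume gives the "drop a file in, it becomes searchable" workflow the comment asks for.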

I use open webui some but also use the command line. I'm not familiar enough with advanced usage of either, though. I appreciated this video and am looking forward to learning more. At this point, I'm just a sponge. Thanks!

jimlynch

Love your videos, mate. Even if we are on opposite sides of the fence re: dark mode! Cheers.

bigpickles

I love Open WebUI. I can download a GGUF model from Hugging Face and convert it directly into Ollama format in minutes using the GUI. And TTS is fantastic: hands-free, I can talk and listen. I even installed new voices. And I can web search, do RAG, many features indeed! ❤❤❤

DihelsonMendonca

User management is actually a good thing if you want to share your LLM with other people without giving them the ability to mess with your stuff.

alx

hi Matt, amazing content. Thank you for sharing your thoughts with us and chatting with me during your stream.

anotherhuman

I use Open WebUI every day and I love it! I love how it formats results nicely and stores the conversations for easy reference. The login page works with my password manager, so it's not that inconvenient, and I feel better that my conversations are kept private this way; privacy is a huge motivation for running a private AI, after all.

grokowarrior

I've been looking at this and other tools, and the one thing I find elusive is the ability to fine-tune a model with desired prompt/inference examples to help fast-track the usefulness of a newly downloaded model. Including this in your reviews would be amazing, if possible.

Treewun