Ollama UI - Your NEW Go-To Local LLM

Open WebUI is a fantastic front end for any LLM inference engine you want to run.
Aura is sponsoring this video

Join My Newsletter for Regular AI Updates 👇🏼

Need AI Consulting? 📈

My Links 🔗

Media/Sponsorship Inquiries ✅

Links:
Comments

As a contributor (I merged a single PR 😊), but mostly as a very early adopter of this project, I'm always stoked to see people talking about open-webui.

guinea_horn

This is definitely my local go-to now. What an amazing project.

AZisk

I've been with you for over a year, and it's been amazing watching you dominate this LLM news space. For example, your snake game has become a standard in the industry now!!! Like you, I'm constantly in the LLM lab, and I keep coming across your name quoted regarding one large language model or another. Awesome job carving out a niche - MUCH LOVE FROM NEW ORLEANS 🔥💪

bpdxmmq

A few more cool features:

1. Image generation. I hooked this up to both a local automatic1111 instance and DALL·E 3 with an API key. It's a bit of an odd workflow: you prompt it, and the response will have a little picture button under it. I loaded a model fine-tuned for image prompts, so the responses are cool.
2. Hook up OpenAI models as extra chat choices with your API key.
3. Pull any LLaVA model, and you can hit the plus button to load a picture and ask questions about it (see the sketch below).

jimigoodmojo
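
To illustrate point 3 above: under the hood this is Ollama's generate endpoint with a base64-encoded image attached. A minimal sketch in Python, assuming Ollama on its default port (11434), a pulled llava model, and a placeholder photo.jpg; Open WebUI's plus button presumably does the same encoding behind the scenes.

    import base64
    import requests

    # Encode a local image; "photo.jpg" is just a placeholder filename.
    with open("photo.jpg", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    # Ollama's generate endpoint accepts base64 images for multimodal models.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llava",
            "prompt": "What is in this picture?",
            "images": [image_b64],
            "stream": False,
        },
    )
    print(resp.json()["response"])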

I don't usually post messages, but your video changed that. Very well done! I followed your steps, and within minutes I had Llama 3 running in Open WebUI with Docker and Ollama on a Windows computer. Thank you, sir. Keep up the great work!

jlccVPServ

You can actually skip the git clone step; everything is contained in the Docker image (see the command below).

retromancer
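
For reference, the single-command install that makes the clone unnecessary; this is roughly the one-liner from the Open WebUI README (image tag and port mapping may have changed since the video). The --add-host flag lets the container reach an Ollama server running on the host:

    docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
      -v open-webui:/app/backend/data --name open-webui --restart always \
      ghcr.io/open-webui/open-webui:main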

Really awesome. This is something I’ve been looking for for a long time. The one I built myself is terrible.

spencerezralow

I was fishing for exactly this yesterday... unbelievable, thank you! P.S.: Can you make a video about WebUI + Open Interpreter + a local LLM or LM Studio? Thanks!

garibacha

Thank you. Best tutorial on YouTube. Very clear.

PJ-higz

Thank you for your clear documentation; it was really helpful for setting up a complete system in a few steps. Great job!

giox

For those who use Raycast on Mac, there is an extension that does most of these features with a single shortcut. It's very cool, and you can use custom Modelfiles too (example below).

metobabba
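
For anyone who hasn't seen one, a custom Modelfile is only a few lines. A minimal sketch using Ollama's documented directives; the base model, parameter value, and system prompt here are invented for illustration:

    FROM llama3
    PARAMETER temperature 0.3
    SYSTEM """You are a terse coding assistant. Answer with code first, prose second."""

Register and run it with (code-helper is a made-up name):

    ollama create code-helper -f ./Modelfile
    ollama run code-helper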

A well-deserved sub and like, my good man!

Ruoall

Supposedly they are working on implementing a Perplexity-style search too! Pretty slick.

hicamajig

If you're using Docker, you don't have to clone that git repo to run it.

marcelbloch

It would be great to see a comparison between Ollama and LM Studio explaining the benefits of each and the reasoning for when to use one or the other. The one thing I haven't seen much of is how to leverage (if possible) other models from Hugging Face within Ollama; this is easy to do in LM Studio. For most other things I prefer Ollama, but I tend to use LM Studio to test new models that Ollama might not have readily available. (A sketch of the Hugging Face import follows below.)

trezero
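
On the Hugging Face question: Ollama's import path is a Modelfile that points at a local GGUF file, per its import documentation. A sketch with a placeholder filename:

    # Modelfile pointing at a locally downloaded GGUF file (placeholder name)
    FROM ./mistral-7b-instruct.Q4_K_M.gguf

Then create and run it:

    ollama create mistral-7b-instruct -f ./Modelfile
    ollama run mistral-7b-instruct

Any GGUF-format download from Hugging Face can be brought in this way; models in other formats need converting to GGUF first.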

I've been using this combo for a very long time. As well, I've edited my Docker container with OWU and customized the UI with my own titles, features, etc. Thanks for the vid, people are going to love this.

SiliconSouthShow

Wow, I just built a virtual AI girlfriend using Ollama. I'm trying it on the Llama 3 model, and recently migrated to one of the uncensored models. Goodbye wife, hello AI - LOL.

annonymous

What we need is multi-prompt templates (a series of prompts, run one at a time), including step repeats. That way we can have the LLM reflect on its previous answer before executing the next step in the series (a client-side sketch follows).

bdjblng
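
Until something like that is built into the UI, the reflect-then-revise loop can be sketched client-side against Ollama's chat endpoint. A minimal example in Python; the model name and prompts are placeholders:

    import requests

    URL = "http://localhost:11434/api/chat"  # Ollama's chat endpoint

    def chat(messages):
        # One non-streamed round trip; returns the assistant's reply text.
        resp = requests.post(
            URL, json={"model": "llama3", "messages": messages, "stream": False}
        )
        return resp.json()["message"]["content"]

    # Step 1: get a first draft.
    history = [{"role": "user", "content": "Explain RAG in two sentences."}]
    draft = chat(history)

    # Step 2: feed the draft back and have the model critique and revise it.
    history += [
        {"role": "assistant", "content": draft},
        {"role": "user", "content": "Reflect on your answer above: point out "
                                    "any errors, then give a corrected version."},
    ]
    print(chat(history))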

Yes, I knew talking about anticipation would put you over the top to release 😂

zippytechnologies

Just got this installed, thanks for the quick tutorial 🙂
Can't wait to explore all the features!

kyrilgarcia