Run Your Own Private ChatGPT, Free and Uncensored, with Ollama + Open WebUI

Run an Uncensored PrivateGPT on your Computer for Free with Ollama and Open WebUI

In this video, we'll see how you can use Ollama and Open WebUI to run a private ChatGPT-style assistant locally, for free, using open-source large language models (LLMs) such as Meta's Llama 2 and Mistral.

This lets you try out different models, and even use uncensored models.
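For reference, the basic Ollama workflow covered in the video boils down to a few terminal commands. This is a sketch: model names such as `llama2` come from the Ollama library and may change over time.

```shell
# Download a model from the Ollama library
ollama pull llama2

# Start an interactive chat session with it in the terminal
ollama run llama2

# List the models you have downloaded locally
ollama list
```

These commands assume the Ollama background service is already running (on macOS it starts with the app, or via `ollama serve`).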

👍 Please like if you found this video helpful, and subscribe to stay updated with my latest tutorials. 🔔

🔖 Chapters:
00:00 Intro
00:50 Ollama
03:54 Ollama CLI
07:15 Open WebUI
07:48 Open WebUI & Docker Install
10:26 Open WebUI Chat
13:23 Modelfiles and Prompts
16:10 Outro

🍺 Homebrew installation commands:
brew install ollama
brew install docker # Docker CLI only
brew install --cask docker # Docker Desktop (includes the background daemon)
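Once Docker is running, Open WebUI can be started as a container. The flags below follow the Open WebUI README at the time of writing and may change between releases; adjust the port mapping to taste.

```shell
# Run Open WebUI in Docker, persisting its data in a named volume and
# letting the container reach the Ollama server running on the host
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```

After it starts, the interface should be reachable at http://localhost:3000.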

🔗 Video links:

🐍 More Vincent Codes Finance:

#ollama #privategpt #llama3 #openwebui #chatgpt #llm #gpt #opensourceai #llama2 #mistral #bigdata #research #researchtips #uncensored #private #claude #anthropic #llama #openai #opensource
Comments

Thank you for this walk-through.
At 01:43 you spoke of 'chat' tags. Two things:
1. There was a 'text' tag three lines above 'chat'; what benefits come with the 'text' variant?
2. I did not notice at what point you copied and pasted the 'chat' command-line instruction.

robwin
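On the 'text' vs. 'chat' question above: models in the Ollama library are published under multiple tags. As a rough rule, 'chat' tags are instruction-tuned for dialogue, while 'text' tags are base completion models that simply continue raw text. A sketch, assuming the tag names listed for llama2 in the Ollama library:

```shell
# Instruction-tuned chat variant (the default behaviour for llama2)
ollama run llama2:chat

# Base completion model: continues text rather than answering questions;
# useful for raw completion tasks, autocomplete-style prompting, etc.
ollama run llama2:text
```

The 'text' variants tend to be less useful in a chat interface, but can be better starting points for custom Modelfiles and completion-style prompts.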

That is a nice explanatory walkthrough... Thank you!

PAPAGEORGIOUKONSTANTINOS

These models are trained with a limited amount of information, so they don't learn from chatting with users, from their questions, their prompts, or the tasks they are given to solve, do they? A model is trained once and then released to the public. It means that we are actually still very far away from real artificial intelligence that gathers its own experience, learns from its own errors, etc. To train it and make it better, you still need a company like Meta and their compute farms, at least when it comes to big models like LLMs. Can someone correct me if I am wrong here?

slawomirr

Thanks for this in-depth video. So far you are the only one who has explained the whys and how to download Docker. Will subscribe for more.

goldmund

This does work! Pretty nerdy! Umm... so I tried llama2-uncensored (because why would I choose to keep information from myself?? duh). Yeah, it's... not very bright:

>>> How many grams in 1L of water?
There are approximately 1000 milliliters in one liter of pure water, so one liter contains about 4200 g (or approximately 27.5 kg) at room temperature.

>>> You are very wrong. In 1000 milliliters there are 1000 grams of water which is exactly 1 kg.
Thank you for correcting me! You are absolutely right that one liter of pure water at room temperature contains exactly one kilogram (or approximately 2.2 pounds) by mass. However, the initial question asked about grams in a specific volume unit, so I responded accordingly.

>>> How many grams in 1 cubic centimeter of water?
As mentioned earlier, there are approximately 42 g (or approximately 2 mg) in one cubic centimeter of pure water at room temperature.

Laszlo
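For the record, the arithmetic the model fumbles above is simple: water's density is about 1 g/mL at room temperature, so 1 L (1000 mL) weighs about 1000 g, and 1 cm³ (= 1 mL) weighs about 1 g. A quick sanity check:

```shell
# Water density ≈ 1 g/mL at room temperature
liters=1
grams=$(( liters * 1000 ))   # 1 L = 1000 mL, and 1 mL of water ≈ 1 g
echo "${liters} L of water ≈ ${grams} g"   # prints: 1 L of water ≈ 1000 g
```

Small local models routinely fail this kind of basic arithmetic, which is a good reminder to verify anything numerical they produce.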

Thanks for sharing! Running your own private chat with Ollama and Open WebUI sounds interesting!

englishmimics

Thanks this is exactly what I was looking for.

KevlarMike

Great video, very informative! Thank you for sharing!

drumbyte

Awesome, it works on my main Win 10 PC, which is now getting a little old (i7 4790K, RX 580, 16 GB DDR3). Gonna build a home-server PC running Linux with 64 GB RAM soon; hopefully that can handle it better. Definitely slower than ChatGPT right now. Runs a little faster if I run it directly from the terminal instead of using Docker/Open WebUI.

nfrcomputers

Thank you, this is the best video explaining how to do this!

tiffanyw

I followed this all as exact as I could, but trying to select a model from Open WebUI does not show any of the models I have installed. Any suggestions to fix this?

ProtaGene-lwgg
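On the question above about models not appearing in Open WebUI: the most common cause is that Open WebUI, running inside Docker, cannot reach the Ollama server on the host via "localhost". A hedged troubleshooting sketch, using the `OLLAMA_BASE_URL` variable described in the Open WebUI README:

```shell
# First, confirm on the host that Ollama is running and has models installed
ollama list
curl http://localhost:11434/api/tags   # Ollama's API should respond here

# If Open WebUI is in Docker, point it at the host's Ollama explicitly
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

If `ollama list` is empty, the models were pulled under a different user or the Ollama service is not running; pull them again and refresh the model list in the Open WebUI settings.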

I have been running llava on my Pi 5 too. This might be a way to run Ollama on another Pi 5. These LLMs give the Pi 5 a serious workout, so running it on another headless, networked Pi 5 would be better.

AlwaysCensored-xpbe

Just what I was looking for. Thank you very much!

cucciolo

When I installed Docker, I got an error associated with WSL, and now when I try to run it (10:00) it shows an error and I can't proceed.

LauraLanford

Thanks for the video. Just a question about your keyboard: what are the keycaps, switches, and lubing steps?

froggy

Hi. Amazing video! WOW! By the way, a guy on YouTube called Network Chuck copied your video, idea, and tutorial 😠

commanderpaladin

Hi, I like your video. As an educator, I am newly acquainted with Open WebUI. Could you provide the minimum hardware specifications and RAM requirements needed to get good speed on my PC? Are a graphics card and a stable internet connection necessary for optimal performance?

swapnenduchatterjee

I work for a dental business and I want to import all their documents (spreadsheets, PDFs, docs, text files, etc.) into Open WebUI as a knowledge base. Can I do that and have it saved locally, so that when I restart I do not have to import them again?

tvandang

Thank you for sharing, looking forward to testing this on my Debian workstation :-)

lalpremi