How To Host AI Locally: Ollama and Open WebUI

AI chatbots are taking over the world. But if you want to guarantee your privacy when using them, running an LLM locally is going to be your best bet. In this video I collaborate with The Hated One to show you how, and to explain some AI terminology so that you understand what's going on.

00:00 Your Data is Used to Train Chatbots
01:11 Understanding AI Models
01:28 LLM
02:23 Parameters
03:34 Size
04:51 AI Engine and UX
07:15 Tutorial, courtesy of The Hated One
08:07 Ollama
12:40 Open WebUI installation
13:23 Docker
15:30 Setting up Open WebUI
16:38 Choose to Take Control of Your Data
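For reference, the tutorial steps above look roughly like this on a Linux machine with Docker installed (the model name is just an example; follow the timestamps for the full walkthrough):

```shell
# 1. Install Ollama (official install script for Linux) and pull a model
curl -fsSL https://ollama.com/install.sh | sh
ollama run llama3.2          # downloads the model on first run, then chats

# 2. Run Open WebUI in a Docker container, pointed at the local Ollama server
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui ghcr.io/open-webui/open-webui:main

# 3. Open http://localhost:3000 in a browser and create a local admin account
```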

The biggest advantage of open-source models is that you can fine-tune them using your own instructions while keeping all data private and confidential. Why trust your data to someone else when you don't have to?
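As a sketch of what "your own instructions" can look like in practice, Ollama lets you bake a system prompt and parameters into a local model via a Modelfile. The model name `llama3.2` and the prompt text below are placeholder examples:

```shell
# Hypothetical example: build a customized local model with an Ollama Modelfile.
cat > Modelfile <<'EOF'
FROM llama3.2
PARAMETER temperature 0.7
SYSTEM "You are a privacy-focused assistant. Keep answers concise."
EOF

ollama create private-assistant -f Modelfile   # build the custom model
ollama run private-assistant                   # chat with it locally
```

Note that this customizes the prompt rather than retraining the weights, but everything stays on your machine.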

A huge thank you to The Hated One for his tutorial.
You can find his playlist for staying anonymous here:

The first video in our AI privacy series, about private cloud-based AI:

Brought to you by NBTV team members: The Hated One, Reuben Yap, Lee Rennie, Cube Boy, Sam Ettaro, Will Sandoval and Naomi Brockwell

To support NBTV, visit:
(tax-deductible in the US)

Visit our shop!

Our eBook "Beginner's Introduction To Privacy":

Beware of scammers, I will never give you a phone number or reach out to you with investment advice. I do not give investment advice.

Watch this on Odysee!
________________________________________________________________________
Here are a bunch of products I like and use. Using these links helps support the channel and future videos!

Recommended Books:

Beginner's Introduction To Privacy - Naomi Brockwell

Permanent Record - Edward Snowden

What Has Government Done to Our Money? - Murray Rothbard

Extreme Privacy - Michael Bazzell (The best privacy book I've ever read)

No Place to Hide: Edward Snowden, the NSA, and the U.S. Surveillance State - Glenn Greenwald

Some of my favorite products to help protect your privacy!

Faraday bag (signal stopping, to protect your fob, credit card, computer, and phone)

Data Blocker (if you're charging your phone in an unknown port, use this so that no data is transferred)

Camera tape (electrical tape is the best tape for covering phone and computer cameras)

USB-C to ethernet adapter:

Privacy Screens (use your phone and computer in public? Keep your information safe!)

Computer: (Search for the size right for your computer)

Phone: (Search for the size for your phone, decide whether you want glass or plastic!)
Comments

I've been waiting for your take on AI and I was not disappointed. Great work as always. Currently using Msty locally.

Saintel

This topic just randomly appeared in my feed. Yours is the 3rd video I clicked on, and it's the only one that made sense to me. You explained everything so well, thank you.

AndiCee

I'm really glad to see these two privacy heroes collaborating to make us aware of our privacy.

funZ

YES! This is literally on my to-do list for next week, and now I have a great source of info to start from. You're the best Naomi :D <3

ninjanape

You are doing a great job. I was surprised to see my other favorite channel. :D good luck. My other favorite IT Channels: NetworkChuck, David Bombal, The Hated One, and An0n Ali. :D

baloo

I honestly can't appreciate this enough!

terrorbilly

I've done this with locally based AI image-processing software like ComfyUI; can't wait to now try chat as well. I really appreciate the background and tutorial.

tikishark

Best content for Nov 1, 2024. You get my vote. Great job!

markldevine

Great video! Kinda glad this was recommended by Google...wonder how Google knew I was interested.

GregBressler

Love it! Using this I won't need to agree to terms that allow someone to own MY data. It is mine.

johnlegend

Omg I forgot about this! Glad you made another video for it!

johnlegend

This right here can replace the internet

Warframeplayer-sl

I did some experiments with different Ollama AI models, measuring hard-drive/SSD usage in gigabytes, RAM usage, and time in seconds to get an answer, across three LLaMA model sizes (in billions of parameters).

CONCLUSION: I suggest sticking with the smallest model (7B) for users with an average laptop.
If you have more than 16 GB of RAM, you can use the 33B model, but it takes some time (a while on my old CPU) to get an answer.


Here is the raw data from my experiment:

Model        Time (s)    RAM (GB)    Disk (GB)
LLaMA-7B     5-15        10-12       13
LLaMA-33B    15-30       30-40       65
LLaMA-70B    >30         70+         130
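Those disk figures roughly match a back-of-the-envelope rule: an unquantized FP16 model stores about 2 bytes per parameter (quantized GGUF files can be much smaller):

```shell
# Rule of thumb: size_in_GB ~ parameters_in_billions * bytes_per_parameter
# (2 bytes per weight for FP16; quantization shrinks this considerably)
params_b=7
bytes_per_param=2
echo "~$(( params_b * bytes_per_param )) GB"   # ~14 GB for a 7B FP16 model
```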

raphaelcazenave-leveque

If you have an Apple Silicon Mac, LM Studio is another option that also has a CLI tool (lms). It supports MLX (optimized for Macs) and GGUF model formats.

theaugur

Nice to see you covering this topic! I already have a few of these installed, but I'm glad you're covering it, so I stopped in to hit like.

shabadooshabadoo

Comprehensive overview, thanks. A few months back I saw a YouTube video by Data Slayer running one of these models on an 8GB Raspberry Pi 5 with impressive results.

keithwhite

I love this channel. Clear and easy instructions, even for people who aren't amazing with computers. I now have a private AI!

karlamellow

Really helpful! I love the privacy videos! 💪

BibleOSINT

This was suggested to me and it was totally worth watching. Thanks. I wrote down notes from the video for future reference, too. I actually came here for a local bolt.diy setup, but Ollama required Docker, and then I have to run Ollama alongside bolt.diy with no Docker-free option.

xkeshav

They should make the naming this clear with image-generation models too. I love that you can tell the size from the "B" at the end.

generated.moment