Easiest way to get your own Local AI: Ollama | Docker WSL Tutorial

With Ollama Web UI you'll not only get the easiest way to run your own local AI on your computer (thanks to Ollama), but it also comes with OllamaHub support, where you can find prompts, Modelfiles (to give your AI a personality) and more, all of it powered by the community.

00:00 Prerequisites
00:47 Install it on WSL
02:06 Docker Installation (Linux/WSL)
03:21 Activate GPU Compatibility
04:11 Installation
05:00 How to update it
05:19 Ollama WebUI
05:42 Install a New Model
06:36 Use your new model
07:17 OllamaHub
09:00 Windows Limitations
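For quick reference, here is a minimal sketch of the Docker commands the chapters above walk through. The image names are the current upstream ones (the video used the older ollama-webui image), so check the video and blog post for the exact versions used there:

# Ollama server; drop --gpus=all if you are running CPU-only
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama

# Web UI, reachable on http://localhost:3000 (current Open WebUI image,
# which may differ from the image name shown in the video)
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main

# Pull a model into the Ollama container
docker exec -it ollama ollama pull llama2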

Or join this channel to get access to perks:

You can find me on:

Hope this was useful, and if you have any questions, write me a comment below.
Thank you for watching (~ ̄▽ ̄)~
Comments

Great video, very easy to follow. I like your style with the video description and the commands in your blog. One thing missing in the video is that virtualization needs to be enabled in the BIOS, otherwise the Ubuntu installation will fail. It is most likely not enabled by default on most PCs.

martinzipfel
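One quick way to check this from Windows before installing WSL (a cmd/PowerShell one-liner, not something shown in the video):

# Look for "Virtualization Enabled In Firmware: Yes" under Hyper-V Requirements;
# if a hypervisor is already running, systeminfo reports that instead, which
# also means virtualization is enabled.
systeminfo | findstr /C:"Virtualization" /C:"hypervisor"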

My God this is absolute gold. You have really given the world something fantastic here. The cut and paste commands with parameters are just above and beyond anything anyone has done. It really is fantastic when a professional like you helps the masses have something special like this. Thank you for all this time and effort. There are so many ways that I want to deploy this.

BillHawkins

One of the very few tutorials which just ran perfectly, thank you! Subscribed.

alexanderpopov

Great stuff, thank you for your contribution to the Home Assistant community.

johnpy

Amazing tutorial! Keep up the good work.

VairalKE

Thanks for this great tutorial.
3:23 If anyone has problems with GPU activation, just enter the commands one by one; executing the entire block at once didn't work for me.

ThePSYBORG
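For reference, the GPU activation block is roughly the NVIDIA Container Toolkit setup below (taken from NVIDIA's current install guide; the blog post may use slightly different repository URLs). Running the commands one at a time, as suggested above, avoids the paste issues:

# Add NVIDIA's repository key and source list
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey \
  | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
  | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' \
  | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

# Install the toolkit and register it with Docker
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker

# WSL has no systemd, so restart Docker via the service wrapper
sudo service docker restart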

Great instructional video - big kudos 👍; one of the few that runs out of the box. Thanks also for the accompanying web page.

fzuern

Thanks for the great video! Could you please let me know how I can make the API accessible to Home Assistant when it runs on a different machine?

gaborwraight

Hi, thanks for the video, really great help. Wanted to ask: if I only want to run in CPU-only mode, do I still need to do the "Activate GPU compatibility" section, or can I go straight to installation?

ikoyski
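For CPU-only setups, the GPU compatibility step should be skippable; the only change is starting the Ollama container without the GPU flag (a sketch, not confirmed in the video):

# Same run command as with a GPU, minus --gpus=all
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama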

This is great! Thank you! I am CPU only. I am running this on my home server and seeing ollama using lots of CPU. I did just start using it a few minutes ago. My question is will I see the CPU demand decrease when I am not asking questions and the container has been up for a while or will it always have a high CPU draw when the container is running? I am wondering how this will tax my system as I have about 20 other containers running as well.

EDIT: as soon as I posted this, CPU usage went down :)

donnyf

WSL users: Don't forget to run "sudo service docker start" before the hello-world test. Also, the command to restart Docker is different because WSL has no systemd, so run "sudo service docker restart" instead.

niquedegraaff
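The two commands from that comment, collected for copy-paste:

sudo service docker start     # before the hello-world test
sudo service docker restart   # instead of: sudo systemctl restart docker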

One issue: I can't access this from another machine on my network, whether using the host IP, the WSL IP, or the Docker IP.

shuntera

Can I install Docker Desktop directly instead of using Linux commands?

AngelicShimmer

Do you know how to run CrewAI with Ollama on Docker?

atesone

I ran into a problem: after the commands for NVIDIA/GPU support, I get this error message:

Status: Downloaded newer image for
docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout:, stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: initialization error: WSL environment detected but no adapters were found: unknown.

I have a 3060 and I'm not sure what to do next. Can you help me?

rodssantos
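That "no adapters were found" error usually means the GPU is not visible inside WSL at all. A common first check (not a guaranteed fix): make sure nvidia-smi works inside the WSL shell; if it doesn't, update the Windows-side NVIDIA driver, restart WSL, and then retry the container test:

# Inside WSL: the Windows NVIDIA driver should expose the GPU here
nvidia-smi

# If that fails, update the Windows NVIDIA driver, run "wsl --shutdown"
# from Windows, reopen the WSL shell, and test the container again
docker run --rm --gpus all ubuntu nvidia-smi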

My localhost:3000 doesn't have the Llama icon; it looks like an "OI", and it doesn't let me load Llama LLMs. I followed all the prompts. Any ideas?

Electroxd

My NVIDIA container goes to exited status after a while and I can't restart it. Is it just me?

zephirusvideos

Can you show us how to open Ollama on WSL to other computers, making it work like a server?

phuongdang
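One common approach for reaching the API from other machines, given WSL2's NAT networking (a sketch, assuming the Ollama container is published with -p 11434:11434 as in the video): forward the Windows host's port into WSL and open the firewall. From an admin PowerShell on Windows, with <WSL_IP> replaced by the address from "wsl hostname -I":

# Forward port 11434 from the Windows host into WSL (repeat with 3000 for the web UI).
# Note: the WSL IP can change after a reboot, so the portproxy may need recreating.
netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=11434 connectaddress=<WSL_IP> connectport=11434

# Allow inbound connections through the Windows firewall (rule name is arbitrary)
netsh advfirewall firewall add rule name="Ollama API" dir=in action=allow protocol=TCP localport=11434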

I am getting this error: "docker run --gpus all nvidia-smi
docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout:, stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: initialization error: load library failed: libnvidia-ml.so.1: cannot open shared object file: no such file or directory: unknown.
ERRO[0000] error waiting for container: context canceled". Do you know how to fix it? Thanks

gigiipaq

sudo nvidia-ctk runtime configure --runtime=docker: there is no such path.

rvsn