Run A.I. Locally On Your Computer With Ollama

Get up and running with large language models using Ollama.
Run Llama 3.1, Phi 3, Mistral, Gemma 2, and other models. Ollama is available on Windows, macOS, and Linux.
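For anyone who wants to script against it rather than use the interactive `ollama run` prompt, here is a minimal sketch of talking to the local REST API that Ollama exposes. It assumes the server is running on its default port (11434), the Python requests library is installed, and the llama3.1 model has already been pulled:

```python
# Minimal sketch: ask a locally running Ollama server a question.
# Assumes the default port (11434) and that llama3.1 has been pulled
# beforehand with `ollama pull llama3.1`.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1",
        "prompt": "Explain what a llama is in one sentence.",
        "stream": False,  # return the full reply at once instead of streaming
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])
```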

REFERENCED:

WANT TO SUPPORT THE CHANNEL?

DT ON THE WEB:

FREE AND OPEN SOURCE SOFTWARE THAT I USE:

Your support is very much appreciated. Thanks, guys!
Comments

To those who want the one-click approach: there's a Flatpak that provides a GUI with a chat interface and lets you download, change, or remove models. Name: Alpaca.

jacksonreventan

Hey DT, Ollama is in Arch Linux's official repos, BTW!

gnulinuxunilung

I have been running Ollama and Open WebUI on a dedicated server/machine in my house. This way I can use it with any device that is connected to my network.

Teklynk-tq
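That network setup works because Ollama is just an HTTP server. A minimal sketch of querying a remote instance from another device, assuming the server was started with OLLAMA_HOST=0.0.0.0 so it listens on the LAN, and using 192.168.1.50 as a hypothetical address for the dedicated machine:

```python
# Sketch: talk to an Ollama instance running on another machine in the
# house. Assumes it was launched with OLLAMA_HOST=0.0.0.0 so it accepts
# LAN connections; 192.168.1.50 is a hypothetical address.
import requests

OLLAMA_URL = "http://192.168.1.50:11434"

reply = requests.post(
    f"{OLLAMA_URL}/api/generate",
    json={"model": "llama3.1", "prompt": "Hello from across the LAN!", "stream": False},
    timeout=120,
)
print(reply.json()["response"])
```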

Aardvark comes before addax and agouti alphabetically; Ollama has got it in third place. Not very bright of it. Naughty llama.

derekr

I am able to run a small Llama 3.2 1B model on a Raspberry Pi 5, with Docker and Open WebUI. Really awesome! Without a GPU it runs fine on the CPU, though it hits 100% CPU during a query; that's just a short burst while it's being used, but it's workable.

sridhartn

oatmeal-bin in the AUR gives you a nice TUI front end for Ollama.

tylerdean

Recently started using Ollama with Gemma 2 (9B) on my main PC (AMD Ryzen 5900H + 32 GB RAM). Really good so far; the responses are very solid.
I've just started using it and haven't really dived into the complex use cases.
So much better than paying for a subscription to an LLM that depends on the internet!

marqueluxuriante

For a web UI, Open WebUI can be used with Ollama, and you can access it from any device inside your network.

sridhartn

I'm building a budget monster PC with two old 14-core Xeons, a Titan Xp GPU for AI (the Skyrim Herika mod), and maybe an AMD RX 6800 XT or something to actually run my games, and hopefully give it 128 GB (shooting for 256 GB) of RAM.

P.S. I love the sound of your keyboard 😂

robotron

The "alphabetical" list of mammals:

1. Addax
2. Agouti
3. Aardvark

tiamem

This was simple to understand, bro. Thanks!

kiwibruzzy

Hollama is an Ollama client, if you prefer a GUI.

arghya_

I wonder if you could get the smart-ass comments of Ryan Reynolds.

Vhoover

Hey DT, please make a video on how to integrate Ollama with Vim, Neovim, Emacs, etc.

GhostCoder

Can it draw or interact with pictures and video? Is it able to "see" things, or is it strictly text-based?

MeMyself-gffn
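Out of the box these are text models, but Ollama can also run multimodal models such as LLaVA that can describe images (it does not generate pictures or handle video itself). A sketch of the image side, assuming the llava model has been pulled and photo.jpg is a hypothetical local file:

```python
# Sketch: with a multimodal model such as llava pulled, Ollama's API
# accepts base64-encoded images alongside the prompt.
# "photo.jpg" is a hypothetical local file.
import base64
import requests

with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

reply = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llava",
        "prompt": "What is in this picture?",
        "images": [image_b64],
        "stream": False,
    },
    timeout=300,
)
print(reply.json()["response"])
```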

It is in the extra repo on Arch under the name ollama.

zeocamo

Can it interact and learn from the web on its own or does it only work from within the files you download for it? Can it grow organically, adding and organizing information on its own? Can it remember and infer things from its previous interactions indefinitely?

MeMyself-gffn
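It does none of those things on its own: it cannot browse the web, and it only works from the model weights you download. Any "memory" is just the chat history your client sends back with every request, as this sketch against the /api/chat endpoint (assuming llama3.1) illustrates:

```python
# Sketch of how "memory" actually works: the model only sees whatever
# history you resend with each request. Nothing is learned or stored
# server-side between sessions.
import requests

messages = []  # the whole conversation lives in this list, client-side

def ask(prompt: str) -> str:
    messages.append({"role": "user", "content": prompt})
    reply = requests.post(
        "http://localhost:11434/api/chat",
        json={"model": "llama3.1", "messages": messages, "stream": False},
        timeout=120,
    ).json()["message"]
    messages.append(reply)  # feed the answer back in on the next turn
    return reply["content"]

print(ask("My name is Derek."))
print(ask("What is my name?"))  # answered only because we resent the history
```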

Keeping locally installed models updated is a pain for me. I use image generation models locally, but I'd rather keep using ChatGPT or other online services; it makes more sense for my usage. Especially in the case of chat AI, I want them updated to the latest, because my questions are mostly about new tech, etc.

denizkendirci
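One way to take some of the pain out of updates is to re-pull every installed model's tag; pulls are incremental, so layers are only downloaded when the model has actually changed upstream. A hedged sketch using the /api/tags and /api/pull endpoints on the default port:

```python
# Hedged sketch: re-pull the latest version of every model the local
# Ollama server already has installed. Assumes the default port and
# that re-pulling a tag only fetches layers that changed upstream.
import requests

BASE = "http://localhost:11434"

installed = requests.get(f"{BASE}/api/tags", timeout=30).json()["models"]
for model in installed:
    name = model["name"]
    print(f"Updating {name} ...")
    requests.post(
        f"{BASE}/api/pull",
        json={"name": name, "stream": False},
        timeout=None,  # large downloads can take a while
    )
```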

Ollama is the best piece of local AI. I hope it gets better integration on the Linux desktop.

luigitech

I have been playing with Ollama for a while now. I don't have a powerful GPU, so I am running it on a dedicated server (I used Docker in a VM; just one more VM on that one)... It's not fast, but it's fun to play with. I also have Stable Diffusion and AUTOMATIC1111 installed. I hope someone writes up how to get them to talk to each other soon...

benstechroom
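Getting them to talk is mostly plumbing between two local HTTP APIs. A speculative sketch: ask Ollama to write an image prompt, then hand it to the Stable Diffusion web UI's txt2img endpoint, assuming AUTOMATIC1111 was launched with its --api flag on the default port 7860:

```python
# Speculative sketch: Ollama writes an image prompt, AUTOMATIC1111's
# txt2img endpoint renders it. Assumes Ollama on :11434 and the Stable
# Diffusion web UI started with --api on its default port 7860.
import base64
import requests

idea = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1",
        "prompt": "Write a short, vivid Stable Diffusion prompt for a cozy cabin.",
        "stream": False,
    },
    timeout=120,
).json()["response"]

image = requests.post(
    "http://localhost:7860/sdapi/v1/txt2img",
    json={"prompt": idea, "steps": 20},
    timeout=600,
).json()["images"][0]

with open("cabin.png", "wb") as f:
    f.write(base64.b64decode(image))
```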