Build your own LLM AI on a Raspberry Pi

You know, you don't need really expensive hardware to run your own LLM. In today's tutorial, I'll walk you through setting up and running Ollama, an open-source tool for running large language models locally, behind the Open-WebUI interface on a Raspberry Pi (these things are under $100 - seriously). By the end of this video, you'll have a fully functional Ollama installation that can be accessed through Open-WebUI from your home LAN, or even through a WiFi network created by the Raspberry Pi itself. (Note: if connecting to the Pi via "Pillama-Wifi" using a phone, you may need to turn off your mobile carrier's internet.)

Here's what we'll cover in today's tutorial:

Prerequisites - We'll go over the necessary hardware and software requirements for running Ollama on Raspberry Pi.

Setting up your Raspberry Pi - the initial setup of your Raspberry Pi, ensuring it's ready for the installation process.

Installing Docker and setting up the WiFi with an Ansible playbook - this part is really easy with Ansible, but you can run the commands manually if you don't have it.
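The video drives this step with an Ansible playbook; for those running the commands manually, a setup along these lines would bring up the same two services with Docker Compose. This is a sketch under my own assumptions - the service names, ports, and volume layout here are illustrative, not necessarily what the playbook uses:

```yaml
# Sketch of a manual alternative to the Ansible playbook (names/ports assumed).
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"       # Ollama's default API port
    volumes:
      - ollama:/root/.ollama # persist downloaded models across restarts
    restart: unless-stopped

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"          # browse to http://<pi-ip>:3000
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama
    restart: unless-stopped

volumes:
  ollama:
```

Save it as docker-compose.yml and start everything with `docker compose up -d`.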

Downloading your first model - I'll show you how to access the interface through a web browser, download your first model (tinyllama), and start using the AI.
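You can pull and query a model from the command line as well as from the browser. The snippet below is a sketch of talking to Ollama's REST API (it assumes the Ollama container is named `ollama` and is listening on its default port, 11434 - adjust for your own setup):

```shell
# Build a request for Ollama's /api/generate endpoint.
MODEL="tinyllama"
PROMPT="Why is the sky blue?"
PAYLOAD=$(printf '{"model": "%s", "prompt": "%s", "stream": false}' "$MODEL" "$PROMPT")
echo "$PAYLOAD"

# Pull the model first from inside the container:
#   docker exec -it ollama ollama pull tinyllama
# Then, with the server running, send the request:
#   curl http://localhost:11434/api/generate -d "$PAYLOAD"
```

With `"stream": false` the API returns one JSON object containing the whole answer, which is easier to read by eye than the default token-by-token stream.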

Accessing Ollama via Open-WebUI - Finally, we'll demonstrate how to access your running Ollama instance using Open-WebUI over the Pi's built-in "Pillama-WiFi" network - no internet connection required.

Please keep in mind that this tutorial assumes some basic familiarity with Linux command line interfaces and SSH. If you need more detailed explanations of any of these concepts, please feel free to leave a comment below, or check out my previous videos on the subject.

Ready to get started? Let's dive right into it! Don't forget to like, share, and subscribe for more exciting content in the future. If you have any questions or encounter any issues during this tutorial, please let us know in the comments below.

Happy learning, and see you in the next video!

(This description was largely written by the very LLM I built - I tweaked it a bit, but if it sounds more YouTube-y, now you know why...)

Links

Comments

Great video 👌 full of useful information. Thanks 🙏

techtonictim

Great video, lots of fun here!
Do you think the Pi AI kit could run a bigger model?

slevinhyde

Is it possible to run the tinyllama model without Open-WebUI and Docker? I want to do a tiny bit of reinforcement learning on the model, put it on my Pi, and integrate it into my local website.

johnfinlayson

I would like to run this in Docker Swarm as a service on RPis - any help there?

Tech-iHub-yj

Would I be able to connect an LCD screen, microphone module, speaker module, etc. and run the LLM as a handheld device?

Also, what changes to the code would that require?

IndraneelK

tinyllama and Coral or Hat AI (Hailo 8L)??

galdakaMusic

I have created my own LLM - how can I deploy it on Google Cloud and use it on a Raspberry Pi? Please tell me.

rachitrastogi

That is so painfully slow, it doesn't look worth it.

ApidaeY