Local ChatGPT AI Server with LM Studio + Open WebUI

This video explains how to run your own ChatGPT-like LLM AI server locally on your home network. That means it runs only on your computer, and no one besides you can access that information (assuming your network is secure, of course ;)).

The tools I'm using are LM Studio and Open WebUI on Fedora Linux 39.

I do NOT use Docker for this Open WebUI installation, but I DO use a Python virtual environment.
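The non-Docker setup described above can be sketched roughly as follows. This is a minimal sketch, not the video's exact commands: the directory names are assumptions, and the video installs Python 3.11 specifically, so substitute `python3.11` for `python3` if your distro's default Python differs.

```shell
# Hypothetical sketch of a Python virtual environment for Open WebUI
# (paths are assumptions; the video uses Python 3.11 on Fedora 39).
python3 -m venv ~/open-webui-venv       # create the virtual environment
. ~/open-webui-venv/bin/activate        # activate it
pip install --upgrade pip               # keep pip current inside the venv
# Dependencies are then installed inside the venv from the cloned
# Open WebUI repository, keeping them isolated from the system Python.
```

Because everything lives inside `~/open-webui-venv`, removing that directory cleanly uninstalls the whole environment.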

PC:
AMD RX 7700 XT
Fedora Linux 39

Links:

Chapters:
0:00 Intro
0:10 The idea
0:54 Install LM Studio
1:30 Give LM Studio AppImage execution permissions
2:00 Start and config LM Studio Server
2:30 Install Open WebUI
3:20 Install Python 3.11
3:55 Download repository Open WebUI
4:09 Create Python Virtual Environment
4:44 Install Python packages and run Open WebUI
5:24 Connect Open WebUI with LM Studio API
6:30 IP address in your network
6:44 Running the Open WebUI on your mobile
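For the "Connect Open WebUI with LM Studio API" step above, a quick reachability check like this can help. This is a sketch under assumptions: LM Studio's local server exposes an OpenAI-compatible API, by default on port 1234; the exact port and URL depend on your LM Studio server settings.

```shell
# Hypothetical sketch: check that the LM Studio server is up before
# pointing Open WebUI at it. Port 1234 is LM Studio's default; adjust
# BASE_URL if you changed it in the server settings.
BASE_URL="http://localhost:1234/v1"

# Lists the loaded models if the server is running, otherwise prints a hint.
curl -s "$BASE_URL/models" || echo "LM Studio server not reachable"
```

To reach the server from a phone on the same network (as in the last chapter), replace `localhost` with your PC's LAN IP address.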
Comments

Author

Thank you for the information. I ran it successfully on my Linux machine.

kaanxwebp
Author

How do I install Open WebUI on a Windows PC/server?? Please!!

batboyboy
Author

"GPU (LM Runtime Dependent)"
This is the message shown when viewing resources in the settings. On the left, under CPU, some details are visible, but on the right, under GPU, nothing appears.
"No LM Runtime found for model format 'gguf'!"
Error loading the model.

I'm on Windows.

MrCans