LM Studio: How to Run a Local Inference Server - with Python Code - Part 1

Tutorial on how to use LM Studio without the Chat UI by running a local server. Deploy an open-source LLM with LM Studio on your PC or Mac without an internet connection. The scripts in this video include a function that initiates a conversation with the local model, establishes the roles, and defines where the instructions come from. The setup lets the script dynamically read the system message from a text file, making it easy to update the system message, system prompt, or pre-prompt (known in ChatGPT as custom instructions) without changing the script's code.
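The file-based system-message pattern described above can be sketched like this. This is a minimal illustration, not the video's exact script: the filename `system_message.txt` matches the one mentioned in the comments, and the fallback string is an assumption added for robustness when the file is missing.

```python
# Load the system prompt from a plain text file so it can be edited
# without touching the script itself.
from pathlib import Path

def load_system_message(path="system_message.txt"):
    """Read the system prompt from disk, falling back to a default if missing."""
    try:
        return Path(path).read_text(encoding="utf-8").strip()
    except FileNotFoundError:
        # Hypothetical fallback so the script still runs without the file.
        return "You are a helpful assistant."

# Assemble the OpenAI-style message list: system prompt first, then the user turn.
messages = [
    {"role": "system", "content": load_system_message()},
    {"role": "user", "content": "Hello!"},
]
```

Editing `system_message.txt` now changes the assistant's behavior on the next run, with no code changes.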

LM Studio is a platform that lets you discover, download, and run large language models (LLMs) locally on your own machine. Its Local Inference Server feature starts a local HTTP server that behaves like OpenAI's API. Load a model, start the server, and run the example Python scripts from your terminal: they point to the local server so you can chat with an intelligent assistant locally, in your terminal.
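Because the server speaks OpenAI's chat-completions HTTP API, you can talk to it with plain HTTP. The video's scripts use the `openai` package, but this sketch uses only the Python standard library so nothing extra needs installing; `http://localhost:1234/v1` is LM Studio's default server address, so adjust it if you changed the port in the Local Server tab.

```python
# Minimal stdlib-only sketch of calling LM Studio's Local Inference Server.
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # LM Studio's default server address

def build_payload(system_message, user_message, temperature=0.7):
    """Assemble an OpenAI-style chat-completions request body."""
    return {
        "model": "local-model",  # LM Studio serves whichever model you loaded
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": system_message},
            {"role": "user", "content": user_message},
        ],
    }

def chat(system_message, user_message):
    """POST the request to the local server and return the assistant's reply."""
    req = urllib.request.Request(
        BASE_URL + "/chat/completions",
        data=json.dumps(build_payload(system_message, user_message)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Requires the Local Inference Server to be running with a model loaded.
    print(chat("You are a helpful assistant.", "Hello! Who are you?"))
```

No API key is needed: the server runs entirely on your machine, which is also why the same script works offline.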

This has advantages over using cloud-based LLMs, such as:

* **Privacy:** All your data stays local, which can be important for sensitive tasks.
* **Customizability:** You can fine-tune the LLMs to specific tasks or domains.

LM Studio also offers an alternative way to interact with LLMs: the in-app Chat UI, a simple interface for experimenting with LLMs through text prompts.

Overall, LM Studio is a powerful tool for anyone who wants to work with LLMs.
It's especially useful for developers, researchers, and creative professionals.

00:00 Introduction
00:38 Brief Overview of LM Studio
02:18 Setup / Local Inference Server
02:45 Problem with the Sample Code?
04:00 Fixed Code Version 2.0 / Python Code - Option 1
07:36 Code Version 2.1 / Python Code - Option 2 / Custom Instructions - Inline/Embedded
12:20 Code Version 2.2 / Python Code - Option 3 / Custom Instructions - External Reference

⌨ My GitHub Repo for this video. Please consider following me on GitHub:

🌐 My website with bonus code:
Comments

That's wonderful. Thank you very much. 🎉❤

DihelsonMendonca

Thank you! OpenAI changed the way you interact with their API key, so most of the videos around YouTube from 2-3 months ago miss how to use your key, especially on a local machine setup such as LM Studio. Great help!

Raspupin

Great video! This really helped me a lot especially since I've been looking for a more detailed explanation!

hannahpadilla

Bro you are the best, keep making videos,

ashaghar

I like the approach of your content. Congrats!

Alex

This is so helpful!! Thank you for sharing! :)

ChuCannon

Hopefully Open WebUI will support this server interface in the future.

mountainmonkey

Are we supposed to download a library beforehand? Because I got the error "ModuleNotFoundError: No module named 'openai'"

hzedhtl

ur so funny lol you also sound like a teddy bear

kaashen

Is there the possibility of choosing in which language the model responds to me?
greetings!

jhin

Hi, glad I found this video. Does this mean I can use LM Studio connected online like ChatGPT, via the local server, so it gives me answers from the real-time internet? Like "which party got the most votes right now, Democrats or Republicans?"

relexelumna

Do we have to install the openai package? I mean we are running locally, why do we need that?

genericwannabe

getting an error that system_message.txt not found

vedforeal

Great video man. I believe the fix for the issue you had with the original code is to simply install the openai third-party library by running "pip install openai" in your VS Code terminal. That worked for me.

igbomeziemichael