AutoGen with Local LLMs | Get Rid of OpenAI API Keys

Let's see how to use AutoGen without incurring the cost of the OpenAI API.
Discover, download, and run local LLMs.

The one major problem with using AutoGen is the huge cost of the OpenAI API. But fear not, we have a solution in place now. Follow the video.

AutoGen Videos:

#ai #chatgpt #Autogen #lmstudio #gpt-4 #gpt3 #gpt35

With LM Studio, you have the flexibility to harness the power of language models on your own laptop, entirely offline, offering convenience and control. Whether you prefer the in-app Chat UI or an OpenAI-compatible local server, LM Studio lets you interact with these models seamlessly. The platform also lets you discover and download compatible model files from HuggingFace repositories, and it provides a curated space where you can explore new and noteworthy large language models (LLMs) directly from the app's homepage. This combination of features lets you engage with language models in a manner that suits your specific needs and preferences.
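As a rough sketch of how this fits together with AutoGen (not shown in the description itself): LM Studio's local server exposes an OpenAI-compatible endpoint, by default on port 1234, so AutoGen's LLM config can simply point at it instead of at OpenAI. Note that the exact key names (`base_url` vs. the older `api_base`) vary between pyautogen/openai versions, and the `model` and `api_key` values below are placeholders, since LM Studio serves whatever model is currently loaded and requires no real key.

```python
# Minimal sketch of an AutoGen LLM config pointing at LM Studio's local server.
# Assumes the server was started from LM Studio's "Local Server" tab on the
# default port 1234, with a model already loaded in the app.
config_list = [
    {
        "model": "local-model",                  # placeholder; LM Studio serves the loaded model
        "base_url": "http://localhost:1234/v1",  # LM Studio's OpenAI-compatible endpoint
        "api_key": "not-needed",                 # placeholder string; no OpenAI key required
    }
]

llm_config = {"config_list": config_list, "temperature": 0.7}

# With pyautogen installed, the agents would then be wired up roughly as:
#   assistant = autogen.AssistantAgent("assistant", llm_config=llm_config)
#   user_proxy = autogen.UserProxyAgent("user_proxy", human_input_mode="NEVER")
#   user_proxy.initiate_chat(assistant, message="...")
```

Because the endpoint mimics the OpenAI API, no other code changes are needed: the same agents that ran against GPT-4 run against the local model.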

🔔 Don't forget to hit the bell icon to stay updated on our latest innovations and exciting developments in the world of AI-powered entertainment!
LINKS:

Other Interesting Videos on my Channel

TIME STAMPS:

0:00 Intro
0:36 Costs of GPT-4 API
0:56 LM Studio
2:48 Downloading models in LM Studio
4:31 Chat Feature in LM Studio
6:42 Visual Studio Introduction
7:22 Previous Code Explanation
7:46 No OpenAI API Key is Required
9:13 Setting up Server in LM Studio
10:56 Try Running code
13:28 Summarize
15:14 Future Aspects

If you have any questions, comments, or suggestions, feel free to comment below. Subscribe and press the bell icon for the latest videos.
Comments

I just tried LM Studio, not AutoGen yet. But using LM Studio was easy and fun. Later I will work with AutoGen.

philq

Got an error:
Error: 'messages' array must only contain objects with a 'content' field that is not empty when running it locally. Any thoughts?

DanielWeikert

As far as I can see, oobabooga can also expose the OpenAI API, so we could use an existing oobabooga installation like that to run AutoGen?

PerFeldvoss

Yeah, AutoGen burns through my budget in a rush. Would be nice to have a frustration factor where the AI tries local models, then cheap models, and only reverts to GPT-4 when it decides it can't do it otherwise, so it's only a handful of calls, not hundreds.

zyxwvutsrqponmlkh

My main complaint about most of the 'make a working program' bots is that they are one-shot and you're done. Are there any that build something and then iterate on that project with further input from the human?

robjdrum

Great tutorial! What are your system specs? And do you know where I can find the minimum requirements to run each of those LLMs locally (RAM, GPU, etc.)?

TheNUMB

Why does it show "Failed to load model 'TheBloke • mistral instruct v0 1 7B q5_0 gguf'" when I add the model in LM Studio? Thx!

faketalkshow

Great video! Could LM Studio work with AutoGPT? So we'd have access to the web for research.

AIWriterSEOTools

I exceeded my quota for the first time after playing with AutoGen + GPT-4 hahaha. Great tutorial.

rcalastro

I need some help. I downloaded the Llama model, and after loading the model, the server log is not showing in my LM Studio?

sketchingbyyash

Hi, I'm getting an error when integrating with LM Studio and following your code. The error is: TypeError: Completions.create() got an unexpected keyword argument 'api_type'. Any suggestions would be helpful. Thanks

srishtinagu

Where can I get the code you used? I love the pace at which you teach. Thanks

JustTech_info

AutoGen does not work correctly with LM Studio. Errors occur periodically: the error format does not match what AutoGen expects. When configuring the server there is no way to specify how the context window is handled, so an error occurs when the limit is reached. The conclusion is that LM Studio's server functionality needs serious improvement and bug fixing. It is not ready for this use yet.

whoareyouqqq

Watch the sequel, AutoGen with Local LLMs (RunPod) 😎 on Google Colab

PromptEngineer

The problem I encounter is that it is unable to use tools, because the agents are configured to use the OpenAI flow.

ppbroAI

Great content. The background music is too distracting. Not a good music track either. Sorry.

LoneRanger.

Isn't Kaggle offering GPUs for free? Can we run AutoGen there?

sayanmukherjee

Hello. I got a problem building aiohttp when installing pyautogen in VS Code.

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for aiohttp
Failed to build aiohttp
ERROR: Could not build wheels for aiohttp, which is required to install pyproject.toml-based projects

This appears at the end when I try to install it. :((

suessiboi

AutoGen is an interesting concept, but for local hosting it's not worth it. Way too slow in my experience... GREAT STUFF MAN

thegooddoctor

Can't it be done without Visual Code?

SAVONASOTTERRANEASEGRETA