How To Use AutoGen with ANY Open-Source LLM Tutorial

Hello and welcome to an explanation and tutorial on building your first AI agent workforce with AutoGen and LM Studio using any open-source LLM. I will walk you through how it's done: choosing the LLM, and then running the local server with a custom prompt. The prompt will have an AI team create a YouTube script for us!

Side note: my computer didn't handle running the server while recording very well, so I had to stop recording to run it, but I show you the results at the end.
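
For reference, a minimal sketch of pointing AutoGen at LM Studio's local server (the port, base URL, and model name below are assumptions; use whatever the LM Studio server screen shows, and note that older pyautogen releases use "api_base" instead of "base_url"):

import autogen

# LM Studio exposes an OpenAI-compatible endpoint while the local server is running.
config_list = [
    {
        "model": "local-model",                  # placeholder; LM Studio serves whichever model you loaded
        "base_url": "http://localhost:1234/v1",  # LM Studio's default server address (assumption)
        "api_key": "lm-studio",                  # any non-empty string; the key is not checked locally
    }
]
llm_config = {"config_list": config_list}

# A tiny two-agent team: an assistant that writes and a user proxy that drives the chat.
assistant = autogen.AssistantAgent(name="scriptwriter", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
)

user_proxy.initiate_chat(assistant, message="Write a short YouTube script about AutoGen.")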

Other Source Information:

Chapters:
00:00 Introduction
00:13 The Plan
00:33 Download LMStudio
00:49 Find LLM
01:48 Download Open-Source LLM
02:43 Start Local Server
03:01 AutoGen Prompt
05:54 Completed Prompt
07:32 Outro

If you have any issues, let me know in the comments and I will help you out!

Comment with the games or software you build; I can't wait to see what you make!
Comments

What LLM have you tried? What did you think of LMStudio? Let me know!

TylerReedAI

Jesus Christ, you are the only one who actually shows how to set up a workflow! The rest of these creators make supposed "tutorials" for beginners and are constantly switching between Google Colab and local Python environments every other video. Nearly impossible to translate everything constantly! Thanks for this, really. I have no use for tutorials that require calculating the next bloodmoon. I don't understand what the point of a tutorial is when you're just showing me you're only just learning everything yourself and are just pointing out steps rather than actually explaining what it is you're doing. Makes me feel like most of these content creators have no clue what they're doing :/

resonanceofambition

Great tutorials, man. How about a video implementing an open-source LLM + AutoGen + MemGPT + text-generation-webui?

karanv

This works great. The only change I made was that I didn't want the task hard-coded in the script, so I changed it so I could pass the task from the CLI when the script is run; that made it easier for me to work with. So far I have only been able to use the Llama model; the only other model I tried was a Dolphin GGUF, but I kept getting errors. Thanks for the video!!!

jesusjim
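
A minimal sketch of the CLI change described above, assuming a setup like the one under the video description (the agent names and file name here are placeholders):

import argparse
import autogen

parser = argparse.ArgumentParser(description="Run the AutoGen team with a task passed on the command line")
parser.add_argument("task", help="what the agents should work on, e.g. 'Write a YouTube script about llamas'")
args = parser.parse_args()

llm_config = {"config_list": [{"model": "local-model", "base_url": "http://localhost:1234/v1", "api_key": "lm-studio"}]}
assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(name="user_proxy", human_input_mode="NEVER", code_execution_config=False)

# The task is no longer hard-coded; run it as: python run_team.py "your task here"
user_proxy.initiate_chat(assistant, message=args.task)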

This is such an amazing video. I want to create a team of agents that build a lifestyle routine, plan, and programs with the aim of turning an ordinary 22-year-old man into a super-fit athlete. I have never coded before, but I feel I can get it done. Thanks for such a great tutorial.

firetownplatformfinders

Thank you for making the script available. A simple copy and paste lets me sample what you're describing, and I can follow along. Thanks again.

MacPaulos

Excellent video, Tyler. You've described things way better than most other YouTubers. It runs quite fast for me, but I am on a Windows machine with a GPU.

ExperiencedWebmaster

Really super cool lesson: quick, efficient, understandable. Nice job. I am particularly interested in LLMs and AutoGen.
Good work.

chrismachabee

No one: How to use LLMs for free.
A random YouTuber with less than 1K subs: Here is how you do it.

Thanks mate

naadodi_tamilan

Your video is super clear, well done!

Ancle

Interesting video. One thing that didn't make sense to me at the end was the remark about the code running much slower/faster at night. Because the whole conversation runs locally (!), you could turn off the Internet connection and it would still run without interruption! So what could night time or day time have to do with it running slowly or not? I am just wondering 💭…

scitechtalktv

Getting this error while following the same steps. Can someone please help?
"ValueError: Please either set llm_config to False, or specify a non-empty 'model' either in 'llm_config' or in each config of 'config_list'."

MohdBilal-ykyn
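
That ValueError is raised when AutoGen cannot find a non-empty "model" value; a hedged sketch of a config_list entry that satisfies the check (the model name is a placeholder, since LM Studio serves whatever model is loaded regardless of the name):

import autogen

# Every entry in config_list needs a non-empty "model" field, even for a local server.
config_list = [
    {
        "model": "local-model",                  # any non-empty name works for LM Studio
        "base_url": "http://localhost:1234/v1",  # "api_base" on older pyautogen versions
        "api_key": "lm-studio",
    }
]

assistant = autogen.AssistantAgent(name="assistant", llm_config={"config_list": config_list})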

I'm running Mistral Instruct 7B (the Q6_K). When I run it with AutoGen I keep getting an error in LM Studio saying that the context window of 1500 tokens has been exceeded: "[2023-10-26 14:09:05.509] [ERROR] Error: Context length exceeded. Tokens in context: 1500, Context length: 1500". Anyone know what this is about? I'm running a single agent 🤷🏻‍♂

BunniesAI
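
The 1500-token limit is the context length configured for the loaded model in LM Studio, so raising that setting (or reloading the model with a larger context) is the main fix; as a complementary guard, AutoGen can cap how long the conversation grows, sketched here with the max_consecutive_auto_reply parameter (agent names and URL are placeholders):

import autogen

llm_config = {"config_list": [{"model": "local-model", "base_url": "http://localhost:1234/v1", "api_key": "lm-studio"}]}

assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=3,   # fewer turns means the accumulated prompt stays under the context window
    code_execution_config=False,
)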

Can you please create one using Ollama? AutoGen + MemGPT + Ollama?

InsightCrypto

I want to know the specs needed to run this project comfortably.

nufh

Please make a tutorial on how to use AutoGen with GPT4Free

denisblack

Thanks for the video.
When running the exact code I get "AttributeError: 'str' object has no attribute 'get'". In LM Studio I get: [2023-10-29 15:47:49.044] [ERROR] Error: 'messages' array must only contain objects with a 'content' field that is not empty.
I am running on a MacBook Pro M1 with 64 GB RAM and n_ctx=4096. Any idea how to complete the task without errors?

JuanOlCr
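
One possible cause of that AttributeError (a guess, not a confirmed diagnosis) is a config_list whose entries are plain strings instead of dicts; this sketch shows the shape AutoGen expects, with an explicit non-empty system message since the LM Studio error complains about an empty 'content' field:

import autogen

# Entries must be dicts, and llm_config itself must be a dict wrapping the list.
config_list = [
    {
        "model": "local-model",
        "base_url": "http://localhost:1234/v1",
        "api_key": "lm-studio",
    }
]

assistant = autogen.AssistantAgent(
    name="assistant",
    system_message="You are a helpful assistant.",  # keep this non-empty so no message goes out with empty content
    llm_config={"config_list": config_list},
)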

Hi, can you show how to implement local models with function calling, please?

ew

Have you found a way to run multiple LM Studio instances in parallel without having to waste resources on VMware?

Also, so far I can't get AutoGen to connect to multiple LLMs.

EddyLeeKhane
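
AutoGen does accept a separate llm_config per agent, so a hedged sketch of the multi-LLM side is to point each agent at its own OpenAI-compatible endpoint (the second port is an assumption; each endpoint needs its own server instance, whether that is another LM Studio window or a different local backend):

import autogen

writer_config = {"config_list": [{"model": "model-a", "base_url": "http://localhost:1234/v1", "api_key": "lm-studio"}]}
critic_config = {"config_list": [{"model": "model-b", "base_url": "http://localhost:1235/v1", "api_key": "lm-studio"}]}

writer = autogen.AssistantAgent(name="writer", llm_config=writer_config)
critic = autogen.AssistantAgent(name="critic", llm_config=critic_config)
user_proxy = autogen.UserProxyAgent(name="user_proxy", human_input_mode="NEVER", code_execution_config=False)

groupchat = autogen.GroupChat(agents=[user_proxy, writer, critic], messages=[], max_round=6)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=writer_config)
user_proxy.initiate_chat(manager, message="Draft and critique a short YouTube script outline.")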

It runs better at night? With just the local models on your PC? Not sure what that might be due to.

eightrice