AutoGEN + MemGPT + Local LLM (Complete Tutorial) 😍

In the video, we are going to use AutoGEN with local LLMs using Runpod.
This is a much-awaited video.
We are going to replace the AutoGEN agent with a MemGPT agent.

So the chain is now complete: AutoGEN + MemGPT + Local LLM.
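Under the hood, both AutoGEN and a MemGPT-backed agent talk to the local model through text-generation-webui's OpenAI-compatible chat endpoint, so the whole chain boils down to agents exchanging requests shaped like the sketch below (the model name and message contents are placeholders, not from the video):

```python
import json

# Sketch of the payload agents ultimately POST to the local model's
# OpenAI-compatible /v1/chat/completions endpoint.
payload = {
    "model": "local-model",  # text-generation-webui serves whatever model is loaded
    "messages": [
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a hello-world in Python."},
    ],
    "temperature": 0.7,
}
print(json.dumps(payload, indent=2))
```

Because both frameworks speak this one protocol, swapping the AutoGEN assistant for a MemGPT agent does not change what the local server sees.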
What projects are you going to do with this now?

Let’s do this !!

Join the AI Revolution !!

#ai #memgpt #localllms #agi #runpods #autogen #textgenwebui

All videos related to MemGPT and AutoGEN!

CHANNEL LINKS:

TIME STAMPS:

0:00 Intro
1:00 Problem Statement
4:26 Python
4:40 Editor VS Code
5:00 RunPods
6:10 Virtual Env
6:30 New python File
7:15 Code Explanation
14:40 Code Summary
16:18 RunPods Use
20:25 Running the code with AutoGEN
21:40 Running the code with MemGPT
22:47 Summarize

If you have any questions, comments or suggestions, feel free to comment below.
🔔 Don't forget to hit the bell icon to stay updated on our latest innovations and exciting developments in the world of AI!
Comments

This is a great achievement, and I think soon, the big channels will begin to copy this solution. Now you joined them all. Congratulations 🎉🎉❤

DihelsonMendonca

Hey, just wanted to let you know that you're doing great! This is an awesome video, very informative. The only thing I'd have hoped for is spending a bit more time actually seeing it all work together with examples towards the end... like once you got it working 😊 And no, it's not too long. Whenever topics get as complicated as this one, longer videos are totally worth it

CynicalWilson

Super proud of you, man!
You got it done.
Thank you, and congratulations.
Your future is bright, with such ambition.
Cheers, Chap!😁👍

_SimpleSam

Much awaited video of AutoGEN + MemGPT + Local LLM! This is insane

PromptEngineer

I’d love to see a full walkthrough of oobabooga fine-tuning and data prep. Data dump > preprocessing > processed > LoRA > Model selection > Finetuning > Alternative fine-tuning methods > evals

Sinsholian

Some great work. I spent hours trying to make it work with various local LLMs, but was unable to make it function either. Keep going!

MikeTheBard

There needs to be a live map of AI tech. I don't know what fits where, what is a minor variant in some subcategory and what is a category in itself. There are so many terms and the landscape is changing so fast that it's overwhelming: Stable Diffusion, LoRA, ControlNets, VAE, HyperNetworks, Automatic1111, Mistral 7B, LLaMA, MemGPT, AutoGPT, AutoGen, HuggingFace, AutoEncoder, OpenPose, Zephyr, LMS, Dolphin... and that's just the new terms I come across every five minutes when I read about AI these days.

lhxperimental

Hope kept you going... for 4 whole hours. Inspirational!

I joke. I will watch the whole thing now. That just made me chuckle after coming off a 6 month project.

mwdcodeninja

Very inspiring work, thank you for putting this together. It would be even better to see some examples of what it can do.

learningsystems

My main target is to build a companion AI that I can chat with while it helps me with daily tasks. Like ChatGPT, but with a personality: it makes jokes and helps me with coding or learning a new language. I've been playing with your uncensored chatbot code, really neat.

nufh

I do not know if there was a change in the template or something, but I followed this video and another video exactly and could not get port 5001 working. Then I asked on the Runpod Discord, and they told me to add an environment variable called UI_ARGS to the pod with a value of --extensions openai --api-port 5001.

Then it worked. Hopefully this helps anyone who faces the same issue.
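The fix above amounts to exposing text-generation-webui's OpenAI-compatible API on port 5001 of the pod. On the Python side, that endpoint can then be dropped into an AutoGen-style client config; a minimal sketch, assuming Runpod's usual `<pod-id>-<port>.proxy.runpod.net` proxy scheme (the pod ID "abc123" below is a placeholder, not a real pod):

```python
POD_ID = "abc123"   # placeholder Runpod pod ID -- substitute your own
API_PORT = 5001     # the port opened via UI_ARGS above

def local_llm_config(pod_id: str, port: int) -> dict:
    """Build an AutoGen-style llm_config for a text-generation-webui endpoint."""
    return {
        "config_list": [
            {
                # Runpod proxies pod ports as <pod-id>-<port>.proxy.runpod.net
                "base_url": f"https://{pod_id}-{port}.proxy.runpod.net/v1",
                "api_key": "sk-no-key-needed",  # the local server ignores the key
                "model": "local-model",          # informational; the pod serves one model
            }
        ]
    }

cfg = local_llm_config(POD_ID, API_PORT)
print(cfg["config_list"][0]["base_url"])
```

If the port is open and the `openai` extension loaded, hitting `<base_url>/models` in a browser is a quick way to confirm the server is up before pointing agents at it.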

snuwan

This is purely original content ❤. It cannot be found anywhere else. People will obviously copy it later. Great job 👏 👍

tapobratapaul

This is a fantastic video, I’m learning a lot. Thank you very much for sharing your knowledge. Well done.

asithakoralage

Some kind of fun example at the end of all this would be very nice.
Also, you could use that as a catchy thumbnail. Just friendly advice. Great job!

WolverineMKD

Unfortunately, MemGPT's latest update broke this fix. BUT the update they pushed now officially supports local LLMs with AutoGen + MemGPT; you just need to follow their official example.

Jirito

I think it's possible to set this up with multiple local LLMs on different ports if they fit on the Runpod. Hypothetically, you could run a WizardCoder AutoGen coding agent with a Zephyr memory agent to pick up where Zephyr falls short 🤔
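That multi-model idea is easy to express in config: serve one model per port and give each agent its own OpenAI-compatible endpoint. A sketch under the assumption that two text-generation-webui instances are listening locally (ports, host, and model names below are hypothetical):

```python
# Hypothetical two-endpoint setup: a coding model and a memory/chat model
# served on different ports of the same pod.
ENDPOINTS = {
    "coder":  {"port": 5001, "model": "WizardCoder"},
    "memory": {"port": 5002, "model": "Zephyr"},
}

def endpoint_url(name: str, host: str = "localhost") -> str:
    """OpenAI-compatible base URL for the named agent's model server."""
    return f"http://{host}:{ENDPOINTS[name]['port']}/v1"

for name in ENDPOINTS:
    print(name, endpoint_url(name))
```

Each agent's llm_config would then point at its own base URL, so the coding agent and the memory agent never contend for the same model.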

tech

I'd love to see a whole marketing team working on a strategy totally tailored to my use case. AutoGEN + MemGPT + Local LLM. Can you imagine that? How powerful this thing might be? 😍😍

KamilKaczmarekSolutions

When he says "worked four hours," I think about the 36 hours I spent trying to get other much simpler things working 😆

threepe

You are the boss! This video made me an instant subscriber to your channel. Thank you so much!

MrMoonsilver

Your "Local LLMs" are not local if they are using Runpod. Can you make a video with truly local LLMs without using Runpod? Even if it's extremely slow on your PC, it won't be on others'. Or at least run it on a cloud PC that will do it.
👍 Thanks for the video though ❤

ScottzPlaylists