How to Use ANY Local Open-Source LLM with AutoGen in 5 MINUTES!

Hello and welcome to an explanation and tutorial on using ANY Open-Source Local LLM with your AI Agents. We will be using LM Studio to connect an open-source LLM to your workflow.
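
For reference, here is a minimal sketch of the wiring the video walks through, assuming pyautogen 0.2.x and LM Studio's local server at its default address (the model name and api_key values are placeholders; LM Studio ignores the key, but the field must be present):

import autogen

# LM Studio exposes an OpenAI-compatible server (default http://localhost:1234/v1),
# so AutoGen can talk to it through a standard OpenAI-style config entry.
config_list = [
    {
        "model": "local-model",                  # whichever model is loaded in LM Studio
        "base_url": "http://localhost:1234/v1",  # "api_base" in older pyautogen 0.1.x
        "api_key": "not-needed",                 # required field, ignored by LM Studio
    }
]

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="ALWAYS",    # press Enter for auto-reply, type 'exit' to stop
    code_execution_config=False,  # no code execution for this smoke test
)
user_proxy.initiate_chat(assistant, message="Say hello in one sentence.")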

Don't forget to sign up for the FREE newsletter below to get updates on AI, what I'm working on, and struggles I've dealt with (which you may have too!):

=========================================================
=========================================================

Chapters:
00:00 Introduction
00:17 What we are doing
00:29 LM Studio
02:21 Start Linking it Up
03:46 Testing
04:24 My Thoughts

If you have any issues, let me know in the comments and I will help you out!

Comment with the games or software you build; I can't wait to see what you make!
Comments

Thank you for sharing this, but I want to see the complete prompt sent to the LLM when I am using OpenAI models. Is there a way I can check that? I understand chat_results gives me a content, a role, and some other information, which I believe is just the OpenAI chat completion output. I want to print the input to the LLM and the output from the LLM (of course not the embeddings, just the text). Is there a way to expose that information in AutoGen?

anubhavjaiswal
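
On the prompt-inspection question above: one option, sketched assuming pyautogen 0.2.x, where each ConversableAgent keeps its per-peer message history in a chat_messages dict, is to print that history after the chat finishes:

# After user_proxy.initiate_chat(assistant, ...) has run, each agent holds
# the full role/content history it exchanged with every peer agent.
for peer, messages in assistant.chat_messages.items():
    print(f"--- conversation with {peer.name} ---")
    for msg in messages:
        print(msg["role"], "->", msg.get("content"))

This shows the message texts that went in and out; embeddings are never exposed at this layer.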

Hi Tyler, I want to use function calling with AutoGen using an open-source model. Since you uploaded this video, has any solution come up for this problem?

herambblue
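
On the function-calling question above: the AutoGen side of the registration is sketched below, assuming pyautogen 0.2.x with its register_for_llm / register_for_execution decorators; get_weather is a made-up example function. Note that whether the call actually fires depends on the backend emitting OpenAI-style tool calls, which many local servers did not support at the time of the video:

from typing import Annotated

# The assistant's LLM is offered the tool; the user proxy executes it.
@user_proxy.register_for_execution()
@assistant.register_for_llm(description="Look up the weather for a city.")
def get_weather(city: Annotated[str, "City name"]) -> str:
    # Hypothetical stand-in; a real version would call a weather API.
    return f"It is sunny in {city}."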

When the code finishes its run and you see this: "Provide feedback to assistant. Press enter to skip and use auto-reply, or type 'exit' to end the conversation: exit", pressing Enter causes this error in VS Code: "openai.BadRequestError: Error code: 400 - {'error': "'messages' array must only contain objects with a 'content' field that is not empty."}". Not too sure why.

kelv
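
The 400 above most likely means an empty message was forwarded: pressing Enter produces an empty auto-reply, and LM Studio's server rejects messages whose content is empty. A sketch of the failure and a crude workaround, assuming the default local endpoint:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")
messages = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": ""},   # empty content triggers the 400
]
messages = [m for m in messages if m.get("content")]  # drop empties before sending
reply = client.chat.completions.create(model="local-model", messages=messages)
print(reply.choices[0].message.content)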

Hey Tyler! I'm enjoying this series a lot! I've been working on a script to incorporate LM Studio with AutoGen for a few months now. The issue I'm currently running into is that Pylance cannot access autogen; the module cannot be imported correctly. I'm running Python 3.11, I've just upgraded pyautogen, and I also just ran 'git fetch origin' on the autogen repo. I have been able to import AutoGen and its components in the recent past, and I'm not sure what else to do. Google search results don't return much help. I appreciate any suggestions you might have! Thanks for the video series; it's easy to follow along with.

EDIT: I recently switched to VS from VS Code. The imports appear correctly in VS Code, but not in VS.
EDIT 2: I am also getting a syntax error:

assistant = "agent", llm_config={"config_list": config_list})
^
SyntaxError: invalid syntax

Phirebirdphoenix
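
For reference on that SyntaxError: the pasted line has lost the constructor call in front of the arguments, which is why Python flags the stray parenthesis. Assuming an AssistantAgent was intended, the usual form is:

import autogen

# The quoted line was missing 'autogen.AssistantAgent(name=' before "agent".
assistant = autogen.AssistantAgent(
    name="agent",
    llm_config={"config_list": config_list},
)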

Hello and best regards from Austria,

I'm sorry to jump on you like this, but I have a question or two and hope you can help me.

I am working on a project for the public authority I work for, and unfortunately I have never implemented anything with LLMs and agents. So far I've only worked in "normal" software development and forensics.

The project is about connecting several LLMs, each with different tasks, via agents. Unfortunately, I don't know which framework and which open-source models are best suited for this. Can you please give me a tip on how and with what to best get started?

Thank you and best regards

lowkeylyesmith
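
On the last question: AutoGen's GroupChat is one common way to connect several agents with different tasks, and each agent can carry its own llm_config, so different agents can sit on different local or hosted models. A minimal sketch, assuming pyautogen 0.2.x; config_a and config_b are placeholder config lists like the one earlier on this page:

import autogen

researcher = autogen.AssistantAgent(
    name="researcher",
    system_message="You research and summarize sources.",
    llm_config={"config_list": config_a},   # e.g. a local model via LM Studio
)
writer = autogen.AssistantAgent(
    name="writer",
    system_message="You turn research notes into a short report.",
    llm_config={"config_list": config_b},   # e.g. a different model or provider
)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="TERMINATE",
    code_execution_config=False,
)

group = autogen.GroupChat(agents=[user_proxy, researcher, writer], messages=[], max_round=8)
manager = autogen.GroupChatManager(groupchat=group, llm_config={"config_list": config_a})
user_proxy.initiate_chat(manager, message="Draft a short report on agent frameworks.")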