Autogen: Ollama integration 🤯 Step-by-Step Tutorial. Mind-blowing!

Discover the incredible journey of integrating Ollama with Autogen! This video is your gateway to unleashing the power of large open-source language models. Dive into the core of Autogen and see how seamlessly it synergises with Ollama through a hands-on tutorial. 🛠️

🎯 What you'll learn:

How to set up and integrate Autogen and Ollama effortlessly.
The magic of large language models and how they can be harnessed in your projects.
The structure and essentials of Autogen code for a smooth integration.
The steps to activate and utilise LiteLLM for a mind-blowing AI experience (a config sketch follows this list).
How to verify your setup and ensure your Autogen app is ready for action!
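
For reference, here is a minimal sketch of the kind of LM config the video builds. The port and model name are assumptions, not confirmed values from the video; substitute whichever model you pulled with Ollama:

    config_list = [
        {
            "api_base": "http://localhost:8000",  # address of the LiteLLM proxy (port assumed)
            "api_key": "NULL",                    # LiteLLM does not check the key
            "model": "ollama/mistral",            # assumed model name
        }
    ]
    llm_config = {"config_list": config_list, "request_timeout": 120}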

🔥 Timestamps:
0:00 - Introduction
0:02 - Integrating Ollama with Autogen
0:06 - Adding large open-source language models with Ollama
0:19 - Text generation web UI & LM Studio with Autogen
0:28 - Setting up API base and creating LM Config
1:04 - Using LiteLLM for integration
1:19 - Activating the virtual environment and installing LiteLLM
1:34 - Running the application and verifying setup
2:04 - Verifying the base AI model used
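
Putting the steps above together, a minimal app.py might look like the sketch below (the agent names and working directory are assumptions, not taken from the video). Start the proxy first with "litellm --model ollama/mistral" in your activated virtual environment, then run the script:

    import autogen

    config_list = [{"api_base": "http://localhost:8000", "api_key": "NULL", "model": "ollama/mistral"}]
    llm_config = {"config_list": config_list, "request_timeout": 120}

    # The assistant generates replies through the LiteLLM proxy; the user proxy relays our message.
    assistant = autogen.AssistantAgent("assistant", llm_config=llm_config)
    user_proxy = autogen.UserProxyAgent(
        "user_proxy",
        human_input_mode="NEVER",
        code_execution_config={"work_dir": "coding"},  # directory name assumed
    )
    user_proxy.initiate_chat(assistant, message="Which base model are you?")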

Hashtags: #Autogen #Ollama #AI #LargeLanguageModels #Integration #LiteLLM #OpenSource #VirtualEnvironment #APIBase #LMConfig #TextGeneration #WebUI #LMStudio #AssistantSetup #Chat #Question #Program #PortNumber #Application #AppPy #Python #MrAI #Model #ModelIntegration #Terminal #Command #PipInstall #Activate #Environment #Assistant #UserProxy #Bot #Directory #Timeout #ConfigList #Language #LanguageModel #LanguageModelIntegration
Comments

Hi! Nice video. What about deploying Ollama in the cloud, on RunPod or similar? Thanks

marianosebastianb

In your experience, which API (Ollama, LM Studio, Text Generation Web UI) is best for switching quickly between multiple models?

GlenBland

Hi! Nice video!!
I have this problem: I run the command "litellm --model..." and everything works if the openai package is the latest version (1.3.3). However, when I run the script, I get the following error: "ImportError: please install openai and diskcache to use the autogen.oai subpackage." I have both packages installed, but it may be that pyautogen requires a version of openai < 1; downgrading, however, breaks the first command. How do I fix it? Thank you

carlshod
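
If you hit the same ImportError, it is worth checking which versions are actually installed; the mismatch described above (pyautogen expecting openai < 1 while the litellm CLI needs openai >= 1) is one plausible cause, not something confirmed in the video. A quick check:

    from importlib.metadata import version

    # Print the installed versions of the two packages the error involves.
    print("pyautogen:", version("pyautogen"))
    print("openai:", version("openai"))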

I was able to run the example with two small changes: use 'base_url' in place of 'api_base' and comment out the 'request_timeout' parameter.
But it is very slow, something like 3 minutes to get an answer with Mistral. Using Ollama directly, I get an answer in seconds. Does anyone know what the issue might be?

danilolr
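
Applied to the config sketch above, the two changes this comment describes would look something like the following (port and model name still assumed):

    config_list = [
        {
            "base_url": "http://localhost:8000",  # 'base_url' replaces 'api_base'
            "api_key": "NULL",
            "model": "ollama/mistral",
        }
    ]
    llm_config = {"config_list": config_list}  # 'request_timeout' commented out / removed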

Could this work with Autogen UI? I appreciate your expertise and concise videos. Thanks!

SA-lhbx

Can you share the package versions for autogen and openai? It looks like the openai arguments have changed.

miribeard

I'm unable to get the API base server running... It's the same port as before, 8000.

AachalGupta-mx

Thanks for doing this, I needed it. Does this work with FastAPI endpoints? Can LiteLLM give me OpenAI-style endpoints from my FastAPI endpoints? Sorry if the question is weird; I'm not well versed in RESTful web services.

neoblackcyptron

It would be helpful if you could provide a link to the code you are using.

rastinder

Is this Ollama though, or is it LiteLLM proxying Ollama? Because this doesn't look like it queries Ollama directly; Ollama's API is not OpenAI-compliant the way LiteLLM's is.

BadIdea
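
Based on the timestamps, that reading matches the setup shown: Autogen talks to LiteLLM's OpenAI-compatible endpoint, and LiteLLM forwards each request to Ollama. A minimal sketch of that same hop without Autogen, assuming the proxy is still running on port 8000 and serving the assumed ollama/mistral model (requires openai >= 1):

    from openai import OpenAI

    # Point the standard OpenAI client at the LiteLLM proxy instead of api.openai.com.
    client = OpenAI(base_url="http://localhost:8000", api_key="not-needed")
    resp = client.chat.completions.create(
        model="ollama/mistral",
        messages=[{"role": "user", "content": "Which base model are you?"}],
    )
    print(resp.choices[0].message.content)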