Complete AI Agent Tutorial with Ollama + AnythingLLM

AI Agents and locally run LLMs are revolutionizing workflows, offering unmatched privacy, speed, and cost efficiency. In this video, we explore how tools like AnythingLLM and NVIDIA GeForce RTX GPUs enable individuals to harness the next evolution of AI right on their personal machines. @NVIDIADeveloper @TimCarambat #AIonRTX

Business Inquiries:

Timestamps:
----------------------------------------------------------------------------
Intro: (0:00)
Why People Are Running AI Locally: (0:50)
Why You Need a Good GPU To Run AI and AI Agents Locally: (1:31)
What are AI Agents?: (2:57)
How To Run AI Agents Locally: (4:13)
AI Agent Tutorial: (5:34)
Why AI Agents Are So Powerful: (10:40)
Outro: (12:50)
Comments

I'm so happy I found this video. Docker won't open on my Mac, and even though I chose the correct model to install, it never worked, so thank you so much for the upload.

PetersonCharlesMONSTAH

Wish I had an NVIDIA 40-series card so I could play with this stuff properly. New goal unlocked! 🔓

parsival

Good shit Kenny, haven’t seen ya since the independence village days 💪💪

NickyDiesel

Hello, nice video. I was just wondering: since the agent can make HTTP GET requests, it might be able to execute other things too. I'm looking for more of an IT-admin-style executable agent, any tips? I tried to build one yesterday and at the end of the day found out that the Ollama API doesn't remember individual chats; I have to send the whole conversation with every request. I wanted to create a multi-agent system to process helpdesk tasks... is there something "ready to go"?

sychrov
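
The commenter is right that the Ollama chat API (POST /api/chat) is stateless: the client keeps the conversation history and resends the full message list with every request. A minimal Python sketch of that pattern, assuming Ollama is running locally on the default port 11434 and a model such as llama3 has already been pulled:

import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # default local Ollama endpoint
MODEL = "llama3"  # example model name; use whatever model you have pulled

history = []  # full conversation so far, as {"role": ..., "content": ...} dicts

def chat(user_message: str) -> str:
    """Append the user turn, send the entire history, and store the reply."""
    history.append({"role": "user", "content": user_message})
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "messages": history, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    reply = resp.json()["message"]  # assistant message returned by Ollama
    history.append(reply)           # keep it so the next request has context
    return reply["content"]

print(chat("Summarize this helpdesk ticket: the user cannot print."))
print(chat("Which troubleshooting step would you try first?"))  # remembers the ticket

Keeping one history list per ticket (or per agent) is the usual way to build a multi-agent helpdesk flow on top of a stateless API like this.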

How do you clear the chat in AnythingLLM?

PetersonCharlesMONSTAH

Hoping for a lotto windfall so I can purchase 2 Nvidia Digits

CynicalMournings

Nvidia is advertising through influencers?

rahuldinesh

This was just a giant Nvidia ad. It didn't even showcase why the cards are better than my M2 Max MacBook, whose highly efficient chip can hold an entire LLM in memory. Who is this guy?

investmentanalyst


It is not free. What was your initial hardware/software cost, and what is your weekly electricity cost?

venuev

I get that NVIDIA GPUs are probably the best for this kind of application, but comparing your laptop to a desktop and saying there is a night and day difference... OF COURSE THERE IS, but the difference IS NOT because of NVIDIA... There must be other talking points NVIDIA gives you (sorry, but this made me so mad).

marclrx