AI Pioneer Shows The Power of AI AGENTS - 'The Future Is Agentic'

Andrew Ng, founder of Google Brain and Coursera, discusses the power of AI agents and how to use them.

Join My Newsletter for Regular AI Updates 👇🏼

Need AI Consulting? ✅

My Links 🔗

Rent a GPU (MassedCompute) 🚀
USE CODE "MatthewBerman" for 50% discount

Media/Sponsorship Inquiries 📈

Links:

Chapters:
0:00 - Andrew Ng Intro
1:09 - Sequoia
1:59 - Agents Talk

Disclosure:
I'm an investor in CrewAI
Comments:

I really like how you feature your sources in your videos. This "open source" journalism has real merit, and it separates authentic journalism from fake news. Keep it up! Thanks for sharing all this interesting info on AI and agents.

e-vd

LLM AI + "self-dialogue" via reflection = "Agent". Multiple "Agents" meet, a user asks them to solve a problem, and the "Agents" all start collaborating with one another to generate a solution. So awesome!

stray

Exponentially self-improving agents.

Love how incremental improvement over a period of years is so over.

Chuck_Hooks

I really appreciate your rational and well-considered insights on these topics, particularly your focus on follow-on implications. I follow several AI News creators, and your voice stands out in that specific respect.

BTFranklin

Agents? You know this is how the matrix begins, right?

garybarrett

As I come from neuroscience, I insist this must be the right track. The brain also uses "agents", which are more likely to be called "concepts" or "concept maps". These are specialized portions of the network doing simple jobs such as recognizing a face, or recognizing the face of a specific person: tiny cost per concept, huge power of intellect when they work in concert and are improved dynamically.

SuperMemoVideo

The old saying comes to mind: Think twice, say once. Perfectly applicable to AI where LLM checks its own answer before outputting it. Another excellent video.
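The "think twice, say once" pattern this comment describes can be sketched as a simple self-reflection loop. This is a hypothetical illustration, not anyone's actual implementation: `call_llm` is a stand-in for whatever chat-completion API you use.

```python
# Sketch of reflection: draft an answer, have the model critique its own
# draft, and revise before final output. `call_llm` is a hypothetical
# placeholder for a real LLM call.

def call_llm(prompt: str) -> str:
    # Placeholder: route this to your LLM provider of choice.
    return f"[response to: {prompt[:40]}]"

def reflect_and_answer(question: str, rounds: int = 2) -> str:
    draft = call_llm(f"Answer the question: {question}")
    for _ in range(rounds):
        # The model checks its own answer before the user ever sees it.
        critique = call_llm(f"Critique this answer for errors:\n{draft}")
        draft = call_llm(
            f"Revise the answer.\nQuestion: {question}\n"
            f"Draft: {draft}\nCritique: {critique}"
        )
    return draft
```

With a real model behind `call_llm`, each round trades extra inference cost for a chance to catch the model's own mistakes.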

janchiskitchen

I like the idea of replacing a single 120b (for instance) with a cluster of intelligently chosen 7b fine-tuned models, if for no other reason than that the hardware limitations lift drastically. With a competently configured "swarm," you could run one or two 7b-sized models in parallel, adversarially, or cooperatively, each one contributing to a singular task/workspace/etc. They could even be guided by a master/conductor AI tuned for orchestrating its swarm.
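The conductor-and-swarm idea above can be sketched in a few lines. This is purely illustrative: each worker function stands in for a hypothetical fine-tuned 7b model served behind its own endpoint, and the routing here is trivially sequential.

```python
# Hypothetical sketch of a "conductor" orchestrating a swarm of small
# specialized models. Each worker function is a stand-in for a
# fine-tuned 7b model with its own specialty.

def coder(task: str) -> str:
    return f"code for: {task}"

def reviewer(task: str) -> str:
    return f"review of: {task}"

WORKERS = {"write": coder, "review": reviewer}

def conductor(task: str) -> dict:
    # The conductor routes the task to each specialist and collects
    # their contributions into a shared workspace.
    workspace = {}
    for role, worker in WORKERS.items():
        workspace[role] = worker(task)
    return workspace
```

A real conductor would decide dynamically which workers to invoke and in what order (parallel, adversarial, or cooperative), rather than looping over all of them.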

virtualalias

You upload at the least expected random times of the day, and I'm all for it.

AINEET

I'm glad we all seem to be on the same page, but I think it would help to use a different word when thinking about the implementation of "Agents". The breakthrough for me was replacing the word "Agent" with "Frame of mind", or something along those lines, when prompting an "Agent" for a task in a series of steps where the "Frame of mind" changes at each step until the task is complete. I'm not trying to say anything different from what has been said thus far, only to help us humans see that this is how we think about a task. As humans, we change our "Frame of mind" so fast we often don't realize we are doing it while working on a task. For an LLM, your "Frame of mind" is a new prompt on the same or a different LLM. Thanks Matthew Berman, you get all the credit for getting me into this LLM rabbit hole. I'm also working on an LLM project I hope to share soon. 😎🤯😅
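The "frame of mind per step" idea above maps naturally onto a per-step system prompt. This is a hedged sketch under that interpretation: `call_llm` and the `FRAMES` list are hypothetical, standing in for a real chat API and whatever personas a task needs.

```python
# Sketch of "frame of mind" as a per-step system prompt: each step of a
# task re-prompts the (same or a different) LLM with a new persona.
# `call_llm` is a hypothetical chat-completion stand-in.

def call_llm(system: str, user: str) -> str:
    return f"[{system}] {user}"

FRAMES = ["planner", "researcher", "critic", "writer"]

def run_task(task: str) -> str:
    result = task
    for frame in FRAMES:
        # Each frame is a fresh prompt, mirroring how humans shift
        # mental modes while working through one task.
        result = call_llm(f"You are a {frame}.", result)
    return result
```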

StefRush

The iterating part of the process seems more important to me than the "agentic" one. If we compare current LLMs to DeepMind's AlphaZero method, it's clear that LLMs currently only do the equivalent of AlphaZero's evaluation function; they don't do the equivalent of the Monte Carlo tree search. That's what reasoning needs: the ability to explore the tree of possibilities, with the network used to guide that exploration.
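The search idea above can be sketched as best-first search over candidate continuations, with a value function guiding expansion (a simplification of MCTS, not the full algorithm). `expand` and `score` are toy stand-ins for what would be LLM calls: proposing next reasoning steps and estimating their value.

```python
import heapq

# Sketch of using an evaluation function to guide explicit search over a
# tree of candidate reasoning steps. `expand` and `score` are toy
# stand-ins for LLM calls; states are just strings here.

def expand(state: str) -> list[str]:
    # Propose candidate next steps (an LLM would generate these).
    return [state + c for c in "ab"]

def score(state: str) -> float:
    # Value estimate in [0, 1] (an LLM/value head would provide this).
    return len(state) / 10

def best_first_search(root: str, depth: int = 3) -> str:
    frontier = [(-score(root), root)]  # max-heap via negated scores
    best = root
    for _ in range(depth):
        _, state = heapq.heappop(frontier)
        if score(state) > score(best):
            best = state
        for child in expand(state):
            heapq.heappush(frontier, (-score(child), child))
    return best
```

The point mirrors the comment: the network only evaluates; an outer search loop does the exploring.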

luciengrondin

Your commentary "dumbing things down" for people like me was very helpful in understanding all this stuff. Good video!

richardgordon

Matthew, I've watched many of your videos, and I want to thank you for sharing so much knowledge and news. This latest one was exceptionally good. At times, I've been hesitant to use agents because they seemed too complex, and didn't work on my laptop when I tried. However, this video has convinced me that I've been wasting time by not diving deeper into it. Thanks again, and remember, you now have a friend in Madrid whenever you're around.


This is one of the best vids you've made. Good commentary along with the presentation!

carlkim

Matthew, your videos are really informative. Many thanks for sharing such knowledge and updates. This latest one was exceptionally good.

NasrinHashemian

Andrew Ng is actually one of the more conservative of the AI folks. So when he's enthusiastic about something, he has a pretty good basis for doing so. He's very practical.

As for this video, good point on Groq. We need a revolution in inference hardware. Another point to consider is the criterion for specifying when something is "good" or "bad" when doing iterative refinement. I suspect the quality of agentic workflows will also depend on the quality of this specification, as with all optimization algorithms.

mintakan

Glad I saw this, your additional explanations were incredibly helpful and woven into the main talk in a non-intrusive way. Subscribed.

notclagnew

The main discriminating factor between an agent program and an LLM is that an agent has a goal in mind; it has an action to take, in the form of a response or a call into some other program (e.g., making a payment).

An LLM, on the other hand, is the "suggesting entity" for the agent: it provides the reasoning and understanding ability.

Agent + LLM = JARVIS.
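The split this comment draws can be sketched as a minimal loop where the LLM suggests and the agent acts. Everything here is hypothetical: `suggest_action` stands in for an LLM call, and `make_payment` for a real tool the agent is allowed to invoke.

```python
# Sketch of the comment's split: the LLM "suggests", the agent "acts".
# `suggest_action` is a hypothetical LLM stand-in that picks a tool and
# an argument; the agent executes the actual function.

def make_payment(amount: int) -> str:
    # A tool with real side effects in a genuine system.
    return f"paid {amount}"

TOOLS = {"make_payment": make_payment}

def suggest_action(goal: str) -> tuple[str, int]:
    # Hypothetical LLM: maps the goal to a tool name and argument.
    return "make_payment", 42

def agent(goal: str) -> str:
    tool_name, arg = suggest_action(goal)  # LLM provides the reasoning
    return TOOLS[tool_name](arg)           # agent takes the action
```

A production agent would loop, feeding each tool result back to the LLM until the goal is met.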

AnOnymous-fm

Great point about combining Groq's inference speed with agents!

youriwatson

Excellent video. Helped clear away a lot of fog and hype to reveal the amazing capabilities even relatively simple agentic workflows can provide.👍

JohnSmithAB