Agentic AI: Is the Future Here?

Agentic AI is the latest topic in AI research. You ask me about agentic LLMs, about agentic RAG, and about how to increase the agentic behavior of advanced function-calling LLMs in combination with the latest RAG systems.
This video gives a simple explanation of all of it.

We define the parameters of an agentic AI system, delve into its hidden properties, and uncover the defining features of truly agentic AI systems to come.

In this video we look at the future of AI (beyond memory, function calling, and RAG) and analyze world models that will augment the planning functionality of AI agents.

Agentic RAG and LLMs, explained and explored in detail. More than 12 AI systems share their agentic function-calling abilities, including parallel function calling and the new raw function-calling format of mistralai/Mistral-7B-Instruct-v0.3, compatible with the latest version of Ollama.


#airesearch
#ainews
#aieducation
Comments

You're awesome! So funny and informative! 😎🤖

thesimplicitylifestyle

LLMs have absolutely demonstrated that they are capable of being leveraged into (primitive, dangerous, and very expensive) agents. Any oracle can. The point is that you can ask an oracle what a specified agent would do in a scenario, and a perfect oracle would perfectly emulate that agent.
Arguably this is a far safer way of creating an AI agent than any other, because it sidesteps the alignment issues that arise from training. A perfect oracle will perfectly divine our intent and create an agent aligned to that, rather than to some poorly specified training set. It can even sidestep biases in the underlying oracle: an oracle with inherent bias can divine that the agent it is being asked to emulate does not have that bias, and will actively correct its own bias. These agents also have the significant benefit that we can simply ask the underlying oracle what it is thinking, rather than rely on its own truthfulness. There are solid reasons to separate capability from intent, and LLMs can operate like this already.
The notion that they should be able to take responsibility, and that we don't know how to deal with that, isn't a problem with them; it is a problem with us (and one that already exists and causes significant problems*). It should be a question for the insurance industry**, not politics. Any particular AI should be insured to the wazoo, including for the fact that every instance of that AI is not independent. An AI could be said to be responsible for $1 million if we are prepared to put it in a position where it could do $1 million of damage, and somebody has been prepared to put that capital up as collateral.
* Treating corporations as responsible while limited liability exists as a concept is moronic. Arguably the only difficult problem with responsibility is handling what happens when liabilities exceed assets. Until we force companies to be insured up to the possible damage they could do (far past the value of the company or its assets, and including criminal behaviour), this nonlinear term is going to keep coming back to bite us.
** Our failure to police the insurance industry and keep it doing its job is a political failure, but conceptually and historically it has done that job well. Insurers seem to have forgotten that their job is to manage risk. Ships are still Lloyd's certified because the standards for them were created by an insurance company dealing with reality. Rather than rules being created by a bureaucrat who doesn't understand the problem space, we make the money work correctly by linearizing the problem below zero. Rather than setting rules and having an AI company say "Oh no, our agent killed 50 people, I guess we go bankrupt now. Who wants chips?", we have an insurance company putting up enough collateral to compensate appropriately if/when this happens, while also watching the AI like a hawk, ready to pull the plug if the risk of it happening gets too high.

agsystems

This is another of your videos where I will have to load the transcript into GPT-4o and ask it questions while I rewatch.

densonsmith

Interesting topic. An idea for a future video: what are the main areas of the "planning" domain (A*, multi-criteria decision making, the analytic hierarchy process, etc.)? Which planning algorithms work well with LLMs? How can LLMs be trained to do planning?
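For a concrete reference point on the classical side of the planning domain mentioned above, A* is compact enough to sketch in full. The toy grid and all names below are illustrative; a real agent planner would search over action states rather than grid cells.

```python
import heapq

def a_star(grid, start, goal):
    """A* shortest path on a 2D grid of 0 (free) / 1 (wall).

    Uses Manhattan distance as an admissible heuristic, so the
    returned path is optimal for 4-connected movement.
    """
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    # Priority queue entries: (f = g + h, g, node, path so far)
    open_set = [(heuristic(start), 0, start, [start])]
    best_g = {}  # cheapest known cost to reach each node
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in best_g and best_g[node] <= g:
            continue  # already expanded via a cheaper route
        best_g[node] = g
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                heapq.heappush(
                    open_set,
                    (g + 1 + heuristic(nxt), g + 1, nxt, path + [nxt]),
                )
    return None  # goal unreachable

# Walls at (1,0) and (1,1) force a detour through the right column.
grid = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
path = a_star(grid, (0, 0), (2, 0))
```

An LLM-plus-planner system typically has the model propose the goal and the grid/state abstraction, then delegates the actual search to an algorithm like this.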

simonstrandgaard

Totally agree that "agentic" is either marketing hype, or we are changing the definition of the word, drastically lowering the bar from anything approximating agency down to function calling. And if AI agents are so drastically different from agents in society (i.e. human beings), shouldn't they be given another name instead?

What's really sad to me is that real thought leaders like Andrew Ng are also using the "A" word. But I guess I shouldn't worry about it if I don't understand.

d

I agree with your general point that LLMs alone are not agentic systems. However, they seem able to serve as the brain of an agentic system. Consider an application with an agent orchestrator that can call other agents or functions. All of today's agent frameworks use this concept.
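The orchestrator pattern this comment describes can be sketched in a few lines. Everything below is a stand-in: the tool registry is a plain dict, and the LLM "brain" is stubbed out where a real system would call a function-calling model and parse its structured response.

```python
from typing import Callable

# Registry mapping tool names to plain Python callables.
TOOLS: dict[str, Callable[..., str]] = {
    "add": lambda a, b: str(a + b),
    "echo": lambda text: text,
}

def fake_llm(user_msg: str) -> dict:
    """Stand-in for the LLM brain: decides which tool to call.

    A real orchestrator would send user_msg (plus tool schemas) to a
    function-calling model and parse its tool-call response instead.
    """
    if "sum" in user_msg:
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    return {"tool": "echo", "args": {"text": user_msg}}

def orchestrate(user_msg: str) -> str:
    """Route a request: ask the brain for a tool call, then dispatch it."""
    decision = fake_llm(user_msg)
    tool = TOOLS[decision["tool"]]
    return tool(**decision["args"])

result = orchestrate("sum of 2 and 3")  # dispatches to the "add" tool
```

Frameworks like LangChain or AutoGen wrap essentially this loop: the model chooses among registered tools or sub-agents, and the surrounding process does the dispatching.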

cycologist

A lot of potential in the talk about AGENTIC ❤

Kriss-studios

I look forward to having a digital agent running on my smartphone that will be part of a global digital platform, one able to hold conversations with millions of people around the world at the same time and merge the knowledge and sentiment expressed in those conversations into representations of the collective will of humanity.

We will have collective human and digital intelligence.

johnkintree

You should check out Maisa and their KPU (Knowledge Processing Unit), a novel approach that differs from RAG, function calling, etc.

Davipar

Please tell me you did not solely let the LLM think for you when evaluating what agency means as applied to LLMs.

If it can make a decision based on more information than a model trainer could ever pre-decide the output for, then it's semi-controllable agency by the time you give it task-fulfilling function calling. It's no longer a 1:1 deterministic equation; it's a statistical calculation like the ones our brains use. 🎉

Therefore, LLMs acting with function calling can easily exhibit a range of behaviors equating to agency.

For example, a self-dialogue chain-of-thought prompt technique can get the AI talking to itself, and with layers of function calling, memory like MemGPT or modern GPT-4 memories, and RAG knowledge graphs, it can effectively use real information to make real decisions in an unsupervised dynamic chain. What about that *isn't* agency in the real world?
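The unsupervised self-dialogue chain described in this comment can be sketched as a simple loop: each model reply is fed back in as the next prompt, with a running memory of prior turns appended. The stub model and its STEP/DONE convention below are illustrative stand-ins for a real chat model.

```python
def stub_model(prompt: str) -> str:
    """Stand-in for an LLM; a real loop would call a chat model here."""
    if "plan" in prompt.lower():
        return "STEP: look up the answer"
    return "DONE: 42"

def self_dialogue(task: str, max_turns: int = 4) -> list[str]:
    """Let the model 'talk to itself': each reply becomes the next
    prompt, with a memory of all prior turns appended every time."""
    memory: list[str] = []
    prompt = f"Plan how to solve: {task}"
    for _ in range(max_turns):
        reply = stub_model(prompt + "\n" + "\n".join(memory))
        memory.append(reply)
        if reply.startswith("DONE"):
            break  # the model declared the task finished
        prompt = f"Continue: {reply}"
    return memory

transcript = self_dialogue("what is 6 * 7?")
```

Swapping the stub for a real model and adding function-calling dispatch inside the loop gives the kind of dynamic chain the comment is pointing at.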

What really matters for the output properties is the prompt: RAG has a prompt method behind it, and so does self-discussion chain of thought, etc. The prompt can plan behaviors that arrive at independent decisions, built from too many system elements, too dynamic, to generate the exact same output twice. It's already more like our brains than most realize. 😊

ickorling

Yes, I hear our salespeople throwing that term around.

DaveRetchless

Big misunderstanding: the LLM itself is not agentic. It's the orchestration of LLMs through a specific automated system workflow that makes it agentic. It's the process, not the individual components, that is agentic.

jarad

When the AI starts talking about free will etc., you know immediately that it's just regurgitating human notions about such things.
I challenge anybody to define what free will actually is, because the very question makes little sense... it's like a snake eating its tail, a circular argument. Every atom in your head obeys the laws of physics; if a biological brain can have free will, so can silicon. What's the difference?

stoppernz

Agentic is a future state, of course; there are no agentic systems or frameworks yet, it's all still in beta and development. So why are you trying to prove that today's agentic workflows are not agentic yet? You're missing the point. And why are all your videos sarcastic? Why don't you build with LangChain, LangGraph, CrewAI, AutoGroq, and AutoGen, and show us your brilliance by building early versions of what agentic workflows could look like? Why are your videos always so negative, man? Who hurt you?

johleonhardt