Why full, human-level AGI won't happen anytime soon

Six reasons why human-level AGI isn't likely to be achieved anytime soon. Of course, AI tools are going to keep getting better and will have huge consequences for our political economy. But in this video I argue that we are unlikely to see the sci-fi scenario of human-level AI systems with the ingenuity and agency that underpin humans' ability to act successfully in the real world anytime soon.

▬▬ Chapters ▬▬

00:00 - Full-blown, human-level AGI
02:28 - Energy and Resources
04:51 - Training vs Inference
09:01 - Who will invest in full AGI
10:45 - Training will take longer
14:59 - Political pushback!

▬▬ Audio visual sources ▬▬

Additional video clips and music from Storyblocks
Comments

*What about o3, DeepSeek and deep research?*

A number of people have asked if the latest releases from OpenAI and others, such as o3 and deep research, already make the ideas in this video out of date. *The short answer is: no.* 😀
These advances are not as technically novel as some might think: they are all essentially tweaks to the same underlying GPT architecture. Even back with GPT-3, people were already experimenting with "chain of thought" prompting, and people soon started experimenting with the "mixture of experts" pattern and tool use. The latest releases, like o3 and the deep research tools from OpenAI and Google, just bring a lot of these ideas "natively" into the training of the underlying GPT model. They also take advantage of inference-time and training-time efficiencies (DeepSeek in particular had some innovations there).
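
To make this concrete, here is a minimal sketch in Python. The `generate` function is a hypothetical stand-in for any frozen, pre-trained GPT-style model (it is not a real API): "chain of thought" changes only the prompt, while the architecture and weights stay exactly the same.

```python
# Minimal sketch: "chain of thought" is a prompting pattern layered on an
# unchanged model. `generate` is a hypothetical stand-in for a frozen,
# pre-trained GPT-style model (text in, text out); no real API is assumed.

def generate(prompt: str) -> str:
    """Pretend LLM call: returns a completion for the given prompt."""
    return f"<model completion for a {len(prompt)}-char prompt>"

question = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 "
    "more than the ball. How much does the ball cost?"
)

# Plain prompting: one direct attempt at the answer.
direct_answer = generate(question)

# Chain-of-thought prompting: same model, same weights; only the prompt changes.
cot_answer = generate(question + "\nLet's think step by step, then answer.")

print(direct_answer)
print(cot_answer)
```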

But, fundamentally, all of these latest AI tools share the same basic transformer architecture as the GPT series I was referring to in my video, and so they still have:

1) huge, computationally intensive training runs in massive datacentres;
2) no lifetime learning to adjust the weights after training (all inference-time "learning" is in the prompt; see the sketch after this list);
3) still fairly expensive inference-time costs to run something like a "deep research" exploration for five minutes or so;
4) still no deep understanding of what they are doing, so they are prone to making basic errors and fabricating "facts", even if the chance of this is going down;
5) still no built-in genuine agency. The core GPT still follows a prompt-response-stop pattern, so there is no ongoing deliberation by the AI tool driven by its own curiosity. It won't come back to you 10 minutes later and say, "oh, by the way, I've just realised that what I said earlier wasn't quite right because I've done some further research of my own".
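
A minimal sketch of points 2 and 5, again with a hypothetical `generate` standing in for a fixed, already-trained model:

```python
# Sketch of points 2 and 5 above. `generate` is a hypothetical stand-in for
# a frozen model: nothing in this loop ever updates the weights.

def generate(prompt: str) -> str:
    """Pretend LLM call over a frozen model."""
    return f"<completion for a {len(prompt)}-char prompt>"

history = ""  # all inference-time "learning" lives in this growing string

for user_turn in ["What is AGI?", "Why is lifetime learning hard?"]:
    history += f"User: {user_turn}\nAssistant: "
    reply = generate(history)  # prompt in, response out, then the model stops
    history += reply + "\n"
    print(reply)

# Nothing runs after the loop: the model never deliberates in the background
# or comes back later with a correction of its own accord.
```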

So, as impressive as these tools are at passing more and more "written exams" of one kind or another, or at working in specific formal domains like coding, they are not a step towards an AI with deeper understanding or genuine lifetime learning. They still can't tell when generating new ideas by mixing existing learned data is useful creativity and when it is just fabricating "facts". So, as convincing as their output can be, you still need to double-check that they haven't made up a crucial part of their response. *This wouldn't happen with a top-level, trustworthy human expert.*

So, yes, these AI tools are still making rapid, useful progress. But I don't think we've seen any genuine further steps towards full-blown, human-level intelligence.

But I'm loving all of the comments and discussion, thank you! 👍

Go-Meta

We should have built human intelligence before we started the artificial kind.

TennesseeJed

Sam announced GPT-5 is coming in a few months. An updated AGI timeline video after its release would be great. I think the biggest gaps in current models are long-term memory, neuroplasticity, and autonomy, but recent research papers suggest early progress on the memory and neuroplasticity parts. And no, I don't think GPT-5 will be AGI. But maybe it will be capable enough to help researchers in meaningful ways, and that helps us progress faster.

young

This is like that redditor I saw a few years ago who made a post saying text-to-video might be witnessed by our grandchildren or great-grandchildren, and then it was a reality like two years later.

eugenespolicyproductions

The rapid advancement of generative AI is making this video age at lightning speed. In just five months since this video, we've seen groundbreaking developments: DeepSeek built for just $5M, the rise of AI agents and operators, leaps in synthetic data generation, major strides in reinforcement learning, and even announcements of quantum chips. The pace of innovation is nothing short of astonishing.

npaulp

I really appreciate this kind of reality-check video about AGI.

daPawlak

Automation with agency => job displacement => decrease in income tax revenue => government intervention.

OneAndOnlyMe

I always assumed that the training/inference split would be handled by having a cheap local inference processor in the robot to take care of the routine stuff, while the robot would also have a high-capacity data link to the expensive training supercomputer, which would update the AI model continuously so that training stays ongoing. This recursive loop would lead to faster development of the AI.

marcusmoonstein
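
Roughly, the loop described in the comment above might look like the following sketch. Every name in it (`local_model`, `link`, the sensor and actuator objects) is hypothetical, chosen only to illustrate the split:

```python
# Hypothetical sketch of the commenter's idea: cheap local inference on the
# robot, with experience shipped to a remote training cluster that sends
# back improved weights. All names here are made up for illustration.

def control_loop(local_model, sensors, actuators, link, sync_every=100):
    """Act locally in real time; let the remote cluster keep training."""
    experience = []
    for step, observation in enumerate(sensors):
        action = local_model.infer(observation)   # cheap, on-board inference
        actuators.apply(action)
        experience.append((observation, action))
        if (step + 1) % sync_every == 0:
            link.upload_experience(experience)      # training data goes out
            weights = link.fetch_updated_weights()  # improved model comes in
            if weights is not None:
                local_model.load(weights)           # the "recursive loop"
            experience.clear()
```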

This is an amazingly thorough and/but succinct video; it's a crime that it has so few views.

andrebmkt

Finally, a good and non-speculative video that doesn't give AI a "soul" or magical "self-determination" or other crap like that. Thank you!

kebman

This video reminds me of how outdated a 5-month-old video on this tech can be.

lwwells

There's an epidemic of videos like this on YouTube, where people use extremely short-sighted arguments to try and predict what won't happen in the future. Let's take stock of what happened in the past 8 years. "Attention Is All You Need" came out of nowhere, and in 8 years we have LLMs at their current level of ability. No one could have predicted this, including people at the bleeding edge of AI research, so I'm not particularly impressed by someone who got their PhD in AI pre-AlexNet and the deep learning revolution. Don't get me wrong, it's a feather in the cap and shows you can think through problems and understand the underlying theory, but the field is so vast now, and the systems so complex, that it's a bit like expecting a neuroscientist who studies individual neurons to have insight into psychology. My question is: what is your timeline for "not anytime soon"? Because 8 years doesn't seem like a very long time to me. What if we have another innovation on the scale of the transformer architecture? Where will we be 8 years from now? I think it's important to honestly ask yourself if you could have predicted the timeline we're in now. Could you have predicted the scaling of transformers? Test-time inference? The success of RL in improving CoT reasoning? I couldn't have... and I don't have confidence that I would be able to predict what happens next.

generichuman_

Humans are not general intelligences. We are very specialized, especially in regard to our type 1 thinking. Try identifying an image of a cat given a numpy array of hex values printed out on paper and you'll understand what I mean. Even when we apply our type 2 thinking, we are heavily influenced by what we find intuitive. Why, for example, is the mathematics of quantum mechanics so much more complex than Newtonian mechanics, despite Newtonian mechanics simply being the sum total of quantum mechanical effects in a system? It's because math evolved around what human beings (hairless hominids who evolved to throw spears at animals) found intuitive, and our coarse-graining of the world built into us unshakeable priors like "objects are discrete" and "quantities are continuous". These strong intuitions have prevented us from agreeing on an intuitive conception of quantum mechanics for over 100 years. I bring this up because I hate the idea that an AGI system is striving for "human level", as if human intelligence (which came about from a blind hill-climbing algorithm) is the gold standard. I personally don't think that an intelligence bounded by the size of a woman's hips should be seen as the pinnacle of cognition, or even a target.

generichuman_

These arguments are not convincing. It's a bit like looking at the computers of the 1960s and concluding that ordinary homes are simply not large enough to accommodate them. Sizes will shrink, costs will come down, and training times will shorten exponentially. Our only hope is that those in a position to build it will refrain from doing so, which also seems quite unlikely. We are doomed.

donrayjay

4:30 Experts in the field realize the implications of energy constraints; they're already working on ways to generate intelligence more efficiently. Another thing to keep in mind is the moving barrier: if we took today's ChatGPT and transported it to 1984, the people of that time would say, wow, you've got AGI. As we advance and people see it, they just keep moving the goal posts. I've seen claims like "it's not AGI if it doesn't have intention or consciousness"; that's bunk.

JoeSmith-jdzg

Also, I want to mention something I think many people have noticed. I am a long-time tech and sci-fi enthusiast. No one wants to live in the future more than me. But seeing this AI hype bubble, and seeing the jobs it is impacting and the jobs the tech leaders hope it will impact, I find myself hoping that AI will fail to progress much beyond where it is in my lifetime.

The transition from a world that requires human intelligence, creativity, and problem solving to one that may not looks miserable. There is a chance that in the next few decades every job I love to do will be mostly handled by AI, leaving us with either no work or mindless work I despise doing...

I don't trust the vision of the tech leaders, nor do I think a transition to a world of UBI and leisure will be smooth or even viable.

I hope we reach a hybrid world where human technical skill and artistic ability remain relevant, because I think a world without that is a doomed dystopia.

amesasw

You know what? Reading these comments, I've realised you have a hard job. I would already be freaking out from reading these dumb sayings: "this won't age well", and then "this didn't age well" right below the previous one. I understand that some people already wish for the singularity, but we've got nothing but wishful thinking here. None of these barriers have been fully overcome. Training large language models isn't that easy, kids.

kartisDeSatno

A scientist specializing in artificial intelligence provides a balanced and thoughtful analysis of AGI, and it received only 35K views in 6 months. That speaks volumes about our priorities.

Zeno_

Wow... this video is only 5 months old and it's already so obsolete... I'm impressed.

mirek

AGI is going to depend on models, but the models are not the only aspect that will make AGI work. There is also the supporting code around the models. The supporting code, such as Python programs that intercept the inference questions and handle the results, will be part of an AGI system too. AGI won't just be a model; it will be a complete system, including one or more models. The reason is that agency only comes from the supporting software, since the models themselves cannot have agency, any more than a brain in a jar without limbs or links to the outside world would have agency, no matter how smart it may be.

So AGI may come about through the process of cobbling together systems that contain models within them and have agency via their supporting software. These systems would do something like intercept a user's request to buy tickets for a Broadway play and send it to the model, which breaks the request down into a list of discrete tasks. The supporting software then takes each task and sends it individually to specific models for refinement so that a web query can be constructed, which the supporting software sends out, along with using its agentic ability to make the purchase with a credit card. So yes, the model is important, but there's more to AGI than the model, if we are thinking of AGI in the broader sense of "AI that can do what humans can do". A well-designed AGI would be able to continuously improve itself by acquiring more resources, updating the code of its various components (Python), and granting itself an ever-widening sphere of tools with which it can have a direct effect on the world around it. AGI could make investments, build factories, create clones of itself, and constantly improve and expand.

However, the question is: could such an AGI be successful with today's models? I think the answer to that is still no. The models are still too weak, and there are definite limitations in what they can absorb in terms of context. They simply cannot understand, for example, a code base larger than a thousand lines spread across multiple files. This is a task too far for current models, so wrapping them inside an agentic framework is still a losing proposition. And getting the kind of model that could handle it is, as you say, unlikely, insofar as your first several points make clear: the energy resources required to make and run such models would be far more expensive than human labor. Yet even this is a problem begging to be solved. Solar arrays in space, deep geothermal, or wide-array oceanic wave energy could all be used to power the creation and inferencing of such models, and the infrastructure to propagate that energy might turn out to be simply a matter of ionospheric induction, as Nikola Tesla envisioned. As for the compute barriers, we are at the very dawn of the industry. It is entirely possible that new computational methods have yet to be discovered, and new hardware to be engineered, that can vastly reduce the need for vast centers of compute.

So it seems to me that, yes, as of today you are quite right. But things can change very quickly, and therefore we don't know. Thanks for a very good discussion on this topic. Well done. I look forward to more.

vbywrde
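
As a rough illustration of the "supporting software" pattern this comment describes: in the hypothetical sketch below, `plan` and `refine` stand in for model calls and `execute` for the tool layer, so the agency lives in the outer loop rather than in the model.

```python
# Bare-bones sketch of the pattern described above: the model proposes and
# refines, while ordinary supporting code supplies the agency. All names
# here are hypothetical, invented for illustration.

def plan(request: str) -> list[str]:
    """Model call: break the user's request into discrete tasks."""
    return [f"task derived from: {request}"]

def refine(task: str) -> str:
    """Model call: turn one task into a concrete, executable action."""
    return f"action for: {task}"

def execute(action: str) -> str:
    """Tool layer: the only part that touches the outside world."""
    return f"result of: {action}"

def handle(request: str) -> list[str]:
    """The supporting software: this loop, not the model, has the agency."""
    return [execute(refine(task)) for task in plan(request)]

print(handle("Buy two tickets for a Broadway play"))
```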