AGI is COMING and OpenAI Knows How to Achieve it!

Sam Altman predicts AGI will arrive in 2025! He then explains that OpenAI knows how to get there and that there is a clear path toward ASI (Artificial Superintelligence).

0:00 - Intro
0:23 - Superintelligence in a few thousand days
1:15 - The path to ASI is clear
2:33 - New scaling paradigm with o1
3:00 - Is scaling slowing down?
3:51 - OpenAI knows how to achieve AGI
5:01 - OpenAI’s Levels of AGI progress
8:00 - Outro

Today’s Sources:
Sam Altman FULL Interview with Garry Tan, Y Combinator
Sam Altman "The Intelligence Age" blog post 🤯
The Information article on GPT scaling slowing down
Referenced tweets (or X posts, whatever):

#ainews #chatgpt #openai #samaltman
Comments

2025, sure - I'll believe it when I see it.

MetalGearMk

He said the GPT series progress was halted, not o1. He said o1 could be scaled larger for years.

Theforeveraloneguy

Great work, bro. Hope I can make my AI education YouTube channel soon. These channels are gonna be revolutionary when AI takes over the world. People are sleeping on AI and how impactful it's gonna be.

Sumi.Sol.

Not sure why people keep pushing this AGI idea so much when it's clear even regular narrow AI progress has stalled. No, it's not about just increasing the scale of computation. A completely different, non-LLM approach is needed to get to AGI. Let me give you an example of why there will be no AGI in 2026 or 2036.

LLMs have a problem of information. We can calculate that 2+2=4 manually, and we can say that we got that information from a teacher who taught us how to add numbers. If we use a calculator, the calculator got that information from an engineer who programmed it to add numbers. In both cases the information is being transferred from one place to another: from a person to another person, or from a person to a machine. How then is an LLM-based AGI supposed to solve problems we can't solve yet, if the researchers need to train it upfront? The researchers need to know the solution to the problem in advance in order to train the system. Clearly then, the LLM-based approach leads us to failure by default.
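
To make that transfer concrete, here's a toy Python sketch (purely illustrative): the only arithmetic the machine "knows" is what the engineer wrote down.

# A "calculator": the engineer's knowledge of addition,
# transferred person -> machine as code. The machine
# contributes no information of its own.
def add(a, b):
    return a + b  # the rule the engineer already knew

print(add(2, 2))  # 4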

Narrow AI is undoubtedly useful, but in order to reach AGI, we can't use the LLM-based approach at all. An AGI system needs to be able to solve problems and learn on its own in order to help us solve problems we aren't yet able to solve. An LLM-based AI system, on the other hand, is completely useless if it is not trained upfront for the specific task we want it to solve. It should then be clear that an LLM-based AGI system by definition can't help us solve problems we don't know how to solve yet, if we first have to train it to solve the problem. This is the Catch-22 of modern AI. I've been writing on this lately, but the amount of disinformation in this industry is staggering.
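
A minimal sketch of that Catch-22, again in Python (a lookup table is a deliberately crude stand-in for a trained model, but the dependence on upfront training data is the point):

# Toy "trained model": (question, answer) pairs supplied by
# researchers who already knew the answers.
training_data = {(1, 1): 2, (2, 2): 4, (3, 5): 8}

def trained_model(a, b):
    # It can only answer what it was trained on.
    return training_data.get((a, b), "unknown")

print(trained_model(2, 2))  # 4 -- seen during training
print(trained_model(7, 9))  # "unknown" -- never in the training data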

immmersive

But this is old news. We know that the rate of improvement of the past 3 years has indeed come to an end.

Rami_Elkady

Is he drumming up hype? Hell yes. Is it completely without merit? No. Progress is being made by multiple parties, closed and open source. And the progress could very well be compounding. But predictions (not theory based predictions that are falsifiable but the kind people throw out for fun and profit) are like opinions: everyone has one and they aren’t worth as much as the owner values them at.

sullyguy

Hey all, my 2 cents.

To get a sense of how far we are from AGI…

“We want a computer to have a thought just as smart as a regular person's. I also don’t need the thought quickly. I don’t need it until next year. Normally a worker would spend an hour thinking about how to solve this problem, but I will give the computer 8,760 hours to do the same thinking. I’ll wait…”
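
(The arithmetic behind that number, if anyone wants to check it in Python:)

hours_per_year = 365 * 24  # 8760 hours in a (non-leap) year
worker_hours = 1           # the human's time budget
print(hours_per_year // worker_hours)  # the computer gets 8760x the time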

In one year, even the Earth's most powerful supercomputers can't create a single minute's worth of human intelligence.

This is 100% a fact.

I bring this up because it’s not a processing power issue. It’s a “we have no idea how to do this” issue.

And “NO!” current AI can't comb through all human knowledge for a solution. We haven't solved it. There is no book to read. Nothing to synthesize. There is not an answer to AGI on this planet.

homebrewfeverdreams

I have a feeling this is all hype. Not much substance. From my (very much consumer) experience, they are glorified search engines. My analogy: they are not quite as smart as a parrot. A parrot can repeat words. AI can recall a sentence structure, but not really understand what the sentence means. A parrot can actually initiate a conversation to obtain a desired outcome: "Scratch Cocky?" "Polly wants a cracker!"

I remain very skeptical of the hype.

NinjaLifeCrisis

OpenAI should be shut down for the kinds of shit they have done and are doing.

Neoprenesiren

AI lacks imagination. It will never have it. Ever. The notion that it will invent new things out of thin air to solve our problems is never going to happen. AI is an amazing new form of intelligence. It is a great synthesizer of ideas and it can theoretically apply knowledge across domains, but it does not have the human capacity for imagination and it never will. I own the paid version of ChatGPT and I use it for hours every day to solve problems. It is a great sounding board and knowledge base, and even a life coach, but it is incapable of coming up with anything new that it doesn't already have in its language model. Their latest approach is to have ChatGPT first consider reasoning and then dip back into its model repeatedly for answers, but don't confuse reasoning with imagination. So if we think AGI includes imagination, it ain't gonna happen. Ever.
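
For what it's worth, that "reason first, then answer" loop looks roughly like this Python sketch (generate is a hypothetical stand-in for any model call, not OpenAI's actual method):

# Hypothetical model call -- a stand-in for any LLM completion API.
def generate(prompt):
    return "..."  # placeholder model output

def reason_then_answer(question, steps=3):
    scratchpad = ""
    for i in range(steps):
        # Dip back into the model repeatedly, accumulating
        # intermediate reasoning before committing to an answer.
        thought = generate("Question: " + question +
                           "\nReasoning so far:" + scratchpad +
                           "\nNext step:")
        scratchpad += "\nStep " + str(i + 1) + ": " + thought
    # Final pass conditions the answer on the accumulated reasoning.
    return generate("Question: " + question +
                    "\nReasoning:" + scratchpad +
                    "\nFinal answer:")

print(reason_then_answer("What is 2+2?"))

Note that every step only recombines what the model already produces; the loop searches, it doesn't imagine.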

SeanForeman

Nah, not even close.
The best current LLM on the planet (GPT-5) isn't even capable of basic reasoning. Not even the tiniest bit of basic reasoning. We are absolutely nowhere close to AGI.

HikikoAmore