AI is Slowing Down! What does this mean? — Gary Marcus and Narrowing Status Games — Follow the Money

Comments

Why should we assume that we need AGI for mass job displacement and UBI?
It can happen even without AGI.

jamesjonathan

lol this guy went from "AGI is here, get ready for Terminator" to "AI is slowing down" in like 2 months.

pier-oliviermarquis

I still believe that the effect of current technology hasn't been felt yet. It's going to take time for us to build it into our tools properly.

strangeguywithstrangeopinions

Too bad. I want AI to turn the world and my life completely upside down as quickly as possible.

Andrew-

What is disturbing is that even without ASI we can build excellent killing robots with the current technological skill set. Before reaching AGI it would be nice to learn how to cooperate rather than beat each other down. The whole of humanity would be better off.

atomcatINorge

When you are fast enough to escape the atmosphere, even if you are no longer accelerating, you're still escaping the atmosphere.

pandereodium

I think it’s clear from the arrival of the NSA at OpenAI that a deliberate slowdown is in effect. We’ll get a controlled rollout from this point on.

iamjohnbuckley

Saying that we can't have AGI because the human brain works a certain way is like saying we can't build supersonic planes because bird wings can't flap fast enough to sustain supersonic flight.

The way machine learning and human brains work is different; sure, some scientists have drawn comparisons between the two, but they aren't at all the same. The question isn't whether we can replicate the mechanisms of the human brain in silicon. The question is simply whether we can build machines that perform as well as humans on a large enough number of tasks, and moreover, whether those machines can build the next generation of machines that perform even better. How AI achieves task performance is completely secondary; the important thing isn't the "how" but the "if". If it quacks like a duck....

RonyPlayer

As someone who's watching what tools people are coming out with, dumpster-diving through the code, analyzing the shortcomings, playing around with them to figure out their strengths and weaknesses, and helping companies take advantage of what's out there, I'd say: if things are slowing down, then we are not even remotely in a position to determine that. The algorithmic overhang is so overwhelmingly large at this point that I wouldn't be surprised if modern techniques applied to GPT-3 could surpass GPT-5.

chadwick

Just had to come back to this video after the release of o3. I've been saying it for years, and I've been right for years. "THERE WILL NEVER BE AN AI WINTER"

justinwescott

As someone who worked retail for a few years in the late 2010s, I'm not sure that 20% of humans would pass any given AGI test. It's easy to forget how disturbingly dumb some people are when you aren't often exposed to the pool that contains literally everyone.
It will be hard to displace most human jobs, but the bottom 20% / low hanging fruit could be done with what we have _today_ imo.

gubzs

But I'm still getting my cat girl robot waifu right?😢

ArmoredAnubis

I liken things to waking AI from a dream state. Once the AI is lucid (enough) to meaningfully self improve from its own work and agency, then the hockey stick takes off.

It'll be like going from zero to one and everything prior will seem slow.

TRXST.ISSUES

AGI's arrival is inevitable; I have a low care factor for its exact "birthday". The important question is: when will we achieve biological immortality or, at minimum, Longevity Escape Velocity?

robertlipka

Woohoo! Please keep sharing this message so that we stealth startups can build a bigger lead.

AGI pacing hasn't changed. Sonnet 3.5-level intelligence with fine-tuning would be enough. Everything after that is gravy.

The trillion-dollar clusters will take a while. But they're not needed for autonomy and agency, especially within niche domains.

Which is also where the money is at.

The next generation models (this fall) will be more than sufficient for what 99% of people would call AGI.

Altman's idea that it has to beat 250 researchers to be AGI is just him trying to protect that bag from Microsoft.

It's all about the agent communications. The networking layer is where people are underestimating the intersection of exponential gains. You know this better than anybody!

I'm glad you have the Spock uniform on again. Your inventions along this journey have shaped the industry. Thanks for all that you do brother 🙏🏾💜

andydataguy

Any perceived slowdown is an artifact of the “haves” choosing to gate how quickly they release their tech to the public. They are incentivized to stretch the game out as long as possible.

ngbrother

I didn't think we'd get AGI this year, but I (and many others) expected GPT-5 by now, and I think that's the underlying factor people are using to say AI is slowing down. Claude's success is a great sign that things are still moving forward, but everyone is waiting for GPT-5. The improved reasoning / System 2 thinking is what's going to enable a step-function improvement in the next-gen model. We don't really need a true AGI, just something that can reason and brainstorm with us at a very high level to help us think of the next breakthrough idea.

Hopefully the rumors are true and OpenAI is just waiting for the elections to be over and we'll get it (GPT-5) by new years 2025.

magicsmoke

Too bad they got rid of showing how many people dislike a video.

dadehaxr

The truth is that the current growth in LLMs was a gamble: throw enough computation and data at them, and something magical might happen that gives us cognition, even though we don't understand it. The longer, harder route is that we first need to understand what cognition is before we can start to make machines that approximate it. The dirty secret is that it was just a hope that scale would solve the problem without us understanding it. It turns out that taking a chance on math doesn't always work out. You need to do the hard work of understanding the problem.

BradleyKieser

In my opinion it's not so much that AI is slowing down, it's that we've reached the limits of what we can do with "brute force".
Just adding more parameters isn't going to cut it anymore, so now we need to get creative to do more with less.
Stable Diffusion is an excellent example of this: since Stability could only release models every so often, the open-source community went to hell and back in the meantime to build tools that compensate for its shortcomings.
So until the next explosion (likely adding search into the models), we'll be refining the tools to use and train them.

viddarkking