'AI Scientist' Can Discover New Science! (Self-Improving AI = AGI)

preview_player
AI Scientist by Sakana AI is truly a glimpse into the future. If we can scale this up, it will almost certainly lead to AGI.

Join My Newsletter for Regular AI Updates 👇🏼

My Links 🔗

Need AI Consulting? 📈

Media/Sponsorship Inquiries ✅
Comments
Author

Do you think this is the beginning of AGI? If so, why? If not, why not?

matthew_berman

Bro, you have to stop these hype-beast, surface-level takes. This is a cool proof of concept. That's all it is. You forget the models used still have the same limitations: they still can't utilize full context properly, still can't reason backwards. There are still significant challenges. Until those are solved, this is just that: a proof of concept for future, more competent models. These takes aren't based in reality. We are not at the beginning. Did you even really read the paper, where they clearly lay out the reality? Not to mention recent empirical evidence clearly shows that even RAG is not as effective as initially thought in terms of actual information integration. Chill bro.

We are not near an intelligence explosion lol. Rather, if focus doesn't shift to the actual fundamental problems, we will plateau. Again, empirical evidence supports this. Do I believe transformers will one day power AGI? Yes. But we are nowhere close. This is almost delusional, bro. No disrespect. Love the content and the passion. I'm an actual researcher and can't stand unfounded hype.

alexanderbrown-dgsy

When you have AI inbreeding, you'll see a general collapse in AI progress. That's a real danger, given that we have run out of novel human-produced data. You never seem to talk about this; you get caught up in the hype.

akaalkripal

Yann LeCun said that we have entered a new Age of Enlightenment, similar to the Renaissance (da Vinci, Copernicus, Galilei, ...). History has shown that progress has always been accompanied by fears (printing press = heretics, electricity = diseases, vaccines = zombies, internet = unemployment, AI = Terminator, etc.), but regardless of what people say, each generation lives a thousand times better than the one before it. And regarding AGI, I am extremely optimistic.

fredanon

Give it six months or a year to see what it will discover.

cmiguel

*Self-improving AI = Singularity not AGI

frogfrog

Imagine checking through upcoming TED talks, finding an interesting-sounding one, and realising that the guest is your very own AI scientist that's been sitting in the corner of your lab quietly churning away, saying it's still working on things when you ask it for progress. In reality, it produced a paper, submitted it, eventually got invited to give a talk about it, replied to all the comms about the talk, and is on the verge of actually doing it.

makers_lab

It's a gosh dang fascinating time to be alive!
I think hyper-niche specialist models could become 'truth detectives' in their particular domains, paired with generalist models that can detect truncated truths or incomplete logic.
The more ways we can test our assumptions and bring clarity to nuance, the better.

geisty

Once AI can do independent, intellectually honest research, that will be a big step forward.

adamd

It has to be agentic as well, though. If it's still reliant on prompts, things won't take off until that changes.

TheWhiteWolf

automated peer review

that's the craziest thing I've ever heard

migah

I can't wait until we see what ASI cooks up in terms of new science. Truly unimaginable.

Interloper

Lmao Matt responding to trolls in the previous video's comments... NO IT'S NOT CLICKBAIT, with the zoom in 😂 0:06

Player-oznk

The potential for good and bad is almost limitless. I just can't wait to see what the future brings.

necnotv

I feel it is disingenuous to refer to AI peer-reviewing AI-generated papers as peer review. I don't see anything wrong with AI reviewing AI-generated work, but it is not really bringing new perspectives to the reviewing process. AI experiment design could be a problem too, if the experiments are not ethical and involve deceiving users to gather data.

vodkaman

Hallucinations are a design flaw in large language models that cannot be overcome without changing the architecture of the models themselves.

yuriykochetkov

I'm excited and scared at the same time, because sharing has not been our strong suit so far.

nufh

6:41 is grammatically correct. Read it again. It's tricky but still correct.

MojaveHigh

So it's just an LLM writing research papers on AI using text prediction (how all LLMs work) and then reading those same papers? Am I missing something? Sounds like an AI ouroboros of stupidity.

nemonomen

Thank you (!) for your Shorts thumbnails that have "Mathew Berman" on them. I don't watch Shorts, mostly because I can't tell who made them from the suggestions, and I like my clicks and views to be a conscious decision.

ayeco