I Read The Top 10 AI Research Papers of 2024


In this video, I will be going through (more than) 10 of the most-cited AI/ML research papers of 2024.

My AI papers newsletter

Check out my Patreon for the full list

This video is supported by the kind Patrons & YouTube Members:
🙏Andrew Lescelius, Ben Shaener, Chris LeDoux, Miguilim, Deagan, FiFaŁ, Robert Zawiasa, Marcelo Ferreira, Owen Ingraham, Daddy Wen, Tony Jimenez, Panther Modern, Jake Disco, Demilson Quintao, Penumbraa, Shuhong Chen, Hongbo Men, happi nyuu nyaa, Carol Lo, Mose Sakashita, Miguel, Bandera, Gennaro Schiano, gunwoo, Ravid Freedman, Mert Seftali, Mrityunjay, Richárd Nagyfi, Timo Steiner, Henrik G Sundt, projectAnthony, Brigham Hall, Kyle Hudson, Kalila, Jef Come, Jvari Williams, Tien Tien, BIll Mangrum, owned, Janne Kytölä, SO, Richárd Nagyfi, Hector, Drexon, Claxvii 177th, Inferencer, Michael Brenner, Akkusativ, Oleg Wock, FantomBloth, Thipok Tham, Clayton Ford, Theo, Handenon, Diego Silva, mayssam, Kadhai Pesalam, Tim Schulz, jiye, Anushka, Henrik Sundt, Julian Aßmann, Raffay Rana, Thomas Lin, Sid_Cypher, Mark Buckler, Kevin Tai, NO U, Gonzalo Fidalgo, Igor Alvarez

[Music] massobeats - honey jam
[Video Editor] @Booga04

[Bitcoin (BTC)] 3JFMJQVGXNA2HJE5V9qCwLiqy6wHY9Vhdx
[Ethereum (ETH)] 0x3d784F55E0bE5f35c1566B2E014598C0f354f190
[Litecoin (LTC)] MGHnqALjyU2W6NuJSSW9fTWV4dcHfwHZd7
[Bitcoin Cash (BCH)] 1LkyGfzHxnSfqMF8tN7ZGDwUTyBB6vcii9
[Solana (SOL)] 6XyMCEdVhtxJQRjMKgUJaySL8cGoBPzzA2NPDMPfVkKN
Comments

Meta doing "Open"AI's job is still kinda surprising to me, lol

Happness

The reason the technical reports are the most cited is that every time you use the models in your own research, you reference the technical report. So with 23k published papers, of course the technical reports will be at the top.

ichbin

You need to either divide citations by the time the paper has been out, or make a graph of citations over time where the day each paper was released is shifted to the same place on the x-axis. Then you would be able to see which papers grew the fastest.
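A rough Python sketch of that idea (the paper names, dates, and citation counts below are made-up placeholders, not figures from the video):

```python
# Sketch: normalize raw citation counts by days since release, and plot
# citation growth curves shifted so every paper's release lands at day 0.
# All names and numbers here are placeholder values for illustration.
from datetime import date
import matplotlib.pyplot as plt

today = date(2024, 12, 31)

papers = {
    # name: (release date, total citations as of `today`)
    "Paper A": (date(2024, 2, 1), 900),
    "Paper B": (date(2024, 9, 1), 300),
}

# 1) Citations per day since release
for name, (released, cites) in papers.items():
    days_out = (today - released).days
    print(f"{name}: {cites / days_out:.2f} citations/day over {days_out} days")

# 2) Growth curves aligned at release (needs dated citation snapshots)
snapshots = {
    "Paper A": [(30, 50), (120, 400), (330, 900)],  # (days since release, citations)
    "Paper B": [(30, 80), (90, 220), (120, 300)],
}
for name, points in snapshots.items():
    xs, ys = zip(*points)
    plt.plot(xs, ys, marker="o", label=name)
plt.xlabel("Days since release")
plt.ylabel("Citations")
plt.legend()
plt.show()
```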

IAGIC

ByCloud with the amazing AI analysis videos... can't wait to see what's in store for your channel and AI as a whole in 2025

LeBeautiful

Thank you. I have been learning about LLMs in general. This video helped me a lot!

김인영-qx

Just barely missed Meta's new paper (Byte Latent Transformer), which seems like it'll change stuff a lot in the next year. Also I'm very surprised nGPT isn't here.

minecraftermad

9:55 these are distribution graphs, so it's showing that there is variance in the accuracy rather than showing that the accuracy is deteriorating

moomanchicken

Can you cover Meta's Byte Latent Transformer and Coconut (Training Models to Reason in a Continuous Latent Space)?

noctarin

I hope you make a video on Byte Latent Transformers and Large Concept Models, both from Meta (THE GOAT). These two imo are complete gamechangers!

CantoTheDegenerate

Very interesting... I wish to know what the future of the AI/LLM space is going to be. We know that scaling transformers is giving diminishing returns, as seen by top AI labs like OpenAI, Meta, Google, etc., so I wonder which of these techniques will be the next big thing that we scale to go further. Will it be Mamba, or KAN, or maybe diffusion LMs? Who knows, only time will tell...

npc

I wonder in how many papers ChatGPT is a ghostwriter...

RedOneM

How to sort these papers by citation numbers?
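One way to do it, sketched in Python against the public Semantic Scholar Graph API (the endpoint and citationCount field are my assumption of how you'd pull the numbers, not something shown in the video; Google Scholar works too but has no official API):

```python
# Sketch: fetch 2024 papers matching a query from Semantic Scholar and
# sort them by citation count (highest first).
import requests

def top_cited(query: str, year: int = 2024, limit: int = 100):
    resp = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search",
        params={"query": query, "year": year, "limit": limit,
                "fields": "title,citationCount"},
        timeout=30,
    )
    resp.raise_for_status()
    papers = resp.json().get("data", [])
    # Sort client-side by citation count
    return sorted(papers, key=lambda p: p.get("citationCount") or 0, reverse=True)

for p in top_cited("large language models")[:10]:
    print(f'{p.get("citationCount", 0):>6}  {p["title"]}')
```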

XiangyuChen-tq

Do you think a Llama 3.3 7B model will be released?

BertVerhelst

Do any papers from November (or December at this point) even have any citations yet? I mean, someone has to read the paper and then write and publish a paper of their own for a citation to exist... how much can a paper be worth if it was farted out in less than a month?

Steamrick

I just found a paper from Meta AI about Large Concept Models.

I'm still a layman but it sounded very promising for coherence and energy consumption.
So far it works with text-to-concept and speech-to-concept encoders and a concept-to-text decoder, but I think it could work with other modalities (e.g. video) too, if you make encoders/decoders for that.

I can't explain it. Just read it for yourself

badizzl

Where are the weekly banger research posts in the community tab though? I miss them

shadowoftokyo

Have improvements in pure CV models plateaued? Or are we just not noticing because LLMs are all everyone's been talking about for the past 2 years?

callmebiz

Pretty clear that transformers dominated this year. I'm curious to see the most cited papers in other fields like diffusion or RL. After all, the biggest breakthroughs usually come where not everyone is looking.

myliu

Noice video, but you should normalize the citations to citations per day.

ddoice

"AI and ML" bro it is only NLP in there, or NLP-related paper analysis, maybe with some twist of generating images Xd

Neuroszima