2024's Biggest Breakthroughs in Computer Science


0:04 - Can Large Language Models Understand?
Are chatbots "stochastic parrots"? A new evaluation called Skill Mix suggests that the biggest large language models seem to learn enough skills to understand the words they’re processing.

6:14 - Hamiltonian Learning Algorithm
After years of false starts, a team of computer scientists has found a way to efficiently deduce the Hamiltonian of a quantum system at any constant temperature.

Comments

these videos are like society patch notes

yasne

Are we going to ignore the animations? You'd better pay your animators, because illustrating these complex concepts is incredible work.

atHomeNYC

I love this channel; this is more exciting than Spotify Wrapped.

kenanabzd

No one remembered Kolmogorov–Arnold Networks? LLMs are cool and all (though not my favourite/specialty area), but I'd love to see people pay more attention to the scientific machine learning field (e.g. physics-informed models, interpretable models like KANs). I kind of feel like so many other fascinating uses of machine learning are being outshined by LLMs nowadays...

tghuy

No mention of BB(5) being proven? That’s perhaps the biggest CS news of the year, especially since there’s a fairly good chance we’ll never again prove a busy beaver number for a two-symbol Turing machine.

michaelpeeler

For the people in the back: Computer Science is not programming.

It is way more. It is algorithms, artificial intelligence, quantum computing, computer graphics, etc.

ai_outline

Not a single link to the actual paper discussed, just a link to your article that also has no links to the paper.

Nex_Addo

When researchers are so obsessed with AI and quantum, I believe it leaves much less incentive for more classical/pragmatic computer science. If you've worked in industry, you know there are so many systems duct-taped together, with so much bloat, that it's more cost-effective to just add new code than to touch the old. The fact that programs require thousands of times more space to run than they did years ago, but are not generally thousands of times faster, is genuinely disappointing. And the fact that systems like the world's COBOL-based banking infrastructure will soon have practically no one with sufficient experience to maintain them is worrisome. A genuine solution for managing complexity and safely transpiling an entire legacy system into a modern one is a pressing need, but I think academics all want to be the next Turing or Church rather than solve a more down-to-earth but important problem.

EvanMildenberger

AI company employee publishes a paper that "proves" its LLM is even more awesome than we all thought it was. Solid.

joaoedu

At 7:31 the chess board in the background isn't set up correctly! The color complexes are flipped, i.e. the board is rotated 90° from what it should be!

Sheldonsheldon

"Minimize the training loss... that is called Emergence 💫"

Bruh, anyone who's actually trained a model understands that's just hype-talk.

gamalalejandroabdulsalam

Is the "skill-mix" paper we are talking about in the first part? I don't see why this paper is a breakthrough, much less the biggest breakthrough this year. BTW, where exactly is the "mathematically provable argument" presented in the paper? You cannot simply call a paper with equations theoretical.

garyz

A question for somebody who understands these topics, because I struggle to understand:

The researchers made a bipartite graph, with one part being skills. The flaw I see is: how can you prove the skills are different/not equivalent? You need to prove this for it to work.

AlfredoMahns

For those curious, the really important stuff starts at 9:05.

ccriztoff

Try asking LLMs about alternative interpretations of sequences in the OEIS (Online Encyclopedia of Integer Sequences), or to summarize a given sequence. They cannot even match the proper ID with the proper sequence description (in my case, sequence ID A000371). In other words, they cannot even regurgitate what is already in the OEIS comments area. The alternative interpretations - if they can come up with anything at all - sound authoritative but are completely WRONG! LLMs are authoritative-sounding noise.

mtw

The REAL list: o1 (now o3), quantum computing breakthroughs, Veo 2, successful diffusion world models, ARC challenge success, and robotics advances starting to work with ML. I would be curious to see one of these videos on causality and ML research.

DistortedV

Can we get a list of the biggest breakthroughs in computer science without mentioning AI? AI should have a separate video.

Tau-qrf

Man, this is what I got a degree in CS for, not *shudders* working in Salesforce.

kevinmilner

This video is unbelievably fantastic. Can I just point out one small thing: at 7:32, on the chessboard in the background, the king and queen are on the wrong-coloured squares. I think the board was set up the wrong way round, since normally for White the king is to the right of the queen, on a black square. In this case it is on a white square.

OwenWang-gc

Computer science may be more fundamental to the universe than we know!

ai_outline