Biggest Breakthroughs in Computer Science: 2023

Quanta Magazine’s computer science coverage in 2023 included progress on new approaches to artificial intelligence, a fundamental advance on a seminal quantum computing algorithm, and emergent behavior in large language models.

00:05 Vector-Driven AI
As powerful as AI has become, the artificial neural networks that underpin most modern systems share two flaws: They require tremendous resources to train and operate, and it’s too easy for them to become inscrutable black boxes. Researchers have developed a new approach, called hyperdimensional computing, that is more versatile: its computations are far more efficient, and it gives researchers greater insight into a model’s reasoning.
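
Hyperdimensional computing is easiest to see in a small example. The sketch below is a minimal illustration of the general idea (not the specific systems in the video), using the common bipolar-vector formulation: each symbol is a random 10,000-dimensional ±1 vector, a role is "bound" to a value by elementwise multiplication, and several bound pairs are "bundled" into one vector that can still be queried. All names and the toy record are illustrative.

```python
# Minimal sketch of hyperdimensional (vector-symbolic) computing with
# bipolar vectors. Illustrative only; not the systems covered in the video.
import numpy as np

D = 10_000                       # hypervector dimensionality
rng = np.random.default_rng(0)

def hv():
    """Random bipolar hypervector; near-orthogonal to any other by chance."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Bind a role to a value (elementwise multiply); x * x is all ones, so it is invertible."""
    return a * b

def bundle(*vs):
    """Superpose several hypervectors into one (per-dimension majority vote; ties become 0)."""
    return np.sign(np.sum(vs, axis=0))

def similarity(a, b):
    """Cosine similarity; ~0 for unrelated vectors, ~1 for matching ones."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Encode the record {color: red, shape: circle} as a single hypervector.
COLOR, SHAPE, RED, CIRCLE, BLUE = hv(), hv(), hv(), hv(), hv()
record = bundle(bind(COLOR, RED), bind(SHAPE, CIRCLE))

# Query "what is the color?": unbind the COLOR role and compare to candidates.
answer = bind(record, COLOR)
print(similarity(answer, RED))   # high (close to 1/sqrt(2) here)
print(similarity(answer, BLUE))  # near 0
```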

04:01 Improving the Quantum Standard
For decades, Shor’s algorithm has been the paragon of the power of quantum computers. This set of instructions allows a machine that can exploit the quirks of quantum physics to break large numbers into their prime factors much faster than a regular, classical computer — potentially laying waste to much of the internet’s security systems. In August, a computer scientist developed an even faster variation of Shor’s algorithm, the first significant improvement since its invention.
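
To make the role of the quantum step concrete, here is a rough Python sketch of the classical skeleton that Shor's algorithm (and faster variants such as Regev's) is built around: the quantum computer's only job is to find the period of a^x mod N, and ordinary number theory turns that period into factors. The brute-force find_period below stands in for the quantum part and only works for toy numbers.

```python
# Toy sketch of the classical skeleton of Shor's algorithm. The period is
# found here by brute force, which is exactly the step a quantum machine
# speeds up exponentially. For illustration only, on tiny composites.
from math import gcd
from random import randrange

def find_period(a, N):
    """Smallest r > 0 with a^r = 1 (mod N); stands in for the quantum step."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_factor(N):
    """Return a nontrivial factor of an odd composite N (toy sizes only)."""
    while True:
        a = randrange(2, N)
        d = gcd(a, N)
        if d > 1:                 # lucky guess: a already shares a factor with N
            return d
        r = find_period(a, N)
        if r % 2:                 # need an even period
            continue
        y = pow(a, r // 2, N)
        if y == N - 1:            # a^(r/2) = -1 (mod N) gives only trivial factors
            continue
        return gcd(y - 1, N)      # nontrivial factor, since y^2 = 1 but y != +/-1 (mod N)

print(shor_factor(15))            # 3 or 5
print(shor_factor(21))            # 3 or 7
```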

07:14 The Powers of Large Language Models
Get enough stuff together, and you might be surprised by what can happen. This year, scientists found so-called “emergent behaviors” in large language models — AI programs trained on enormous collections of text to produce humanlike writing. After these models reach a certain size, they can suddenly do unexpected things that smaller models can’t, such as solving certain math problems.

Comments

1. Higher-dimensional vector representations and the AI driven by them.
2. An improvement on Shor's algorithm that utilizes higher dimensions (Regev's algorithm).
3. Emergent properties of large AI models.

krischalkhanal

I don't think I've ever seen a video on Quanta Magazine's YouTube channel or read an article on their website that I haven't thoroughly enjoyed and learned something from. They always manage to strike the perfect balance between simplifying concepts with analogies and going into technical detail. Really great stuff!

ZyroZoro

I really love these year-in-review videos. It's difficult to keep some sense of scale and time when you're being bombarded with the field's continual advancements, so these videos are really helpful for understanding even a fraction of what more we know and can do this year as opposed to last.

kieranhosty

It's very interesting that there is some progress in trying to combine ML and logic-based AI. Automated inference and logical argumentation are something that statistical methods have major problems with, and this dimension of intelligence is very hard to emulate at scale.

Quanta, you should include the actual citations of the papers in your videos in the future. Since this is about new scientific work, paper references are necessary.

iamtheusualguy

I’m so glad you guys decided to start putting these out again this year!

saiparepally

I love that we're seeing more and more scientists embrace hyperdimensionality to solve certain math problems -- it seems that, due to our own nature, we sometimes struggle to think clearly in those dimensions, but it always seems to yield incredible results and, funnily enough, to indirectly mimic nature itself.

In the first example, I can't help but think of our brain's vector-like problem solving since our brain operations must form extremely complex networks over vast subspaces in the tissue! :)

Shinyshoesz

Regarding emergent abilities: at this year's NeurIPS, the paper "Are Emergent Abilities of Large Language Models a Mirage?" received the best paper award. The paper provides possible explanations for emergent abilities and demystifies them a little.
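
For intuition, here is a toy illustration of that paper's core argument (with made-up numbers, not the authors' data): if per-token accuracy improves smoothly with model scale, an all-or-nothing metric such as exact match over a long answer can still look like a sudden, "emergent" jump.

```python
# Toy illustration of the "mirage" argument: smooth per-token improvement
# can look like a sharp jump under a harsh all-or-nothing metric.
# The logistic curve and parameter counts below are made up for illustration.
import math

def per_token_accuracy(log_params):
    """Hypothetical smooth improvement with scale (a logistic curve)."""
    return 1 / (1 + math.exp(-2 * (log_params - 9)))   # midpoint near 1e9 params

ANSWER_LENGTH = 30   # all tokens must be right for "exact match"

for log_params in range(6, 13):                         # 1e6 .. 1e12 parameters
    p = per_token_accuracy(log_params)
    exact_match = p ** ANSWER_LENGTH                    # harsh, threshold-like metric
    print(f"1e{log_params} params: per-token {p:.2f}, exact-match {exact_match:.3f}")
# Per-token accuracy rises gradually, while exact match stays near 0 and then
# shoots up only for the largest models -- the apparent "emergent" jump.
```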

chacky

Improving Shor's algorithm is insane, though looking back it might have been expected to happen at some point. We might even see encryption break in our lifetimes.

Edit: typo.

TheBooker

Thank you for this. Very useful for a common enthusiast trying to understand these technologies better.

rsn

I think the emergent property is up for debate - simply making systems more complex, i.e. giving them the ability to calculate and store more data via their parameters, can in theory go on indefinitely, but in practice it cannot.
An interesting challenge going on right now is finding the smallest yet most powerful “reasoning” AI model we can run, which I think is a slightly more attractive question than simply “the bigger the better”.

sidnath

I've got a buddy who works on an AI mod for Skyrim that uses vector databases to give it a sense of both multimodality and long-term memory. Her name is Herika. You need to be able to put pieces together from different spheres of conceptualization if you want a shot at reasoning.
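
For anyone curious how that kind of vector-based long-term memory works in principle, here is a minimal, hypothetical sketch: past facts are embedded as vectors and recalled by cosine similarity. The bag-of-words embedding and MemoryStore class are stand-ins for illustration, not how Herika is actually implemented.

```python
# Minimal sketch of vector-based long-term memory: store facts as vectors,
# recall the nearest ones by cosine similarity. A real mod would use a proper
# embedding model and a vector database instead of this toy setup.
import numpy as np

VOCAB_DIM = 512

def embed(text):
    """Toy embedding: hash each word into a fixed-size, normalized vector."""
    v = np.zeros(VOCAB_DIM)
    for word in text.lower().split():
        v[hash(word) % VOCAB_DIM] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

class MemoryStore:
    """In-memory stand-in for a vector database."""
    def __init__(self):
        self.texts, self.vectors = [], []

    def add(self, text):
        self.texts.append(text)
        self.vectors.append(embed(text))

    def recall(self, query, k=1):
        """Return the k stored memories most similar to the query."""
        sims = np.array(self.vectors) @ embed(query)
        return [self.texts[i] for i in np.argsort(-sims)[:k]]

memory = MemoryStore()
memory.add("the player gave the companion a silver ring in Whiterun")
memory.add("the companion dislikes caves and spiders")
print(memory.recall("what did the player give the companion?"))
```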

austinpittman

Hyperdimensionality is the way to go, and arguably the latent space of large NNs is approximating exactly this representation. Still, I don’t think the features will be that much more comprehensible just because they’re vectors — happy to be proven wrong.

anywallsocket

Super interesting video. I love how these videos are perfectly made to give you just enough information to put you in a state of wanting to know more.
The scientists were really good at explaining things, too.

hrperformance

For a hot minute I was convinced they were going to mention Lisp or Prolog alongside symbolic AI.

There was literally a company (Symbolics) oriented around the idea, and yet it's forgotten because of the 1980s AI winter.

spookyconnolly

Emergent behavior in AI is so fascinating. How an AI can just develop something new even though it was never trained on it specifically is amazing. Obviously harmful emergent behaviors like harming humans would be a bad thing, but imagining that one day a massive model might have consciousness emerge by accident, with no one on Earth knowing or seeing it coming, is wild.

vectoralphaSec

I’m pretty sure hyperdimensional software techniques have some larger implications we may not have caught on to yet.

JoshKings-trvc

This has to be the best milestone celebration I've ever seen! Also, I can't imagine a more incredible gift! You've really done it now, because you'll have to work very hard to find a present for the next milestone 😂🎉.

Thank you all for your hard work and sharing your experiences with us 🙏🏽

hanjuhbrightside

- Understand AI's current limitations in reasoning by analogy (0:20).

- Differentiate between statistical AI and symbolic AI approaches (0:46).

- Explore hyperdimensional computing to combine statistical and symbolic AI (1:09).

- Recognize IBM's breakthrough in solving Raven's progressive matrices with AI (2:03).

- Acknowledge the potential for AI to reduce energy consumption and carbon footprint (3:29).

- Note Oded Regev's improvement of Shor's algorithm for factoring integers (5:01).

- Consider emergent behaviors as a phenomenon in large language models (LLMs) (7:38).

- Investigate the transformer's role in enabling LLMs to solve problems they haven't seen (8:34).

- Be aware of the unpredictable nature and potential harms of emergent behaviors in AI (10:08).

ReflectionOcean

It would have been neat to see advancements outside of AI and quantum computing.

ARVash

I read the title wrong as "The biggest year in computer science breakthroughs: 2023" and thought, what a time I've lived in to see the biggest breakthrough.

saats