DeepMind’s AI Breakthrough in Computer Science Explained


Mentioned Videos:

DeepMind paper "Faster sorting algorithms discovered using deep reinforcement learning":

Comments

The reported figure of 70% is for lists of about 5 to 10 items - and that's relative to quicksort, which is already slower than insertion sort on such a small list. The true average improvement for large, randomly ordered lists is about 2% vs Sort3, which isn't fully optimized - and this was purely an optimization of the generated machine code; algorithmically it's nothing new at all. Radix sort still beats it for large lists, insertion sort still beats it for small lists.
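For reference, here is a minimal C++ sketch of the kind of fixed-size, branchless kernel being discussed, written as the classic 3-element sorting network; this is an illustration, not AlphaDev's actual generated assembly. As I understand the paper, std::sort hands small ranges to tiny kernels like Sort3, and AlphaDev's win was shaving instructions off exactly this sort of sequence.

```cpp
#include <algorithm>

// A classic 3-element sorting network: three compare-exchange steps.
// With optimizations enabled, std::min/std::max on ints typically compile
// to conditional moves, so the whole routine is branch-free.
inline void sort3(int& a, int& b, int& c) {
    int t = std::min(a, b);  // compare-exchange (a, b)
    b = std::max(a, b);
    a = t;
    t = std::min(b, c);      // compare-exchange (b, c)
    c = std::max(b, c);
    b = t;
    t = std::min(a, b);      // compare-exchange (a, b) again
    b = std::max(a, b);
    a = t;
}
```

Saving one instruction here only matters because kernels like this sit on the hot path of every std::sort call.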

JohnnyWednesday

My aunt had one of the earlier PhDs in AI, in the early 1980s. I remember her saying computers would be writing their own software. 40 years later...

billcollins

I'll age myself. I used to teach assembly and hardware architecture: how machines, the CPU, and I/O with peripherals worked, right down to clock pulses and machine/bus cycles. You're doing with DeepMind what the limited resources of early microprocessors forced us engineers of the 8-bit era to do in our heads. 64K was the addressing limit, and memory was expensive. Imagine designing video games with a 4 MHz CPU and a 64K address space in the 1970s. I think programmers became lazy as resources grew and high-level languages emerged. When you're a programmer who does not have a clue about hardware operation, you will not design efficient code. Software engineering candidates being hired to work in my department had to understand hardware operation.
Saying "What's a register?" would disqualify you. All hardware engineers of my generation were also programmers. How else could you debug the machine?

wmffmw

It's worth noting that the default C++ standard library sorting algorithms the paper uses for comparison are intended to be general purpose and highly performant for a very wide range of use-cases. More narrow cases can usually see significant improvements from other algorithms; even more so if you can tailor an algorithm to leverage some traits of your data. So, this news is certainly being overhyped, but it's still really cool.
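A hedged illustration of that last point (my example, not the paper's): if you know your keys are single bytes, a counting sort runs in O(n) and will generally beat the comparison-based std::sort, which has to work for arbitrary comparable types.

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

// Counting sort for byte-valued keys: tally each value, then rewrite the
// vector in ascending order. O(n) time, no comparisons at all.
void sort_bytes(std::vector<std::uint8_t>& v) {
    std::array<std::size_t, 256> counts{};  // one bucket per byte value
    for (std::uint8_t x : v) ++counts[x];
    auto it = v.begin();
    for (int value = 0; value < 256; ++value)
        it = std::fill_n(it, counts[value], static_cast<std::uint8_t>(value));
}
```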

UrSoMeanBoss

This is where x^4 growth really starts.

TheIgnoramus

I've been a coder since I got my VIC-20 as a kid (many moons ago). I always wondered if there was a better way to do the hashing algorithm. I played around with it once and got fooled into thinking I had achieved it, but when I did more intensive testing it all fell apart and gave bad results, so I just gave up on it.

It's amazing to see that AI has actually achieved this now.

amj

Imagine an AI that orchestrates the optimization passes of a compiler. Very interesting!
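A toy sketch of that idea, with the caveat that everything here is hypothetical: treat the ordering of optimization passes as a search space and let an automated agent hunt for fast orderings. compileAndBenchmark() is a stand-in for "compile with this pass order and time the result"; it is not a real compiler API.

```cpp
#include <algorithm>
#include <cstddef>
#include <random>
#include <string>
#include <vector>

// Hypothetical stand-in: a real version would run the compiler with the
// given pass order and measure the resulting binary. Here it just returns
// a deterministic placeholder "runtime" derived from the ordering.
double compileAndBenchmark(const std::vector<std::string>& passOrder) {
    std::size_t h = 0;
    for (const auto& p : passOrder)
        h = h * 31 + std::hash<std::string>{}(p);
    return 1.0 + static_cast<double>(h % 1000) / 1000.0;
}

// Random search over pass orderings: keep whichever ordering benchmarks
// fastest. An RL agent (as in AlphaDev) would explore far more cleverly.
std::vector<std::string> searchPassOrder(std::vector<std::string> passes,
                                         int trials) {
    std::mt19937 rng{42};
    std::vector<std::string> best = passes;
    double bestTime = compileAndBenchmark(best);
    for (int i = 0; i < trials; ++i) {
        std::shuffle(passes.begin(), passes.end(), rng);  // new candidate
        double t = compileAndBenchmark(passes);
        if (t < bestTime) {
            bestTime = t;
            best = passes;
        }
    }
    return best;  // fastest ordering found
}
```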

galdutro

Whoever figured out deep learning and machine learning is a freaking genius. It seems so simple, yet the results are so magical.

yoyo-jcqg

This is the beginning of exponential AI intelligence growth. DeepMind has already incorporated those new algorithms into its network model; like most AI models, anything learned is integrated into the model and becomes part of it. I know it's a bit specious to suggest, but DeepMind has essentially just improved its own coding.

rrmackay

Optimizing assembly language and creating a better sorting algorithm are two different things! While sorting algorithms have not changed for decades, the same is not true in the field of language optimization.

KZgunhire

I hadn't listened yet, but as a degree holder in computer science I classify AI as the ultimate "front end" for a computer. By that I mean a user interface, nothing more, apart from the logical extensions afforded by modern computing. I'd like to get into the alternative meanings of "AI" we discussed back in 1995, but that's another discussion, and things didn't even turn out that way. Congratulations to whoever came up with the neural network part. The world is forever changed now. OK, now I have listened, and having studied machine language, this is incredible. Also, the measure of CPU efficiency mentioned is called Big O notation.

scottgreen

This is so interesting! I wonder how far we can go with this optimisation approach.
I love these improvements to basic elements. It is as if you optimised a single transistor and suddenly got 4K video running on your phone.
Further, it would be interesting to know how much they spent on training.

dchdch

It will be very interesting to see just how much performance can be gained through AI optimisation of chip architecture, layout and traces.

WarmVoice

I often marvel at the fact that there is, even today, probably not a single person on earth who understands in detail everything that is going on inside an everyday computer. But an AI might get there. Imagine how far we will be from understanding the design of our computers in a few decades, once we have optimized them at various levels using AI.

foolwise

70% speed improvement when the length of the array is exactly 5!!
WOW! The amount of time it takes to sort 5 elements is so long that 70% improvement is indeed a breakthrough!

alexfrank

Back in the days when hardware was much slower, it paid off to use a profiler to see where your code spent most of its time and optimize that code, sometimes even replacing it with hand-optimized assembly. However, as processors got faster and more advanced, that payoff got smaller, and developers now rely more on their compilers and runtimes to optimize their code for them. That sometimes gives better results than hand-written assembly, because compilers can, most of the time, take better advantage of what CPUs offer in terms of parallelism, prefetching, branch prediction, etc. As hardware gets faster, developers also seem to care less about optimizing their code. The result is that despite faster hardware, applications are certainly not as fast as they could be, and a lot of CPU power is wasted.

However, where more performance can probably be gained is in the way the high-level code tries to solve the problem. For example, in SQL, improving query structure, indexing, and database design can eliminate inefficient ways of doing things and reduce the time it takes to run a query from a day to seconds. It would be really helpful if AI could assist with those kinds of code improvements based on patterns from best practices. And of course, if AI can also help make certain functions in libraries or even OS APIs faster and more secure, that could benefit the performance and security of all applications.
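To make that last point concrete with a sketch in C++ (an analogy of my own, not from the video): the hash set below plays the role of a database index. Probing a vector inside a loop would be O(n*m); building the set first makes the intersection O(n+m), the same kind of structural change that takes a query from a day to seconds.

```cpp
#include <string>
#include <unordered_set>
#include <vector>

// Intersect two name lists. The unordered_set acts like an index: each
// membership probe is O(1) on average instead of a linear scan.
std::vector<std::string> intersect(const std::vector<std::string>& a,
                                   const std::vector<std::string>& b) {
    std::unordered_set<std::string> index(a.begin(), a.end());
    std::vector<std::string> out;
    for (const std::string& s : b)
        if (index.count(s) != 0) out.push_back(s);
    return out;
}
```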

Gerard

The most perfect accent a human could have.

stldweller

Congratulations Anastasi, very beautiful and content-rich videos, there is a lot to learn!

marcovillani

We were hand-optimizing C-to-assembly code back in the early 80s. That's how I learned assembly: by watching what code the compiler produced. I also discovered how to code better in C to obviate the need for hand optimization. We only had 640 KB of memory to run in.

briansauls

That's a bread-and-butter upgrade. This is a big score for the AI. It's like saying they found something better than numbers for counting :)

Thank you for the AI stuff :)

MozartificeR