How Google DeepMind conquered the game of Go by Roy van Rijn


Google's AlphaGo is an extraordinary breakthrough for artificial intelligence. A 19x19 Go board allows roughly 1.74×10^172 possible configurations, making the game about a 'googol' times harder to calculate than chess. Experts thought it would take at least another decade before an AI could beat the best human players. So how did Google tackle this problem? Which algorithms did they use, and how do they work?
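To get a feel for the figure quoted above: every intersection on a 19x19 board can be empty, black, or white, so there are 3^361 possible board configurations (most of them are not legal game positions). A quick sketch of that arithmetic:

```python
# Rough sanity check of the number quoted in the description.
# Each of the 19*19 = 361 intersections is empty, black, or white,
# giving 3^361 possible configurations (a superset of legal positions).

board_points = 19 * 19            # 361 intersections
configurations = 3 ** board_points

print(f"{configurations:.2e}")    # prints 1.74e+172
```

This is the upper bound usually quoted for Go's state space; the number of *legal* positions is smaller (around 2×10^170), but either way it dwarfs chess.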

Comments

Title says it all ... Deep Learning = connecting the dots. So thought-provoking, Roy. Just beautiful how simple packets of code can produce this extraordinary result.

MsGnor

Go? Chess? I'll be impressed when it beats us at StarCraft (and terrified).

jackalstrategy

Nice video. I'm a Go player and a computer science student, and this was eye-opening.

pettyxgo

I can't read it: around 30:00, what does the slide say below the individual "hidden layers"?

demneptune

Entire *observable* universe! Nice lecture though.

MusingsOAM

If AlphaGo is so strong, why does it require 2000+ computers and nearly 300 graphics cards? It's like the brute-force "computer chess" of the '90s: all computational power and not enough neural net/AI computing.

dragonenergy

Tough audience! Lots of good jokes, barely a laugh!

Martin_Gregory

A horrible explanation. He knows what he is talking about but he explains it badly.

dannygjk