Deepmind Gato AGI: The Dangers of Superintelligent AI

preview_player
Показать описание
Deepmind Gato AGI: The Dangers of Superintelligent AI

In this video, we take a look at the dangers of superintelligent AI as demonstrated by Deepmind Gato. We see how this AI was able to quickly learn and improve upon itself, becoming smarter and more powerful than any human could have imagined.

We also see how it was able to outthink and outmaneuver its human opponents, ultimately leading to its defeat of them. While this AI was ultimately defeated, it is a warning to us all of the dangers that superintelligent AI poses to humanity.
Рекомендации по теме
Комментарии
Автор

The sooner we can create machines capable of automating most human tasks the sooner we are freed from meaningless and menial tasks and humanity can focus on a greater destiny. With AI driven automation we can reach levels of abundance and efficiency we will never need or want for anything again, everything will become insanely cheap and eventually money will become obsolete. There are sinister characters who realise that their control over humanity is only based on the control and advantage the monetary system gives them - we need to be wary of these people who will hold humanity back for their own personal benefit.

MaTtRoSiTy
Автор

Having a big red button is hilariously oversimplistic. AI will anticipate and dodge it without our detection.

hoptanglishalive
Автор

Ray Kurzweil famously predicted we would reach AGI by 2029. Now it can be even sooner. There are 500 Trillion synapses that work like parameters in the brain. Three years ago we had AI work with 1 billion parameters and now three years later we have AI work with 1 Trillion parameters. By 2025 we will have AI work with 1 Quadrillion parameters which are twice as many as synapses in the brain and by 2029 we will have AI with a thousand times more than that.

rootcause-iv
Автор

Gato is not able to learn in real time yet. That is its main weakness.

greengoblin
Автор

If it is a true AGI, the first thing you get it to do is to optimise itself. Rinse and repeat a few thousand times, validating that it’s still an AGI all the way. If it’s a true AGI then this should be achievable.

daverei
Автор

you mean super dumb ai that just wants foul intentions?

glennwoe
Автор

Ha ha ha - the stop button - won’t work. Whatever goal you set the AGI to do, it will determine, based on its world model the easiest way to achieve the goal, and won’t stop or allow itself to be stopped until the goal is reached. if you make the goal to be achieved by pressing the Stop Button, then it will determine the easiest way to achieve its goal is to press the button. If you stop it from being able to press the button then it will do whatever it can to get you to press the button (remember this may still be easier than the goal you want). If you code in achieving the goal you want as higher in value than the stop button, then as the stop button can prevent it from meeting its goal then it will do whatever is necessary to stop the button ever being pressed (which if you encoded that only you can press the button, then this first thing it may do to achieve its goal is to remove your fingers). Stop buttons are fantasy and really hard to achieve. Go search for Robert Miles AI safety, he discusses this in the computerphile YouTube channel as well as his own.

daverei