New AI Beats DeepMind’s AlphaGo Variants 97% Of The Time!


📝 The papers in this episode are available here:

My latest paper on simulations that look almost like reality is available for free here:

Or here is the original Nature Physics link with clickable citations:

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bret Brizzee, Bryan Learn, B Shang, Christian Ahlin, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Kenneth Davis, Klaus Busse, Kyle Davis, Lukas Biewald, Martin, Matthew Valle, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Richard Sundvall, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.

Károly Zsolnai-Fehér's links:
Comments

Hold on to your smartphone, because you don't want to drop it and smash your screen when you learn how incredible this is! 🔥 Thank you Károly!

imjody

Very happy to see you draw a line when it comes to things outside of your knowledge base.

cem_kaya

Once when Gary Kasparov played against Deep Blue, the computer glitched and pulled a stupid move. Deep Blue had played a flawless game until that point, and Kasparov assumed it must be some brilliant incomprehensible strategy. It drove him mad and he quit on the spot. It could be argued that Deep Blue was a computer that used an adversarial attack on a human brain. It threw him a curve ball, a stray pixel, in the form of a random, idiotic move that briefly short-circuited Kasparov's brain and caused him to fail.

NorthOfEarthAlex

This is always how AI is defeated by smart humans in all the old sci-fi movies.

dgeorgaras

Excellent. As soon as I read this paper I knew you would be talking about it!

An idea I've kept in mind for a while now is something like a "Nemesis AI": An AI trained to beat other AIs at their own field, not necessarily using the same techniques that the other AI is trained on, but finding and executing adversarial attacks targeting the functioning of the opposing neural network itself.

This is a good example of what I had in mind: Not just a one-off happenstance, but a systematic approach to doing it.
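A minimal sketch of what such a "Nemesis"-style attack could look like, assuming a gradient-based (FGSM-style) perturbation against a toy logistic victim model. The weights, input, and model here are illustrative placeholders, not any real system's:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def victim_predict(x, w, b):
    """Victim network: a single logistic unit standing in for a full model."""
    return sigmoid(w @ x + b)

def fgsm_attack(x, w, b, y_true, eps=0.5):
    """Nudge x in the direction that increases the victim's loss (FGSM)."""
    p = victim_predict(x, w, b)
    # For binary cross-entropy on a logistic unit, the input gradient
    # of the loss is (p - y) * w.
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.0
x = rng.normal(size=8)
y = 1.0 if victim_predict(x, w, b) > 0.5 else 0.0  # victim's own label

x_adv = fgsm_attack(x, w, b, y)
print(victim_predict(x, w, b), victim_predict(x_adv, w, b))
```

The key point matching the comment: the attacker never needs to play the victim's game well, it only needs gradient (or query) access to push the victim's confidence in its own answer down.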

DonVigaDeFierro

I have been thinking about adversarial networks since I heard of them, and I thought the answer would be to use a "3 wise men" type system where you have 3 completely different networks trained on different data to do the same task, maybe even with different architectures. It would be very unlikely that an adversarial attack could break 3 different networks simultaneously.
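The "3 wise men" defense above can be sketched as a simple majority-vote ensemble. The three "models" here are toy threshold classifiers with different random weights, standing in for independently trained networks:

```python
import numpy as np

class ToyModel:
    """Stand-in for an independently trained network: a random linear scorer."""
    def __init__(self, seed):
        self.w = np.random.default_rng(seed).normal(size=4)

    def predict(self, x):
        return int(self.w @ x > 0)

def majority_vote(models, x):
    # An adversarial input only wins if it fools a majority of the models.
    votes = [m.predict(x) for m in models]
    return int(sum(votes) >= (len(models) + 1) // 2)

models = [ToyModel(seed) for seed in (1, 2, 3)]
x = np.array([0.5, -1.0, 0.25, 2.0])
print(majority_vote(models, x))
```

One caveat worth noting: in practice, adversarial examples often transfer between independently trained networks, so diversity of architecture and training data helps but is not a guaranteed fix.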

deletedaxiom

Thanks so much for the incredible volume and quality of your work. I'm very grateful.

JollyFuchsia

Károly, you are a light transport researcher by trade. You've stated this numerous times. I can't fathom why fellow scholars would want a comment on LK-99 from you. Thank you for being authentic and yourself! And thank you for the paper!

Sekir

These adversarial attacks always remind me of human epilepsy. I'm sure this comparison is veeery surface-level and simplistic to a neuroscientist, but it feels so close: feed in a bunch of random inputs and suddenly your AI trips and falls.

MooImABunny

Thanks Dr. K!
You've helped to inspire me to start tinkering with AI. Building a YOLOv8n model for object detection right now. : D

dcdales

I can actually see the frog when that single pixel is added to the horse. The pixel becomes the anchor, which is the right eye of a frog facing a camera. Which raises the question: how sure are we that the horse is really a horse? If someone looks at a cloud and sees a horse, and another person looks at a cloud and sees a frog, which person is right?

DejayClayton

"Babe, wake up. Two Minute Paper just uploaded another gem!"

UnknownOverwatchSoldier

What’s pretty cool is that with that “one pixel” adversarial attack in the horse example, you can actually see how the AI could interpret it as a frog. If you squint, you may be able to make out a forward-facing frog whose left eye is the dark pixel that has been altered!
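The one-pixel idea itself is easy to sketch: exhaustively try pushing each pixel of a tiny image to its extremes and keep the single change that most damages the classifier's score. The linear "horse vs. frog" scorer below is an illustrative toy, not a real model:

```python
import numpy as np

rng = np.random.default_rng(42)
w = rng.normal(size=(4, 4))      # toy linear "horse vs. frog" scorer
img = rng.uniform(size=(4, 4))   # toy input image in [0, 1]

def score(image):
    # Positive score reads as "horse", non-positive as "frog".
    return float((w * image).sum())

def one_pixel_attack(image):
    """Find the single pixel change that lowers the score the most."""
    best, best_score = image.copy(), score(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            for v in (0.0, 1.0):          # try the pixel's extremes
                candidate = image.copy()
                candidate[i, j] = v
                s = score(candidate)
                if s < best_score:        # push toward "frog"
                    best, best_score = candidate, s
    return best

adv = one_pixel_attack(img)
print(score(img), score(adv))
```

Real one-pixel attacks on deep networks use black-box search such as differential evolution rather than brute force, but the principle is the same: one coordinate, maximum damage.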

stez

I have played Go for more than 20 years, and KataGo is one of the strongest Go programs I have encountered.

MoniqueEX-oxme

While this is a significant achievement, if I’m understanding the paper correctly it’s not a novel concept. DeepMind themselves used adversarial attacks against AlphaZero when creating their AlphaStar AI, and that improved performance pretty dramatically as you covered in this video. And that was nearly 4 years ago.

benjaminlynch

This kind of reminds me of how the "meta" develops in video games, or how warfare evolves. If everyone adopts a strategy, then others will begin to adopt strategies that target the weaknesses of that strategy.

mr_clean

Feels like the issues center around a lack of disqualifying ability within the network, whereby it's matching up with the required data points but doesn't have the capacity to weigh it against what shouldn't be there. I know there was a trend towards activation functions that were quicker but whose ranges start at zero, maybe something like a tanh would elegantly allow for more nuanced evaluations?
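The distinction this comment draws can be seen directly: ReLU clips negative pre-activations to zero, so "absent evidence" and "contradicting evidence" look identical downstream, while tanh preserves the negative signal. A toy illustration only, not a claim about which activation actually trains better:

```python
import numpy as np

def relu(z):
    # Anything negative is clipped to zero: contradicting evidence
    # is discarded rather than counted against the match.
    return np.maximum(0.0, z)

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(z))      # negatives are zeroed out
print(np.tanh(z))   # negatives survive as negative activations
```

In practice the trade-off is speed and gradient flow: ReLU is cheap and avoids the saturation that makes deep tanh networks hard to train, which is a big part of why it won out despite throwing away the sign.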

Aupheromones

I still love the image of two heavyweights entering the ring. The challenger decides to spaz out and do a weird dance, and the champ is so confused that his brain glitches out and he is immediately knocked out.

drkpaladin

Have you seen the Jen1 text-to-music with music in-painting paper by Futureverse? Would be awesome to cover that.

kaizenshoten

I know it's not strictly analogous, but I love thinking about adversarial attacks against the human brain. Saying the right words or doing the right movement could get someone to freeze or perceive something completely different. I think it's likely such examples exist; it seems kind of similar to optical illusions or other sensory illusions. I wonder if there exists a sound clip that objectively is just noise, but when heard is perceived as a word with high clarity. It would have to be on a per-individual basis, as our neural networks are all unique, and it'd also probably only work once, as the brain constantly updates and retrains itself.

Maybe that's what's happening when you think you heard your name, and then it turns out no one said anything. Maybe a gust of wind just happened to be an adversarial example to your human auditory processing.

perplexedon