Should an AI Learn Like Humans?

The paper "Investigating Human Priors for Playing Video Games" is available here:

We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
313V, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Eric Martel, Esa Turkulainen, Evan Breznyik, Geronimo Moralez, John De Witt, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Milan Lajtoš, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga.

Crypto and PayPal links are available below. Thank you very much for your generous support!
Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh
Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A
LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg

Károly Zsolnai-Fehér's links:
Comments

This sort of thing makes me wonder how far machine learning will leapfrog once we have decent ways to do transfer learning. It seems that applying your knowledge in a completely new environment, even if it doesn't feel like you're using any knowledge, can give you a huge advantage.

MobyMotion

That seems like an unfair analogy.

What would be interesting is to compare how much difficulty a human has grasping those new rules with how much difficulty an AI that has already learned another game has.

A human is more like an already-trained AI than a brand-new one. Seeing how easily each can grasp a new set of rules and adapt to changing rules would be far more interesting, in my opinion.

For example, even someone who has never played a game will know that fire = bad and coin = good. That's because the human will have learned those things before. The machine, on the other hand, is more like a newborn thrown into the world with no purpose other than to play the game and succeed at it.

I'm pretty sure a baby thrown into a video game would be about as successful at finishing the base game as at finishing the game with reversed semantics, simply because they didn't learn the normal semantics to begin with. That doesn't mean they would be successful, though; they'd definitely fail hard in both cases. But the reversed semantics wouldn't add any difficulty.

Laezar

The difference is that humans begin with a lot of preconceptions about the world, while the AI always starts from zero.
If I remember correctly, there was a paper about transferable knowledge, where learning one game made learning another game much quicker.

Soul-Burn

Another concept worth exploring is that humans have a slow “clock speed” compared to a computer. Computers can play a million games in “fast forward” mode long before a human can use fine motor skills via keys on a keyboard to move a character on a screen in just a single game. We have a massive disadvantage because we need to take thoughts and turn them into lethargic kinetic movement (typing on a keyboard, in this example).

ngalawena

Maybe I'm missing something, but the title seems misleading :)

tw

I think about this a lot. Probably the most important thing, in my opinion, is to help AI develop the skills that humans have: learning, memory, and abstract reasoning. Once they have these, they can gain experience the same way a human can and solve problems without knowledge specific to the problem they are given. It may also be essential for AI to have "context knowledge" like humans do. For example, a human writing a news article can understand the nuances of society, culture, and history, while an AI designed to mimic human writing may not have this knowledge and may lose the texture and depth that humans bring to a task they are experienced with: the ability to fully "understand" the problem and its context, as opposed to a shallow process that only captures the surface of the task. In addition, humans with experience in thousands of different tasks can combine their knowledge, accumulated over many years, and apply this experience to almost any problem.

diamondguy

I was really excited about this one based on the title; I was hoping for a bit of a philosophical discussion, which was not the case. I'm sure there are interesting papers written about the philosophical aspects of AI. Are you exclusively reading technical papers? I would love to also hear about papers discussing the moral and ethical questions in the AI field.

RasmusSchultz

You should give humans some LSD and let them play, as a control group. I volunteer. :-)

erikziak

This seems like it would be heavily affected by the human's previous experience with video games (and also with the real world). The AI has to learn all of this from zero, while a human can draw on a lot of similarities to other video games they might have played. That is obvious when the game has its normal textures, but even with random textures there are still familiar concepts: a character that you control, platforms that you can stand on, gravity that makes you fall when you are not on a platform, enemies that you can jump on to defeat, ladders that you can climb, and more. So even if you don't know which texture is which at the start, for a human, learning this is mostly a matter of matching concepts you already know to the textures, while an AI needs to learn those concepts from the game itself.

ronmosenzon

The computers seem to be learning just fine in their own way.

simoncarlile

The thing is, machines learning exactly the same way as humans would be of very little use, because we want something that complements us, doing things we can't do ourselves (or quicker than we could).

johannes-vollmer

Dear author, I will be using your video to write my MA thesis. Thank you!

konrexs

Isn't learning without a progression score a brute-force approach?

spider

Maybe the problem with AI intelligence is that it always starts from zero. We should try to create a generalised model plus a learning algorithm that allows our models to apply past experience when learning new things.

PythonPlusPlus
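The idea in the comment above can be sketched with a toy value learner (a hypothetical illustration of warm-starting that I made up, not anything from the paper): the value table learned on one task is reused as the initialization for a similar task, so past experience shapes behaviour from the very first step.

```python
# Toy sketch of applying past experience: a tabular value learner
# whose table from task A warm-starts a similar task B.
import random

random.seed(0)  # deterministic for the example

def train(q, episodes, reward_fn, lr=0.5):
    """One-step value learning over the actions in table q."""
    for _ in range(episodes):
        a = random.choice(list(q))          # explore actions uniformly
        q[a] += lr * (reward_fn(a) - q[a])  # move value toward observed reward
    return q

task_a = lambda a: 1.0 if a == "right" else 0.0  # e.g. "coin is on the right"

prior = train({"left": 0.0, "right": 0.0}, 30, task_a)

cold = {"left": 0.0, "right": 0.0}  # a fresh learner starts indifferent
warm = dict(prior)                  # a warm-started one already prefers "right"
print(cold, warm)
```

The point is only the mechanism: the warm-started table begins the new task already preferring the action that paid off before, while the cold one must discover everything from scratch.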

We humans simply overfit to the observed datasets we are given, as that is the winning strategy for surviving in human society. The games designed for humans already incorporate this fact, allowing us to transfer our daily ‘weights’ into playing the game. For an algorithm not plagued with overfitting, of course the validation test result is greater.

This only shows the trade-off we will likely see when transfer learning becomes viable: the more an AI overfits to one domain, the more effort it will need to unlearn parts of its prior knowledge.

swanknightscapt
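The unlearning trade-off described above can be sketched with a toy gradient-descent example (purely illustrative; the setup and numbers are mine, not the paper's): a learner whose prior conflicts with the new task starts farther from the new optimum, so it pays extra steps to unlearn before it can relearn.

```python
# Toy illustration: fit a single weight w toward a target by gradient
# descent on 0.5*(w - target)^2. The "prior" task had w* = +2; the
# reversed task has w* = -2. Starting from the conflicting prior (+2)
# costs more steps than starting from scratch (0).

def steps_to_fit(w, target, lr=0.1, tol=0.01):
    steps = 0
    while abs(w - target) > tol:
        w -= lr * (w - target)  # gradient step; gap shrinks by factor (1 - lr)
        steps += 1
    return steps

from_scratch = steps_to_fit(0.0, -2.0)  # 51 steps
from_prior = steps_to_fit(2.0, -2.0)    # 57 steps: the prior must be unlearned
print(from_scratch, from_prior)
```

Of course a helpful prior would point the other way: start the learner at w = -1 and it beats the from-scratch run, which is exactly the overfit-versus-transfer trade-off the comment describes.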

The takeaway, to me, seems to be what I'd expect: machines don't yet truly think; they brute-force. Current AI looks like it's on the cusp of simple forms of thinking, but it's not there yet. At best, I'd say we are starting to give machines simple forms of intuition. The fact that all of this obfuscation does nothing to the AI is a sign that it's not actually thinking, because it's still processing vast numbers of possibilities, even if it's learning in the process, unlike previous generations of AI.

Humans do things completely differently. We clearly have discriminatory networks in our brains that identify and filter out VAST amounts of data VERY quickly, narrowing things down to a very small subset of possibilities right away. For example, looking at the game, we almost immediately throw away the background image. Next, we almost immediately throw away all the platforms as objects that can be meaningfully interacted with. That leaves things like the ladder, doorway, enemies, etc., all of which we draw from memory and label with semantic meaning. Further, we draw from our previous experience playing these games, and in the span of five seconds or less we already have a theory, likely 90% or more accurate, of how to play this game before we have even touched the controls. That's why adding all the obfuscation messes with us: our mental filtering no longer works, and we are put on a similar level to the AI, where we have to brute-force it.

Locuts

Isn't it just because the video game AI is narrow and doesn't know a priori what ice cream, ladders, falling, etc. are?

michaelharrington

Yeah, this is the main human/technology bottleneck right now. We have an input/output problem. We can instantly imagine a picture of an apple in our heads (input), but it might take us minutes or even hours to draw and colour the exact image we see on paper (output).

catalepsy

Does this mean it should also be difficult for computers to play the game when it's masked?

yadishansar

Maybe that's a way to let deep learning AI look for bugs and glitches in video games.

Zoza