AI Learns to play Flappy Bird!

Can I use Unity ML-Agents to teach an AI to play Flappy Bird and beat my high score?
👍 Learn to make awesome games step-by-step from start to finish.

Making Flappy Bird in Unity!

How to use Machine Learning AI in Unity!

Teach your AI! Imitation Learning with Unity ML-Agents!

🌍 Get Code Monkey on Steam!
👍 Interactive Tutorials, Complete Games and More!

If you have any questions post them in the comments and I'll do my best to answer them.

See you next time!

#unitytutorial #unity3d #unity2d

--------------------------------------------------------------------

Hello and welcome, I am your Code Monkey and here you will learn everything about Game Development in Unity 2D using C#.

I've been developing games for several years with 7 published games on Steam and now I'm sharing my knowledge to help you on your own game development journey.

--------------------------------------------------------------------

Comments

🌐 Have you found the videos Helpful and Valuable?

CodeMonkeyUnity

I just saw an ad for your courses on another random video and it blew my mind. I thought I was watching one of your regular videos when I heard your voice. It wasn't until I saw the skip button that I realized I was watching an ad. Congratulations, I watched the entire ad.

UnderfundedScientist

💬 Can I use Unity ML-Agents to teach an AI to play Flappy Bird and beat my high score?

Once again it's surprisingly easy to use Machine Learning in Unity! Only took me a couple of hours to set this up and apply it to a previously made game.
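For anyone curious what that setup roughly looks like: a minimal agent sketch, assuming the Unity ML-Agents package; the `Bird` and `PipeSpawner` components, member names, and reward values are hypothetical placeholders, not the video's actual code.

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Actuators;
using Unity.MLAgents.Sensors;
using UnityEngine;

public class FlappyAgent : Agent
{
    [SerializeField] private Bird bird;          // hypothetical bird controller
    [SerializeField] private PipeSpawner pipes;  // hypothetical pipe tracker

    public override void CollectObservations(VectorSensor sensor)
    {
        // Observe the bird's vertical state and the offset to the next pipe gap.
        sensor.AddObservation(bird.transform.position.y);
        sensor.AddObservation(bird.Velocity.y);
        sensor.AddObservation(pipes.NextGapPosition - bird.transform.position);
    }

    public override void OnActionReceived(ActionBuffers actions)
    {
        // Single discrete branch: 0 = do nothing, 1 = flap.
        if (actions.DiscreteActions[0] == 1) bird.Flap();
        AddReward(0.1f);   // small reward for staying alive each step
    }

    public void OnHitPipe()
    {
        AddReward(-1f);    // penalty on death
        EndEpisode();
    }
}
```

The per-step survival reward plus a death penalty is one common shaping choice for this kind of game; the exact values need tuning.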

CodeMonkeyUnity

This is the content I pay internet for

LivingToPay

"It's actually really quite simple, it only took me a few hours..." I'm laughing so hard right now 🤣🤣

milkmeapollo

Great video, Code Monkey! It shows an application of the raycasts answer you gave me in the previous vid!

Viandux

Really interesting video! Great to see it in action after your setup and training.

somewhatpsychtic

7:10 AI be like: "Sometimes my genius is... It's almost frightening"

algs

this is an awesome concept to show what the mlagents toolbox is capable of

whotfisthis

Fantastic as always. Have you thought of any non-game-related tasks that ML-Agents would be useful for? I know there are ML platforms more appropriate for non-game scenarios, but I'm more comfortable in Unity, and since a trained model can run in a built executable, it would be neat to see Unity used for non-game applications that make use of a pretrained model. I've also thought about setting up an environment where the agent decides which state (as in a state machine) to run, rather than directly controlling movement. That might result in a smoother-looking NPC that is still running a trained model to determine state.

beardordie

Bro, I have played all your games and they are nice 😍😍😍.

nighthawk

Nice video once again! Would you consider making an AI that learns to play chess like AlphaZero? Or maybe some other more complex game (existing or invented by you), just to show off some cool things the AI can come up with.
I just wanted to see it do something more meaningful, like in the hide-and-seek example by OpenAI; hopefully we can achieve similar results without a huge GPU farm...
Speaking of GPU farms, would it be possible to use Google Colab to train these models?
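Training on Colab is possible in principle because the ML-Agents trainer is a Python package that can drive a headless Linux build of the game. A rough sketch of the workflow; the config path, build path, and run ID are hypothetical:

```shell
# Install the ML-Agents Python trainer on the Colab VM
pip install mlagents

# Train against a headless Linux server build uploaded to the VM;
# --no-graphics avoids needing a display
mlagents-learn config/flappy.yaml \
    --env=builds/FlappyBird.x86_64 \
    --run-id=flappy_colab_01 \
    --no-graphics
```

The main caveat is that the Unity build must be made for Linux and marked executable on the VM; the trained `.onnx` model can then be downloaded and dropped back into the Unity project.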

israelRaizer

Hi Code Monkey. Absolutely loving this new series. Could you give a bit of advice on how to handle tank movement for an AI, please? I figured out (Acceleration + Reverse) and (LeftTurn + HoldTurn + RightTurn), but I am having a hard time figuring out how to put all of these into an action sequence.

Again, I absolutely love your content. Thanks so much for making it.
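For what it's worth, tank controls usually don't need an action sequence at all: ML-Agents lets an agent decide several discrete action branches simultaneously on every step. A hedged sketch, where the `TankAgent` name and `TankController` movement script are hypothetical:

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Actuators;
using UnityEngine;

public class TankAgent : Agent
{
    [SerializeField] private TankController tank; // hypothetical movement script

    public override void OnActionReceived(ActionBuffers actions)
    {
        // Branch 0: 0 = brake, 1 = accelerate, 2 = reverse
        // Branch 1: 0 = hold,  1 = turn left,  2 = turn right
        int drive = actions.DiscreteActions[0];
        int turn  = actions.DiscreteActions[1];

        float throttle = drive == 1 ? 1f : drive == 2 ? -1f : 0f;
        float steering = turn  == 1 ? -1f : turn  == 2 ? 1f : 0f;

        // Both branches are applied together every decision step.
        tank.Move(throttle, steering);
    }
}
```

With branch sizes set to [3, 3] in the Behavior Parameters, the agent picks one option from each branch per step, so "accelerate while turning left" comes out naturally without any sequencing.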

CaptainCrazy

@codemonkey great video and really helpful

hamzamusani

I love your videos.
I started game development and Unity by watching your videos.
I am currently working on a snake AI using ML-Agents, but I am having some issues resuming the training after changing the inputs on my agent.
How did you teach your Flappy Bird agent through various phases?
Could you please make a video on that?

Truly love your content.

aryanjain

A question on the parameters. You keep mentioning how you're changing the scenario for the model. Do you simply `--resume` training when you do the changes? I noticed that some of the parameters change based on step_number/total_steps. Do you just ignore that part or is there anything else to play around with those?
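For reference, ML-Agents distinguishes continuing the same run from starting a fresh one seeded with old weights, and the step-dependent schedules behave differently in each case. Illustrative commands; the config path and run IDs are hypothetical:

```shell
# Continue the same run: the step counter, and any linear schedules
# tied to step_number/total_steps, pick up where they left off.
mlagents-learn config/flappy.yaml --run-id=flappy_01 --resume

# Start a new run whose network weights are copied from an old run;
# the step counter (and therefore the schedules) restarts from zero.
mlagents-learn config/flappy.yaml --run-id=flappy_02 --initialize-from=flappy_01
```

So a plain `--resume` keeps the schedules decaying as before, while `--initialize-from` resets them for the new run.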

nraw_

Dear Code Monkey, thanks a lot for all of your great tutorials. Would you please help me understand how you got these 7 iterations of ever-improving models? Am I correct to assume that you spent time training for x amount of steps on a certain config (e.g. extrinsic: 0.0, bc: 0.5, gail: 0.5) until you liked the reward curve? After that, you stopped and changed the config to something like (e.g. extrinsic: 0.5, bc: 0.3, gail: 0.3) and then trained with a new run-id but used initialize-from to build upon the last trained model? If that is not how you have done it, how did you improve on the existing training for every iteration with a changed config file to get these 7 comparable iterations? Thank you very much!
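The config changes being described would live in the trainer YAML. An illustrative fragment following the ML-Agents config schema; the behavior name, demo path, and strength values are placeholders (GAIL and behavioral cloning both consume a recorded demonstration file):

```yaml
behaviors:
  FlappyBird:                # behavior name is illustrative
    trainer_type: ppo
    reward_signals:
      extrinsic:
        strength: 0.5        # weight of the game's own rewards
        gamma: 0.99
      gail:
        strength: 0.3        # imitation reward from the discriminator
        gamma: 0.99
        demo_path: Demos/FlappyDemo.demo   # hypothetical recording
    behavioral_cloning:
      strength: 0.3          # direct supervised pull toward the demo
      demo_path: Demos/FlappyDemo.demo
```

Shifting weight from the imitation signals toward `extrinsic` across runs is the usual way to bootstrap from demonstrations and then let the agent surpass them.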

takingpictures

I'm pretty bad at Flappy bird. Nice Video

felixt

I would like to see how to set up ai to handle randomness. I'm trying to train AI for poker, and my 'simple' version where they literally just call or fold the flop isn't producing the expected results. It may have something to do with the fact that the AI agents are all competing against one another with the same brain so what might be good one hand might be bad in other hands. (currently the AI is only looking at their hand, what the card ranks are, whether they are suited and whether they are paired)

Currently doing PPO training because I haven't figured out the YAML setup for SAC training (which is probably better for this task).

What's funny is that for the first few minutes it's showing expected results, calling AA and KK more often than other hands, but over time it stops playing AA and KK completely.
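Switching the trainer to SAC is mostly a matter of the `trainer_type` key plus SAC-specific hyperparameters. An illustrative fragment following the ML-Agents config schema; the behavior name and all values are guesses that would need tuning:

```yaml
behaviors:
  PokerPlayer:               # behavior name is illustrative
    trainer_type: sac
    hyperparameters:
      learning_rate: 3.0e-4
      batch_size: 128
      buffer_size: 50000     # SAC is off-policy and uses a large replay buffer
      buffer_init_steps: 1000  # random steps collected before training starts
      tau: 0.005             # target-network update rate
    reward_signals:
      extrinsic:
        strength: 1.0
        gamma: 0.99
```

The replay buffer is also one reason SAC can behave differently from PPO in self-play-like settings: it keeps learning from hands played against old versions of the shared policy.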

justinwhite

When the RayPerceptionSensor2D component is added to the character, does it automatically collect observations, or is it still necessary to manually add the observations in the `public override void CollectObservations(VectorSensor sensor)` method in the agent's script?
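As I understand the ML-Agents API, the ray sensor component registers itself as a sensor and feeds its ray hits into the observations automatically; `CollectObservations` is only needed for extra values the rays don't cover. A hedged sketch, where the agent class name and the chosen extra observations are illustrative:

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Sensors;
using UnityEngine;

public class PlayerAgent : Agent
{
    private Rigidbody2D rb;

    public override void Initialize()
    {
        rb = GetComponent<Rigidbody2D>();
    }

    // A RayPerceptionSensorComponent2D on the same GameObject adds its
    // ray observations on its own; nothing in this method refers to it.
    public override void CollectObservations(VectorSensor sensor)
    {
        // Only extra, non-ray observations need to be added manually.
        sensor.AddObservation(rb.velocity);
        sensor.AddObservation(transform.localPosition);
    }
}
```

If the rays alone are enough for the task, the `CollectObservations` override can be omitted entirely.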

AcademiaD