I made OD bots for Gaming in 30 MINUTES

Well...I tried to lol.

Want to learn how to use Object Detection in games? Say no more! In this video I (somewhat ridiculously) try to do it in 30 minutes using Python and a PyTorch implementation of the famous YOLO model.

Chapters
0:00 - START
0:33 - Explainer
1:35 - Mission 1: Setup YOLO
2:00 - Breakdown Boar
19:58 - Mission 2: Capture Timberman Data
30:50 - Mission 3: Label Images
43:03 - Mission 4: Train and Test Deep Learning model
1:09:14 - Fixing Up Performance
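For a quick taste of what Mission 1 boils down to, loading the PyTorch implementation via torch.hub looks roughly like this (a minimal sketch only; the exact variant and weights used in the video may differ):

import torch

# load a small pretrained YOLOv5 model from the Ultralytics hub (downloads on first run)
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# run detection on an image path, URL or numpy array, then inspect the results
results = model('https://ultralytics.com/images/zidane.jpg')
results.print()
results.show()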

Oh, and don't forget to connect with me!

Happy coding!
Nick

P.s. Let me know how you go and drop a comment if you need a hand!
#yolo #python
Comments

"i made OD bots in 30 minutes"
video: 70 minutes

those bastards lied to me

Powercube

Always love your tutorials, I'm a machine learning enthusiast too. Keep up the good work.

muhammadharsyeibra

Good tutorial Nicholas, appreciate you putting the effort into making this video! Please keep it up!

sanjaydubey

I guess if you want to stop the branch obstacle at the edge of the screen from being detected as a left/right obstacle, you could in fact include a small part of the middle tree trunk when labeling the right/left obstacles, so that the model will only pick up obstacles that are attached to the main trunk.

Nevertheless, Fantastic video indeed 💯. Loved the fact that you came back to fully correct the model. Deep learning is often frustrating when you don't get it right, but the magic when it actually works makes up for everything.

Keep doing more of these❤️

arjunkrishna

I love your channel, and thank you so much for this video; I have been working on this and finally found a video that explains everything so well. Keep up the good work.

sumayaabdulrahman

This is super cool! As always, huge huge thanks, much appreciated!
This is extremely useful in so, so many use cases, and you explain it in a way that I find easily adaptable to pretty much every scenario you can imagine :) Perfect!

I hope the "Breakdown Boar" is happy, wherever it lives :D Seriously though, I enjoyed that section :)

I play a lot of a game called "Oxygen Not Included" at the moment, it's a survival-ish colony-building type of game, and at first I thought that this wouldn't be all too interesting for a game like that, because making AIs that can play games is mostly awesome for more competitive games like sports games, racing games, RL, SC, games like that. But then I thought it would be kinda cool to build 2 robot hands that can use mouse & keyboard and train them to play games like the ONI mentioned above (or literally any game, really). It doesn't even need to be super good at the game at first; I think that getting an actual physical robot to play games with an actual mouse & keyboard would be super cool in itself :D Tried googling to see if someone has worked on a project like that before, but couldn't find much (and the Google results are diluted with tons of unrelated topics because the JavaScript guys decided to call one of their KBM scripts "robot"...).

Since you asked about those challenge videos: I must admit I don't care for the challenge part all that much, because I feel like it just introduces a bit of stress/pressure due to the soft time limit, and I prefer more calm/relaxed types of videos generally. I feel that too much "stress" makes it harder to learn, which is the ultimate goal. But I'm autistic, so that probably has an influence on it as well. It's not really a huge deal ultimately, and I wouldn't want to be the showstopper for those that do enjoy the challenge aspect, so I'd say I'm kinda neutral on that topic :)

Would also definitely love to see this integrated into a larger project. Chaining together multiple pipelines ultimately isn't that hard, but I think it's still a lot of fun and there's always valuable info in your tutorials :) Especially when it comes to optimizing one pipeline with the following one already in mind, if you know what I mean. Again, chaining them together is not usually difficult, but there are differences compared to working with them in isolation: different problems that can arise, different optimizations that are possible so that one model works better in conjunction with another, etc. etc.

Anyways, thanks again! Super awesome!

NoMercy

Very cool, Nick! Here's to another tool for reinforcement learning for games, yay!

1. I feel like LabelImg should be easier to use, particularly with 'pixelart' stuff like timberman. Label a few and then LabelImg should itself make guesses for the remaining unlabeled images, where you'd only have to confirm or decline the proposal, mostly via keyboard presses. The longer you do it, the more confident it would get. Some sort of "self-escalation".

2. Also: I can't shake the feeling that a neural net should have an easier time (compared to real life footage, for sure!) to get the detection and bounding boxes right. I wouldn't say 'most', but still a lot of the frames it sees have almost identical pixel arrangements for the classes it tries to detect. The bobbing of the timberman barely qualifies as data augmentation, right!?

3. For an example like timberman, I suppose it could help a lot to, I dunno, use opencv to remove the background in all the labeled examples?! The objects we are trying to find have black contours, and I'm sure opencv has something onboard to isolate that (see the sketch at the end of this comment). Without the background there should be less variation, making it easier for the neural net.

There is this example of one neural net that got good at detecting some animal (boy, I'm blanking on the actual kind). It turned out the training data tended to show said animal against a snowy background, and the net basically picked up on that more than on the animal itself.
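(For what it's worth, a minimal sketch of the background-removal idea in point 3, assuming the obstacles really do have dark outlines on a lighter background; the file paths and threshold value here are made up and would need tuning:)

import cv2
import numpy as np

# load one captured game frame (path is hypothetical)
img = cv2.imread('data/images/frame_001.jpg')

# dark outline pixels become white in the edge mask
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, edges = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)

# fill the outlined shapes so the whole object, not just its border, survives
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
mask = np.zeros_like(gray)
cv2.drawContours(mask, contours, -1, 255, thickness=cv2.FILLED)

# blank out everything that isn't part of an outlined object
cleaned = cv2.bitwise_and(img, img, mask=mask)
cv2.imwrite('data/images_clean/frame_001.jpg', cleaned)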

johanneszwilling

# if the 's' key is selected, we are going to "select" a bounding
# box to track
if key == ord("s"):
    # select the bounding box of the object we want to track (make
    # sure you press ENTER or SPACE after selecting the ROI)
    box = cv2.selectROI("Frame", frame, fromCenter=False, showCrosshair=True)

    # create a new object tracker for the bounding box and add it to our multi-object tracker
    # (any OpenCV tracker works here; CSRT assumes opencv-contrib-python and
    # trackers = cv2.legacy.MultiTracker_create() earlier in the script)
    tracker = cv2.legacy.TrackerCSRT_create()
    trackers.add(tracker, frame, box)

stevecoxiscool

How do I process it further? How do I give a command on the computer after I find the object?

mesutyilmaz

Hi sir, this was a great video.
Can you make another on integrating this with the Timberman game, actually making Python play Timberman based on the objects detected by this script?
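(A rough, hedged sketch of what that integration could look like. It assumes a custom-trained model loaded via torch.hub, mss for screen capture and pydirectinput for the key presses; the weights path, capture region and class names are made up and not necessarily what the video ends up using:)

import time
import mss
import numpy as np
import pydirectinput
import torch

# hypothetical path to the weights produced by train.py
model = torch.hub.load('ultralytics/yolov5', 'custom', path='runs/train/exp/weights/best.pt')

with mss.mss() as sct:
    # hypothetical region covering the game window
    region = {'top': 100, 'left': 0, 'width': 800, 'height': 600}
    while True:
        # BGRA screenshot -> contiguous RGB array the model can consume
        frame = np.ascontiguousarray(np.array(sct.grab(region))[:, :, :3][:, :, ::-1])
        detections = model(frame).pandas().xyxy[0]  # one row per detection: box, confidence, name

        # toy decision rule: chop on the side opposite the detected branch
        if (detections['name'] == 'branch_left').any():
            pydirectinput.press('right')
        elif (detections['name'] == 'branch_right').any():
            pydirectinput.press('left')
        time.sleep(0.05)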

tharunv

My train.py step is not working as it is supposed to. The error says the paging file size is too small. I've tried a lot of things to overcome this but nothing works. Please help!

smackiaa

I love this video! Thank you so much for making this tutorial!

xolefray

This YOLOv5/PyTorch thing seems to have a shit-ton of memory leaks. Trying to train it is consuming all CUDA memory and system memory. I've had to reboot constantly.

sstainba

Please do me a favor. I want to detect on my whole screen, so what numbers should I put in the screen array?
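(If the capture is done with the mss library, which is an assumption on my part, you don't need to guess the numbers at all; it can report the full-screen geometry itself:)

import mss
import numpy as np

with mss.mss() as sct:
    # monitors[0] is the combined virtual screen, monitors[1] is the primary monitor
    monitor = sct.monitors[1]
    print(monitor)  # e.g. {'top': 0, 'left': 0, 'width': 1920, 'height': 1080}

    # grab the whole primary monitor as a BGRA numpy array
    frame = np.array(sct.grab(monitor))
    print(frame.shape)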

prakashrathod

If you have not played around with the opencv tracker and selectROI, I would take a look. You can get a lot of data by selecting a region and setting this region on the tracker.
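(A self-contained sketch of that idea, assuming opencv-contrib-python for the CSRT tracker and a hypothetical gameplay recording: draw one box, and every tracked frame then gives you another image/bounding-box pair you could convert into YOLO labels:)

import cv2

cap = cv2.VideoCapture('gameplay.mp4')  # hypothetical recording; use 0 for a webcam
ok, frame = cap.read()

# draw a box around the object once (press ENTER or SPACE to confirm)
box = cv2.selectROI('Frame', frame, fromCenter=False, showCrosshair=True)

# CSRT is slower but accurate; needs opencv-contrib-python (cv2.legacy.TrackerCSRT_create on some builds)
tracker = cv2.TrackerCSRT_create()
tracker.init(frame, box)

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    success, box = tracker.update(frame)
    if success:
        x, y, w, h = [int(v) for v in box]
        print(frame_idx, x, y, w, h)  # frame index plus box = crude training data
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('Frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
    frame_idx += 1

cap.release()
cv2.destroyAllWindows()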

stevecoxiscool

Hey Nicholas, I just watched your multi-person pose estimation video, which uses MoveNet from TensorFlow.
I have a request: is it possible for you to make a tutorial on skeleton detection/pose estimation using CenterNet?
It would be of great use to me!!

Or, if you could, provide a GitHub repo for that; I am not able to find a properly working example of the CenterNet model.

Thanks in advance.

dipankarnandi

Very interesting. I am very new to coding, looks like I have a lot to learn.

kevink

Nick, what are you using to have the iPad and Pencil project so nicely? What software? OBS Studio + what?

Thank you.

xeb-

What are your computer specifications for training the model?

sourabmaity

# grab the updated bounding box coordinates (if any) for each
# object that is being tracked
(success, boxes) = trackers.update(frame)

stevecoxiscool