Building a Custom Environment for Deep Reinforcement Learning with OpenAI Gym and Python

Tired of working with standard OpenAI Environments?

Want to get started building your own custom Reinforcement Learning Environments?

Need a specific Python RL environment built for a project you’re working on in the field?

In this video you'll learn how to do exactly that in about 25 minutes: building a basic custom reinforcement learning environment to get started with RL. We'll go through how to build your own environment class, setting up the __init__(), step() and reset() methods, and then train a simple RL model to learn how to interact with it using Python, Keras-RL and OpenAI Gym.
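
For orientation, here is a minimal sketch of the kind of environment class the video builds; the scenario is a simple shower temperature regulator, and the exact numbers below are illustrative rather than the video's verbatim code:

    import numpy as np
    from gym import Env
    from gym.spaces import Discrete, Box

    class ShowerEnv(Env):
        def __init__(self):
            self.action_space = Discrete(3)   # 0 = turn down, 1 = hold, 2 = turn up
            self.observation_space = Box(low=np.array([0.0]), high=np.array([100.0]))
            self.state = 38 + np.random.randint(-3, 4)   # start near the comfortable band
            self.shower_length = 60                      # seconds left in the episode

        def step(self, action):
            self.state += action - 1        # map {0, 1, 2} onto {-1, 0, +1} degrees
            self.shower_length -= 1
            reward = 1 if 37 <= self.state <= 39 else -1
            done = self.shower_length <= 0
            return self.state, reward, done, {}

        def reset(self):
            self.state = 38 + np.random.randint(-3, 4)
            self.shower_length = 60
            return self.state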

In this video you'll go through:
1. How to build a custom environment with OpenAI Gym
2. Training a DQN Agent on a Custom OpenAI Environment
3. Testing out a Reinforcement Learning agent on a Custom Environment

Chapters
0:00 - Start
0:30 - Cloning Baseline Reinforcement Learning Code
3:12 - Custom Environment Blueprint and Scenario
5:22 - Installing and Importing Dependencies
7:44 - Creating a Custom Environment with OpenAI Gym
9:21 - Coding the __init__() method for an OpenAI Environment
12:26 - Coding the step() method for an OpenAI Environment
16:50 - Coding the reset() method for an OpenAI Environment
17:23 - Testing a Custom OpenAI Environment
20:29 - Training a DQN Agent with Keras-RL
23:48 - Running a DQN Agent on a Custom Environment using Keras-RL
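
The two training chapters pair the custom environment with a small dense network and keras-rl's DQNAgent, roughly along these lines (a hedged sketch; hyperparameters are illustrative):

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense, Flatten
    from tensorflow.keras.optimizers import Adam
    from rl.agents import DQNAgent
    from rl.policy import BoltzmannQPolicy
    from rl.memory import SequentialMemory

    env = ShowerEnv()
    states = env.observation_space.shape   # (1,)
    actions = env.action_space.n           # 3

    # keras-rl feeds observations with an extra window_length axis, hence the Flatten
    model = Sequential([
        Flatten(input_shape=(1,) + states),
        Dense(24, activation='relu'),
        Dense(24, activation='relu'),
        Dense(actions, activation='linear'),
    ])

    dqn = DQNAgent(model=model, nb_actions=actions,
                   memory=SequentialMemory(limit=50000, window_length=1),
                   policy=BoltzmannQPolicy(),
                   nb_steps_warmup=10, target_model_update=1e-2)
    dqn.compile(Adam(learning_rate=1e-3), metrics=['mae'])
    dqn.fit(env, nb_steps=50000, visualize=False, verbose=1)
    dqn.test(env, nb_episodes=100, visualize=False)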

Oh, and don't forget to connect with me!

Happy coding!
Nick

P.s. Let me know how you go and drop a comment if you need a hand!
Comments

Great tutorial. Simple and to the point, especially for someone who is familiar with RL concepts and just wants to get the nuts and bolts of an OpenAI gym env.

laser

Love these. Building custom environments is one of the biggest areas missing from the OpenAI tooling, imo.
Would be cool to see one bringing in external data, like predicting the direction of the next step of a sine wave or something simple like that.

tomtalkscars
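
A hypothetical sketch of that sine-wave idea, in the same Env shape as the one above (all names and numbers are made up for illustration):

    import numpy as np
    import gym
    from gym import spaces

    class SineDirectionEnv(gym.Env):
        # Hypothetical: the agent guesses whether the next point
        # of a sine wave goes up (1) or down (0).
        def __init__(self, n_steps=200):
            self.n_steps = n_steps
            self.action_space = spaces.Discrete(2)
            self.observation_space = spaces.Box(low=-1.0, high=1.0,
                                                shape=(1,), dtype=np.float32)
            self.t = 0

        def step(self, action):
            current, nxt = np.sin(0.1 * self.t), np.sin(0.1 * (self.t + 1))
            reward = 1 if action == int(nxt > current) else -1
            self.t += 1
            done = self.t >= self.n_steps
            return np.array([nxt], dtype=np.float32), reward, done, {}

        def reset(self):
            self.t = 0
            return np.array([0.0], dtype=np.float32)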

Your tutorials are awesome; I just finished your 3-hour RL tutorial, and I would love to see a Pygame implementation as soon as possible :)
If possible, try to create a separate set of advanced videos where you explain the math and intuition behind RL, along with code implementations (to cater to a different audience).
Something I like about you is that you respond to each and every comment, a characteristic I don't often see from others. Kudos to you!
Thanks again mate! Stay safe!

Techyisle

Really informative video! As a high schooler self-learning RL, tutorials like these are really helpful for showing the applicability of RL.

tawsifkamal

Thank you for your tutorial! I hope to see how to visualize the environment in an upcoming tutorial!

jeongseonjai
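
A minimal render() for the ShowerEnv sketched earlier, as a placeholder until a proper visualisation (e.g. Pygame) exists; this is a text-mode assumption, not the video's code, and it goes on the environment class:

    def render(self, mode='human'):
        # Text-mode stand-in; Pygame or Matplotlib drawing slots in here
        print(f'temp: {self.state} | time left: {self.shower_length}s')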

Man you can't stop giving us this gold of a tutorial!

user___

I can't stop myself from commenting on this exceptionally good tutorial.
Sir, really amazing job. I must say you should continue this good work; the way you explain each and every line is something very rare in the material available so far.

Much love from a Pakistani student currently in South Korea 😍

MuazRazaq

This is way more useful than the last one. The more you can modify OpenAI's envs, it seems, the more you can get out of the reinforcement learning schema.

baronvonbeandip
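
One low-effort way to modify OpenAI's envs without forking them is Gym's wrapper classes; a small sketch using the built-in RewardWrapper (CartPole-v0 chosen arbitrarily):

    import gym

    class ScaledReward(gym.RewardWrapper):
        # Scale every reward by a constant factor, leaving the env itself untouched
        def __init__(self, env, scale=0.1):
            super().__init__(env)
            self.scale = scale

        def reward(self, reward):
            return reward * self.scale

    env = ScaledReward(gym.make('CartPole-v0'), scale=0.1)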

Sir that was exceptionally good!!! 🔥
I would really love to see the render function in play using pygame.
Waiting eagerly for it!!!!

pratyushpatnaik

Hello Nick! I love your tutorial, and it's actually helping so much at university, especially considering the lack of documentation for OpenAI Gym. I was building a custom environment for tic-tac-toe as practice, but for some reason, when I run dqn.fit() exactly as you did for the keras-rl training part, I get this:

"ValueError: Error when checking input: expected dense_16_input to have 2 dimensions, but got array with shape (1, 1, 3, 3)"

I don't quite understand why it got that shape, because my tic-tac-toe game's observation space is an np.array([Discrete(3)]*9) to represent the nine tiles and the three possibilities of what could be in them.

Again, thank you for the helpful tutorials!

SatoBois
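
For context, that (1, 1, 3, 3) shape is keras-rl's window_length axis stacked on the 3x3 board. A likely (untested) fix is to flatten the input first, and to declare the board as one space, e.g. MultiDiscrete([3] * 9), rather than a NumPy array of Discrete spaces:

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense, Flatten

    # keras-rl prepends a window_length axis, so a (3, 3) board arrives
    # as (1, 3, 3); flatten it before the Dense layers:
    model = Sequential([
        Flatten(input_shape=(1, 3, 3)),
        Dense(24, activation='relu'),
        Dense(9, activation='linear'),   # one Q-value per tile
    ])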

On my Mac the kernel keeps dying when I run the basic CartPole example. I don't know how to troubleshoot it. Please help.

Spruhawahane

I guess what irks me the most about all the universe/retro/baselines gym examples is that it's not straightforward to get your bright, shiny, newly trained model to run in other environments. These gym examples have so many interdependencies, and one does not really know what is going on inside the box. This is why I am glad you are doing the video on getting other environments to work with RL algos. Unreal is my choice since Unity already has ML examples.

stevecoxiscool
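
On the portability point: keras-rl agents can at least persist their weights, so a model trained against one env can be rebuilt and reloaded elsewhere; a minimal sketch, assuming the dqn agent built earlier:

    # After training in one script, save just the network weights...
    dqn.save_weights('dqn_weights.h5f', overwrite=True)

    # ...then, elsewhere, rebuild the same model + agent and reload them:
    dqn.load_weights('dqn_weights.h5f')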

Just recommended this video to one of my coursemates. Your videos are worth sharing.

prakhars

As a beginner in RL, all your videos really help me a lot, so thank you!!! And I just wonder if there is any chance of seeing a tutorial on how to build an env with multi-dimensional actions?

yxzhou
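
For reference, Gym expresses multi-dimensional actions through its space types; a small sketch (note that keras-rl's DQNAgent expects a single Discrete action space, so multi-dimensional actions usually mean flattening the combinations into one Discrete, or switching algorithms):

    import numpy as np
    from gym import spaces

    # A 2-D discrete action, e.g. (steering, throttle level):
    multi_discrete = spaces.MultiDiscrete([3, 5])

    # Or a continuous 2-D action vector in [-1, 1]:
    continuous = spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32)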

Hello Nick, wonderful video. I am having the same error message you pointed out in the video and tried resolving it as shown, but it is giving me a different error message stating that the name 'model' is not defined. Please help!

idrisbima
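
A note on that error: the video's fix deletes the stale Keras model, and "name 'model' is not defined" usually just means the cell that rebuilds it never ran. A sketch of the re-run order, with build_model/build_agent as hypothetical stand-ins for whatever cells create them:

    # 1. Drop the stale objects (only if they already exist)
    del model, dqn

    # 2. Re-run the cell that creates the network...
    model = build_model(states, actions)   # hypothetical helper

    # 3. ...and only then rebuild and recompile the agent around it
    dqn = build_agent(model, actions)      # hypothetical helper
    dqn.compile(Adam(learning_rate=1e-3), metrics=['mae'])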

Nicholas, as I mentioned some time ago, your YT channel is outstanding and your effort impressive. RL is my favourite branch of ML, so I especially enjoyed watching this one. Exceptionally, you also built a customised environment; the idea can easily be adapted and applied to other specific tasks. It is a great pleasure to watch your channel and I will recommend everyone to subscribe. Have a nice day!

markusbuchholz

Thank you Nicholas,
this is a very good example to give things a kick start.

oliverprislan

I am doing a project: RL for a smart car (prototype) using DQN or another RL algorithm.
So I am thinking of feeding in images as the state (from the camera mounted on the car), and my car can take 3 actions (forward, right and left). I am keeping it quite simple, i.e. by keeping the car in front of our goal; as the car sees the goal I want to reward it and take the next action, and if it takes such a random action that the goal is no longer in the camera's view, it gets a penalty (state, action, reward/penalty, next state and so on). The episode time is limited to 2 mins. My aim is that the car moves towards its goal (and the more it moves towards the goal, the larger that feature becomes, so it gets another reward because it is moving towards its goal); the goal would be an image ("Triangle") at the end of the room, in front of the car's initial position. Now, before implementing my DQN on the real-life prototype, I need to train it in OpenAI Gym (3D). I have no idea how I can build such an environment where I can train my DQN by simulation. Any help and suggestions are appreciated.

saaddurrani
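
A starting point for that kind of project: declare the camera frame itself as the observation space. A hypothetical, heavily stubbed sketch; _get_frame and _goal_visible stand in for a real 3D simulator or camera feed:

    import numpy as np
    import gym
    from gym import spaces

    class CameraCarEnv(gym.Env):
        def __init__(self):
            self.action_space = spaces.Discrete(3)   # forward, left, right
            # One 84x84 grayscale camera frame per step
            self.observation_space = spaces.Box(low=0, high=255,
                                                shape=(84, 84, 1), dtype=np.uint8)
            self.steps_left = 120   # ~2 minutes at one step per second

        def step(self, action):
            frame = self._get_frame()
            reward = 1.0 if self._goal_visible(frame) else -1.0
            self.steps_left -= 1
            return frame, reward, self.steps_left <= 0, {}

        def reset(self):
            self.steps_left = 120
            return self._get_frame()

        def _get_frame(self):
            # Stub: replace with a frame from the simulator/camera
            return np.zeros((84, 84, 1), dtype=np.uint8)

        def _goal_visible(self, frame):
            # Stub: replace with real goal detection (e.g. template matching)
            return False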

Hi, great video! I was just wondering what happens if, say, the temperature is at 100 and the model tries to add 1 to it (so it is now outside the limits). Does it then resample automatically, or would you have to implement this in the code yourself?

charlesewing
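
To answer directly: Gym does not resample or clamp for you; the Box bounds are declarative only, so step() has to clip explicitly. A sketch of a replacement step() for the ShowerEnv above:

    import numpy as np

    def step(self, action):
        # Box bounds are only a declaration; clamp the state yourself
        self.state = int(np.clip(self.state + action - 1, 0, 100))
        self.shower_length -= 1
        reward = 1 if 37 <= self.state <= 39 else -1
        return self.state, reward, self.shower_length <= 0, {}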

Thank you for the video ⚡⚡⚡
I hope you can make a custom agent next time ✅
Looking forward to seeing that ✨

islam