Reinforcement Learning for Trading Tutorial | $GME RL Python Trading

Heard about RL?

What about $GME?

Well, they’re both in the news a helluva lot right now.

So why not bring them together?

In this video you'll learn how to build the beginnings of a Python Trading Bot using Reinforcement Learning. Better yet, you'll be able to do it by bringing in your own securities; in this case, you'll be working with GameStop stock prices from MarketWatch.

The video doesn't go through advanced strategies, but it gives you an idea of what's involved in starting to leverage RL in a finance/trading environment!

In this video, you'll learn:
1. Working with the OpenAI Gym environment gym-anytrading for Reinforcement Learning Trading
2. Training a Trading Bot with Python Reinforcement Learning using Stable Baselines
3. Loading in GME Trading data for training a custom RL Trading Bot
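
If you want a feel for how those pieces fit together, here's a minimal sketch under some assumptions: the CSV name, column names, and the frame_bound/window_size values are illustrative placeholders, and stable-baselines (the TensorFlow 1.x generation used in the video) is installed.

import gym
import gym_anytrading
import pandas as pd
from stable_baselines.common.vec_env import DummyVecEnv
from stable_baselines import A2C

# Load the GME prices exported from MarketWatch, index by date,
# and sort oldest-first (see the comments below on sort_index)
df = pd.read_csv('gmedata.csv')
df['Date'] = pd.to_datetime(df['Date'])
df = df.set_index('Date').sort_index()

# Push the custom data into the gym-anytrading stocks environment
env_maker = lambda: gym.make('stocks-v0', df=df, frame_bound=(5, 100), window_size=5)
env = DummyVecEnv([env_maker])

# Train an A2C agent on the trading environment
model = A2C('MlpLstmPolicy', env, verbose=1)
model.learn(total_timesteps=100000)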

Chapters:
0:00 - Start
3:12 - Installing gym-anytrading and Dependencies
5:06 - Importing Dependencies
9:09 - Loading Gamestop Marketwatch data using Pandas
13:51 - Pushing Custom Data into the gym-anytrading Environment
18:11 - Testing the Trading Environment
24:18 - Training the Reinforcement Learning Agent
31:55 - Evaluating Model Performance

Oh, and don't forget to connect with me!

Happy coding!
Nick

P.S. Let me know how you go, and drop a comment if you need a hand!
Comments

Fantastic tutorial! Some of the libs are a bit old now. I got it working on Lambda Stack with the following changes (sketched just below):
1. Use the latest tensorflow-gpu and tensorflow
2. Change "stable-baselines" to "stable-baselines3"
3. Change "MlpLstmPolicy" to "MlpPolicy"

Cheers

bennorquay
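
For anyone following bennorquay's changes, a rough sketch of what the updated calls might look like with stable-baselines3 (which runs on PyTorch, so the TensorFlow 1.x pin is no longer needed; df is the sorted GME dataframe from the video, and the hyperparameters are illustrative):

import gym
import gym_anytrading
from stable_baselines3.common.vec_env import DummyVecEnv
from stable_baselines3 import A2C

# Same gym-anytrading environment as the video, wrapped for stable-baselines3
env = DummyVecEnv([lambda: gym.make('stocks-v0', df=df, frame_bound=(5, 100), window_size=5)])

# stable-baselines3 doesn't ship an MlpLstmPolicy for A2C, so use MlpPolicy
model = A2C('MlpPolicy', env, verbose=1)
model.learn(total_timesteps=100000)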

Please keep making videos! You're a real treasure explaining RL so well!
I'm just learning it at school and you really just helped me understand a lot of it. Thank you!

pedrostark

Just so everyone knows, you need to add df.sort_index() so the data isn't reversed; otherwise the model trains and predicts on reversed data. gym-anytrading does not automatically sort by the date index.

JarlBulgruf
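
In code, the fix JarlBulgruf describes looks something like this (the file name is illustrative; MarketWatch exports typically come newest-first):

df = pd.read_csv('gmedata.csv')
df['Date'] = pd.to_datetime(df['Date'])
df = df.set_index('Date').sort_index()  # oldest rows first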

I'm trying to make a project like this one with almost no experience in it, but the learning curve just got easier thanks to you.

victorthecat

Would love to see a more in-depth video, especially about custom signal features.

urban
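
On custom signal features: gym-anytrading's documented pattern is to subclass the environment and override _process_data. A sketch, assuming your dataframe has a Volume column to add as an extra feature:

from gym_anytrading.envs import StocksEnv

def my_process_data(env):
    # Slice out the span the environment will actually see
    start = env.frame_bound[0] - env.window_size
    end = env.frame_bound[1]
    prices = env.df.loc[:, 'Close'].to_numpy()[start:end]
    # Hand the agent extra columns as its observation features
    signal_features = env.df.loc[:, ['Close', 'Volume']].to_numpy()[start:end]
    return prices, signal_features

class MyStocksEnv(StocksEnv):
    _process_data = my_process_data

env = MyStocksEnv(df=df, window_size=5, frame_bound=(5, 100))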

I trimmed the data down to just the closing prices, and the algorithm is training a lot better. I highly recommend others do that as well.

Nerdherfer
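
For reference, Nerdherfer's trim is a one-liner; note that the default stocks environment already builds its signal features from the Close column (plus its diff), so this mainly matters if you're feeding extra columns into a custom _process_data:

df = df[['Close']]  # keep only the closing prices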

Extremely interesting and easy to understand! I'd like to learn more about the pros and cons of other gym trading environments. Thanks for all the time you spend producing these tutorials. You're helping a lot of people like me.

vincentroye

Can't express how much I enjoy your videos. Amazing job making the projects as practical as possible and looking forward to more ML videos!

aminberjaouitahmaz

Super helpful! Thank you for explaining everything in easy-to-understand terms.

andrewkaplanc

You are an angel. Thanks a million for all you are doing by making knowledge free and easy to use.

EkShunya

Would love to see more in-depth videos like these. And a video on stock price prediction using neural networks!

hammadali

Your RL stuff is fantastic, thanks for doing it!! 💎 🙌 bro.

Also, yes, please do more RL trading stuff! Different action spaces & custom environments!

edzme

Outstanding content because you explain everything but do it quickly and clearly.

Throwingness

Nicholas, your work is impressive and the community is growing. Perfect. The community can always count on a useful set of instructions and information about AI, and now about how to model and teach RL agents. RL is really awesome but rather abstract, so it requires a lot of studying. Your effort in promoting this branch of AI is noticeable. Thanks also for the stable-baselines tips. Have a nice day.

markusbuchholz

Would really love to see a more in-depth video; heck, would love to see more videos from you. I am learning a lot, so thank you! New subscriber here, and I've been binge-watching your stuff. Good work!

izzrose

You make understanding so simple. Love your work, thank you for making such videos.

krishnamore

Thank you for such clear, hands-on tutorials on reinforcement learning. I have a couple of questions, though.

I've learned elsewhere that an RL agent requires a trade-off between exploration and exploitation. I didn't see this specifically mentioned in this video. Is there a reason for that? Perhaps it's not advisable to use any exploration/exploitation trade-offs in trading algorithms, or maybe this specific RL model doesn't support it. I would appreciate it if you could help me understand these considerations.

Additionally, I would love to see an example of an RL agent being trained with new data while in operation. I believe the official terminology is "online learning" or "continual learning." Please consider making a video that covers that topic as well.

borisljevar
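
(A note on the exploration question above: the A2C agent used here samples actions from a stochastic policy during training, which is where its exploration comes from, and stable-baselines exposes an entropy coefficient that nudges the policy toward more exploration. The value below is illustrative, not a recommendation.)

model = A2C('MlpLstmPolicy', env, ent_coef=0.01, verbose=1)  # higher ent_coef = more exploration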

Great, thanks a lot for all the work you put into this tutorial!

_hydro_

Man, your videos are awesome. We need more about adding other features (or creating them). Thank you!

futurestadawul

Can you do an update on this? tensorflow 1.15.0 is not available anymore, and it seems they changed so much that I just cannot get this to work with TensorFlow 2.

eb-worx