Sign Language Detection using ACTION RECOGNITION with Python | LSTM Deep Learning Model
Want to take your sign language model a little further?
In this video, you'll learn how to use action detection to do exactly that!
You'll use a keypoint detection model to build sequences of keypoints, which are then passed to an action detection model to decode sign language. As part of the model-building process, you'll use TensorFlow and Keras to build a deep neural network with LSTM layers that handles the keypoint sequences.
In this video you'll learn how to:
1. Extract MediaPipe Holistic Keypoints
2. Build a Sign Language model using Action Detection powered by LSTM layers
3. Predict sign language in real time using video sequences
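The first step above, extracting MediaPipe Holistic keypoints, boils down to flattening each detected body part into one fixed-length vector per frame. Here's a minimal sketch of that idea, assuming the standard MediaPipe landmark counts (33 pose landmarks with x, y, z, visibility; 468 face and 21 per-hand landmarks with x, y, z); the helper name and array inputs are illustrative, not lifted from the video:

```python
import numpy as np

# Standard MediaPipe Holistic landmark counts (assumption for this sketch).
POSE, FACE, HAND = 33, 468, 21

def extract_keypoints(pose, face, lh, rh):
    """Flatten landmark arrays into one vector, zero-filling missing parts
    so every frame yields the same fixed-length feature vector."""
    pose_v = pose.flatten() if pose is not None else np.zeros(POSE * 4)
    face_v = face.flatten() if face is not None else np.zeros(FACE * 3)
    lh_v = lh.flatten() if lh is not None else np.zeros(HAND * 3)
    rh_v = rh.flatten() if rh is not None else np.zeros(HAND * 3)
    return np.concatenate([pose_v, face_v, lh_v, rh_v])

# A frame with only a right hand detected still yields a fixed-length vector:
frame = extract_keypoints(None, None, None, np.random.rand(HAND, 3))
print(frame.shape)  # (1662,)
```

Zero-filling undetected parts matters: it keeps the vector length constant (33·4 + 468·3 + 21·3 + 21·3 = 1662), so frames can be stacked into the uniform sequences the LSTM expects.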
Get the code:
Chapters
0:00 - Start
0:38 - Gameplan
1:38 - How it Works
2:13 - Tutorial Start
3:53 - 1. Install and Import Dependencies
8:17 - 2. Detect Face, Hand and Pose Landmarks
40:29 - 3. Extract Keypoints
57:35 - 4. Setup Folders for Data Collection
1:06:00 - 5. Collect Keypoint Sequences
1:25:17 - 6. Preprocess Data and Create Labels
1:34:38 - 7. Build and Train an LSTM Deep Learning Model
1:50:11 - 8. Make Sign Language Predictions
1:52:40 - 9. Save Model Weights
1:53:45 - 10. Evaluation using a Confusion Matrix
1:57:40 - 11. Test in Real Time
2:20:46 - BONUS: Improving Performance
2:26:52 - Wrap Up
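The model built and trained in chapters 7–8 follows a common stacked-LSTM classification pattern. A minimal Keras sketch of that pattern, assuming 30-frame sequences of 1662 keypoints and 3 sign classes; the layer sizes and class count here are illustrative assumptions, not a transcript of the video's exact code:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Assumed shapes for this sketch: 30 frames per sequence,
# 1662 keypoint features per frame, 3 sign classes.
SEQ_LEN, N_FEATURES, N_CLASSES = 30, 1662, 3

model = Sequential([
    # Stacked LSTMs: the first two return full sequences so the next
    # LSTM layer receives one vector per timestep.
    LSTM(64, return_sequences=True, activation='relu',
         input_shape=(SEQ_LEN, N_FEATURES)),
    LSTM(128, return_sequences=True, activation='relu'),
    LSTM(64, return_sequences=False, activation='relu'),
    Dense(64, activation='relu'),
    Dense(32, activation='relu'),
    Dense(N_CLASSES, activation='softmax'),  # one probability per sign
])
model.compile(optimizer='Adam', loss='categorical_crossentropy',
              metrics=['categorical_accuracy'])

# A single dummy sequence produces a probability distribution over signs:
probs = model.predict(np.zeros((1, SEQ_LEN, N_FEATURES)), verbose=0)
print(probs.shape)  # (1, 3)
```

At real-time inference (chapter 11), you'd maintain a sliding window of the last 30 keypoint vectors and feed it through `model.predict` each frame, taking `argmax` over the softmax output to pick the sign.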
Oh, and don't forget to connect with me!
Happy coding!
Nick
P.s. Let me know how you go and drop a comment if you need a hand!