Tutorial 2/3 - Audio Reactive & Motion Controlled Visuals in TouchDesigner

Hey! In this tutorial, we'll go over how to create audio reactive visuals that are also motion controlled using a custom Body Tracking plugin that I developed for TouchDesigner.

Tutorial references for an intro to feedback loops:

0:00 Intro and Overview
1:37 Project Setup
8:17 Custom Audio Reactivity Component Setup
15:27 Connecting our Audio Reactivity
17:11 Custom Feedback Component Setup
30:52 Adding Audio Reactivity to Our Feedback
31:11 Connecting the Motion Tracking
16:50 Example 4 Instancing all the Position Data for Rendering
18:53 Example 5 Passing your Own Video in for Body Tracking
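For a rough sense of the CHOP-to-parameter mapping the chapters above build toward, here is a minimal TouchDesigner Python sketch (not the exact network from the video): it assumes a CHOP Execute DAT is watching a normalized audio level channel, that the feedback decay lives on a hypothetical Level TOP named 'feedbackLevel', and that the rendered geometry is a hypothetical Geometry COMP named 'geo1'.

def onValueChange(channel, sampleIndex, val, prev):
    # Clamp the incoming audio level, assuming it was normalized to roughly 0..1 upstream.
    level = min(max(val, 0.0), 1.0)
    # Louder audio -> longer feedback trails (higher opacity on the feedback Level TOP).
    op('feedbackLevel').par.opacity = 0.8 + 0.19 * level
    # Louder audio -> slightly larger rendered geometry (uniform scale on the Geometry COMP).
    op('geo1').par.scale = 1.0 + 0.5 * level
    return

In practice you would tune the opacity range to taste; values closer to 1.0 hold the feedback trails longer.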
Comments

I really appreciate this tutorial, not only the subject matter but the way you actually explain each part. Many of the TouchDesigner tutorials come off as mathematical, with no simple explanation of why a certain OP is used. Thank you.

frequensea_experience

Thanks so much for this and all the other tutorials, and for MediaPipe... wow, so much fun to play with.

orquesmmusic

You're amazing. Please keep doing tutorials. I have been learning a lot!

showsnsdespa

I love your tutorials, and thank you for MediaPipe. I have just really started diving into TouchDesigner. I've been doing projection mapping and lighting design for sound-reactive pieces, but I've really been wanting to learn more about interactive mapping projects, so this has been a great starting point. I have an idea in mind that I'm pretty sure I can accomplish with MediaPipe's facial tracking and facial landmarks features. I'm not asking for a tutorial on it (although something along those lines would be awesome), but I did want to see if there is any documentation that would help with this concept. I want to do projection mapping onto a mask I'd be wearing and have the facial tracking keep the visual (be it noise, projected video content, or anything to get started) on the mask even as I move. I just started thinking about it, so I'm sure someone has already done it and there must be documentation, but I thought I'd ask here first. Regardless, I very much appreciate your channel. I will like, subscribe, and share your videos for now, but once I am able to support your Patreon page I will, as you have some great additional content on there.

KeyFur_NYC

I signed up for the Patreon and followed this; it's a great tutorial and resource. Is it possible to switch out the ML model for a Kinect and keep the data nodes? I see the tracking happens directly inside the system. Unfortunately my Kinect didn't register as a camera, so I couldn't pipe it in directly. I also tried two different USB webcams, which TD picked up, but the ML model couldn't run (I left it to load for more than 20 minutes at a time), so I'm not sure piping the Kinect through the model that way would help even if we used that process. Would love the advice, because I'd love to keep what I made for an event!

augustababeta

Hi Torin, I've been using the MediaPipe plugin for my research project for over a month now. I've had some questions, and there aren't enough tutorials online for me to figure out how to go about this. My project is exploring narrative animation driven by gestures. The animations are made in Blender, and I use the MediaPipe plugin to play them, so for example if you wave at the screen (open palm), the character in the animation waves back at you. Anyway, I don't want to go into too much detail, but if this is something that interests you I would love some help, although the plugin is already helping me realise my enquiry. Thanks so much, Anna

annabelleisabot

Hi Torin, do you have more in-depth hand tracking tutorials on your Patreon?

pixelnation

Is the v3 posted? I can't seem to find it, many thanks!

chasehoagland

Dude, everything is fine, of course, but shouldn't I complain that the video doesn't match the preview image? Where is what's shown in the preview?

yklandares