Discover how to use MediaPipe’s real-time pose tracking in TouchDesigner to create reactive audio-visual compositions. You’ll learn how body movement can generate or manipulate sound and visuals in live performance contexts.
Structure of the Training Session
- Preparation (approx. 60 minutes)
Objective: Prepare the session setup to ensure seamless learning.
Steps:
- Review MediaPipe’s functionality, pose-tracking accuracy, and compatibility with TouchDesigner.
- Prepare a short presentation outlining the basic principles of pose tracking and its use in performance art.
- Install and test the MediaPipe integration with TouchDesigner, ensuring pose data is transmitted correctly to the audio components.
Checklist:
- MediaPipe and TouchDesigner installed and integrated correctly.
- Test the full chain: camera input, body tracking, audio output.
- Sample project ready to demonstrate basic interaction.
- Handouts/links to example projects and plugins available.
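One lightweight way to check the “pose data is transmitted correctly” item before the session is to send landmark data as JSON over UDP, which TouchDesigner can receive with a UDP In DAT. This is a minimal sketch, not the only integration path: the port, host, and the landmark values below are stand-ins for live MediaPipe output.

```python
import json
import socket

# Hypothetical landmark sample standing in for live MediaPipe output.
# MediaPipe Pose reports normalized coordinates per landmark.
sample_pose = {
    "nose": {"x": 0.51, "y": 0.22},
    "right_wrist": {"x": 0.68, "y": 0.75},
}

def send_pose(pose, host="127.0.0.1", port=7000):
    """Send one pose frame as a JSON datagram (readable by a UDP In DAT)."""
    payload = json.dumps(pose).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))
    return payload

packet = send_pose(sample_pose)
print(packet.decode("utf-8"))
```

If the JSON arrives in the UDP In DAT, the camera-to-TouchDesigner leg of the pipeline is working and you can move on to mapping the values onto audio parameters.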
- Introduction to the Tool (approx. 30 minutes)
Objective: Introduce participants to MediaPipe’s role in interactive AV performance.
Steps:
- Explain the concept of pose tracking and its artistic applications.
- Show examples where body movement drives sound or visuals.
- Describe key advantages of MediaPipe (e.g., markerless tracking with no wearables, open-source flexibility).
Trainer Tip: Use visual diagrams or videos to demonstrate pose detection. Relate it to familiar technology such as camera face filters or fitness-tracking apps.
- Hands-on Practice (approx. 45 minutes)
Objective: Guide participants through building a basic pose-to-sound interaction.
Steps:
- Demonstrate the setup: camera input, skeleton recognition, linking data to audio effects in TouchDesigner.
- Assign a task: trigger a sound sample when a specific gesture is made (e.g., hand above head).
- Support participants in customizing responses (e.g., gesture-based volume control).
Trainer Tip: Encourage creativity in gesture-sound mapping. Keep a sample audio library handy.
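The hand-above-head trigger from the task above can be sketched in plain Python, e.g., inside a TouchDesigner Script CHOP callback. In MediaPipe’s normalized image coordinates, y increases downward, so a raised wrist has a smaller y value than the nose. The landmark indices follow MediaPipe’s 33-point pose model; the frame data and margin value are illustrative assumptions.

```python
# MediaPipe Pose landmark indices (from the official 33-point model):
NOSE = 0
LEFT_WRIST = 15
RIGHT_WRIST = 16

def hand_above_head(landmarks, margin=0.05):
    """Return True when either wrist is above the nose.

    `landmarks` is a list of (x, y) pairs in MediaPipe's normalized
    image coordinates (y grows downward); `margin` adds a small dead
    zone so the trigger does not flicker at the boundary.
    """
    nose_y = landmarks[NOSE][1]
    return any(
        landmarks[i][1] < nose_y - margin
        for i in (LEFT_WRIST, RIGHT_WRIST)
    )

# Hypothetical frames: wrist below the head, then raised above it.
idle = [(0.5, 0.2)] + [(0.5, 0.5)] * 32
idle[RIGHT_WRIST] = (0.6, 0.8)
raised = list(idle)
raised[RIGHT_WRIST] = (0.6, 0.1)

print(hand_above_head(idle), hand_above_head(raised))  # False True
```

The boolean can then drive a Trigger CHOP or start playback of a sound sample, which is the whole pose-to-sound loop in miniature.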
- Advanced Features and Creative Use Cases (approx. 30 minutes)
Objective: Expand on complex interactions and integration with other tools.
Steps:
- Introduce multi-joint tracking and gesture-recognition patterns.
- Present creative applications (e.g., dance performances where motion alters entire soundscapes).
- Discuss common issues (e.g., lag, poor lighting, joint misreads) and how to address them.
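Multi-joint patterns usually reduce to geometry over several landmarks at once. As an illustrative sketch (the indices follow MediaPipe’s pose landmark model; the frame data is hypothetical), the right-elbow angle can be computed from shoulder, elbow, and wrist and then mapped onto a filter or soundscape parameter:

```python
import math

# MediaPipe Pose landmark indices for the right arm:
RIGHT_SHOULDER, RIGHT_ELBOW, RIGHT_WRIST = 12, 14, 16

def joint_angle(landmarks, a, b, c):
    """Angle at landmark b (in degrees) between segments b->a and b->c."""
    ax, ay = landmarks[a][0] - landmarks[b][0], landmarks[a][1] - landmarks[b][1]
    cx, cy = landmarks[c][0] - landmarks[b][0], landmarks[c][1] - landmarks[b][1]
    dot = ax * cx + ay * cy
    norm = math.hypot(ax, ay) * math.hypot(cx, cy)
    # Clamp before acos to guard against floating-point drift.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Hypothetical frame: arm bent at a right angle.
frame = [(0.0, 0.0)] * 33
frame[RIGHT_SHOULDER] = (0.3, 0.3)
frame[RIGHT_ELBOW] = (0.5, 0.3)
frame[RIGHT_WRIST] = (0.5, 0.5)

angle = joint_angle(frame, RIGHT_SHOULDER, RIGHT_ELBOW, RIGHT_WRIST)
print(round(angle))  # 90
```

For jittery joints, a simple exponential smoothing of each coordinate before computing angles often removes most of the flicker without adding noticeable lag.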
- Wrap-Up and Feedback (approx. 15 minutes)
Objective: Summarize the learning and collect participant reflections.
Steps:
- Recap the process: MediaPipe setup, gesture detection, audio mapping.
- Suggest further practice tasks (e.g., create a two-gesture composition).
- Gather feedback on difficulty, usability, and creative inspiration.
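The suggested two-gesture composition can start from a tiny state machine: each gesture toggles its own sound layer on a rising edge, independent of the other. Everything below is a hypothetical sketch; in practice the boolean flags would come from the pose checks built during the hands-on session.

```python
class TwoGestureComposition:
    """Toggle two sound layers on the rising edge of two gesture flags."""

    def __init__(self):
        self.layers = {"pad": False, "percussion": False}
        self._prev = {"pad": False, "percussion": False}

    def update(self, pad_gesture, percussion_gesture):
        """Feed one frame of gesture flags; return the active layer names."""
        for name, flag in (("pad", pad_gesture),
                           ("percussion", percussion_gesture)):
            if flag and not self._prev[name]:  # rising edge: toggle the layer
                self.layers[name] = not self.layers[name]
            self._prev[name] = flag
        return [name for name, on in self.layers.items() if on]

comp = TwoGestureComposition()
print(comp.update(True, False))   # ['pad']
print(comp.update(True, False))   # ['pad'] (held gesture does not re-toggle)
print(comp.update(False, True))   # ['pad', 'percussion']
```

Edge detection matters here: triggering on the flag itself rather than its rising edge would re-toggle the layer on every frame the gesture is held.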
Post-Training Follow-Up
- Provide access to recorded walkthroughs and downloadable project templates.
- Recommend forums and documentation (e.g., the MediaPipe GitHub repository, the Derivative community forum).
- Set up a shared workspace where participants can post experiments or questions.
Trainer Tip: Host a mini showcase where participants present their sound-motion pieces.
