Signal & Code (Part 3) – Bodies and Data: MediaPipe Integration in TouchDesigner

Discover how to use MediaPipe's real-time pose tracking in TouchDesigner to create reactive audio-visual compositions. You'll learn how body movement can generate or manipulate sound and visuals in live performance contexts. 

Structure of the Training Session 

  1. Preparation (approx. 60 minutes)

Objective: Prepare the session setup to ensure seamless learning. 

Steps: 

  1. Research MediaPipe's functionality, pose-tracking accuracy, and compatibility with TouchDesigner. 
  2. Prepare a short presentation outlining the basic principles of pose tracking and its use in performance art. 
  3. Install and test the MediaPipe integration with TouchDesigner, ensuring pose data reaches the audio components correctly (a minimal sender sketch follows this list). 
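
One common pattern (though not the only one) is to run MediaPipe in a separate Python process and stream landmark positions into TouchDesigner over OSC, where an OSC In CHOP exposes them as channels. The sketch below assumes the mediapipe, opencv-python, and python-osc packages and an OSC In CHOP listening on port 7000 on the same machine; treat it as a starting point rather than a finished patch.

    import cv2
    import mediapipe as mp
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 7000)  # match the port set on your OSC In CHOP
    cap = cv2.VideoCapture(0)                    # default webcam

    with mp.solutions.pose.Pose() as pose:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB; OpenCV delivers BGR
            results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.pose_landmarks:
                lm = results.pose_landmarks.landmark
                # Indices follow mp.solutions.pose.PoseLandmark (0 = nose, 15/16 = wrists)
                for name, idx in (("nose", 0), ("left_wrist", 15), ("right_wrist", 16)):
                    client.send_message(f"/pose/{name}", [lm[idx].x, lm[idx].y])
    cap.release()

Landmark coordinates arrive normalized to 0–1, which makes them easy to remap onto audio parameters inside TouchDesigner.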

Checklist: 

  • MediaPipe and TouchDesigner installed and integrated correctly. 
  • Test system: camera input, body tracking, audio output. 
  • Sample project ready to demonstrate basic interaction. 
  • Handouts/links to example projects and plugins available. 

 

  2. Introduction to the Tool (approx. 30 minutes)

Objective: Introduce participants to MediaPipe's role in interactive AV performance. 

Steps: 

  1. Explain the concept of pose tracking and its artistic applications. 
  2. Show examples where body movement drives sound or visuals. 
  3. Describe key advantages of MediaPipe (e.g., no wearables required, open-source flexibility). 

Trainer Tip: Use visual diagrams or videos to demonstrate pose detection. Relate it to familiar technology such as camera face filters or fitness-tracking apps. 

 

  3. Hands-on Practice (approx. 45 minutes)

Objective: Guide participants through building a basic pose-to-sound interaction. 

Steps: 

  1. Demonstrate the setup: camera input, skeleton recognition, and linking pose data to audio effects in TouchDesigner. 
  2. Assign a task: trigger a sound sample when a specific gesture is made, e.g. a hand raised above the head (see the sketch after this list). 
  3. Support participants in customizing their responses (e.g., controlling volume with a gesture). 
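
One way to wire the gesture trigger inside TouchDesigner is a CHOP Execute DAT watching the incoming pose channels. The sketch below is illustrative: the operator names (oscin1, audiofilein1) and channel names (nose_y, left_wrist_y) are assumptions that depend on your patch and on the sender's OSC address scheme.

    # CHOP Execute DAT callback; operator and channel names here are assumptions
    armed = True

    def onValueChange(channel, sampleIndex, val, prev):
        global armed
        osc = op('oscin1')                       # OSC In CHOP receiving the pose data
        wrist_y = osc['left_wrist_y'].eval()
        nose_y = osc['nose_y'].eval()
        # MediaPipe's y axis runs top to bottom, so "hand above head" means a smaller y
        if wrist_y < nose_y and armed:
            armed = False                        # fire once per gesture, not every frame
            op('audiofilein1').par.cuepulse.pulse()  # retrigger the loaded sample
        elif wrist_y >= nose_y:
            armed = True
        return

For the volume variant, no scripting is needed: a Math CHOP can rescale a wrist's height channel and feed it straight into an audio level parameter.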

Trainer Tip: Encourage creativity in gesture-sound mapping. Keep a sample audio library handy. 

 

  4. Advanced Features and Creative Use Cases (approx. 30 minutes)

Objective: Expand on complex interactions and integration with other tools. 

Steps: 

  1. Introduce multi-joint tracking and gesture-recognition patterns (a multi-joint sketch follows this list). 
  2. Present creative applications (e.g., dance performances where motion alters entire soundscapes). 
  3. Discuss common problems (e.g., lag, poor lighting, misread joints) and how to address them. 
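
As a concrete multi-joint example, a Script CHOP can derive a single "spread" channel from the distance between both wrists, which can then modulate a filter cutoff or reverb mix. Again, the OSC In CHOP name and channel names below are assumptions carried over from the earlier sender sketch.

    # Script CHOP cook callback; channel names are assumptions from the earlier sender
    import math

    def onCook(scriptOp):
        scriptOp.clear()
        osc = op('oscin1')
        dx = osc['left_wrist_x'].eval() - osc['right_wrist_x'].eval()
        dy = osc['left_wrist_y'].eval() - osc['right_wrist_y'].eval()
        # Landmarks are normalized 0..1, so the distance stays in a small, stable range
        spread = scriptOp.appendChan('spread')
        spread[0] = min(math.hypot(dx, dy), 1.0)
        return

Jitter from misread joints is usually tamed by placing a Filter or Lag CHOP between the OSC In CHOP and whatever it drives; better lighting and a plain background help the detector itself.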

 

  5. Wrap-Up and Feedback (approx. 15 minutes)

Objective: Summarize what was covered and collect participant reflections. 

Steps: 

  1. Recap the process: MediaPipe setup, gesture detection, audio mapping. 
  2. Suggest further practice tasks (e.g., create a two-gesture composition). 
  3. Gather feedback on difficulty, usability, and creative inspiration. 

 

Post-Training Follow-Up 

  • Provide access to recorded walkthroughs and downloadable project templates. 
  • Recommend forums and documentation (e.g., the MediaPipe GitHub repository and the Derivative community forum). 
  • Set up a shared workspace where participants can post experiments or questions. 

Trainer Tip: Host a mini showcase where participants present their sound-motion pieces.