Signal & Code (Part 4) – Visual Storytelling with Stream Diffusion in TouchDesigner | MMMAD Festival

Explore how to integrate stream diffusion models (real-time, AI-generated image streams) into TouchDesigner to create evolving, dreamlike visuals. This tutorial walks through basic setup, style control, and applications in performance storytelling.

Structure of the Training Session 

  1. Preparation (approx. 60 minutes)

Objective: Set up the space, content, and tools for a focused, interactive session. 

Steps: 

  1. Research current stream diffusion models available for real-time use in TouchDesigner (e.g., Stable Diffusion or ComfyUI integrations). 
  2. Prepare a presentation introducing generative AI in visual arts, including ethical considerations. 
  3. Test the integration between TouchDesigner and the chosen stream diffusion model (e.g., via Python scripting, external servers, or plugins); a quick environment check is sketched after the checklist below. 

Checklist: 

  • Stream diffusion model installed and functioning. 
  • TouchDesigner file template preconfigured. 
  • GPU-ready computer environment. 
  • Resource links and example outputs ready for presentation. 
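
As a quick pre-flight check before the session (step 3 above and the "GPU-ready" checklist item), a short Python script can confirm that a CUDA-capable GPU is visible and has enough memory for real-time diffusion. This is a minimal sketch assuming PyTorch is installed; the 8 GB VRAM threshold is an illustrative assumption, not a hard requirement of any specific model.

    # preflight_check.py -- verify the GPU environment before the workshop.
    # Assumes PyTorch is installed; the 8 GB threshold is an illustrative assumption.
    import torch

    def preflight_check(min_vram_gb: float = 8.0) -> bool:
        """Report the GPU and warn if VRAM is below the assumed minimum."""
        if not torch.cuda.is_available():
            print("No CUDA-capable GPU detected; real-time diffusion will not run.")
            return False
        props = torch.cuda.get_device_properties(0)
        vram_gb = props.total_memory / (1024 ** 3)
        print(f"GPU: {props.name}, VRAM: {vram_gb:.1f} GB")
        if vram_gb < min_vram_gb:
            print("Warning: less VRAM than recommended; expect lower resolution or frame rate.")
            return False
        return True

    if __name__ == "__main__":
        preflight_check()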

 

  2. Introduction to the Tool (approx. 30 minutes)

Objective: Familiarize participants with the basics of stream diffusion models and their application in real-time performance visuals. 

Steps: 

  1. Present a brief overview of AI-generated imagery and the concept of stream diffusion (the loop sketched after this list can serve as a talking point). 
  2. Show examples of how generative visuals are used in performances and installations. 
  3. Explain what is possible within TouchDesigner, focusing on style transfer, real-time animation, and responsive visuals. 
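
To make the concept tangible during the overview, stream diffusion can be presented as a tight image-to-image loop: every incoming frame is lightly re-diffused with very few steps so the output keeps pace with the input stream. The sketch below illustrates that loop with the Hugging Face diffusers library and the SD-Turbo checkpoint; the model name, step count, and strength are illustrative assumptions, not the exact pipeline used by any particular TouchDesigner plugin.

    # Conceptual sketch: "stream diffusion" as a fast img2img loop over live frames.
    # Assumes the diffusers library and a CUDA GPU; "stabilityai/sd-turbo" is one
    # example of a fast checkpoint, not the only option.
    import torch
    from diffusers import AutoPipelineForImage2Image
    from PIL import Image

    pipe = AutoPipelineForImage2Image.from_pretrained(
        "stabilityai/sd-turbo", torch_dtype=torch.float16
    ).to("cuda")

    def stylise_frame(frame: Image.Image, prompt: str) -> Image.Image:
        # Very few steps + moderate strength keep latency low enough for a live stream.
        return pipe(
            prompt=prompt,
            image=frame.resize((512, 512)),
            num_inference_steps=2,
            strength=0.5,
            guidance_scale=0.0,
        ).images[0]

    # In the real workflow the frame comes from a camera or a TouchDesigner TOP;
    # a blank frame stands in for the live input here.
    out = stylise_frame(Image.new("RGB", (512, 512)), "dreamlike neon stage, volumetric light")
    out.save("frame_out.png")

Tools built for TouchDesigner wrap and heavily optimise this loop; the point here is only to show what conceptually happens on every frame.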

Trainer Tip: Relate AI-generated visuals to traditional VJing or stage design to help participants grasp the creative potential. 

 

  3. Hands-On Practice (approx. 45 minutes)

Objective: Allow participants to experiment with basic workflows combining stream diffusion and TouchDesigner. 

Steps: 

  1. Guide participants through setting up a stream diffusion node/workflow and importing its output into TouchDesigner (live or pre-generated). 
  2. Assign a simple task: manipulate an AI-generated sequence based on an audio or motion trigger (see the sketch after this list). 
  3. Encourage creative interpretation and offer support for scripting or technical issues. 
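
One straightforward way to wire the trigger task is a CHOP Execute DAT watching an audio-level channel: when the level crosses a threshold, a Python callback pushes the next prompt to the diffusion component. The operator names below ('audio_level', 'prompts', 'streamdiffusion') and the custom 'Prompt' parameter are hypothetical placeholders; swap in whatever the template project actually uses.

    # CHOP Execute DAT callback, attached to an audio-level CHOP (e.g. 'audio_level').
    # Assumed names -- the 'prompts' Table DAT and the 'streamdiffusion' component
    # with a custom 'Prompt' parameter -- are hypothetical placeholders.

    THRESHOLD = 0.6  # audio level that counts as a hit

    def onValueChange(channel, sampleIndex, val, prev):
        # Fire only on an upward crossing so one beat changes the prompt once.
        if prev < THRESHOLD <= val:
            prompts = op('prompts')            # Table DAT: one prompt per row
            target = op('streamdiffusion')     # component exposing a 'Prompt' parameter
            row = int(absTime.frame) % prompts.numRows
            target.par.Prompt = prompts[row, 0].val
        return

The same callback shape works for motion data: point a Kinect, OSC, or mouse CHOP at the DAT instead of the audio analysis chain.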

Trainer Tip: Use short, visually rewarding tasks to maintain motivation and make abstract concepts tangible. 

 

  4. Advanced Features and Creative Use Cases (approx. 30 minutes)

Objective: Expand understanding of deeper integrations and custom creative uses. 

Steps: 

  1. Explore workflows for live prompting, animation blending, and latency management (a rate-limiting sketch follows this list). 
  2. Present case studies and artworks that use real-time AI generation in live performance. 
  3. Share troubleshooting advice: managing GPU load, syncing timing with music or movement, and ensuring smooth playback. 
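
For the latency and GPU-load discussion, a useful pattern to demonstrate is rate-limiting live prompts: pushing text to the model every frame wastes generation time, so an Execute DAT can forward a performer-edited prompt only every N frames and only when it has actually changed. The names 'live_prompt' (a Text DAT) and 'streamdiffusion' with its 'Prompt' parameter are again hypothetical placeholders.

    # Execute DAT callback: rate-limit live prompt updates to ease GPU load.
    # 'live_prompt' (Text DAT) and the 'streamdiffusion' component with a custom
    # 'Prompt' parameter are assumed placeholder names.

    UPDATE_EVERY = 30          # check for a new prompt every 30 frames (~0.5 s at 60 fps)
    _last_sent = {'text': ''}  # remember what was last pushed to avoid redundant updates

    def onFrameStart(frame):
        if int(frame) % UPDATE_EVERY != 0:
            return
        text = op('live_prompt').text.strip()
        if text and text != _last_sent['text']:
            op('streamdiffusion').par.Prompt = text
            _last_sent['text'] = text
        return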

 

  5. Wrap-Up and Feedback (approx. 15 minutes)

Objective: Consolidate learnings and gather impressions from participants. 

Steps: 

  1. Recap the session: what stream diffusion is, how it interacts with TouchDesigner, and its stage potential. 
  2. Share additional resources (e.g., GitHub tools, Discord communities, tutorials). 
  3. Collect verbal or written feedback to adapt future sessions. 

 

Post-Training Follow-Up 

  • Share a simplified template project with built-in stream diffusion integration. 
  • Provide links to AI art tools, plugins, and TouchDesigner support forums. 
  • Encourage participants to share their own experiments or performance applications in a group forum. 

Trainer Tip: Frame stream diffusion as a collaborative partner in live storytelling—not just a novelty effect.