Explore how to integrate stream diffusion models (pipelines that generate AI image sequences in real time) into TouchDesigner to create evolving, dreamlike visuals. This tutorial walks through basic setup, style control, and applications in performance storytelling.
Structure of the Training Session
- Preparation (approx. 60 minutes)
Objective: Set up the space, content, and tools for a focused, interactive session.
Steps:
- Research current stream diffusion models available for real-time use in TouchDesigner (e.g., Stable Diffusion pipelines or ComfyUI integrations).
- Prepare a presentation introducing generative AI in visual arts, including ethical considerations.
- Test the integration between TouchDesigner and the chosen stream diffusion model (e.g., via Python scripting, external servers, or plugins); a minimal connectivity check appears after this list.
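For the integration test, a sketch like the one below can run from a Text DAT inside TouchDesigner (or a plain Python shell) to confirm the diffusion backend is reachable before the session starts. The port and the /health route are assumptions, not a documented API; point them at whatever your chosen server actually exposes.

```python
# Minimal connectivity check for the TouchDesigner <-> diffusion bridge.
# The URL below is a hypothetical health endpoint -- change it to
# wherever your stream diffusion server actually listens.
import urllib.request

SERVER_URL = 'http://127.0.0.1:9999/health'  # hypothetical endpoint

def check_server(url=SERVER_URL, timeout=2):
    """Return True if the diffusion server answers within `timeout` seconds."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers URLError, refused connections, and timeouts
        return False

print('Diffusion server reachable:', check_server())
```

Running this once during Preparation catches firewall, port, or environment problems while there is still time to fix them.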
Checklist:
- Stream diffusion model installed and functioning.
- TouchDesigner file template preconfigured.
- Computer with a GPU capable of real-time diffusion inference.
- Resource links and example outputs ready for presentation.
- Introduction to the Tool (approx. 30 minutes)
Objective: Familiarize participants with the basics of stream diffusion models and their application in real-time performance visuals.
Steps:
- Present a brief overview of AI-generated imagery and the concept of stream diffusion.
- Show examples of how generative visuals are used in performances or installations.
- Explain what is possible within TouchDesigner, focusing on style transfer, real-time animation, and responsive visuals.
Trainer Tip: Relate AI-generated visuals to traditional VJing or stage design to help participants grasp the creative potential.
- Hands-On Practice (approx. 45 minutes)
Objective: Allow participants to experiment with basic workflows combining stream diffusion and TouchDesigner.
Steps:
- Guide them through setting up a stream diffusion node/workflow and importing outputs into TouchDesigner (live or pre-generated).
- Assign a simple task: manipulate an AI-generated sequence based on an audio or motion trigger (see the callback sketch after this list).
- Encourage creative interpretation and offer support for scripting or technical issues.
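As one possible starting point for the audio-trigger task, the callback below could sit in a CHOP Execute DAT watching a level channel from an Audio Analysis setup. The operator name 'streamdiffusion1' and its 'Denoise' parameter are placeholders, not guaranteed names; substitute whatever operator and parameter your integration actually provides.

```python
# CHOP Execute DAT callback: map an incoming audio level (0..1) onto a
# diffusion parameter so the imagery reacts to sound.
# 'streamdiffusion1' and 'Denoise' are hypothetical names -- point them
# at the operator and parameter your own setup exposes.

def onValueChange(channel, sampleIndex, val, prev):
    target = op('streamdiffusion1')  # placeholder operator
    if target is not None:
        level = min(max(val, 0.0), 1.0)          # clamp the trigger value
        target.par.Denoise = 0.2 + 0.6 * level   # scale into a safe range
    return
```

Clamping and rescaling the trigger (0.2 to 0.8 here) keeps audio peaks from pushing the model into unusable extremes, which makes the task more visually rewarding for first-time participants.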
Trainer Tip: Use short, visually rewarding tasks to maintain motivation and make abstract concepts tangible.
- Advanced Features and Creative Use Cases (approx. 30 minutes)
Objective: Expand understanding of deeper integrations and custom creative uses.
Steps:
- Explore workflows for live prompting, animation blending, and latency management (a live-prompting sketch follows this list).
- Present case studies or artworks using real-time AI generation in live performance.
- Share troubleshooting advice: dealing with GPU load, syncing timing with music/movement, and ensuring smooth playback.
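To make live prompting concrete, here is a minimal sketch that POSTs a new prompt to a locally running diffusion server while the patch keeps rendering. The /prompt route, port, and JSON shape are assumptions; match them to the real API of the server you deploy (some setups use OSC or a plugin parameter instead of HTTP).

```python
# Hypothetical live-prompt sender: pushes a new prompt to a local
# diffusion server over HTTP without interrupting playback.
import json
import urllib.request

PROMPT_URL = 'http://127.0.0.1:9999/prompt'  # hypothetical endpoint

def send_prompt(text):
    """Send a new prompt; return True on HTTP 200, False otherwise."""
    payload = json.dumps({'prompt': text}).encode('utf-8')
    req = urllib.request.Request(
        PROMPT_URL,
        data=payload,
        headers={'Content-Type': 'application/json'},
    )
    try:
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

# In TouchDesigner this could be wired to a Button COMP callback, e.g.:
#   send_prompt(op('prompt_text').text)
send_prompt('slow ink blooming underwater, dreamlike, soft light')
```

Cross-fading between consecutive outputs (e.g., with a Cross TOP) can mask the latency between sending a prompt and the new imagery taking effect, which ties directly into the latency-management discussion.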
- Wrap-Up and Feedback (approx. 15 minutes)
Objective: Consolidate learnings and gather impressions from participants.
Steps:
- Recap the session: what stream diffusion is, how it interacts with TouchDesigner, and its stage potential.
- Share additional resources (e.g., GitHub tools, Discord communities, tutorials).
- Collect verbal or written feedback to adapt future sessions.
Post-Training Follow-Up
- Share a simplified template project with built-in stream diffusion integration.
- Provide links to AI art tools, plugins, and TouchDesigner support forums.
- Encourage participants to share their own experiments or performance applications in a group forum.
Trainer Tip: Frame stream diffusion as a collaborative partner in live storytelling—not just a novelty effect.
