
Torches - Game Design Research Project

100 Second Overview

In my final semester at RPI, I created a show that brought together my skills in coding, interactivity, and large-scale show planning. Torches pushes the boundaries of how interactive signals from an audience can drive a large-scale performance.


Using an HD webcam and some from-scratch computer vision to track colored paddles, I use the color, position, and number of paddles present in the room to create responsive effects. The show is interactive throughout its roughly 10-minute duration.


The show runs entirely on more than 1,500 lines of C# code in the Unity game engine, and includes my own implementations of show control (advancing the music and on-screen behaviors when certain interactive objectives are achieved), visual effects, and image analysis.


This project is special to me for a couple of reasons. It allowed me to combine my hobby of visual and technological performance with my education in coding and interactivity. It also represents the next frontier of spectacle performance - this is where pieces like Disney's nighttime projector shows in the theme parks are headed next; I am sure of it. It was exciting to pioneer and study techniques that I believe will be in use in the next ten years. This project was my magnum opus and "last hurrah" of my time at RPI, allowing me to bid farewell in an exciting way, and letting me bring together many of the things I learned along the way, both inside and outside the classroom.

How the Main Scripts Work

CameraFeed.cs – Computer Vision

CameraFeed implements traditional computer-vision blob detection.

Every frame, this script:

  • Samples every 2nd pixel of the webcam feed (460,800 pixels per frame).

  • Converts each sampled pixel's color from RGB to Hue-Saturation-Value (HSV).

  • Identifies green pixels and checks whether each one is close enough to belong to an existing blob, starting a new blob otherwise.

  • Throws away any blobs that are too small or too far from any defined audience-member position; only blobs that pass both checks are kept.
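The per-frame pass above can be sketched in a few lines. The real implementation is C# inside Unity; this is an illustrative Python version, and the thresholds, the frame format, and all names here are assumptions, not the show's actual values:

```python
import colorsys

# Assumed thresholds for illustration only.
GREEN_HUE = (0.25, 0.45)     # hue window for "green" (colorsys uses 0..1)
MIN_SAT, MIN_VAL = 0.4, 0.3  # reject gray/dark pixels
MERGE_DIST = 12              # max Manhattan distance (px) to join a blob
MIN_BLOB_PIXELS = 5          # blobs smaller than this are discarded

def is_green(r, g, b):
    """Convert an RGB pixel (0..255) to HSV and threshold on hue/sat/value."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return GREEN_HUE[0] <= h <= GREEN_HUE[1] and s >= MIN_SAT and v >= MIN_VAL

def detect_blobs(frame, width, height):
    """frame: mapping of (x, y) -> (r, g, b). Samples every 2nd pixel,
    grows blobs from green pixels, then drops blobs that are too small.
    (The audience-position filter from the list above is omitted here.)"""
    blobs = []  # each blob: {"pixels": [...], "cx": ..., "cy": ...}
    for y in range(0, height, 2):
        for x in range(0, width, 2):
            if not is_green(*frame[(x, y)]):
                continue
            for blob in blobs:
                if abs(blob["cx"] - x) + abs(blob["cy"] - y) <= MERGE_DIST:
                    blob["pixels"].append((x, y))
                    n = len(blob["pixels"])
                    blob["cx"] += (x - blob["cx"]) / n  # running centroid
                    blob["cy"] += (y - blob["cy"]) / n
                    break
            else:
                blobs.append({"pixels": [(x, y)],
                              "cx": float(x), "cy": float(y)})
    return [b for b in blobs if len(b["pixels"]) >= MIN_BLOB_PIXELS]
```

Growing blobs by centroid distance in a single pass is the cheap alternative to full connected-component labeling, which matters when the budget is one frame of a live show.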

To account for paddles appearing smaller the farther they are from the camera in the lecture hall, I use the equation of a line between two points that I supply once the camera is set up. One point is (lowest y, largest size) for the paddle closest to the camera, and the other is (highest y, smallest size) for the farthest; all the paddle sizes in between can then be estimated, so long as the paddle isn't too far off to the right or left on screen.
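The correction above is plain linear interpolation between the two calibration points. A minimal sketch (Python rather than the show's C#, with made-up calibration values in the usage note):

```python
def expected_paddle_size(y, near_y, near_size, far_y, far_size):
    """Linearly interpolate the expected blob size at screen row y,
    given two calibration points measured at camera setup:
    (near_y, near_size) for the paddle closest to the camera and
    (far_y, far_size) for the farthest one."""
    slope = (far_size - near_size) / (far_y - near_y)
    return near_size + slope * (y - near_y)
```

With hypothetical calibration values of a 400-pixel blob at y = 700 and a 40-pixel blob at y = 100, a paddle halfway up the frame at y = 400 would be expected to cover about 220 pixels; blobs far from that expectation can be rejected as noise.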

 

ShowControl.cs – Music, Timings, and Behaviors

Each behavior has a string that identifies its sections of the code. Strings allowed me to give meaningful names to each phase and reorder them however I desired.

Show Control has three major functions that manage the activity of the visuals.

  • StartPhase – seeks the music to the correct position, establishes the timings for the rest of the phase (including when it ends), and sets up any needed data structures.

  • ShowBusiness – the update loop for the phase. Called by CameraFeed once the paddles have been identified this frame. Handles frame-by-frame operations for the show.

  • EndPhase – cleans up data structures and starts the next phase.
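The three functions above form a simple string-keyed state machine. A minimal Python sketch of that structure — the phase names, durations, and handler signature here are invented, and the real ShowControl.cs cues music and visuals rather than appending to a log:

```python
class ShowControl:
    """String-keyed phase sequencer in the spirit of ShowControl.cs.
    phases: ordered list of (name, duration_seconds, per_frame_handler)."""

    def __init__(self, phases):
        self.phases = phases
        self.index = -1
        self.phase_end = 0.0
        self.now = 0.0  # simulated clock, advanced per frame

    def start_phase(self):
        """StartPhase: cue music / set timings for the new phase (sketched)."""
        self.index += 1
        name, duration, _ = self.phases[self.index]
        self.phase_end = self.now + duration
        return name

    def show_business(self, paddles, dt):
        """ShowBusiness: per-frame update, called once paddles are known."""
        self.now += dt
        name, _, handler = self.phases[self.index]
        handler(name, paddles)  # frame-by-frame visuals for this phase
        if self.now >= self.phase_end:
            self.end_phase()

    def end_phase(self):
        """EndPhase: clean up and start the next phase, if any remain."""
        if self.index + 1 < len(self.phases):
            self.start_phase()
```

Because the phase list is just ordered (name, duration, handler) entries, reordering the show is a one-line change — which is the point of keying phases by string rather than hard-coding a sequence.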

Full Show
