Interactive Projection Mapping
March 2018 - June 2018
I built an interactive projection-mapped floor. The goal of this piece is to show how I approached a challenge that was far outside my expertise at the time.
This project was a collaboration between Canon Canada and Sheridan College. A small team of Interaction Design students, directed by Professor Steve Hudak, was tasked with building an interactive floor for Canon headquarters. Overlapping projectors would create a field of grass with cherry blossoms drifting along in the breeze. An AR component would let users see more cherry blossoms slowly fall around them. As participants walked across the floor, the environment would react to their presence.
I was brought on as the Unity developer, tasked with building the floor environment. The role soon grew to cover floor UX and some of the projection mapping as well. I worked closely with Alex Thompson, a fantastic designer who built the cherry blossoms and modelled their flow in Unity, and who provided critical support throughout the project.
I had only started learning Unity about two months beforehand, for a class project, and I had never done a physical interactive piece at this level before. So when a challenge is this far over your head, how do you overcome it?
First, through research. I scoured blog posts, articles, and YouTube videos to see how others had approached similar challenges. Did we even need Unity? Would TouchDesigner be a better fit? We had Xbox Kinects; could we use those to track users, and would they play nicely together? What about more expensive but more capable trackers? A few of the team members and I debated which technologies were needed and which were realistically within our reach. It was important to justify our tech stack before beginning.
We settled on Xbox Kinects, each connected to its own computer. Those computers fed user positions into a shared multiplayer Unity environment, which was then output to multiple projectors whose images overlapped on the floor to reduce shadows.
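To picture one leg of that pipeline, here is a minimal sketch of what a tracking PC's job can look like: read a user's floor position and stream it over UDP to the machine hosting the Unity scene. Everything here is illustrative — the wire format, IP address, port, and sensor ID are invented, and the Kinect SDK read is stubbed with a placeholder.

```csharp
// Hypothetical sketch of a tracking PC streaming one user's floor
// position to the Unity host. Not the project's actual code.
using System;
using System.Net.Sockets;
using System.Text;
using System.Threading;

class PositionSender
{
    static void Main()
    {
        using var client = new UdpClient();
        // Address and port of the PC hosting the Unity scene (assumed).
        client.Connect("192.168.1.10", 9000);

        while (true)
        {
            // In the real setup this would come from the Kinect SDK's
            // skeletal tracking; here it's a stand-in value.
            float x = 1.2f, z = 0.4f;
            byte[] packet = Encoding.UTF8.GetBytes($"kinect01,{x},{z}");
            client.Send(packet, packet.Length);
            Thread.Sleep(33); // roughly 30 Hz, the Kinect's frame rate
        }
    }
}
```

A plain-text format like this is easy to debug with a packet sniffer, which matters when you are new to networking.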
My role was to make this actually happen.
Because this was on top of schoolwork, it meant a lot of late nights and weekends. Unity uses C#, a language I wasn't very familiar with. How do you write in a language you hardly understand? I would start by identifying a specific problem and pseudo-coding my solution. If it logically should work, I would search for the specifics and piece the code together. This became easier as I learned more C# over the course of the project.
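A made-up example of that workflow, using a behaviour in the spirit of this project (the component and its parameters are hypothetical): first the pseudocode, then the Unity C# it could become.

```csharp
// Pseudocode first:
//   every frame, if a visitor is near this blossom:
//     push the blossom away from them, weaker with distance
//
// Then the C# — a sketch of a Unity component, not the project's code:
using UnityEngine;

public class BlossomPush : MonoBehaviour
{
    public Transform visitor;      // tracked user position in the scene
    public float radius = 2f;      // influence radius, in metres
    public float strength = 1.5f;  // push speed at zero distance

    void Update()
    {
        Vector3 away = transform.position - visitor.position;
        float dist = away.magnitude;
        if (dist < radius && dist > 0f)
        {
            // Push falls off linearly toward the edge of the radius.
            float falloff = 1f - dist / radius;
            transform.position +=
                away.normalized * strength * falloff * Time.deltaTime;
        }
    }
}
```

The pseudocode comment stays in the file, so the logic can be checked against the intent even before the syntax is trustworthy.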
The project also extended beyond writing code. How do I get multiple Kinects on multiple computers to communicate with a single Unity game, and Unity to communicate with the projectors? Was this a networking problem? An IT problem? My research was full of jargon I didn't understand, so I started by defining terminology until I could at least frame the problems. I kept a long list of relevant terms and referred back to it often, and I documented what worked as I went. I wrote myself short articles about key Unity concepts I was struggling to understand, breaking them down until I had something that made sense to me.
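The multi-computer question above ultimately comes down to getting each tracking PC's data into the Unity scene. A hedged sketch of what the receiving side can look like — a Unity component that drains incoming UDP packets each frame and keeps the latest position per sensor. The port, message format, and names are assumptions carried over from the idea of a simple comma-separated packet, not the project's actual protocol.

```csharp
// Hypothetical Unity-side receiver for position packets such as
// "kinect01,1.2,0.4". Illustrative only.
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;
using System.Text;
using UnityEngine;

public class TrackingReceiver : MonoBehaviour
{
    UdpClient listener;
    readonly Dictionary<string, Vector3> latest =
        new Dictionary<string, Vector3>();

    void Start()
    {
        listener = new UdpClient(9000); // port is an assumption
    }

    void Update()
    {
        // Drain every packet that arrived since the last frame.
        while (listener.Available > 0)
        {
            IPEndPoint from = null;
            string msg = Encoding.UTF8.GetString(listener.Receive(ref from));
            string[] parts = msg.Split(','); // sensor id, x, z
            latest[parts[0]] = new Vector3(
                float.Parse(parts[1]), 0f, float.Parse(parts[2]));
        }
    }

    void OnDestroy() => listener.Close();
}
```

Keeping only the latest position per sensor means a dropped packet costs one frame of lag rather than corrupting the scene, a forgiving property for an installation.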
We presented a beta at the 2018 Interaction Design Grad Show.
Eventually, we hit the limits of our technology. To eliminate shadows we needed the projectors to overlap as much as possible, but that meant a very small projected space; spanning the required area would take far more (and brighter) projectors. Ideally we would also move off the Xbox Kinects, which struggled to track feet to the precision we wanted. We wrote this up as a proposal and the project was put on hold. While it never went as far as we hoped, the project was both a fantastic learning experience and an exciting way of expanding what I thought I was capable of.