In the spring of 2014 I quit my job and began to explore how we can use technology to build connections to the places around us. In particular, I was interested in augmented reality and its potential to extend documentary storytelling into locations, exposing hidden stories of the past and projecting visions of the future.
At that point in time, HoloLens was a secret Microsoft project yet to be announced, and the Rift DK2 was just starting to get into the hands of developers. Most people's understanding of AR, if they knew of it at all, was animated characters popping up over QR codes in their phone's camera feed. Almost all AR at that point was mediated through a screen.
Though compelling in some ways, viewing the world through this screen-based window felt like it didn't live up to the promise of augmented reality, particularly with immersive VR on the horizon. It often superimposed objects without any sense of context. And the overhead of the computer vision used to track the space made the virtual lag behind the real, breaking the illusion.
Beyond the technical constraints, the fundamental issue with AR was discoverability: even if a person were standing on top of a mixed reality experience, letting them know it was there so they could download an app and scan a code meant the barrier to entry was quite high. Most people would walk by without ever knowing what they had missed.
I wanted to rethink our approach, and not artificially limit the possibilities of mixed reality by tying it to the trends of consumer electronics. At the top of my list was that the experience had to be a trick of optics, where no computer or screen stood between us and the world. But this instantaneous view of the world posed a new challenge: without the ability to delay the outside world, tracking would have to be as close to instant as possible. It brought a more fundamental assumption into question. What if mixed reality were a service of a space, rather than the ability of an individual user?
Because of the smartphone revolution, we tend to assume that all new technology should be first and foremost targeted at the individual user. However, this is not how emerging media have traditionally evolved. We take for granted the ability to watch videos on our phones, but the moving picture went through various stages of evolution, from hand-cranked parlor trick to the nickelodeon and the movie theater. Only with the rise of consumer electronics could television, computers, and smartphones emerge.
Mixed reality is just being born, and for it truly to become the final medium there will undoubtedly be experiments and smaller steps to get us there. As I took all of these thoughts into consideration, a familiar object started to emerge in my mind: the coin-operated binocular.
As a device in public space, it wouldn't require the end user to own an expensive wearable. Being fixed in that space meant tracking could be greatly simplified, thanks to its constrained mechanical range of motion. Not having to physically attach it to the body allowed optics and computational power to take priority over weight, as well as making the experience of walking up and using it frictionless.
Perceptoscope is just the beginning. As mixed reality becomes commonplace, the boundaries between the digital and the real will continue to fade away, and the endless fountain of information we've come to expect online will be irrevocably tied to physical places. Stories will exist as something we live inside and beside, rather than just flat content on a screen.
Though I've tried to capture some documentation throughout my journey, I was so tunnel-visioned on getting from an idea to a working prototype that I didn't take the time to share it broadly along the way. This series of logs is an attempt to remedy that, as well as to continue sharing the project as it grows into something bigger than any single prototype.
A rickety Perceptoscope Mark I premiering at Two Bit Circus's STEAM Carnival, October 2014