I have been working on this idea actively for a little while and have decided, due to a mix of recent buzz around higher-end AR headsets in the tech news circuit and a lack of personal funding, that I should release everything I have been able to work out so far. For now I have cardboard prototypes, schematics, and about half of the parts I will eventually need to complete a working model.

The idea came to me when I saw this transparent screen listing here: amzn.to/3DGn7GZ

I realized that, for the first time in our history, there are cheap and accessible transparent displays that can be used for a variety of purposes. I have (as many of you must also have) always wanted to replicate the fictional smart glasses in the Nickelodeon show Ned’s Declassified (see image below), and this seemed to me to be the easiest way to do it. Though two screens are ultimately needed, I ordered one to test, along with the associated SparkFun Thing microcontroller and cables (microcontroller: amzn.to/2YZDM9P, cables: amzn.to/3lHzixd).

The transparent OLED screen has two connection options to the board: SPI or I2C. SPI (Serial Peripheral Interface) and I2C (Inter-Integrated Circuit) are communication protocols (en.wikipedia.org/wiki/Serial_Peripheral_Interface, en.wikipedia.org/wiki/I%C2%B2C).
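Since the plan later on is to drive the screen from a Raspberry Pi anyway, here is a rough sketch of what those two options look like from the Pi side in Python, just to make the difference concrete. This is illustrative rather than tested: it assumes an SSD1309-class controller (which I believe is what drives SparkFun's transparent breakout), SPI bus 0 / chip-select 0 with the data/command line on GPIO 24, an I2C address of 0x3C, and that 0xAF is the "display on" command, all of which would need checking against the hookup guide.

```python
# Rough sketch: sending one "display on" command over SPI vs. I2C from a Pi.
# Pin numbers, bus numbers, the 0x3C address, and the SSD1309-style command
# byte are assumptions, not values verified on my hardware.
import spidev                 # pip install spidev
import RPi.GPIO as GPIO       # preinstalled on Raspberry Pi OS
from smbus2 import SMBus      # pip install smbus2

DC_PIN = 24                   # hypothetical data/command wiring
DISPLAY_ON = 0xAF             # SSD13xx-family "display on" command

def spi_display_on():
    """SPI: dedicated SCLK/MOSI/CS wires plus a D/C pin to flag command bytes."""
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(DC_PIN, GPIO.OUT)
    GPIO.output(DC_PIN, GPIO.LOW)      # low = command, high = pixel data
    spi = spidev.SpiDev()
    spi.open(0, 0)                     # bus 0, chip-select 0
    spi.max_speed_hz = 4_000_000
    spi.xfer2([DISPLAY_ON])
    spi.close()

def i2c_display_on():
    """I2C: two shared wires (SDA/SCL); a control byte stands in for the D/C pin."""
    with SMBus(1) as bus:              # bus 1 on most Pis
        bus.write_byte_data(0x3C, 0x00, DISPLAY_ON)   # 0x00 = "command follows"

if __name__ == "__main__":
    spi_display_on()   # or i2c_display_on(), depending on which pads are bridged
```

The practical takeaway is that SPI is the faster of the two for pushing whole frames, which is the option that matters if the screen is ever going to mirror video.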

Running the initial test program worked as expected, but I ran into two problems:
1. The board uses Arduino sketches and is perhaps not ideal for screen mirroring. Open-source Android screen mirroring software exists (github.com/Genymobile/scrcpy) that runs on Linux (and therefore on a Raspberry Pi). However, after trying to connect the screen via SPI to a Pi B+ model, I was unsuccessful at doing anything other than turning off my HDMI display, rendering the device useless until it was reformatted. This could have something to do with the fact that I followed a tutorial for an unrelated display driver here: youtu.be/KciKqGX8g94.

2. Anything right up against your face is going to be blurry. Heads-up displays for motorcycles solve this by using a lens to add the illusion of depth. The problem in this case is that, for a display transparently covering one's entire forward field of view, any magnification strong enough to bring the overlay into focus would render the rest of the world blurry in return.
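To put numbers on that trade-off, the thin-lens equation is all that is going on in those motorcycle HUDs (the figures below are made up purely for illustration):

\[
\frac{1}{f} = \frac{1}{d_o} + \frac{1}{d_i}
\qquad\Rightarrow\qquad
\frac{1}{d_i} = \frac{1}{50\,\text{mm}} - \frac{1}{45\,\text{mm}} = -\frac{1}{450\,\text{mm}},
\quad d_i = -450\,\text{mm}.
\]

Placing the display just inside the focal length of, say, a 50 mm lens (object distance 45 mm) produces a virtual image roughly 450 mm away, which the eye can focus on comfortably. But light from the real world passes through the same lens and gets refracted just as much, which is exactly the "rest of the world goes blurry" problem.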

In this particular instance, working on the second problem has in some ways provided answers to the first.
I came up with two techniques for solving the blurriness of the overlay without also distorting the room beyond the lenses. My first idea was placing the display at the end of a specially shaped, dual-eye periscopic set-up, which works in essence but may require additional mirrors that could reduce peripheral vision (I lack the many tiny mirrors I would need to fully test this right now; perhaps peripheral vision can be restored through an appropriately shaped housing).

My second idea was to ditch the transparent OLED altogether and use teleprompter tech. A teleprompter works by shining an image upward at a glass pane angled at 45 degrees, with a camera pointing through from the acute side, so the reader appears to face the camera directly in the recording while also seeing the reflected image (per youtu.be/f62KR51VE6c). If I were to make two small boxes that worked in this fashion, both with screens hooked up to a Raspberry Pi that is in turn mirroring an Android phone, then that would be an achievement while also seeming totally achievable. (There is currently a slight double image in the reflection; this could be due to the type of glass being used or the angle it is being used at. My guess is that ordinary glass reflects from both its front and back surfaces, which is why purpose-built teleprompters use coated beamsplitter glass.)
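One detail worth noting: whatever bounces off the glass arrives mirror-flipped, so real teleprompters pre-flip the source image, and these boxes will need to do the same (which axis depends on how the screen lies in the box). A minimal sketch with Pillow, using a hypothetical frame.png as a stand-in for a live frame:

```python
# Minimal sketch: pre-flipping a frame so the teleprompter reflection reads
# correctly. "frame.png" is a hypothetical stand-in for whatever source ends
# up feeding the screens.
from PIL import Image, ImageOps   # pip install pillow

frame = Image.open("frame.png")
flipped = ImageOps.mirror(frame)  # left-right flip; swap in ImageOps.flip()
                                  # if the screen lies the other way in the box
flipped.save("frame_for_screen.png")
```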

This seems both lower-cost and more approachable, since all I need to do to create the prototype is buy two small HDMI-ready screens and likely a Raspberry Pi 4 (or, hopefully, a Pi Zero, CPU limitations permitting). After that I see absolutely no reason why you can't watch YouTube while you wash the dishes, clean...
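For what it is worth, here is roughly how I imagine kicking off the mirroring once the Pi and screens arrive. Treat the flags as assumptions to verify: scrcpy's options shift between releases, and whether a Pi Zero can decode the stream at all is an open question.

```python
# Rough sketch: launching scrcpy sized for a small HDMI panel on the Pi.
# Assumes a phone connected over USB with ADB debugging enabled and a scrcpy
# build that accepts these flags (names vary between versions).
import subprocess

subprocess.run([
    "scrcpy",
    "--max-size", "480",   # cap the stream resolution to roughly the panel size
    "--fullscreen",        # fill the small HDMI display
    "--stay-awake",        # keep the phone from sleeping mid-mirror
])
```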
