Introduction

Vaporization is done by off-the-shelf ultrasonic atomizers, either really cheap ones (the black ones in the picture below, 2.50 $ each) or more robust ones (the silver one, a Fogstar 300, 290 € each). Air is moved by an array of strong and silent PC case fans, and the laminar flow is formed by 3D-printed flow formers. Rear projection works with any kind of video or still-image (even slide) projector.

The housing

The housing is CNC-milled out of black PVC hard foam sheets ("Palight", not to be confused with PVC soft foam sheets) but can be made from pretty much any waterproof material that can be glued or welded, e.g. acrylic.

If you choose Palight, you can theoretically build this with just a paper template, patience, a steel ruler, some drill bits and a box cutter. All the DXF files for milling, laser cutting or box cutter samurai are available in the GitHub repo. Never put PVC in a laser cutter though (the fumes are seriously toxic). The BOM in the repo tells you how often you need to mill or cut each file; a single file may contain more than one part.

Operating principle

The diagram below shows an axial cross-section of a Hoverlay II module; it should be mostly self-explanatory:

As you can see, there are three channels. The fans suck air into the left and right channels, where it travels upwards while being compressed and accelerated by the convergent channel walls. The air is then pushed through the 3D-printed flow formers, which are just many, many small, parallel channels packed into a honeycomb grid. When the air flow exits the flow formers, it is mostly laminar, meaning it contains almost no turbulence anymore (such as rotary turbulence from the fans) and can continue travelling upwards almost turbulence-free for quite some distance (> 1 m).

The two fast laminar air flows from the left and right channels then carry off the fog from the center fog channel (an effect known as the Venturi effect), both accelerating the fog upwards and shaping it into a thin film right between them, as you can see in the picture below.

This thin layer of fog diffuses incoming directional light (such as light from a projector), which is then perceived as if it were emitted diffusely by the fog layer itself.

A perforated pressure compensation tube was meant to help deliver fresh air into the fog channel and lift out the fog homogeneously. A few words about this tube: it's just a ø 25 mm PVC tube, perforated manually on a drill press and cut to the length of one Hoverlay II module (326 mm), that slides in through the corresponding cutouts in the side panels. The pressure compensation tube can be omitted (the cutouts still have to stay there!) when only using one or two Hoverlay II modules in series. It was meant to help homogenize the fog density over the full width of larger (3+ modules) installations. However, we found a much better method for that: a sort of "brightness booster" in the form of two additional fans blowing air into the pressure compensation holes on both ends of the Hoverlay II array. This both homogenizes the fog density over larger screens and visibly increases the fog density (and thus brightness). Here you can see the brightness booster in action, a true enhancement over all previously posted videos:

I will soon release a nice and tidy, 3D-printable "brightness booster" that can be snap-fitted onto the existing pressure compensation tube holes for easy installation. Until then, just ignore the pressure compensation tube in the BOM, because it turned out to be a rather underwhelming solution to the homogeneity problem on larger screens.

Interactivity

A Microsoft Kinect sits right on top of the projector, requiring minimal calibration and mapping. It turns out that the fog layer does not disturb the Kinect's pattern recognition at all, so it works just fine.
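If you want a feeling for how little code such an interaction layer needs, here is a minimal Processing sketch of the idea, assuming the SimpleOpenNI library: it searches the Kinect depth map for the nearest point within a narrow depth window around the fog layer and maps it to projector coordinates. The 600-900 mm window and the horizontal mirroring are placeholder values, not the calibration used in the repository apps.

```
// Minimal sketch: find the nearest point the Kinect sees within a narrow
// depth window around the fog layer and map it to projector coordinates.
// Requires the SimpleOpenNI library; depth window and mirroring are placeholders.
import SimpleOpenNI.*;

SimpleOpenNI kinect;

void setup() {
  size(1280, 800);                    // match your projector resolution
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();               // we only need the depth map
}

void draw() {
  background(0);
  kinect.update();

  int[] depth = kinect.depthMap();    // depth values in mm
  int w = kinect.depthWidth();
  int h = kinect.depthHeight();
  int nearestX = -1, nearestY = -1, nearestD = 900;   // ignore anything beyond 900 mm

  for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
      int d = depth[y * w + x];
      if (d > 600 && d < nearestD) {  // only points close to the fog layer count
        nearestD = d;
        nearestX = x;
        nearestY = y;
      }
    }
  }

  if (nearestX >= 0) {
    // simple linear mapping from Kinect pixel to screen pixel, mirrored horizontally
    float sx = map(nearestX, 0, w, width, 0);
    float sy = map(nearestY, 0, h, 0, height);
    ellipse(sx, sy, 40, 40);          // draw a cursor where the hand is
  }
}
```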

Some more technical details

I spent a lot of time optimizing the printing process of the flow formers and tested different geometries. To achieve the lowest possible density, they consist of fragile, single-wall, 3D-printed structures that I could not get to print properly by just slicing a 3D model. No slicing tool I tested (Slic3r, Cura, Skeinforge) was able to lay down the walls in a way that made sense (i.e. one hexagon at a time). If they managed to slice the model at all (which took up to 12 h on a quad-core i7), they laid down the lines randomly all over the place, which cost rigidity in the printed result and caused an unnecessarily large number of retraction moves, resulting in long printing times. Generating the GCode directly from a Processing sketch solved the puzzle and was not too much effort, since the model I first created was already described programmatically in OpenSCAD. This GCode generation code runs blazingly fast compared to a standard slicing tool and is, of course, available in the repository, well commented, and can easily be configured to accommodate your own 3D printer by changing a few values (e.g. nozzle size, print speed).
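To illustrate the idea (the real, configurable generator is in the repository), here is a stripped-down Processing sketch that writes a small honeycomb patch directly as GCode, printing each hexagon as one continuous single-wall loop. Hexagon size, layer height, extrusion factor and feed rates are placeholder values for illustration only.

```
// Stripped-down illustration of direct GCode generation from Processing:
// every hexagon of the honeycomb is printed as one continuous single-wall
// loop, so the head never hops around inside a layer and barely retracts.
// CELL_R, LAYER_H, E_PER_MM and the feed rates are placeholders --
// the real, configurable generator lives in the repository.
float CELL_R   = 3.0;     // hexagon radius (center to vertex) in mm
float LAYER_H  = 0.3;     // layer height in mm
float E_PER_MM = 0.05;    // extruded filament per mm of travel (nozzle dependent)
float extruded = 0;       // running absolute E value

PrintWriter g;

void setup() {
  g = createWriter("flowformer.gcode");
  g.println("G21");       // millimeters
  g.println("G90");       // absolute XYZ
  g.println("M82");       // absolute extrusion
  g.println("G28");       // home all axes

  for (int layer = 1; layer <= 10; layer++) {
    g.println("G1 Z" + nf(layer * LAYER_H, 0, 2) + " F600");
    // a 5 x 5 patch of pointy-top hexagons, every other row shifted by half a cell
    for (int row = 0; row < 5; row++) {
      for (int col = 0; col < 5; col++) {
        float cx = col * CELL_R * sqrt(3) + (row % 2) * CELL_R * sqrt(3) / 2;
        float cy = row * CELL_R * 1.5;
        hexagon(cx, cy);
      }
    }
  }
  g.flush();
  g.close();
  exit();
}

// emit one hexagon as a single closed extrusion path
void hexagon(float cx, float cy) {
  float px = cx + CELL_R * cos(PI / 6);
  float py = cy + CELL_R * sin(PI / 6);
  g.println("G0 X" + nf(px, 0, 2) + " Y" + nf(py, 0, 2) + " F3000");  // travel move
  for (int i = 1; i <= 6; i++) {
    float a = PI / 6 + i * TWO_PI / 6;
    float x = cx + CELL_R * cos(a);
    float y = cy + CELL_R * sin(a);
    extruded += dist(px, py, x, y) * E_PER_MM;
    g.println("G1 X" + nf(x, 0, 2) + " Y" + nf(y, 0, 2)
              + " E" + nf(extruded, 0, 4) + " F1200");                // print move
    px = x;
    py = y;
  }
}
```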

After I found out that convergently aligned flow formers definitely improved the stability of the laminar flow over a longer distance, I expected that printing flow formers with convergent channels could yield even more laminar stability. I had to find out that the opposite was the case: the simple, parallel structure works best. The picture below shows the original straw solution (left), different experimental versions (middle) and the final, 3D-printed flow former (right).

Finally, one Hoverlay II module consumes as much as 120 to 150 watts of electricity at 12 V DC (fans) and 24 V AC (atomizers), which needs to be supplied safely by a powerful and appropriately fused power supply. Because every module (especially the first) in an array of Hoverlay II modules must pass a serious amount of current to the next module via an invisible, seamless but strong plug-and-play electrical connection, I chose cheaply available gold-plated power connectors of the kind used for motor and battery connections in RC model cars and multicopters. They fit into a customized, 3D-printed mounting clip that secures them against the housing and helps with the cable management.
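To get a feeling for the currents involved (rough numbers only, and the split between the rails is just an assumption): if roughly half of the 150 W lands on the 12 V fan rail, each module draws about 75 W / 12 V ≈ 6 A on that rail alone, so in a four-module array the connectors of the first module have to pass on the order of 19 A to feed the three modules behind it, on top of its own consumption. That is exactly the kind of current RC motor and battery connectors are built for.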

Application

So what can you do with this? If you just want to watch your favourite TV series, I recommend you buy the largest, brightest and highest-resolution Sony or Panasonic that fits through your door. But if you want to create interactive visuals and games, or display stunning animations (e.g. of your new and all-fancy product), the Hoverlay II might be just the right toy for you. It's an eye catcher that is hard to beat, even by the largest 4K flat screen, with or without head-tracking-glasses-free stereoscopic 3D. In addition, it turns into a multi-touch surface by just placing the cheap and well-hackable Microsoft Kinect sensor right on top of the projector, so even non-hardcore hackers like me can just fire up Processing and start playing with tons of graphical and interactive libraries. The video below shows a lot of those possibilities:

All apps I wrote for this device are of course open source and available in the GitHub repository. The Hoverlay II also serves as a kind of futuristic telepresence display, which we have successfully tested with Skype. Make sure you sit in front of a somewhat black background, so you will look like a ghost floating in midair on your callee's Hoverlay.

Last but not least, the video below shows an amazing animation that two great animation artists from my university made especially for the Hoverlay II:

The Cyber Glove

When I started testing the apps I wrote, I noticed that I somehow missed the haptic feedback I'm used to from regular touchscreens (just like I used to miss the tactile feedback of physical keys and buttons...). It wasn't a big deal to make a small cyber glove using one of the smaller Arduino boards, a wireless module and a vibration motor from the drawer. The version below is totally alpha and not even in the BOM or repo yet, but everybody who used it shouted out a big "Wooow!" when touching the floating canvas with the glove, so it's probably worth mentioning. Some of the apps in the repository are already prepared for use with such a haptic feedback glove: if the variable "gloveOn" is set to true in the config area, the app shifts out vibration intensities as a byte value between 0 and 255 over the configured serial port.
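For reference, this is roughly what the app side of that protocol looks like in Processing. It is a minimal sketch, not taken verbatim from the repository apps; the port name, baud rate and the distance-to-intensity mapping are placeholders.

```
// Minimal sketch of the app-side glove protocol: when gloveOn is true,
// the app writes one vibration intensity byte (0-255) to the configured
// serial port. Port name, baud rate and the mapping below are placeholders.
import processing.serial.*;

boolean gloveOn = true;           // config flag, as in the repository apps
Serial glovePort;

void setup() {
  size(1280, 800);
  if (gloveOn) {
    glovePort = new Serial(this, "COM3", 115200);   // adjust port and baud rate
  }
}

void draw() {
  background(0);
  ellipse(width / 2, height / 2, 200, 200);         // some touchable object

  // stand-in "touch": distance of the mouse (instead of the tracked hand)
  // to the object -- the closer, the stronger the vibration
  float d = dist(mouseX, mouseY, width / 2, height / 2);
  int intensity = int(constrain(map(d, 100, 0, 0, 255), 0, 255));

  if (gloveOn) {
    glovePort.write(intensity);   // single byte, 0 = off, 255 = full vibration
  }
}
```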

A small piece of plastic was used to increase the area on the glove where one can actually feel the vibration. On the other end, there's just another Arduino with another wireless module, receiving the vibration intensities from the application. I am using the super cheap NRF24L01+ modules, but use whatever WiFi or BLE magic you have at hand (hehe, at hand :) if you want to try that out.

About this project

This project is still under heavy development, and I'm learning new things about what I am actually doing here every day. It's never too early to build one yourself though, and I'll answer questions and help where I can if someone decides to build one right now! As said, it's all open hardware and open software licensed under GPL V3; used libraries may be licensed differently. Details about the licensing can be found in the project logs and the System Design Document, and of course, all the openness ends at the doors of the Microsoft Kinect. However, click the "CAD & Code" link in this Hackaday project's sidebar and you're in the game.

About vapor screens

There are also several other commercial and homebrew solutions for fog projection and vapor screening out there, so to be fair and just in case you're seeing such a thing for the first time: I had to solve some puzzles, but it's not my invention. The effect of projecting light onto fog is almost as old as the rainbow. If you are interested in learning more about commercial vapor screen solutions like the Heliodisplay, invented by Chad Dyner, or the impressively big Fogscreen(R), along with some homebrew builds I found, feel free to read the research results I published in my blog article on the development of the predecessor Hoverlay I at http://iamnotachoice.com/hoverlay/