Carefully wrapped and disassembled, Narcissus made its way to Miami to be part of RAW Popup. At the heart of the new Design District, the event inaugurated a new space for art, with over 50 artists coming together to exhibit their work.
Narcissus (2018) debuted alongside a previous artwork of mine that I rebuilt, the Murder Machine (2014). I documented my process here on Hackaday, so make sure to check it out.
The full-size 3D print was a success. Very little was left to chance, so I wasn't surprised that everything worked out and fit right. I made the final print on a Stratasys uPrint SE Plus in ivory filament. I'm very happy with the result.
The base for the sculpture is handmade from three pieces of CNC'd Baltic birch plywood. I glued them together and sanded them to give the base a solid look and feel.
The wooden base received a coat of oil-based water seal, the kind meant for outdoor wooden structures that are in contact with water. It should protect the wood from water damage for the duration of the installation.
Initially, I was going to fill the pool with Castin' Craft resin, but I've returned to the original idea of using water. Resin would be a more permanent water effect that wouldn't require any maintenance, but all my initial tests were unsuccessful: the pours went on smooth but got bumpy as they cured. I'll play with resin again after the show is over.
If anyone has run into a similar problem, please let me know; I'm curious. My theory is that, since the wood was not sealed before pouring, the resin picked up and amplified the imperfections of the surface.
After many, many, many tests, the base and the 3D print are ready for final assembly at the gallery. 🔥
Working code was step one; now my focus is entirely on the physical part of the sculpture. I've been working to get the size of the sculpture just right, so the screen feels perfectly proportional.
I spent a few hours at SVA's Visual Futures Lab using their Structure scanner and Skanect. That was much easier than what I'd been doing: very easy to set up, a lot less cumbersome, and a lot faster than using the Kinect and my laptop. Big shoutout to the people at the VFL.
Below are the results of the scan after a little work in Blender and Meshmixer.
I'm now refining the final size and mounting brackets for the screen. I used the "tube" function in Meshmixer to create a channel for the screen's cabling; it now looks like it's going straight into my heart.
The print will be on a simple platform that will hold water/resin and reflect the light from the screen.
The 3D printing tests are going well and I'm on schedule. 🔥🔥🔥
I'm using a Formlabs Form 2. I'm cropping the print to test specific parts of the sculpture (and to save resin).
Oh, yeah, I also printed a case for my Pi 3 B+. (Note: a slim case for a Pi 3 will not fit a Pi 3 B+, thanks to the new header pins behind the USB ports. I tried drilling the case to add a hole... that didn't work so well.)
Learnings from the 3D printing tests:
Save resin by hollowing out the print! (Duh! Why didn't I know about this before? Who knows. Here's a good tutorial from Maker's Muse.)
Blender and Meshmixer are tough to start using, but they quickly open up and become good friends.
Reduce the source file size. I've found that resizing Twitter's high-res images down to 256 px gives me the most speed without sacrificing accuracy. I wouldn't recommend going any smaller.
Adafruit's SSD1351 OLED screens work with the Raspberry Pi thanks to the Luma.OLED library. I'll be writing a short, no-BS tutorial on how to get this working in the next few days. The library works with most of their screens, so it's a good resource.
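Until that tutorial lands, here's a minimal sketch of the idea, assuming luma.oled over SPI; the `make_frame` helper and the text it draws are my own placeholders, and the wiring/port numbers will depend on your setup:

```python
# Draw one frame with Pillow and push it to a 128x128 SSD1351 via
# luma.oled (pip install luma.oled). The frame itself is a plain PIL
# image, so it can be previewed off-device too.
from PIL import Image, ImageDraw

def make_frame(size=(128, 128), text="#selfie"):
    """Render one frame as a PIL image, sized for the 128x128 SSD1351."""
    frame = Image.new("RGB", size, "black")
    ImageDraw.Draw(frame).text((30, 60), text, fill="white")
    return frame

if __name__ == "__main__":
    # Hardware-only part: SPI port/device numbers per luma.oled's docs.
    from luma.core.interface.serial import spi
    from luma.oled.device import ssd1351

    device = ssd1351(spi(port=0, device=0), width=128, height=128)
    device.display(make_frame().convert(device.mode))
```

Keeping the drawing code in a pure function makes it easy to test on a laptop before moving to the Pi.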
Setting up a headless Pi is super easy if you use PiBakery. I cannot recommend it enough, and I'll be writing a little tutorial about it. This should be the standard way to flash an SD card with Raspbian; honestly, I don't know why anyone wouldn't use it.
After a lot of trial and error, I finally have a working prototype of the Python code that will power the whole experience. It runs really well on macOS.
The basic pseudo-code:
Twitter authentication (REST API)
Get all tweets with "#selfie"
For every tweet:
    Parse the JSON for an image file
    If an image is present:
        Find faces in the image
        Crop a random face
        Display the face on the OLED screen
It works! 🔥🔥🔥🔥
All of this will be executed on a Raspberry Pi. Initially I was using a Pi Zero W, but the face recognition step was taking too long, and since space isn't really an issue I could benefit from a little extra computing power, so I upgraded to a Pi 3 B+. Now I need to optimize and cut down the time it takes to process each image and find the face. Multi-threading might be the answer.
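If multi-threading does turn out to be the answer, a minimal sketch with the standard library's concurrent.futures might look like the following; `fetch_image` and `find_faces` are hypothetical placeholders for the real download and detection steps:

```python
# Overlap image downloads with the slow face-detection step using a
# thread pool, so the Pi isn't sitting idle while it waits on the network.
from concurrent.futures import ThreadPoolExecutor

def fetch_image(url):
    return "bytes-of-" + url   # placeholder for an HTTP download

def find_faces(image):
    return [image]             # placeholder for the detection step

def process(urls, workers=4):
    """Download and detect concurrently, preserving input order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        images = pool.map(fetch_image, urls)
        faces = [f for img in images for f in find_faces(img)]
    return faces
```

One caveat: Python threads only help with CPU-bound detection if the detection library releases the GIL while it works; otherwise they mainly hide network latency.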
The trickiest part so far has been the initial setup of all the required libraries. It took me about 9 tries (about 15 hours each) on the Pi Zero, only to realize it was too slow. The Pi 3 B+ was easier to set up, and I'll be writing a quick summary of everything I learned in the hope that others will find it useful: a Raspberry Pi quick-setup guide for noobs like me.
I will be traveling over the next week and plan to use plane time to write and downtime to code. Stay tuned for new updates.