Like all great stories, it started with a vending machine...

In Silicon Valley, the VC recipe for creating something wonderful is pretty simple:

  1. Gather a group of highly creative & talented people
  2. Provide tools, materials, and space to work
  3. Introduce a strange and new thing
  4. Stand way back

This aptly describes what happened when the Portland Hackerspace acquired a vending machine. 

Shortly after the acquisition of the vending machine, members of the hackerspace started to, well, hack the vending machine. It’s right there in the name. Various members started working on remote monitoring and control, internal lights, even a siren. This is where I come in. The details are fuzzy, but during an impromptu session of “wouldn't it be cool if,” I thought the vending machine should interact with patrons beyond just taking their money and dispensing a thing.

The vending machine needs eyes

I had previously created an interactive kinetic project with animated LCD eyes. In that project I used an open source computer vision system to do facial detection and then control a couple of servo motors. So I thought: why not put eyes on the vending machine that react to, and try to make eye contact with, people as they walk by?

Doing this requires several components: 

  • Some way of creating eyes
  • Some way of controlling where the eyes point
  • Some way of detecting people, preferably faces
  • A method for determining where the person is
  • A method for communicating location info to the eye control
  • Something to display eyes
  • A camera
  • Dedicated hardware capable of running face detection AI

With my experience I knew how to write the software, but I did not have hardware… Fortunately, the hackerspace has prototyping materials. I knew I could do this with a Raspberry Pi Model 3B, a USB webcam, and a mini HDMI display panel.

Displaying the eye

For the eye I used the software from the Adafruit Animated Eyes Bonnet for Raspberry Pi project, which I had used in a previous project. The core of that project is a Python program that creates a fairly detailed and accurate eyeball simulation. Out of the box, the Python script allows the eyes to be controlled from a joystick, so I knew I’d be able to clone and modify the control code to accept input from elsewhere.
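As a rough illustration of "input from elsewhere," here is a minimal sketch of how the joystick read could be swapped for network input. The UDP port and the "x,y" message format are my own assumptions for this example, not details from the Adafruit code:

    # Minimal sketch: replace joystick polling with UDP input.
    # The port and the "x,y" text format are assumptions.
    import socket

    UDP_PORT = 5005  # arbitrary port chosen for this sketch

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", UDP_PORT))
    sock.setblocking(False)

    cur_x, cur_y = 0.5, 0.5  # normalized eye target, start at center

    def get_target():
        """Return the latest (x, y) in 0..1, keeping the previous
        value when no new packet has arrived."""
        global cur_x, cur_y
        try:
            while True:  # drain the socket, keep only the newest packet
                data, _ = sock.recvfrom(64)
                x, y = (float(v) for v in data.decode().split(","))
                cur_x, cur_y = x, y
        except BlockingIOError:
            pass
        return cur_x, cur_y

The eye loop would then call get_target() each frame in place of reading the joystick axes.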

Face detection

For this I chose OpenCV, primarily for its large community and robust Python support. OpenCV has been ported to and runs on a wide variety of hardware. Getting OpenCV to run and do face detection should be easier than it actually is. For this I leaned heavily on the PyImageSearch website, and specifically its tutorial on using a DNN to do face detection.
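The heart of that approach is OpenCV's DNN module with the pretrained res10 SSD face detector. Below is a sketch of the detection loop; the model file names follow the tutorial, and the UDP destination matches the port assumed in the eye sketch above:

    import socket
    import cv2

    # Model files per the PyImageSearch tutorial.
    net = cv2.dnn.readNetFromCaffe("deploy.prototxt",
                                   "res10_300x300_ssd_iter_140000.caffemodel")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    cap = cv2.VideoCapture(0)  # the USB webcam

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # 300x300 input and these mean values are what the model expects.
        blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 1.0,
                                     (300, 300), (104.0, 177.0, 123.0))
        net.setInput(blob)
        detections = net.forward()  # shape: (1, 1, N, 7)

        if detections.shape[2] == 0:
            continue
        best = max(range(detections.shape[2]),
                   key=lambda i: detections[0, 0, i, 2])
        if detections[0, 0, best, 2] > 0.5:  # confidence threshold
            # Box coordinates come back normalized to 0..1 already.
            x1, y1, x2, y2 = detections[0, 0, best, 3:7]
            cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
            sock.sendto(f"{cx:.3f},{cy:.3f}".encode(), ("127.0.0.1", 5005))

This is the sending half of the pair: it finds the most confident face, takes its center, and hands the normalized coordinates to the eye process.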

Challenge(s) 

OpenCV

OpenCV is very dependent on OS version and Python version, and these dependencies vary wildly between OpenCV versions and operating systems.
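One way to tame this is to pin an exact, known-good combination so it stays working. Something like this requirements.txt, where the version numbers are placeholders for illustration, not the exact ones I used:

    # requirements.txt -- pin a known-good combination.
    # Version numbers here are placeholders, not the ones I used.
    opencv-python==3.4.3.18
    numpy==1.15.4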

Python

I have a love-hate relationship with Python. While I love interpreted languages for their ability to separate code from hardware and OS specifics, I deeply loathe how Python has implemented backwards compatibility, and its insulting and obnoxious syntax parser.
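To give one concrete example of the kind of breakage I mean:

    # The same line gives different answers under the two interpreters,
    # because integer division changed semantics between 2 and 3.
    print(1 / 2)   # Python 2: 0    Python 3: 0.5
    # And "print" itself went from statement to function, so many
    # Python 2 scripts won't even parse under Python 3.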

Without the Hackerspace…

Even with the materials graciously provided by the hackerspace, I could not have created this without the hackerspace community. In this case, other members of the hackerspace lent me Python expertise when I ran forehead-first into Python 2 vs. Python 3 issues, and one hackerspace member in particular provided a critical solution for communicating between the face detection system and the eyeball simulation. At one point during implementation I hit a terminal roadblock: the Raspberry Pi Model 3 was just not going to have enough power to run the eyeball simulation AND the face detection. I was describing my situation to other members during one of the many regular open hack events hosted by the hackerspace, when they mentioned, “do you really need to do face detection...
