Here is a YouTube video showing the application in action. http://youtu.be/z8_azcm-kwQ

Improving the resolution and frame rate of the application.

At first, to get at the pixel data from the iSight camera, I used one of Apple's Objective-C APIs, which gave immediate results, but the frame rate was a poor 5-6 FPS. By writing a subroutine in C++, I was able to pull out the luminance data and generate a very smooth 30 FPS image from the camera, but with no color, just greyscale. I knew the camera used a YUV format, which turned out to be a much broader category than I thought. After digging through many specifications on how the bytes are packed in the various YUV streams, I finally found one that produced a proper color image.
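For reference, here is a minimal sketch of the kind of unpacking involved, assuming the common 4:2:2 "2vuy" (UYVY) byte order (Cb Y0 Cr Y1 per pixel pair) and the standard BT.601 integer conversion; the exact packing my code ended up handling may differ:

```cpp
#include <cstdint>
#include <cstddef>
#include <algorithm>

// Clamp an intermediate value into the 0-255 byte range.
static inline uint8_t clamp8(int v) { return (uint8_t)std::min(255, std::max(0, v)); }

// Unpack a UYVY ("2vuy") frame into tightly packed 24-bit RGB.
// Assumes the Cb Y0 Cr Y1 byte order; other YUV variants shuffle these bytes.
void uyvyToRGB(const uint8_t* yuv, uint8_t* rgb, size_t width, size_t height)
{
    size_t pairs = (width * height) / 2;      // two pixels share one Cb/Cr pair
    for (size_t i = 0; i < pairs; ++i) {
        int cb = yuv[i * 4 + 0] - 128;
        int y0 = yuv[i * 4 + 1] - 16;
        int cr = yuv[i * 4 + 2] - 128;
        int y1 = yuv[i * 4 + 3] - 16;

        // ITU-R BT.601 integer approximation of YCbCr -> RGB.
        for (int p = 0; p < 2; ++p) {
            int y = (p == 0 ? y0 : y1) * 298;
            uint8_t* out = rgb + (i * 2 + p) * 3;
            out[0] = clamp8((y + 409 * cr + 128) >> 8);             // R
            out[1] = clamp8((y - 100 * cb - 208 * cr + 128) >> 8);  // G
            out[2] = clamp8((y + 516 * cb + 128) >> 8);             // B
        }
    }
}
```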

Every time the application starts up, it reads a PNG image containing all the emojis in a grid and generates an average-hue table from it. When image data is read from the camera, the app builds a mosaic using the average hue of each cell, and each of these "cells" is then swapped out for the emoji with the closest hue/saturation/luminance. This buffer is copied to a texture buffer on the GPU and displayed.
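Roughly, the matching step looks like the sketch below; the struct names and the simple squared-distance metric over HSL are illustrative, not the exact code:

```cpp
#include <vector>
#include <cmath>
#include <cstddef>
#include <limits>

// Illustrative HSL triple; the real table holds whatever averages were
// computed from the emoji grid PNG at startup.
struct HSL { float h, s, l; };   // all in [0, 1]

// Distance between two HSL values, treating hue as circular (wraps at 1.0).
static float hslDistance(const HSL& a, const HSL& b)
{
    float dh = std::fabs(a.h - b.h);
    dh = std::min(dh, 1.0f - dh);            // shortest way around the hue circle
    float ds = a.s - b.s;
    float dl = a.l - b.l;
    return dh * dh + ds * ds + dl * dl;
}

// For one mosaic cell, return the index of the emoji whose average HSL is closest.
size_t closestEmoji(const HSL& cellAvg, const std::vector<HSL>& emojiTable)
{
    size_t best = 0;
    float bestDist = std::numeric_limits<float>::max();
    for (size_t i = 0; i < emojiTable.size(); ++i) {
        float d = hslDistance(cellAvg, emojiTable[i]);
        if (d < bestDist) { bestDist = d; best = i; }
    }
    return best;
}
```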

This worked but wasn't the fastest method. I decided to copy the emoji image data into GPU memory and generate the texture map on the fly from the hue/saturation/luminance info. This was good for resolutions up to 640x480, but at 1280x960 the frame rate dropped to 12 FPS. I then eliminated the "closest hue/sat/lum match" subroutine altogether by sorting all the emojis in a pre-generated image by hue horizontally and by luminance+saturation vertically, so picking an emoji becomes a direct index lookup instead of a search. That bumped the output back up to the 30 FPS I wanted.
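Once the atlas is laid out that way, choosing an emoji for a cell is just computing a row and column. A rough sketch of that lookup follows; the grid dimensions and the exact row metric (here luminance and saturation averaged together) are assumptions for illustration:

```cpp
#include <algorithm>
#include <cstddef>

// Sketch of indexing a pre-sorted emoji atlas: columns ordered by hue,
// rows ordered by luminance + saturation.
struct CellColor { float hue, sat, lum; };   // all in [0, 1]

struct AtlasIndex { size_t col, row; };

AtlasIndex emojiFor(const CellColor& c, size_t gridW, size_t gridH)
{
    // Hue picks the column directly.
    size_t col = std::min(gridW - 1, (size_t)(c.hue * gridW));
    // Luminance + saturation, scaled back into [0, 1], picks the row.
    float v = (c.lum + c.sat) * 0.5f;
    size_t row = std::min(gridH - 1, (size_t)(v * gridH));
    return { col, row };
}
```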