Using an array of RGB LEDs and a Raspberry Pi camera, one can image an object under a variety of lighting directions and colors. From the captured image stack, a set of multivariate functions is fitted for each pixel. An interactive image can then be rendered in real time by evaluating those functions at lighting directions and colors set by the user. This lets users see the object illuminated as if in any environment, or apply pseudo-color algorithms that might reveal properties such as fine detail or surface curvature. The project is inspired by work at HP Labs on polynomial texture mapping and work at the USC Institute for Creative Technologies on Light Stage light-field capture.
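The per-pixel fitting step above can be sketched as follows. This is a minimal illustration in the style of classic polynomial texture mapping, not the project's actual code: the biquadratic basis, the function names, and the synthetic data are all my assumptions. Each pixel gets six coefficients fitted by least squares from the image stack, and relighting is just evaluating that polynomial at a new light direction.

```python
import numpy as np

def ptm_basis(lu, lv):
    """Biquadratic basis of classic PTM: 6 terms in the projected
    light direction (lu, lv)."""
    return np.stack([lu * lu, lv * lv, lu * lv, lu, lv,
                     np.ones_like(lu)], axis=-1)

def fit_ptm(images, light_dirs):
    """Fit 6 coefficients per pixel by least squares.

    images:     (N, H, W) stack of grayscale captures
    light_dirs: (N, 2) light directions (lu, lv) for each capture
    returns:    (H, W, 6) coefficient maps
    """
    n, h, w = images.shape
    A = ptm_basis(light_dirs[:, 0], light_dirs[:, 1])  # (N, 6)
    b = images.reshape(n, -1)                          # (N, H*W)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)     # (6, H*W)
    return coeffs.T.reshape(h, w, 6)

def relight(coeffs, lu, lv):
    """Evaluate the per-pixel polynomials at a novel light direction."""
    basis = ptm_basis(np.float64(lu), np.float64(lv))  # (6,)
    return coeffs @ basis                              # (H, W)

# Synthetic check: a 4x4 "image" whose pixels really are biquadratic
# in (lu, lv), captured under 9 known directions.
rng = np.random.default_rng(0)
true = rng.normal(size=(4, 4, 6))
dirs = rng.uniform(-1, 1, size=(9, 2))
stack = np.einsum('hwk,nk->nhw', true,
                  ptm_basis(dirs[:, 0], dirs[:, 1]))
fit = fit_ptm(stack, dirs)
relit = relight(fit, 0.3, -0.2)  # image under an unseen light direction
```

In the real pipeline this fit would run once per color channel after capture, and the interactive viewer would only call `relight`, which is cheap enough to run per frame.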
- Raspberry Pi (any version supporting a camera)
- RGB LED array, sized according to the object to be imaged
- Arduino: not wanting to bother interfacing the NeoPixels to the Raspberry Pi directly, I'm just going to drive them from an Arduino
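With the Arduino driving the NeoPixels, the Pi only needs to send it LED commands over serial. The frame layout below (a start byte, LED index, R/G/B values, and an 8-bit checksum) is purely my own assumption for illustration, not the project's actual protocol:

```python
import struct

START = 0x7E  # arbitrary start-of-frame marker (an assumption)

def encode_set_led(index, r, g, b):
    """Pack one 'set LED color' command into a 6-byte frame."""
    payload = struct.pack('BBBB', index, r, g, b)
    checksum = sum(payload) & 0xFF  # wrap-around 8-bit sum
    return bytes([START]) + payload + bytes([checksum])

def decode_set_led(frame):
    """Inverse of encode_set_led; raises ValueError on a bad frame.
    This is what the Arduino-side parser would implement in C++."""
    if len(frame) != 6 or frame[0] != START:
        raise ValueError('malformed frame')
    index, r, g, b = struct.unpack('BBBB', frame[1:5])
    if sum(frame[1:5]) & 0xFF != frame[5]:
        raise ValueError('checksum mismatch')
    return index, r, g, b

frame = encode_set_led(12, 255, 128, 0)  # LED 12 to orange
```

On the Pi side these frames would be written to the Arduino's serial port (e.g. with pyserial); the Arduino sketch would parse them and call the Adafruit NeoPixel library's `setPixelColor()`/`show()`.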