UbiCondo is a spatially aware device-control framework that allows networked devices to be addressed, controlled, and coordinated through simple, reliable gestures. Currently prototyped with Kinect for Windows sensors (it began as a research project at UC San Diego), it combines interaction gestures ("pointing" actions that produce an intent vector within the instrumented space) with positional tracking of users in the environment.
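As a rough illustration of how a pointing gesture can be resolved to a device, the sketch below casts an intent vector from two tracked skeleton joints (elbow and hand) and picks the registered device closest to that ray within an angular threshold. The device names, positions, joint choice, and 15-degree threshold are all illustrative assumptions, not the framework's actual parameters:

```python
import math

# Hypothetical device registry: 3D positions in room coordinates (metres).
DEVICES = {
    "lamp": (2.0, 1.5, 3.0),
    "tv": (-1.0, 1.0, 4.0),
}

def _sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def _normalize(v):
    m = math.sqrt(sum(x * x for x in v))
    return tuple(x / m for x in v)

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def resolve_target(elbow, hand, devices=DEVICES, max_angle_deg=15.0):
    """Return the name of the device being pointed at, or None.

    The intent vector runs from the elbow through the hand; a device
    is a candidate if the angle between that vector and the vector
    from the hand to the device is below max_angle_deg. The closest
    angular match wins, which keeps false positives low.
    """
    direction = _normalize(_sub(hand, elbow))
    best, best_cos = None, math.cos(math.radians(max_angle_deg))
    for name, pos in devices.items():
        cos = _dot(direction, _normalize(_sub(pos, hand)))
        if cos > best_cos:
            best, best_cos = name, cos
    return best
```

Pointing roughly toward the lamp, e.g. `resolve_target((0, 1.2, 0), (0.3, 1.3, 0.45))`, resolves to `"lamp"`, while a gesture aimed away from every device resolves to `None`.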
Users can point at devices to toggle their state (turning lights, screens, and TVs on and off, or controlling anything else that exposes a RESTful interface), initiate more complex inter-device communication (currently prototyped as passing browser tabs and videos between computers and XBMC HTPCs), and be passively serviced by the environment (lights that toggle automatically and zone audio that follows the user).
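Once a gesture resolves to a device, the actual toggle is just an HTTP call. The sketch below builds such a request with the standard library; the endpoint path, JSON payload, host, and device identifier are all illustrative assumptions rather than UbiCondo's real API:

```python
import json
import urllib.request

def toggle_request(host, device_id, state):
    """Build an HTTP request that sets a device's power state.

    Assumes a hypothetical REST endpoint of the form
    PUT http://<host>/devices/<id>/power with a JSON body;
    any RESTful device can be driven the same way.
    """
    url = f"http://{host}/devices/{device_id}/power"
    body = json.dumps({"on": state}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="PUT",
    )

# The framework would dispatch this with urllib.request.urlopen(req)
# after a pointing gesture resolves to the device.
req = toggle_request("192.168.1.20", "living-room-lamp", True)
```

Separating request construction from dispatch keeps the gesture pipeline testable without live devices on the network.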
The current proof of concept has been deployed and has shown a high degree of reliability with a very low false-positive rate, and it provides a base for further experimentation.
An older video with a basic demonstration of active device interaction and coordination can be found here: