GLaDOS Glass

Google Glass becomes GLaDOS

We use Google Glass to control a servo arm with a camera attached at the end. Google voice-to-text, AT&T text-to-speech, and Melodyne autotune convert a human voice into the GLaDOS voice. A Nerf gun attached to the end of the arm is triggered by double-winking. The camera stream is displayed on Glass for control.

  • 1 × Google Glass
  • 5 × Dynamixel AX-12A servo
  • 1 × BeagleBone Black
  • 1 × USB Hub
  • 1 × Wifi Module


    Step 1

    We're exploring the data-rate limitations of Glass by streaming high-quality video to the heads-up display (HUD) while sending orientation data to the BeagleBone. Both streams exercise Glass's communication capabilities: video flows from the camera through the BeagleBone to Glass, and orientation readings flow from Glass through the BeagleBone out to the servos as commands. This setup tests both data I/O and processing. By adjusting the video compression on the BeagleBone before sending, we change how much decoding work Glass has to do, and the compression ratio also probes the BeagleBone's own limits. This is one of the reasons we need a Logitech C920 as the camera: it has an on-board H.264 compression IC. Compressing the stream inside the camera DRAMATICALLY reduces the BeagleBone's workload, allowing not just smooth operation, but operation at all. Without it, the stream is almost unusable.
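    A minimal sketch of how a pipeline like this could pull the C920's hardware-encoded H.264 and forward it over the network with GStreamer (the device path, resolution, and receiver address here are illustrative assumptions, not our exact setup):

```shell
# Pull the C920's on-board H.264 stream (no re-encoding on the BeagleBone)
# and send it over RTP/UDP. Device path, caps, and host/port are assumptions.
gst-launch-1.0 v4l2src device=/dev/video0 \
    ! video/x-h264,width=1280,height=720,framerate=30/1 \
    ! h264parse \
    ! rtph264pay config-interval=1 pt=96 \
    ! udpsink host=192.168.1.50 port=5000
```

    Because the camera does the encoding, the BeagleBone only parses and packetizes the stream instead of running a software encoder.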

    We started by screwing three servos together and mounting the camera on the end of the servo arm. Next we designed and printed an adapter to secure the Nerf gun to the bottom of the camera along with the fire servo. Finally, we hooked everything up to the BeagleBone, launched our app on Glass, and began control.
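    For a sense of what driving the arm looks like at the wire level, here is a sketch of building a Dynamixel protocol-1.0 WRITE packet that sets an AX-12A's goal position (the serial port name in the comment is an assumption):

```python
def ax12_goal_position_packet(servo_id: int, position: int) -> bytes:
    """Build an AX-12A protocol-1.0 WRITE packet that sets Goal Position
    (register 0x1E, two bytes little-endian, range 0-1023)."""
    params = [0x1E, position & 0xFF, (position >> 8) & 0xFF]
    length = len(params) + 2                    # instruction byte + checksum
    body = [servo_id, length, 0x03] + params    # 0x03 = WRITE instruction
    checksum = (~sum(body)) & 0xFF              # ones'-complement checksum
    return bytes([0xFF, 0xFF] + body + [checksum])

# Sending it over the half-duplex bus would look something like this
# (port name and baud rate are assumptions):
# import serial
# with serial.Serial("/dev/ttyO1", 1000000) as bus:
#     bus.write(ax12_goal_position_packet(1, 512))  # center servo ID 1
```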

    All of our code is available here:

    Now that we have a starting point, we've begun designing and printing a larger GLaDOS head to house the camera and servo. <See pictures> Now all we have to do is run our motion data through a digital low-pass filter to smooth it out before we send it to the servos, and we SHOULD be good to go. The new Nerf design is next on our plate.

    The low-pass filter is finished. We used Matlab to create the function we're using to smooth out the Glass data. The new head is printed, custom 3D-printed servo extensions are installed, and an extra servo is installed to mimic GLaDOS's motion more realistically. A new feature we've just finished is voice mimicry: Glass voice-to-text -> AT&T text-to-speech -> file -> Melodyne (using Python to automate the autotune) -> Raspberry Pi speakers.
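    The actual filter lives in Matlab, but the idea can be sketched in a few lines of Python as a single-pole IIR low-pass (an exponential moving average), which is a common choice for smoothing noisy head-orientation data before it reaches the servos:

```python
def low_pass(samples, alpha=0.2):
    """Single-pole IIR low-pass filter (exponential moving average).
    alpha in (0, 1]: smaller alpha = heavier smoothing but more lag."""
    out = []
    y = samples[0]          # seed with the first reading to avoid start-up jump
    for x in samples:
        y = y + alpha * (x - y)
        out.append(y)
    return out
```

    The trade-off is the usual one: a small alpha kills the sensor jitter that makes the servos twitch, at the cost of the arm lagging slightly behind head motion.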

    We'll be working on the project in our free time between classes and will post updates as they become available. Time to polish the project up and add internet control for anyone with Glass who's interested in trying it out. Voice control in the GLaDOS voice will be possible too!



Salvador Paco wrote 04/08/2014 at 18:18 point
Don't make it too real! :)


Eric Hahn wrote 04/06/2014 at 19:02 point
We're still waiting on a new camera to use, it should come sometime this week.


Mike Szczys wrote 04/02/2014 at 16:58 point
The live demo makes me laugh. Obviously you're making the most of your college years ;-)


DylanZeigler2 wrote 03/28/2014 at 19:35 point
The Ultimate killing machine


Eric Evenchick wrote 03/27/2014 at 16:15 point
Nice to see a Glass hack! Haven't seen too many of them in the wild.

