The Gauntlet

Just a Toy?

Marvel Legends (TM): Infinity Gauntlet by Hasbro. Original functionality included switch-triggered sound playback and stone lighting effects; the sounds included Infinity Stone effects and metal-glove movement effects. The LEDs were not independent of one another; they were all driven by the same signal. The fingers are actuated by internal linkages that the wearer's own fingers pull on. Each finger has a built-in switch for detecting actuation, but originally all of these switches triggered the same light and sound response, with no variation based on which finger was actuated. Overall build quality is good and very favorable to hacking, with lots of space for peripherals and easy assembly. On the downside, it can be large and heavy for those of smaller stature, such as children.

The Brains

Utilizing the ubiquitous Raspberry Pi 3 B+ running Raspbian Stretch, the gauntlet is controlled by a number of Python scripts. Several external libraries are installed for added functionality and peripheral support, but the main human-machine interface functionality, sound logic, and lighting effects were personally coded.
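
To make that concrete, here is a minimal sketch of how a finger switch might be tied to the lights and a sound clip in Python. The pin numbers, the sound file, and the use of the RPi.GPIO and pygame.mixer libraries are assumptions for illustration; the project's actual scripts are not reproduced here.

import time
import RPi.GPIO as GPIO
import pygame

FINGER_PIN = 17  # hypothetical GPIO pin wired to one finger switch
LED_PIN = 27     # hypothetical GPIO pin driving the stone LEDs

GPIO.setmode(GPIO.BCM)
GPIO.setup(FINGER_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(LED_PIN, GPIO.OUT)

pygame.mixer.init()
snap = pygame.mixer.Sound("snap.wav")  # hypothetical sound clip

def on_press(channel):
    # Light the stones and play the clip when the finger closes.
    GPIO.output(LED_PIN, GPIO.HIGH)
    snap.play()

GPIO.add_event_detect(FINGER_PIN, GPIO.FALLING, callback=on_press, bouncetime=200)

try:
    while True:
        time.sleep(1)  # keep the script alive; the real scripts do more here
finally:
    GPIO.cleanup()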

The Sensors

For our first layer of control, there are six available switches: one in each finger, plus one that is activated by depressing the center stone on the back of the gauntlet. These switches only fire when a finger moves from open to closed; without mechanical modifications, there is currently no way to sense a fully closed finger. The center stone, however, does support press-and-hold input.
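
As an illustration, press-and-hold on the center stone could be detected by timing how long the switch stays closed. This is a minimal sketch assuming RPi.GPIO, a hypothetical pin number, and a hypothetical hold threshold:

import time
import RPi.GPIO as GPIO

STONE_PIN = 22      # hypothetical GPIO pin for the center-stone switch
HOLD_SECONDS = 1.0  # hypothetical threshold separating a tap from a hold

GPIO.setmode(GPIO.BCM)
GPIO.setup(STONE_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

def read_stone():
    # Block until the stone is pressed, then classify tap vs. hold
    # by timing how long the switch stays closed.
    GPIO.wait_for_edge(STONE_PIN, GPIO.FALLING)
    start = time.time()
    while GPIO.input(STONE_PIN) == GPIO.LOW:
        time.sleep(0.01)
    return "hold" if time.time() - start >= HOLD_SECONDS else "tap"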

For our second layer, we use a BNO055 absolute orientation sensor. With the orientation data sent over UART, we can map our surroundings and act on the data. Calibration is necessary during initial startup for accurate results (it occurs automatically with certain movements), but the calibration data can be saved and written back during startup for repeatable results. Pitch, heading, and roll are read from the device by a Python script on the Pi, and these values drive the control logic when controlling IoT devices.
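
A minimal sketch of that readout, assuming the Adafruit_Python_BNO055 library that was current in the Raspbian Stretch era (the project's exact code may differ):

from Adafruit_BNO055 import BNO055

# The Pi 3 B+ exposes its UART on /dev/serial0.
bno = BNO055.BNO055(serial_port='/dev/serial0')
if not bno.begin():
    raise RuntimeError('Failed to initialize BNO055')

# Optionally restore previously saved calibration data (a list of 22
# register bytes captured earlier with bno.get_calibration()) so the
# sensor starts out calibrated on every boot.
# bno.set_calibration(saved_calibration)

heading, roll, pitch = bno.read_euler()
print('Heading: %.1f  Roll: %.1f  Pitch: %.1f' % (heading, roll, pitch))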

The Software

Coding in Python and running on Linux offers several advantages on the Raspberry Pi, including readily available libraries that extend the Pi's GPIO and a rich development community that can help work through most problems. For this project, the flexibility of running Linux makes the device more than just a piece of cool fan art; it delivers some actual utility. The Pi 3 B+ retains all of its stock features and can run as a small PC: plug in an HDMI display, a mouse, and a keyboard, and you can watch movies, surf the internet, and tinker further with the Gauntlet's features.

The Goal

With the ever-growing number of connected devices that surround our daily lives, from light switches to self-order tablets at restaurants, we are constantly interacting with technology. There are many different ways of interacting with our technology that make things easier: we have made advances in AI that understands the human voice, and we have touchscreen devices to tap out our intentions. But sometimes the speaker can't hear you, and reaching the intended tap on your phone can take several operations, which can be cumbersome.

This project is a proof of concept for a new way to interact with technology. By using a more intuitive form of communication, simple body movements, we can break through to a seamless experience with our technology. Most people are familiar with this as gesture control; there has been a lot of research on the subject, and it continues to be a hot topic for human-interface devices. I approach the problem from a slightly different perspective, however. Gesture recognition is indeed an end goal for the project, and it is a large part of the advanced functionality projected for this type of technology, but recognizing specific hand motions with repeatable consistency is difficult, not because of software limitations but because of the variability in a user's individual motions. If we apply the same problem-solving methods used in Natural Language Processing to what might be called Gestural Language Processing, we should be able to reach a reasonable success rate in gesture recognition as machine learning develops. Imagine watching a person fluent in sign language send complex commands to an IoT device, or send a text with a few flicks of the wrist. That may soon be the future; when it comes to human-interface devices, however, simple is usually better, not for the development process but for how the user actually uses the device.

Pointing is one of the most basic ways we communicate. How do you point with your voice? How do you point with your phone? Most of our smart-home devices are simple on/off devices that do not need a lot of information to understand our intentions; a simple point and "click" should suffice. For devices that need more information, well, we have five fingers and a limb that can rotate. By utilizing these variable actuators, we can develop an intuitive and simple form of communication. The goal was to create a device that did just that, and that is the end result demonstrated in this project.
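
As a closing illustration, "point and click" might reduce to something like the sketch below: the BNO055 heading selects which device the gauntlet is aimed at, and a finger switch fires the action. The device names, heading windows, and the toggle_device() helper are all hypothetical.

DEVICES = {
    # device name: (min heading, max heading) in degrees
    'desk_lamp':   (350, 20),
    'ceiling_fan': (80, 110),
    'tv':          (170, 200),
}

def device_at(heading):
    # Return the device whose heading window contains the current heading.
    for name, (lo, hi) in DEVICES.items():
        if lo <= hi:
            if lo <= heading <= hi:
                return name
        elif heading >= lo or heading <= hi:  # window wraps past 360 degrees
            return name
    return None

def on_index_finger(heading):
    # Fired when the index-finger switch closes: toggle whatever we point at.
    target = device_at(heading)
    if target is not None:
        toggle_device(target)  # hypothetical IoT call (e.g. over MQTT or HTTP)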