While my wife and I take care of our newborn baby, we frequently end up shouting from room to room for each other's assistance. This project is an attempt to address that scenario with a pair of call button prototypes, built using readily available IoT development tools to quickly arrive at a working system. Oddly, there doesn't seem to be any product like this on the market, so hopefully the results of this design process (to be published on GitHub) will be useful to other people in similar situations.
One of the core design considerations in this project is the way the LED ring is used to indicate different system states to the user, which in turn influences the requirements for the final hardware.
Microinteraction design has been discussed at length, particularly with regard to mobile application interfaces - in order to get a satisfying clicky feeling on a touchscreen device you are mostly constrained to tweaking the animations following a button press.
In our case we may not have the open-ended possibility of arbitrary animated graphics, but the NeoPixel ring does allow some flexibility in testing a variety of alternatives.
On the hardware side, we'll aim for the simplest LED setup that will support the finalized animation. Using WS2812 LEDs isn't ideal from a power standpoint because each integrated controller draws around 2 mA, even when the LED is off. I could throw in a transistor to cut power when the LEDs are not in use, but there might not be any need for RGB LEDs anyway. The big benefit of the WS2812s is that they only require a single GPIO pin. Of course, there are a variety of multiplexing schemes out there, so depending on the number of LEDs in the final design, the power savings of switching away from WS2812s will probably be worth it. Ultimately, the minimum LED setup will depend on the desired animation set.
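To put the quiescent-draw concern in numbers, here is a rough back-of-envelope estimate. The ring size and battery capacity below are assumptions for illustration, not measurements from the actual build:

```python
# Rough standby-power estimate for a WS2812 ring.
# LEDS and BATTERY_MAH are assumed values for illustration;
# the ~2 mA figure is the per-LED controller draw with the LED dark.
LEDS = 16                 # assumed NeoPixel ring size
QUIESCENT_MA = 2.0        # per-LED controller draw, LEDs off
BATTERY_MAH = 500         # assumed LiPo capacity

standby_ma = LEDS * QUIESCENT_MA          # idle draw from the LED controllers alone
idle_hours = BATTERY_MAH / standby_ma     # battery life if that were the only load

print(f"Standby draw: {standby_ma:.0f} mA")
print(f"Battery life from LED quiescent draw alone: {idle_hours:.1f} h")
```

Under these assumptions, the idle controllers alone would flatten the battery in well under a day, which is why cutting their power with a transistor (or dropping to plain LEDs) is worth considering.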
Even though this first iteration using off-the-shelf parts is a lot bulkier than it needs to be, we can still test out a scaled-up version of the intended form factor. This 3-piece case consists of a chassis with attachment points for a strap or wristband, a clear diffuser cap, and an oversized button that attaches to the tact switch.
After test-fitting an initial print, these CAD files will go into the repository. Between this and the Adafruit BOM, anyone should be able to reproduce the setup.
Following this example, I threw together a quick NodeMCU script that blinks the LEDs and pulses the buzzer when a message is published (on button push) or received. On the PC side, I have Mosquitto running as the message broker, and I'm using mqtt-spy to easily monitor and publish to topics for debugging.
Since there are few connections (1 pin + gnd + batt for the WS2812 LEDs, 1 pin + gnd for the button, 1 pin + vcc for the buzzer), I opted to tie everything directly to the header pin holes on the Feather board. With the help of some double-sided tape, this is already the basic device:
I'll be cramming this into a 3D printed shell for now; later versions will use a single compact custom PCB.
There is a large ecosystem of IoT development tools by now, which makes it extremely easy to get up and running with hardware like the ESP8266. In this particular case, we will use the following hardware prototyping products from Adafruit:
Communication between the call buttons and the host computer will take place over the MQTT message-passing protocol. On the ESP8266 side, MQTT support is a built-in module in the NodeMCU embedded development environment (MIT license). On the host side, we will use the Mosquitto MQTT broker for message distribution (EPL/EDL license).
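For readers unfamiliar with MQTT's routing model: messages are matched to subscribers by topic, where topics are `/`-separated levels and subscriptions may use the `+` (single-level) and `#` (multi-level) wildcards. A minimal matcher, in Python for illustration (the `callbutton/...` topic names are hypothetical, not the ones used in this project):

```python
def topic_matches(pattern: str, topic: str) -> bool:
    """Check an MQTT subscription pattern against a concrete topic.
    '+' matches exactly one level; '#' matches all remaining levels."""
    plevels = pattern.split("/")
    tlevels = topic.split("/")
    for i, p in enumerate(plevels):
        if p == "#":               # multi-level wildcard: matches the rest
            return True
        if i >= len(tlevels):      # pattern is longer than the topic
            return False
        if p != "+" and p != tlevels[i]:
            return False
    return len(plevels) == len(tlevels)

# Hypothetical topics for a pair of call buttons:
print(topic_matches("callbutton/+/press", "callbutton/deviceA/press"))  # True
print(topic_matches("callbutton/#", "callbutton/deviceB/ack"))          # True
```

This is the mechanism that lets a debugging tool like mqtt-spy subscribe to a wildcard and watch all device traffic at once.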
This project will attempt to address an immediate challenge my wife and I are facing - how to avoid yelling from room to room while we try to take care of our newborn baby and maintain some semblance of continuity in household upkeep. This means we want to build a working solution immediately, with available tools and a minimum amount of effort.
With this in mind, our strategy is to use (or misuse) widely available, well documented Internet-of-Things development tools to rapidly develop a non-internet Thing. Using these tools will have the added benefit of enabling more flexibility to A/B test variations on the basic user interaction, as well as remotely debug the system. Once the system operation is satisfactory, we can pursue a later design phase with more appropriately targeted hardware.
Here's a sketch of what we want:
- User A presses a button to get User B's attention
- User A's device lights up to indicate that the call has been sent, while User B's device lights up to show that a request has been made
- User B then presses the button to acknowledge the request
- Both devices show the acknowledgment
After initiating a call, User A can also press again to cancel it. This whole sequence works from both devices, so either user can call the other.
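The call-and-acknowledge sequence above amounts to a small state machine. Here is a pure-Python sketch of it, with the two devices coupled directly rather than over MQTT; the state names are illustrative, and the reset-to-idle press after an acknowledgment is an assumption not specified in the sketch above:

```python
# States for the paired call-button protocol described above.
IDLE, CALLING, RINGING, ACKED = "idle", "calling", "ringing", "acked"

class CallButton:
    def __init__(self):
        self.state = IDLE
        self.peer = None              # set by pair()

    def press(self):
        """Handle a local button press; returns the new local state."""
        if self.state == IDLE:        # initiate a call
            self.state = CALLING
            self.peer.state = RINGING # peer lights up: request made
        elif self.state == CALLING:   # press again: cancel the call
            self.state = IDLE
            self.peer.state = IDLE
        elif self.state == RINGING:   # acknowledge the incoming call
            self.state = ACKED
            self.peer.state = ACKED   # both devices show the ack
        elif self.state == ACKED:     # clear the ack (assumed behavior)
            self.state = IDLE
            self.peer.state = IDLE
        return self.state

def pair(a, b):
    a.peer, b.peer = b, a

a, b = CallButton(), CallButton()
pair(a, b)
a.press()   # A calls: a=CALLING, b=RINGING
b.press()   # B acknowledges: both ACKED
```

In the real system each `press()` would publish an MQTT message and the peer's state change would happen in its message handler, but the transitions are the same.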
While the final system will likely use some sort of low-power RF connection (such as a LoRa link) directly between the two devices, this prototype will be implemented on top of an ESP8266/MQTT stack. Here is a rough system diagram:
While this approach carries a fair amount of unnecessary system overhead, that extra firepower can be useful during a rapid prototyping phase. In particular, the need for a wifi network and host server looks problematic at first glance, but it allows updates to be pushed seamlessly to each device and message traffic to be monitored and debugged.
We hope that by building the system in this manner we can very quickly (within days, hopefully) arrive at something that works for our immediate needs, while supporting an iterative design process to refine the product.