
Reactron Overdrive

A small but critical number of minimally complex machines interact with each other, providing machine augmentation of human activity.


This project was created on 05/26/2014 and last updated 2 months ago.

Description
The system is non-invasive and collects biometric and other data to coordinate connected devices with one's activity. A standardized control board turns non-connected devices into connected ones. The combination of a small number of simple devices can produce a large number of useful results, without the hassle and failure rate of larger, more complex and expensive single-purpose systems.

Most importantly, it saves time - the one thing in life that can never be replaced. The system manages asynchronous tasks and delivers the results - information or physical - just at the moment they are needed. It enhances personal workflow, instead of bottlenecking it with an attention-serializing interface. This is life augmentation, smooth integration with the machine world, made to enable and amplify all that one can do.
Details

The system can be asked questions, told to remember things, made to serve information. Complex sequences of physical events can be arranged to occur without human interaction, or in response to human interaction. It makes coffee, and will deliver it. It will collect trash. It shows you the status of the stock market, or of the weather, or of a custom dataset. It can cleanly stop a machine in an emergency. It unlocks a chemical cabinet for an adult, but not for a child. It keeps the human safe, preserves data, and preserves itself to keep maintenance minimal and unobtrusive. It does not become obsolete in less than a year. It will do whatever it is enabled to do.

Almost every component in the system is optional, which makes it highly tolerant to failure. The presence or absence of devices, each with a small number of capabilities, determines the capabilities of the system as a whole.

I call my reactive machine units "Reactrons". These devices have a fairly simple interface that amounts to listing what small set of abilities they have, and control points to execute those abilities. These devices are classified into a handful of groups:

  • Integrons: human interaction nodes
  • Recognizers: human detection and identification nodes
  • Collectors: data acquisition and transmission nodes
  • Transporters: movement of material or data
  • Processors: conversion of material or data
  • Energizers: control of power

This project introduces Integrons and Recognizers as separate, discrete machines. The others are basically all familiar hardware, with a small control board added to provide them the ability to interface with the system.
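The interface described above, where each device lists its small set of abilities and the control points to execute them, can be sketched as a simple self-description message. This is purely illustrative: the JSON encoding, field names, and the pump example are my assumptions, not the project's actual Reactron protocol.

```python
import json

# Hypothetical sketch of a Reactron's capability-listing message.
# Field names and encoding are invented for illustration.
def describe_reactron(unit_id, group, abilities):
    """Build the self-description a Reactron might offer the network."""
    return json.dumps({
        "id": unit_id,
        "group": group,          # e.g. "Transporter", "Energizer"
        "abilities": abilities,  # ability name -> control point
    })

msg = describe_reactron("pump-01", "Transporter",
                        {"pump_on": "gpio/4/high", "pump_off": "gpio/4/low"})
decoded = json.loads(msg)
```

Any node that can emit and answer such a message participates in the system, regardless of its underlying hardware.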

What this system does:

The main idea is to reduce the complexity and increase the number of machine nodes that are constantly on and around us. Here is a system diagram of the network.  Note that it contains exemplars, and a multiplicity of units exist beyond this scope. The only unique and non-optional thing is YOU, and your time and experience, and that is the whole point.

They detect our position and do things asynchronously so that our needs are anticipated most of the time. Verbal commands and line-of-sight status indicators give a way to interact with the "culture" of nodes, but passive interaction is preferred. In order of priority:

  • 1) Stuff happens based on rules you set up, so results are ready when you need them, with sensors detecting your presence and waiting for certain conditions to occur.
  • 2) Status of whatever dataset you like can be seen passively, from a distance, via lights. (...and sound if the system needs to alert you of something you defined.)
  • 3) You can interact verbally from a distance with the Integrons (which then query the full network for the actual answer or status).
  • 4) You can interact up close with the Integrons via screen and by gesture.
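Priority #1 above amounts to a small rule engine: actions fire only when presence is detected and a user-defined condition holds. Here is a minimal sketch of that idea; the `Rule` class, sensor field names, and the coffee example are my own illustrative assumptions, not the project's actual implementation.

```python
# Minimal sketch of priority #1: rules that fire when presence plus a
# user-defined condition are both true. All names here are hypothetical.
class Rule:
    def __init__(self, name, condition, action):
        self.name, self.condition, self.action = name, condition, action

def evaluate(rules, sensors):
    """Run every rule whose condition holds for the current sensor state."""
    fired = []
    for rule in rules:
        # Presence is the gate: nothing fires for an empty room.
        if sensors.get("presence") and rule.condition(sensors):
            rule.action()
            fired.append(rule.name)
    return fired

log = []
rules = [Rule("morning-coffee",
              lambda s: s["hour"] == 7,
              lambda: log.append("start coffeemaker"))]
fired = evaluate(rules, {"presence": True, "hour": 7})
```

The same loop with `"presence": False` fires nothing, which is the point: the machines act only around a detected human.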

It is my hope that most of the interaction is #1 and #2, thereby allowing you to move through your life without interacting actively with the machines, most of the time, analogous to the way the doors just open for Maxwell Smart (https://www.youtube.com/watch?v=sWEvp217Tzw) without him breaking stride, only with more complex results than opening doors.

Here is a system diagram of the Integron unit itself, showing the internal components and how they are expected to interact. The only task of this unit is human integration with the network of nodes, as described above. This post describes what is currently working and what remains as of August 20th 2014.

The simplicity of each node is a huge factor, increasing MTBF for every node and keeping costs low. Simple machines just work better and last longer. And you can have multiples, so that when one breaks another steps in. This allows you to remove maintenance tasks from the critical path of your human activity. Save them up for the chore weekend, or delegate them. The simpler a machine is, the higher the chances are that one can build a machine to do the maintenance. At critical mass, we have...



Project logs
  • Integron as Reactron subcomplex

    2 months ago • 0 comments

    Whenever I have a small number of Reactrons that are meant to work together as essentially a single unit, I call the arrangement a subcomplex.  It is a complex, in that it consists of more than one internal Reactron.  But it is "sub" in the sense that these internal units exist below a unified Reactron interface for the complex, so from the outside, it may appear as a single unit.

    As an example, consider the Reactron coffeemaker, water pump, and coffee-contextualized simple button: those are actually three separate Reactrons.  But, because these Reactrons are a special case scenario, where the button has a one-to-one relationship with the water pump, and the pump has a one-to-one relationship with the coffeemaker reservoir, they could have been structured as a single, three-unit subcomplex, with a single network interface, instead of three.  I didn't do it this way because these units were all conceived and added at completely different points in time.  Additionally, I like keeping the button and pump abstracted, because if I ever need a general purpose button elsewhere, or a pump, these designs can be replicated easily without de-coupling a coffeemaker.  That is one of the basic tenets of Reactron Overdrive - keep units simple and abstracted, and create desired complexity with increased numbers.
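    The subcomplex idea, where several internal Reactrons sit below one unified interface, can be sketched as a wrapper that exposes the union of its members' abilities. The class names and the coffee-station example are my own hypothetical rendering, not the project's code.

```python
# Sketch of a subcomplex: internal Reactrons presented to the network
# behind a single unified interface. Names are illustrative only.
class Reactron:
    def __init__(self, name, abilities):
        self.name = name
        self.abilities = abilities  # ability name -> callable

class Subcomplex:
    """Appears to the network as one unit with the union of member abilities."""
    def __init__(self, name, members):
        self.name = name
        self._table = {a: fn for m in members for a, fn in m.abilities.items()}

    def abilities(self):
        return sorted(self._table)

    def invoke(self, ability):
        return self._table[ability]()

button = Reactron("button", {"press": lambda: "pressed"})
pump = Reactron("pump", {"pump_water": lambda: "pumping"})
maker = Reactron("coffeemaker", {"brew": lambda: "brewing"})
coffee = Subcomplex("coffee-station", [button, pump, maker])
```

    From the outside, callers see only `coffee-station` and its three abilities; whether those live in one box or three is invisible, which is the "sub" in subcomplex.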

    In developing the speech-interacting Integron, I have done a lot of testing and analysis, and suspect that the base unit is perhaps too complex.  It may be truer to the principles to abstract the speech processing from the human interaction, where the Linux module is a sub-Reactron module in its own right, and the sight, gesture, and audio components comprise a separate sub-Reactron.

    I had been working on an "Integron relay" where a subset of the human interaction components were present, but offloaded the heavy processing to a separate unit.  My thought was to create a device that could act as a (door) threshold device, like a doorbell intercom, but fully integrated with the whole network.  If a doorbell, this would be externally mounted, and therefore subject to weather, damage, potential theft, etc.  So it would be beneficial to have it be cheap and replaceable, with nothing critical in it at all.  It would just be a dumb terminal, effectively.

    After a lot of development on the Integron unit, I am thinking this should actually become the standard model.  All Integrons should be a subcomplex, with an Integron relay of whatever form (doorbell, automobile, desktop, wall-mounted, ceiling-mounted, wrist-mounted) presented to the human, and all the data processing for speech and most Reactron network functions located on a separate unit.  The two sub-units could be physically located together, but need not be.  This will allow much higher performance computers to be used for speech processing, and will also create better machine utilization efficiency, since we generally do not need as many speech engines as we do interface points.  The separation will allow a few-to-many relationship of engines to human interfaces.
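    The few-to-many relationship of engines to human interfaces amounts to a small pool-assignment problem. A minimal sketch, with round-robin assignment as my own illustrative choice rather than the project's actual scheduler:

```python
from itertools import cycle

# Sketch of the few-to-many arrangement: many cheap Integron relays
# share a smaller pool of speech engines. Round-robin is illustrative.
def assign_relays(relays, engines):
    """Map each relay to an engine, reusing engines in rotation."""
    pool = cycle(engines)
    return {relay: next(pool) for relay in relays}

mapping = assign_relays(["doorbell", "desktop", "wrist"],
                        ["engine-a", "engine-b"])
```

    Three relays, two engines: the engines get reused, and losing any one relay costs only a dumb terminal, not a speech processor.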

    It also allows a completely different handling of the audio, escaping ALSA and allowing Linux to just handle received waveform data without trying to play it or capture it.  The real question is, will the transport of data to and from the relay units be faster than the capture, processing, and playout all on a local Linux SBC?  I don't know yet, but I am going to test it.

    In the case of the Automobile Integron, I think the hardware will be pretty much the same, but wired and coded differently.  I will still use a BBB for the speech processor, but now the microcontroller will handle the audio capture and playout instead of ALSA. That subcomplex will be entirely local to the car, as GPRS would not be efficient enough data transport, performance-wise, and further, loss of signal would end up disabling...


  • First cut of desktop morphology Integron

    2 months ago • 0 comments

    A lot of the Integron unit is working in component parts.  It remains for me to coordinate the various internal subsystems.  Here is an image of the first cut of the desktop morphology.  

    In this image the screen is present but not active, and the little ripple in its surface is just the protective plastic which I have not yet removed.  A NeoPixel is activated blue, under an internal diffuser, in front of a sound baffle and support structure joining base disk to top disk.  The fabric sleeve is an acoustically transparent speaker grille fabric.  There is some left-right asymmetry due to uneven tension, from my assembling and disassembling this prototype a number of times.

    Still working on the mechanical fit and angle of the PIR sensors in the support baffle structure, and I have yet to test the transparency of the acoustic fabric to the ultrasonic sensor.  The infrared passed just fine, so worst case if the ultrasonic does not work I will put a central PIR in a tube to narrow the angle, and use that to indicate presence of someone standing right in front of the unit.  But I'd rather use ultrasonic ranging since then I get an actual distance.
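    The preference for ultrasonic ranging comes down to the arithmetic: the echo round-trip time yields an actual distance, where a PIR only yields presence. A quick sketch of that conversion, assuming roughly room-temperature air:

```python
# Why ultrasonic ranging beats PIR for this: the echo round trip gives
# a real distance. 343 m/s is the speed of sound in air at ~20 C.
SPEED_OF_SOUND_M_S = 343.0

def echo_to_distance_cm(round_trip_s):
    """One-way distance in cm from an ultrasonic echo round-trip time."""
    # Halve the round trip (out and back), then convert metres to cm.
    return (round_trip_s * SPEED_OF_SOUND_M_S / 2.0) * 100.0

# A ~5.83 ms round trip corresponds to a target about 1 m away.
d = echo_to_distance_cm(0.00583)
```

    So someone standing right in front of the unit versus across the room is distinguishable by timing alone, which the narrowed-PIR fallback could not provide.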

    The unit looks great in a darker setting.

    It also is effective at giving a colored light signal from a distance in normal light, though the camera does not report well what the human eye sees, which is much more even than this image shows.

    The unit in this image does not contain the BeagleBone Black and audio components; I have those assembled on a breadboard separately for testing, but expect to assemble it all together shortly.  (It does fit - using a proto cape on the BBB.)

    The speech synthesis and speech recognition are working.  The USB sound card is working well, but I am still working on the right microphone and audio amplifier on the output.  Currently I have a capsule electret mic sourced from Adafruit, and it works quite well close up.  But I have ordered higher sensitivity ones, because I want to be able to speak to it from a distance, and it's just not there yet.  I have also ordered some adjustable pre-amplified mics, which may be OK if I attenuate the signal all the way down so that the peak-to-peak does not overdrive (!) the mic input.

    The sound card is apparently able to withstand abuse - I did try a sustained, full 2 volts peak-to-peak signal to see if it would fry (the sound cards are only a few dollars each, so that test was worth it to find out the capability).  Terrible sound, of course, super distorted - but it didn't fry the board.  We'll see if we have to go there at all.  The internal baffle of the Integron unit is designed to be a sound cone.

    On the output side of things, I am using an Adafruit audio amp, but while it is quite excellent at what it does, it may not be the best choice for this application.  I have some less expensive PAM chips on order, and some different ones I have used before in house; I will be experimenting with them soon.  I also changed the speaker to one that was less tinny.  The speech output is mono, so I am just using one speaker of a pair.  This unit is really a point source, so in some ways it makes sense to combine channels for any potential stereo signal sound files and just use mono.  That makes it more compact.
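    Combining stereo channels for the point-source speaker is just a per-sample average. A minimal sketch of that downmix on integer PCM samples (the function and sample format are illustrative, not the project's code):

```python
# Sketch of the stereo-to-mono downmix described above: average the
# left and right samples of each frame. Integer division keeps the
# result in the same 16-bit PCM range as the inputs.
def stereo_to_mono(samples):
    """samples: list of (left, right) int pairs -> list of mono ints."""
    return [(l + r) // 2 for l, r in samples]

mono = stereo_to_mono([(100, 200), (-50, 50), (32767, 32767)])
```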

    The Moteino is driving the NeoPixel without issue, but I will use more than one in the final unit so that the top, middle, and bottom can be illuminated differently to give three different levels of signal.  (That is the idea anyway, we will see if it is practical.) Due to the internal baffling, there may need to be three LEDs per tier to enhance visibility from all sides, so nine total.  I am considering making the screen bezel ring out of translucent white acrylic, and adding another NeoPixel to make that a soft, power-on indicator light (overridable of course, if you want it off).  So ten RGB LEDs.  In...
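    The three-tier signal amounts to mapping a dataset value onto which LED groups are lit. A sketch of that mapping; the thresholds and tier names are made up for illustration:

```python
# Sketch of the three-tier light signal: map a normalized level onto
# the bottom/middle/top LED groups. Thresholds are hypothetical.
def tiers_for_level(level, thresholds=(0.33, 0.66)):
    """Return the lit tiers, bottom-up, for a level in [0, 1]."""
    lit = ["bottom"]
    if level > thresholds[0]:
        lit.append("middle")
    if level > thresholds[1]:
        lit.append("top")
    return lit
```

    One glance at how far up the column is lit then reads as low, medium, or high for whatever dataset the unit is tracking, which is the passive, from-a-distance interaction priority #2 calls for.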


  • Component changes

    3 months ago • 0 comments

    I've been holding off on an official component list until I can stabilize the build a bit more, but I wanted to mention a few things that have occurred since the original plan was hatched, and some of the design decisions involved.

    Without delving too much into the history, I will just say that at one point I was coding a very simple speech recognizer to run on the ATMega328P, and I was able to create some code that was about 85% effective at recognizing a handful of carefully selected keywords that were pretty distinct in their pronunciation. That was the true positive rate. There was also a high-enough-to-be-annoying false positive rate, which would trigger my “initializer” (more on that at a later time).

    Anyway, I was getting sucked into the details of speech recognition and optimizing for a tiny processor, and this is really not the area where I can add the most value. So, I moved to R.Pi, to try to implement open-source speech recognition using Pocketsphinx, like many other builds. It meant having to use an SBC w/ Linux in addition to the ATMega328P based board, but for this usage I was not so space-constrained so that was OK, even with the added cost and power usage. Also, moving to Linux enabled speech synthesis as well, using festival.

    Before, I was playing out pre-defined sound files containing sounds and canned speech responses. But I wanted a more general interface, one that could recognize more than a handful of pre-defined keywords, and trigger more than a handful of pre-defined sounds. Ultimately I moved on to BeagleBone Black as the SBC for performance and I/O, but this is just for these Integron units. Other Reactrons that don’t require all this library support are fine on R.Pi, or just stand-alone ATMega328P, or even stand-alone Android devices, based on the specific application.
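    The 85% figure and the annoying false positives are two different rates, measured over two different pools of audio. A small sketch of that bookkeeping, with the counting function and the example numbers being my own illustration:

```python
# Sketch of recognizer accuracy bookkeeping: true-positive rate over
# keyword utterances, false-positive rate over everything else.
def rates(results):
    """results: list of (was_keyword, detected) boolean pairs."""
    keywords = [d for k, d in results if k]
    others = [d for k, d in results if not k]
    tpr = sum(keywords) / len(keywords)  # hits / keyword utterances
    fpr = sum(others) / len(others)      # false alarms / non-keyword audio
    return tpr, fpr

# Illustrative run: 17 of 20 keywords caught, 1 of 10 non-keywords
# wrongly triggering the initializer.
tpr, fpr = rates([(True, True)] * 17 + [(True, False)] * 3 +
                 [(False, True)] * 1 + [(False, False)] * 9)
```

    A high true-positive rate can coexist with a false-positive rate that is small in percentage terms but still fires many times a day, which is exactly the annoyance described above.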

    A Reactron is defined by its communications protocol and internal data structure, not by its specific hardware complement. I wanted to mention that while my header image shows a panel of ATMega328P boards (based on the Moteino by Low Power Lab), Reactrons are not Arduino clones, though their hardware complement may contain one, several, or none. I chose the image because to me, it evoked “many small computers working together”, which is what the Reactron network is all about. In my other images, you will see a multiplicity of BBBs and R.Pis and other hardware, as the mix of hardware for a generalized Reactron completely depends on its purpose. For instance, some existing Collector units are purely BBB boards with attached sensors, and the software to support the Reactron interface. Some Recognizers are just BBBs running statistical calculations on inputs from Collectors - without any directly attached auxiliary hardware. I haven't had a chance yet to write much about Recognizers, but that is coming soon.

    Moving on to some of the changes, specific to the Integron unit:

    One excellent thing that has changed since I started this project is that the Moteino has been upgraded. [Felix] at Low Power Lab has informed me that all new R4 Moteinos are now shipping with the MCP1703 voltage regulator. This is good news for me, because now I am able to just add “Moteino” to the components list, instead of a board "based on the Moteino", where I would have to have a separate BOM and so forth. Now, you can just use a stock R4 Moteino (with HopeRF radio and a 4Mbit flash chip) for any of my projects that require an RF enabled ATMega328P-based solution. At the time I started this project, the Moteino was shipping with the MCP1702 voltage regulator, and that meant that using a then-stock Moteino in my 12+V projects (like the Reactron water pump or the newer Automobile Integron) might have resulted in a puff of smoke and little else.

    I would also like to direct your attention to the fact that [Felix] has entered THP with the Moteino Framework, and you should go there and give him a skull. [Felix] is simply amazing and creates an enabling technology...


View all 17 project logs

Discussions

FrankenPC wrote 3 months ago

This is impressive. It's sort of the holy grail of futurist visions regarding home automation. BEST of luck with this! I'll keep an eye on this.

On a side note, I've been playing around with junk cell phones recently. Buying old Android phones from EBAY to experiment with. Currently I unlock them and play with all kinds of interfaces to various game controllers to play MAME style retro games. Why am I mentioning this? I realized it was sort of pointless to buy boards like RasPi or Beaglebone black when I can buy 2 or 4 processor Android juggernauts for 20$-40$ a pop. AND they come with capacitive high resolution touch screens and incorporate batteries (built in UPS!), GPS, WiFi, etc etc etc. Re-purposing great electronics that will eventually end up in a junk pile seems like a good idea and they work really well. The only hurdle is IO. That's easily taken care of with devices based on Arduino or whatever has a USB, wi-fi or Bluetooth interface.


Kenji Larsen wrote 3 months ago

Thanks for those very kind comments!

The junk cell phones approach is actually great, and I am a huge proponent of repurposing old tech when possible. I chose BBB for the integration units here so I could leverage open source Linux-based locally-hosted voice recognition and synthesis, in a somewhat standardized and abstracted way. For some of my peripheral units I do use older Android devices. I have not connected any of those with additional modules, I just run them on the built-in WiFi. But they are great. They don't stop working just because time has passed. Some are slower, but I don't load them up with stuff they can't handle... that is part of the point here. Keep things running well by making stand-alone components simple and robust.


phreaknik wrote 3 months ago

This project is beyond cool! Kudos!


Kenji Larsen wrote 3 months ago

Thanks, I really appreciate the support! I feel I am a bit behind on documentation, but I am trying to get it all shored up soon. I'm doing my best to make the most important pieces off-the-shelf so anyone can build one easily, so some of the components are changing... stay tuned!


Mike Szczys wrote 5 months ago

Cool! Thanks for entering this in The Hackaday Prize. I can't wait to see the specifics on your data transfer for these. Scaling to masses of simple machines is a rabbit hole I want to see to the bottom!


Kenji Larsen wrote 5 months ago

Thanks for having the Prize to enter! You hit it right on the head. Localized node density will have a practical scaling limit. It's the same natural law that governs prices in Manhattan... If you look down that rabbit hole, it's turtles all the way down - I stop counting at seven.

