Build your own nervous system!

Similar projects worth following
NeuroBytes are small modular electronic neuron simulators that can be freely connected to form complex and biologically representative neural circuits. The NeuroBytes platform is currently in its tenth prototype generation, with approximately 1400 individual elements built to date, along with numerous accessories that help constructed networks interface with the real world. Joe and Zach formed NeuroTinker, LLC on 4/15/2015 to commercialize the concept, and received funding from the National Science Foundation under a Phase I (and now Phase II!) SBIR grant. Learn more at!

NeuroBytes is an Open Source Hardware project, with all hardware and firmware released under the GNU General Public License v3.0. Design files are available here:

Thanks for checking out NeuroBytes!

This Project Details section is current as of 10/10/2016. To see the previous edition, follow this link.

[img: NeuroBytes v0.91 boards getting programmed and tested, October 2016]

NeuroBytes® [we did register the name and the NeuroTinker logo, as discussed here] are open-source electronic neuron simulators designed to help students understand basic neuroscience. Each modular board includes five inputs (or dendrites) along with two outputs (its axon terminal) that allow easy connection with other identical boards. A rear-mounted RGB LED allows learners to quickly visualize various characteristics of each neuron, including membrane potential, operating mode, and firing rate. Additionally, analog sensors can be used for advanced biological modeling using our littleBits interface adapter and suitable analog input Bits (such as the light/dark sensor). Outputs can be connected to "muscles": 9g hobby servos driven through a NeuroBytes board switched to Motor Neuron mode.

[img: NeuroTinker co-founders Joe (left) and Zach (right) at San Diego Maker Faire 2015]

Since the concept's inception in mid-2012, with hardware development starting in February 2014, this project has gone through ten iterations and spawned the formation of NeuroTinker, LLC, a for-profit company primarily funded via a Phase I SBIR grant from the National Science Foundation. As of October 2016, we have submitted a Phase II application under the same prompt, joined EDSi, an edtech-focused accelerator, run a Minnesota-based manufacturing trial that resulted in prototype sales into both secondary and post-secondary education markets, and generally spent a great deal of time preparing the products and accessories for commercial sale.

Open Source Hardware

NeuroBytes is Open Source Hardware, as defined by OSHWA and standardized under their Certification Program (US000024). Our design files (C code for firmware and KiCAD files for hardware) are released under the terms of GPLv3 as detailed in our license file.


NeuroBytes GitHub Repository. The firmware and hardware files for this project are released under GPLv3 and the latest version will always live in the linked GitHub repo.

NeuroTinker company site. This is where we share additional information related to the NeuroBytes product line, such as operating instructions, kit purchase links, and social media account info. We have a public forum on the site that sees occasional use, and a spot to sign up for infrequent email newsletters that make every effort to consolidate important info into one location.

This project page. Yes, the project page you are currently reading. New project logs will appear somewhat frequently and tend to provide a good up-to-the-minute record of technical challenges and developments; conversely, if you want an exhaustive history of NeuroBytes you can start at the beginning. We've had a few great conversations in the log page comment sections (along with the main comment section), so don't hesitate to provide your input.

[img: jumbo-sized and functional NeuroBytes board built for World Maker Faire 2016.]


NeuroBeast v2 springy feet, ready to accept shoes. Use PLA, 0.2mm layer height, 20% infill, no supports or raft. Released under the terms of the GPL v3, as detailed in the LICENSE file.

Standard Tesselated Geometry - 246.37 kB - 05/27/2016 at 16:37



NeuroBeast v2 8-legged chassis *.stl file for printing. Use PLA, 0.2mm layer height, 20% infill, no supports or raft. Cams will likely need a bit of trimming to work reliably. Released under the terms of the GPL v3, as detailed in the LICENSE file.

Standard Tesselated Geometry - 712.97 kB - 05/27/2016 at 16:37



All NeuroBytes firmware and hardware are released under GPL v3.0. Read it!

plain - 34.32 kB - 05/15/2016 at 20:00


Izhikevich model.ods

Spreadsheet used to model Izhikevich's neuron dynamics, both in floating point and integer format.

spreadsheet - 159.10 kB - 01/28/2016 at 20:26


  • 1 × v0.91 board 0.062" FR4-TG170, 1 oz Cu, double-sided, ENIG finish, matte green solder mask, white silkscreen. Tab routed and 4x panelized. See github/zakqwy/neurobytes for Gerbers and KiCad files.
  • 1 × D1, RGB LED, bottom mount, 4-PLCC SunLED XZMDKCBDDG45S-9
  • 1 × D2, Schottky diode, SC-79 Panasonic DB2S30800L or equal
  • 1 × IC1, 5VDC LDO, SOT-23-5 STMicroelectronics LD2980ABM50TR or equal
  • 1 × IC2, ATtiny88, 28-VQFN Atmel ATTINY88-MMHR

View all 15 components

  • Making Izhikevich Neurons Fast on the STM32

    Patrick Yeon · 07/09/2017 at 05:29 · 0 comments

    Most of this post is going to be down in the embedded-software nitty-gritty, but I think I'll start you off with a little video. Here's a NeuroBytes board on my bench emulating a "chattering" neuron:

    Having written and verified an integer-based model of the neuron (as described in my previous post), I now need to check that it's still accurate on actual hardware and see that it runs at an acceptable speed. To do this, I split my code into a main loop and a library that does all of the neuron work, then wrote a second main file to set up all of the hardware peripherals and run a timing loop on the microcontroller:

    #include <libopencm3/stm32/rcc.h>
    #include <libopencm3/stm32/gpio.h>
    #include "./izhi.h"
    #define FLOATPIN GPIO15  /* PA15, raised while step_f() runs */
    #define FIXEDPIN GPIO5   /* PB5, raised while step_i() runs */
    int main(void) {
        /* the GPIO banks need their peripheral clocks enabled first */
        rcc_periph_clock_enable(RCC_GPIOA);
        rcc_periph_clock_enable(RCC_GPIOB);
        gpio_mode_setup(GPIOA, GPIO_MODE_OUTPUT, GPIO_PUPD_NONE, FLOATPIN);
        gpio_mode_setup(GPIOB, GPIO_MODE_OUTPUT, GPIO_PUPD_NONE, FIXEDPIN);
        gpio_clear(GPIOA, FLOATPIN);
        gpio_clear(GPIOB, FIXEDPIN);
        fneuron_t spiky_f;
        ineuron_t spiky_i;
        RS(&spiky_f);    /* initializers from izhi.h */
        RS_i(&spiky_i);
        for (int i = 0; i < 5000; i++) {
            if (i < 100) {
                step_f(&spiky_f, 0, 0.1);
                step_i(&spiky_i, 0, 10);
            } else {
                gpio_set(GPIOA, FLOATPIN);
                step_f(&spiky_f, 10, 0.1);
                gpio_clear(GPIOA, FLOATPIN);
                gpio_set(GPIOB, FIXEDPIN);
                step_i(&spiky_i, 10 * spiky_i.scale, 10);
                gpio_clear(GPIOB, FIXEDPIN);
            }
        }
        return 0;
    }

    I also added a simple Makefile to keep track of the build details:

    ARM_PREFIX = arm-none-eabi-
    OPENCM3_DIR = ./libopencm3
    CC = gcc
    LD = gcc
    RM = rm
    OBJCOPY = objcopy
    ARM_ARCH_FLAGS = -mthumb -mcpu=cortex-m0plus -msoft-float
    ARM_CFLAGS = $(ARM_ARCH_FLAGS) -I$(OPENCM3_DIR)/include -DSTM32L0
    #ARM_CFLAGS += -fno-common -ffunction-sections -fdata-sections
    ARM_LDLIBS = -lopencm3_stm32l0
    ARM_LDSCRIPT = $(OPENCM3_DIR)/lib/stm32/l0/stm32l0xx8.ld
    ARM_LDFLAGS = -L$(OPENCM3_DIR)/lib --static -nostartfiles -T$(ARM_LDSCRIPT)
    
    all: host.o stm.bin stm.hex
    
    host.o: izhi.c host.c
    	$(CC) izhi.c host.c -o host.o
    
    izhi.o: izhi.c
    	$(ARM_PREFIX)$(CC) $(ARM_CFLAGS) -c izhi.c -o izhi.o
    
    stm.o: stm.c
    	$(ARM_PREFIX)$(CC) $(ARM_CFLAGS) -c stm.c -o stm.o
    
    stm.elf: izhi.o stm.o
    	$(ARM_PREFIX)$(LD) $(ARM_LDFLAGS) izhi.o stm.o $(ARM_LDLIBS) -o stm.elf
    
    stm.bin: stm.elf
    	$(ARM_PREFIX)$(OBJCOPY) -Obinary stm.elf stm.bin
    
    stm.hex: stm.elf
    	$(ARM_PREFIX)$(OBJCOPY) -Oihex stm.elf stm.hex
    
    clean:
    	$(RM) *.o *.elf *.map *.bin

    With the code running on the native hardware, I fired up my trusty Saleae Logic analyzer to watch pins A15 and B5, which show the timing of my `step_f` and `step_i` loops, respectively.

    It looks like the fixed-point implementation runs 4x faster: ~0.4 ms per loop vs. ~1.67 ms for the floating-point version. This surprised me; I actually expected the floating-point version to be much, much worse! As a quick check, I re-compiled with `typedef int16_t fixed_t;` to see if 16-bit integers gave any improvement (I expected not, but especially with performance issues it's important to test one's assumptions), and the integer runtime went down to about 0.32 ms per `step_i` loop. Of course, the behaviour would be completely incorrect because of the integer overflow I was struggling with when trying to develop a 16-bit version, but it's good to have the data point: a series of 16-bit operations takes about 80% as long as a matching series of 32-bit operations.

    For completeness, I checked the two most obvious compiler optimization settings: `-Os` (optimize for size, with speedups that tend not to increase size) and `-O3` (optimize for speed). These led to floating/fixed point loop times of 1.54 ms/0.4 ms and 1.6 ms/0.37 ms respectively, so there aren't really huge gains to be had there. This time I'm not surprised; the work being done is straightforward and maps pretty directly to assembly code.

    So how fast do I need to get going? Zach didn't exactly give me a spec to hit, but in his log on implementing this for v07 he says he needs to update the LEDs every 30us, and has the model calculations broken up into 7 steps of no more than 29us each (there are also steps for reading...

    Read more »

  • Porting the Izhikevich Behaviour to the STM32

    Patrick Yeon · 06/17/2017 at 21:18 · 0 comments

    As Zach teased in the touch slider update, there's been work happening behind the scenes to implement the Izhikevich neuron model on the newer NeuroBytes boards. I may as well introduce myself: hi, I'm Patrick! By day I'm an electrical engineer, and I was drawn to help out with this project because it reminds me of Valentino Braitenberg's Vehicles, a book that helped kick-start my interest in robotics. I figured I would keep a log during my work for any budding programmers who want a peek into the work of an embedded software engineer.

    Straight from the source, we have a model of the neuron where

    v' = 0.04v**2 + 5v + 140 - u + I
    u' = a(bv - u)
    if v >= 30, then {v = c, u = u + d}
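
    A forward Euler step of size $\Delta t$ (in milliseconds) turns those derivatives into the update rules applied on every tick:

```latex
v_{n+1} = v_n + \Delta t \,(0.04 v_n^2 + 5 v_n + 140 - u_n + I)
u_{n+1} = u_n + \Delta t \, a (b v_n - u_n)
```

    with the reset $v \leftarrow c$, $u \leftarrow u + d$ applied whenever $v_n \ge 30$ (the code checks this before updating).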

    which I implemented in C on my host machine as

    #include <stdint.h>
    #include <stdio.h>
    typedef float float_t;
    typedef struct {
        float_t a, b, c, d;
        float_t potential, recovery;
    } fneuron_t;
    static void RS(fneuron_t *neuron) {
        // create a "regular spiking" floating point neuron
        neuron->a = 0.02;
        neuron->b = 0.2;
        neuron->c = -65;
        neuron->d = 2;
        neuron->potential = neuron->recovery = 0;
    }
    static void step_f(fneuron_t *neuron, float_t synapse, float_t ms) {
        // step a neuron through ms milliseconds with synapse input
        //   if you don't have a good reason to do otherwise, keep ms between 0.1
        //   and 1.0
        if (neuron->potential >= 30) {
            neuron->potential = neuron->c;
            neuron->recovery += neuron->d;
        }
        float_t v = neuron->potential;
        float_t u = neuron->recovery;
        neuron->potential = v + ms * (0.04 * v * v + 5 * v + 140 - u + synapse);
        neuron->recovery = u + ms * (neuron->a * (neuron->b * v - u));
    }
    int main(void) {
        fneuron_t spiky;
        RS(&spiky);  // initialize; without this the struct holds garbage
        for (int i = 0; i < 2000; i++) {
            if (i < 100) {
                step_f(&spiky, 0, 0.1);
            } else {
                step_f(&spiky, 10, 0.1);
            }
            printf("%f %f\n", i * 0.1, spiky.potential);
        }
        return 0;
    }

    I compile and run this with

    gcc izhi.c -o izhi
    ./izhi > rs.dat

    I use gnuplot to display this quickly, by starting it up and using the command

    plot 'rs.dat' with lines

    The output looks more or less like the outputs from the paper, at least to my eye.

    The next step was to implement the easiest fixed-point version I can think of, to see how well its output aligns with the floating-point version. The reason to do this is that floats mask a lot of complexity (their dynamic range protects me from rounding errors, overflow, and underflow, for example) that will become my problem to deal with when I work with fixed point arithmetic. Here is the added fixed-point math and an updated main function:

    typedef int16_t fixed_t;
    #define FSCALE 320
    typedef struct {
        // using 1/a, 1/b because a and b are small fractions
        fixed_t a_inv, b_inv, c, d;
        fixed_t potential, recovery;
    } ineuron_t;
    static void RS_i(ineuron_t *neuron) {
        neuron->a_inv = 50;
        neuron->b_inv = 5;
        neuron->c = -65 * FSCALE;
        neuron->d = 2 * FSCALE;
        neuron->potential = neuron->recovery = 0;
    }
    static void step_i(ineuron_t *neuron, fixed_t synapse, fixed_t fracms) {
        // step a neuron by 1/fracms milliseconds. synapse input must be scaled
        //  before being passed to this function.
        if (neuron->potential >= 30 * FSCALE) {
            neuron->potential = neuron->c;
            neuron->recovery += neuron->d;
        }
        fixed_t v = neuron->potential;
        fixed_t u = neuron->recovery;
        neuron->potential = v + ((v * v) / FSCALE / 25 + 5 * v
                                 + 140 * FSCALE - u + synapse) / fracms;
        neuron->recovery = u + ((v / neuron->b_inv - u) / neuron->a_inv) / fracms;
    }
    int main(void) {
        fneuron_t spiky_f;
        ineuron_t spiky_i;
        RS(&spiky_f);    // run both models side by side from the same start
        RS_i(&spiky_i);
        for (int i = 0; i < 5000; i++) {
            if (i < 100) {
                step_f(&spiky_f, 0, 0.1);
                step_i(&spiky_i, 0, 10);
            } else {
                step_f(&spiky_f, 10, 0.1);
                step_i(&spiky_i, 10 * FSCALE, 10);
            }
            printf("%f %f %f\n", i * 0.1, spiky_f.potential,
                   (float_t)(spiky_i.potential) / FSCALE);
        }
        return 0;
    }

    I'd like to highlight a few habits that usually make my life easier:

    typedef int16_t fixed_t;
    #define FSCALE 320

    I took an initial guess that I'd be using 16-bit...

    Read more »

  • Free (well, PCB-track-only) Touch Sliders!

    zakqwy · 05/23/2017 at 20:42 · 0 comments

      One of the many NeuroBytes boards we are currently developing is the Tonic Neuron. This board is sometimes referred to as the Izhikevich Neuron due to the origin of its algorithmic inspiration (and ensuing porting effort, first detailed here and here and continuing elsewhere, which is a discussion for another log post...). The Tonic Neuron is useful because it fires spontaneously, allowing the user to inject periodic signals into a larger NeuroBytes network. In addition to modelling actual tonic neurons in the body, these boards provide a compact (compared to ring oscillators) 'pacemaker' for robotics experiments such as the NeuroBuggy and the Invertebrate Locomotion Model.

      In an effort to reduce the BOM price, assembly steps, physical size, and general clunkiness of a potentiometer-based board, I spun up an experimental PCB to examine the possibility of using a linear touch slider as the user input for varying the Tonic Neuron's pulse rate:

      The slider itself is roughly 20mm long, and connects to a pair of GPIOs on the STM32L. Otherwise, the pins float entirely -- even the ground plane is separated by 2mm and constructed of a low-density grid, all in an effort to reduce parasitic capacitance to ground. When a user touches one of the pads, the capacitance to ground increases; since this effect is related to the contact area on the pad, using the triangular design shown above means the capacitance increase varies linearly as the user swipes their finger along the control. I should mention here that the design of these pads (and the underlying concept generally) can be found in many places, including ST's Touch Sense Design Guide. Other manufacturers have similar white papers; just keep in mind that they're usually written around a touch sensing peripheral, which my cheap-o-edition chips certainly do not include. It's okay, we don't need a fancy peripheral to handle touch input, especially if we aren't putting a barrier in front of the PCB itself:

      We are dealing with quite low capacitance here, to the point that connecting my 15 pF oscilloscope probes to de-tented vias dramatically changes the circuit's response. Measuring this change with the microcontroller is actually quite simple, and is explained nicely on the Arduino implementation website. Rather than using the unrolled loop method described in that code, I made use of the TIM21 input capture peripheral as follows:

      1. Set Touch Sensor 0 as an input and activate the pulldown resistor.
      2. Start TIM21 at clock speed and tell it to stop counting when the Touch Sensor 0 pin goes high.
      3. Activate the Touch Sensor 0 pullup resistor.
      4. Wait a few cycles to ensure that the pin went high.
      5. Record the TIM21 counter value.
      6. Repeat steps 1-5 for Touch Sensor 1.
      7. The touched location on the strip will be proportional to the difference between the two counter values, zeroed in the center.

      In practice, with the pullup values and the parasitic/body capacitance my setup produces, I found the total time differential to be around 1.5 microseconds. At the current (relatively low) processor clock rate, that gives me ~3 bits of position resolution, which should be more than adequate for our purposes. I suspect adding external components (particularly larger pullups/pulldowns triggered by other GPIOs) could increase the time differential a bit and allow one to eke out more resolution without a faster clock, but that would require more BOM lines!

      The code is here if you'd like to take a gander or use it in your own project (GPL v3); we're using libopencm3 along with a few hacks to get the TIM21 peripheral up, which we'll PR to the main project at some point once we have the library updated. All of the touch slider stuff is pretty well self-contained in main.c under the functions get_touch() and get_slider_position().

  • Cochlea Prototype!

    zakqwy · 04/13/2017 at 16:10 · 1 comment

    I posted these shots without much context to the NeuroTinker Instagram account. Hopefully this post fills in a few gaps.

    Above: sketchy prototyping techniques featuring a Teensy 3.5 and an Adafruit MAX4466 electret mic board. Advantage: I was able to cobble this together from stuff I had on hand in a few hours. Disadvantage: it's quite delicate (especially the LEDs).

    The cochlea is a spiral hollow structure in the inner ear that does the dirty work of converting sound into neural impulses. I won't get into detail on its workings here -- it's a fascinating system that is worthy of deep study -- beyond the parts relevant to this build.

    Above: a diagram of the cochlea from here.

    The cochlea is filled with a fluid that moves in response to sound entering the ear; the fluid in turn moves thousands of hair cells, each of which triggers nerve cells that send electrical signals to the brain. Due to their location along the fluid path (along with the varied stiffness of the basilar membrane), the hair cells respond to different frequencies.

    There are many ways we could simulate the ear, the simplest being a sound-pressure-level-to-firing-rate converter of some sort. At the other extreme, a faithful full-scale reproduction of the organ would have thousands of outputs tuned to distinct frequencies, along with an input method to simulate the extra hairs used to 'tune' the mechanical preamplification system that exists in the real ear. We opted for something in between: some frequency selectivity, but hopefully not too much bulk or cost.

    Above: kinda hard to see, and the harmonica skills are severely lacking. But it works.

    The prototype makes liberal use of @Paul Stoffregen's excellent #Teensy Audio Library -- in particular, the 1024-bucket FFT function. The code, shown below, doesn't do much beyond (a) grab specific buckets, (b) scale the FFT values, and (c) map those values to LED brightness. The output ports don't do anything yet, and eventually the brightness values will be converted into firing rates... but you get the idea.

    #include <Audio.h>
    #include <Wire.h>
    #include <SPI.h>
    #include <SD.h>
    #include <SerialFlash.h>
    // GUItool: begin automatically generated code
    AudioInputAnalog         adc1;           //xy=227,187
    AudioAnalyzeFFT1024      fft1024_1;      //xy=480,289
    AudioConnection          patchCord1(adc1, fft1024_1);
    // GUItool: end automatically generated code
    //  LED pin identities
    int pinLED0 = 35;
    int pinLED1 = 36;
    int pinLED2 = 37;
    int pinLED3 = 20;
    int pinLED4 = 21;
    int pinLED5 = 22;
    int pinLED6 = 23;
    //  FFT reading array
    float input_array[7] = {0,0,0,0,0,0,0};
    //  FFT max value array
    float max_array[7] = {0.01,0.01,0.01,0.01,0.01,0.01,0.01};
    //  FFT value scaling array
    float scale_array[7] = {0.08, 0.10, 0.14, 0.3624, 0.3068, 0.46, 0.7201};
    //  LED output array
    int led_array[7] = {0,0,0,0,0,0,0};
    void setup() {
      pinMode(pinLED0, OUTPUT);
      pinMode(pinLED1, OUTPUT);
      pinMode(pinLED2, OUTPUT);
      pinMode(pinLED3, OUTPUT);
      pinMode(pinLED4, OUTPUT);
      pinMode(pinLED5, OUTPUT);
      pinMode(pinLED6, OUTPUT);
    }
    void getSamples() {
      //  fills input_array[7] with averaged FFT bin readings; the lower bin
      //  bounds were lost when this page was archived, so contiguous ranges
      //  are assumed here
      input_array[0] =, 12);
      input_array[1] =, 15);
      input_array[2] =, 19);
      input_array[3] =, 25);
      input_array[4] =, 31);
      input_array[5] =, 37);
      input_array[6] =, 49);
    }
    void applyMinimum(float minimum) {
      //  cuts off the lowest part of the FFT result (noise)
      int i;
      for (i=0;i<7;i++) {
        if (input_array[i] < minimum) {
          input_array[i] = 0;
        }
      }
    }
    void keepMaximum() {
      //  keeps the maximum FFT values from input_array in max_array
      int i;
      for (i=0;i<7;i++) {
        if (input_array[i] > max_array[i]) {
          max_array[i] = input_array[i];
        }
      }
    }
    void scaleOutput() {
      //  scales led_array (0-255) based on scale_array
      int i;
      for (i=0;i<7;i++) {
        led_array[i] = (input_array[i] / scale_array[i]) * 255;
        if (led_array[i] > 255) {
          led_array[i] = 255;
        }
      }
    }
    Read more »

  • A New Design Direction (plus some excellent news)

    zakqwy · 03/23/2017 at 15:46 · 2 comments

      Since its inception in late 2014, this page has focused on the technical details of the NeuroBytes project. I try to cover everything from component selection to scaling challenges to firmware development to failed tangents; while I undoubtedly leave quite a bit out, from the engineering perspective I consider this running log to be reasonably comprehensive.

      Getting the product in front of our end users -- primarily high school students, but also younger (middle school) and older (college) folks -- has also been a priority, just one that we don't talk about quite as frequently. By the numbers, we've sold NeuroBytes prototype kits into a dozen high school classrooms and two college neuroscience departments, and @NeuroJoe and I (but mostly him) have brought the platform into many more formal and informal learning environments (as shown in the image above). These interactions have driven our iterations starting with v0.4 and continuing through the green v0.91 boards we built last year.

      Stuff We've Learned from Users

      This won't be anything close to a comprehensive list; I'm focusing on lessons learned that are immediately relevant to future product development plans.

      1. Mode switches are confusing. Using a single base NeuroBytes board to do everything -- integrate-and-fire neuron simulation, motor neuron functionality, etc -- is great for minimizing SKUs, but it's confusing to new users. Every time we teach the patellar reflex kit, a few students end up setting an upstream interneuron into motor mode by mistake. As motor neurons directly output a servo-ready PWM signal -- i.e. a 50 Hz, 5% duty cycle square wave -- downstream neurons see an extremely rapid series of pulses, saturating their membrane potential value and holding the LED in a constant bright white state. This problem will grow more pronounced as we add new operating modes.
      2. Detailed PCB art is worthwhile. Students look carefully at NeuroBytes and notice just about everything -- that means the details need to be physiologically accurate. The gold-rendered NeuroBytes logo catches the eye and suggests a relationship to neuroscience, but the stylized design isn't perfect from an educational perspective. Specifically, dendrites should be more branched and spindly; the axon terminal should be longer and clearly split to output connectors; we should illustrate myelination; etc.
      3. Cables are too short. The stubby cables overly constrain even the simplest circuits, while the long cables still aren't long enough to adequately separate battery packs or sensors from circuits.
      4. We need dedicated power connectors. I had to see this problem first-hand to understand its importance. We spend a good deal of time teaching directionality (i.e. information flows only from dendrite to axon), and then tell students that power connections can plug into any free terminal. I view this as a convenience, but students see it as an exception to a fundamental rule!
      5. Our ecosystem is novel and has educational value, but it's not comprehensive enough. This was a hard truth, and one that took a long time to really believe. NeuroBytes don't have a ton of replay value and can't teach much beyond basic neuronal function, simply because they don't do much beyond blink LEDs and twitch motors. While teachers have enthusiastically accepted the platform and students are interested, the product doesn't have staying power because it lacks the flexibility for truly free exploration. In other words, we need to provide more options for input and output modules.

      Product Development Plans

      [above: new Motor Neuron prototype, featuring three equal-weight dendrites; a 'scope/programming port; two servo headers; and a revised PCB outline and look.]

      Short answer: it's time for a massive prototyping sprint.

      We're going to design and build a number of dedicated NeuroBytes modules, including a few concepts that we haven't explored at all yet (such as a model of the cochlea). Generally speaking, the modules will be single-purpose...

    Read more »

  • NeuroBytes v0.92 Prototype

    zakqwy · 02/14/2017 at 22:45 · 0 comments

    Today, the NeuroBytes project takes its first baby step into the 21st century.

    Okay, ignore the extremely sketchy construction techniques. It's an STM32F0 Discovery board, chosen due to its extremely low cost (under $10) and built-in ST-LINK programmer. My modifications include an RGB LED tied into the output compare registers of TIM1, along with a bottom-mounted 'sled' that includes power/ground rails, a 3.3vdc regulator, voltage dividers for the three dendrite signal/type pairs, and a diode for the bridged axon signal/type lines:

    The decision to move to a 32-bit platform, and the selection of the STM32F0 specifically (-L0, potentially), wasn't one I made lightly. I am fairly comfortable with the ATtiny line at this point, a chip I have used for NeuroBytes since v0.4 in 2014. ATtinys are cheap, they have plenty of I/O, they use extremely simple open-source command line tools for programming, and the complete manual is under 250 pages. However, ATtiny88s (my current model of choice) doubled in price last year, meaning the STM32F0 is actually the economical option. And the other advantages of the new chipset -- enough hardware PWM channels for the RGB LED, vastly superior math capabilities due to speed and bit width improvements, online debugging capabilities using st-link and gdb, etc -- make the inconveniences (3.3VDC power, different development environment, etc) worthwhile.

    The board is wired up as follows:

    PB13 / TIM1_CH1N    RGB LED Red
    PB14 / TIM1_CH2N    RGB LED Green
    PB15 / TIM1_CH3N    RGB LED Blue
    PA6                 Dendrite 1 Type
    PA7                 Dendrite 1 Signal
    PC4                 Dendrite 2 Type
    PC5                 Dendrite 2 Signal
    PB0                 Dendrite 3 Type
    PB1                 Dendrite 3 Signal
    PC8 / TIM3_CH3      Axon Type/Signal (all)

    The non-standard dendrite/axon quantity isn't important -- it's just based on how much FR4 I wanted to dedicate to this build. The resistor dividers and power supply should make this prototype compatible with existing v0.91 stuff; the diode in the axon circuit is there to protect the processor in case they are hooked up to outputs by mistake.

    I've been playing around with libopencm3, an open-source firmware library for a variety of ARM Cortex microcontrollers. Getting the toolchain set up and working reliably wasn't particularly simple, but instructions in the official example repository eventually got me there. I wrote a simple program that uses the TIMER1 peripheral to PWM the LED at an extremely high rate (5 kHz) and resolution (10 bits per channel, gamma-corrected), producing some excellent effects:

    above: 1/3 second exposure at F16 (ish) and ISO 200, feat. plenty of board-waving.

    Code for the LED PWM test is shown below, including an unnecessarily large gamma correction lookup table. No further commentary at the bottom of the code block, so feel free to stop reading now (recommended).

    /*
     * This file is part of the libopencm3 project.
     *
     * Copyright (C) 2013 Chuck McManis
     * Copyright (C) 2013 Onno Kortmann
     * Copyright (C) 2013 Frantisek Burian (merge)
     *
     * This library is free software: you can redistribute it and/or modify
     * it under the terms of the GNU Lesser General Public License as published by
     * the Free Software Foundation, either version 3 of the License, or
     * (at your option) any later version.
     *
     * This library is distributed in the hope that it will be useful,
     * but WITHOUT ANY WARRANTY; without even the implied warranty of
     * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
     * GNU Lesser General Public License for more details.
     *
     * You should have received a copy of the GNU Lesser General Public License
     * along with this library.  If not, see .
     */
    
    /*
    	RGB LED PWM test using an STM32F0 discovery board.
    
    	Red		PB13	TIM1_CH1N
    	Green	PB14	TIM1_CH2N
    	Blue	PB15	TIM1_CH3N
    */
    
    #include <libopencm3/stm32/rcc.h>
    #include <libopencm3/stm32/gpio.h>
    #include <libopencm3/stm32/timer.h>
    #include <libopencm3/cm3/nvic.h>
    #include <libopencm3/cm3/systick.h>
    
    volatile uint8_t tick = 0;
    
    static const uint16_t gamma_lookup[] = {
    /*	gamma = 2, input range = 0-1023,...
    Read more »

  • Chris 'n Zach blab about NeuroTinker, SBIR, and other random crap

    zakqwy · 01/07/2017 at 16:47 · 1 comment

    If you're exceedingly bored, I had the distinct honor of being on @Chris Gammell's excellent radio show this week, The Amp Hour. Listen to the episode here--we cover a lot in a brief (hah!) 109 minutes, including quite a bit about the history and challenges related to this very project.

  • DIY Patellar Reflex (and a new intern!)

    Jarod White · 12/18/2016 at 01:00 · 0 comments

    I’m Jarod, a physics/electrical engineering student at the University of Minnesota, and I’m really excited to be a newly hired intern at NeuroTinker! My first project here has been to revamp the 3D-printable patellar reflex model. The original principles of the model have stayed the same: 3D print a leg and thigh piece and hook them up with four NeuroBytes, two servo motors, and a button to create a neurologically accurate model of the patellar reflex! My project was to make the 3D printing and assembly process as easy as possible so that users can focus more on learning neuroscience and less on intricate 3D printing and assembly.

    The patellar reflex is a great starting model for new NeuroBytes users because it uses only four NeuroBytes and demonstrates a neurological phenomenon that everybody recognizes (even an engineering student with no neuroscience experience :>). Two muscles cause the leg to involuntarily kick when the patellar ligament is struck: the quadricep and the hamstring. The quadricep sits on top of the thigh, and its motor neurons are told to contract and jerk the leg when the patellar sensory neurons send a signal. The hamstring sits on the bottom of the thigh and normally stays contracted to keep the leg from moving--but an inhibitory signal from the patellar ligament tells it to relax and let the quadricep do its thing.

    [image by Amiya Sarkar, available on wikipedia here]

    A button embedded in the leg model functions as the patellar tendon and is connected via an excitatory cable to a sensory neuron. An impulse from the button is enough to fire the sensory neuron and send an impulse through its axons. One axon is connected to the quadriceps motor neuron via an excitatory cable and causes the quadriceps to contract (i.e. move its servo motor) and pull the leg up. The other axon sends an inhibitory impulse to the hamstring motor neuron, passing through an interneuron on its way. Interneurons are located in the 'processing centers' of our body, like the brain and spinal cord, and routing signals through them causes a brief delay. In contrast, the signal from the patellar sensory neuron to the quadriceps motor neuron does not go through an interneuron, which lets the reflex happen more quickly and also makes it involuntary. The hamstring motor neuron relaxes (i.e. moves its servo) to let the quadriceps move the leg, but doesn't actually do any work in making the motion happen.

    Building this model not only teaches students how neurons in our body function, but also shows how a simple kicking motion hides a ton of complexity in how a network of neurons, talking to each other only through brief electrical pulses, can coordinate motion.

    The original patellar reflex model did this demonstration very well but was difficult to print and assemble. Printing was difficult because of the substantial support structure required, which demanded a laborious, and sometimes painful, removal process. Assembly was difficult and mostly improvisational, as the servos and button had to be mounted with tape and zip ties. The model's base, which used four screws as a sort of tripod, was also finicky and unreliable.

    My goal was to design a new model that could be printed without support structure and could be fully assembled easily and intuitively with only a couple parts from Home Depot or a local hardware store. Zach calls this design condition the 'Home Depot test'; for this and future kits it's a key design requirement, since it means users can create all sorts of cool models with NeuroBytes without having to buy costly custom-manufactured components. I also used a free CAD tool (Fusion 360) and will open source all of the design files so users can customize the model to their needs--and also contribute to a growing ecosystem of NeuroBytes models.

    The first major change I made was to switch the anatomical skeleton leg to a cartoonish leg with a shoe. The skeleton leg had lots of...

    Read more »

  • Tedious testing suggests the connector selection is sound!

    zakqwy12/08/2016 at 18:21 0 comments

    The JST GH connector has served me well since its initial selection during the development of NeuroBytes v0.8. Prior to choosing this connector, we ran a few usability tests with customers to see which designs were intuitive to use (some pigtails have been lost to history):

    The GH was the clear winner; its locking tab was easy to understand and simple to operate, and the connector's small width compared to some competitors was a bonus during the PCB design. Definitely better than the tiny SH series used in v0.5, v0.6, and v0.7.

    However, larger concerns loomed on the horizon. The JST GH datasheet omits connector lifetime: the closest the document offers is 'initial contact resistance' (< 0.030 ohms) and 'contact resistance after environmental tests' (< 0.050 ohms). These connectors, as with nearly every other PCB-mounted header, aren't designed to be consumer-facing, like a micro USB or 1/8" phono plug. Rather, they are designed for single use during assembly, and perhaps subsequent use during hardware upgrades or repairs.

    To date, we have built roughly 700 NeuroBytes boards that use the GH platform, and we have never observed an issue stemming from connector aging. Voltage drop across large networks is essentially negligible, and the additional 1 V headroom provided by the on-board LDO allows us to tolerate higher neuron-to-neuron contact resistance. On the flip side, a number of highly qualified individuals -- JST factory engineers, sales reps, independent electrical engineers, and others -- have raised valid concerns about connector aging, particularly as the failure mode would likely be quite frustrating to end users. No one likes intermittent connections.

    Time for some instrumented tests.

    Finally an excuse to eBay a four-wire resistance meter! The HP 3478A isn't the newest bit of kit, but this one seems to work well enough (although it is not calibrated). I also picked up a set of unbranded Kelvin clips and a Prologix GPIB-USB adapter. Using the ++read command and gtkterm, I can now continuously log the meter reading to a text file. No time stamps, but it sure beats manual data entry.

    The test setup, shown above, is simple enough: I soldered an unused JST GH header to a slab of FR4 and assembled a matching connector which I also soldered down via a short pigtail. Then, I attached two additional leads for the Kelvin clips. The FR4 is clamped down, meaning the entire setup doesn't physically move during testing. It's not perfect -- the milliohm measurement includes the resistance of a bit of FR4, four solder joints, and ~20cm of wire -- but it's good enough to see how the contact resistance changes over time.

    Next, I turned on a few YouTube teardown videos and manually cycled the connector 2,322 times. Each time I mated up the header, I left it connected long enough to get a handful of readings; during the data analysis, I dumped the first reading (it ranged from 10x to 1000x the stabilized measurement) and averaged the rest for each point:

    [click the graph to see a larger image -- you'll likely have trouble reading the axes inline... ]

    As you can see, the connector resistance is quite consistent for the first few hundred cycles -- roughly 40 milliohms. Near insertion 300, the resistance variability starts to increase, although in the vast majority of cases it still remains below 100 milliohms. Subsequently, the measurement stabilizes again and stays fairly constant for the remainder of the test.

    When I fitted a line to the data, the slope was almost exactly zero -- a great sign, as it suggests the connector contact resistance isn't gradually increasing. Additionally, the variability seemed to stabilize halfway through the test. In order to better quantify this variability, I put together a histogram of the data:

    As you can see, measurements over 100 milliohms aren't common. The distribution is also skewed rather than normal -- the median value is 0.049 ohms, while the mean is 0.057 ohms, pulled up by a tail of higher readings.

    It's worth mentioning here that the real-world impact of contact...

    Read more »

  • Hot Garbage and Iteration

    zakqwy11/14/2016 at 17:20 5 comments

    Hot Garbage, as in poor product design decisions that I am finally fixing. I'm talking, of course, about the first NeuroBytes v0.8 power supply:

    I covered the panicked construction of these power supplies in a previous log. Briefly, I needed a 5 VDC power supply that was somewhat self-contained and JST GH compatible, so I bodged the boxes shown above together at the eleventh hour. They were Hot Garbage for a number of reasons; in particular, the slide switch was horrifically unreliable and had a nasty habit of pushing into the case after a few uses.

    The move to on-board LDOs with NeuroBytes v0.91 meant the power supply boards became a good deal less complex. However, I still made some bad decisions with subsequent iterations:

    Above: Still not great. The 9 VDC snap doesn't sit flat on the 4xAA pack, which looks bad. Also, someone could easily hook up a 9 VDC battery, which wouldn't be great for any number of reasons. No, I won't take a better picture. Sorry.

    Above: Two similar minimalist options. I tried to make the board as small as possible to minimize cost. Yes, the top right iteration was backwards, so I could either turn the board around or flip it upside-down. Both of these designs were simple enough, but they're also prone to damage when stored loose in a box with heavy batteries installed. Also, the asymmetry means they don't sit flat on the table. A quick fix but in need of another revision.

    Above and below: Many important changes. First, I located a nylon rivet on McMaster that _precisely_ fits the two holes in the battery pack, and is sized right for 1/16" FR4. The rivets are incredibly secure and allowed me to expand the circuit board quite a bit. Secondly, I inset the JST GH connectors a bit; when cables aren't plugged in, they're somewhat protected by the board overhang and are less likely to break off if the battery pack is stored loose in a box. Thirdly, I included test switches for each power jack (along with the now-required resistor divider to avoid sending 6 VDC to the ATtiny), greatly improving the usability of the battery pack--now it can fully control a connected central pattern generator circuit. Finally, the pair of stick-on polyurethane bumpers prevent the switches from breaking off. We'll see how well they stay adhered; this power supply is going to get the shoulder bag pocket treatment as a real stress test.

    Next steps? Probably one more iteration; I may extend the circuit board a bit to allow more silkscreen documentation (beyond our logo which I like showing off at every possible opportunity). I need to fix a few minor dimensional issues as well, including hole sizes and general alignment. And I'm not convinced the overhang design is the right move; I may keep the board flush with the battery pack and put the connectors on the bottom, along with another pair of bumpers.

View all 70 project logs

  • 1
    Step 1

    Get your hands on some NeuroBytes v04 boards.

    You can grab the gerber files from the GitHub repo, and either etch the boards yourself or send them off to a place like OSHpark or DirtyPCBs for fabrication. I also have a number of individual boards left over from our production run, since we had a panel fail E-testing due to a single bad board (meaning the other 31 elements were still good). Drop me a line and I'll send a few your way, as we're pretty much done making v04 devices at this point.

  • 2
    Step 2

    Obtain all parts listed in the 'Components' section of this page.

    • Make sure you get the ATtiny44A model that comes in the 14-pin SOIC package.
    • You might be able to substitute a different RGB LED, but make sure it has the same pinout as the cheap units I used.
    • The passives are all pretty run-of-the-mill; you could probably swap different 0603 pulldown resistors in if you have something else on hand. The other resistors are specific to the various LED channels, so make sure they're correct.
    • The larger filtering capacitor only seems to be necessary if you're directly powering servos using the Motor NeuroBytes firmware.
    • Make sure you get the correct connector headers; the TE Connectivity devices I specified use 2mm spacing, which will seem small if you're used to 0.1" stuff.
  • 3
    Step 3

    Apply solder paste.

    We follow RoHS guidelines (the bare boards, even at the prototype level, were always destined for a classroom, so avoiding lead seemed prudent); as such, we use Chip Quik no-clean lead-free paste from a syringe. You can use pretty much anything here; just make sure your board of choice is rated for the required reflow temperature.

    Applying solder paste with a trimmed-down matchstick seems to work pretty well. I generally run a line of paste along each side of the microcontroller, letting the solder mask work its magic to avoid bridges. The toughest part seems to be the LED; its leadless nature means that if the quantity of solder on each of the four pads isn't consistent, you might miss one of the terminals and end up with an RG (or RB or GB) device.

    Or you could be smart and make a damn stencil. I hear kapton stencils are cheap. If you do this let me know and I will send you mad Internet kudos.

View all 9 instructions

Trent Sterling wrote 04/10/2017 at 18:02 point

Wow! Awesome project! I've never seen such a wicked PCB design before!


zakqwy wrote 04/10/2017 at 18:22 point

Thanks! Inkscape + KiCad is a killer combo!


Trent Sterling wrote 04/10/2017 at 18:24 point

Eventually I'll make it that far! Thanks for the tool suggestions!


Pure Engineering wrote 09/14/2016 at 08:28 point

Nice Work. I have a similar project but more sensors.

you should check out the connector system that I'm using to make the modules. It's simpler than crimping wires to connect everything together. I would love to see if somehow you made a compatible neuron module. Let me know if you are interested.


zakqwy wrote 09/14/2016 at 13:16 point

very cool stuff! it looks like your platform is intended for fast prototyping of sophisticated IoT systems and sensor networks; a bit different from our goals but still quite interesting. I grabbed the datasheet for your edge connector, it's something we have considered but not thoroughly investigated. definitely appealing as the on-board headers are a BOM cost driver for us.

Compatibility with your platform would be neat, but based on the sophistication of your inter-board communication protocol i'm guessing it would be a challenge. I'll keep that in mind for the future, but at this point we've got a lot of other stuff on our plate as we continue to work towards commercialization.


zakqwy wrote 09/14/2016 at 15:46 point

So one thought--I read through the AVX card edge connector datasheet you linked to in #PURE modules and they're only spec'd for five insertion cycles. How have you gotten around this? Did you do your own testing and determine their rating was conservative, or did you design around an assumed contact resistance growth rate? I've run into the same thing with the JST GH platform I've opted to use, so I'm curious to know how you solved the problem.


Pure Engineering wrote 09/14/2016 at 19:29 point

I have tried out the connector. It seems to last about 10+ connections for a solid connection. after about 100+, it still works but it gets loose, but maintains an electrical connection. I just wouldn't use the connector as a mechanical support. 

The connector is low cost, so by replacing the part you can keep going. 
The pcb does develop some scratching over time. but even after 100+ insertions doesn't cut through the copper. 
I somewhat compare it to the old school NES cartridges.  So I'm guessing after a couple years there might be some issues if you keep reusing the same connector. but swapping to a new one restores it.

Also, I think their 5 insertions spec is to maintain a 2.5A current capability. Since in this case, we are typically doing IO, or a few mA for power we are good. 


jaromir.sukuba wrote 09/12/2016 at 10:11 point

I read all the project logs just now. Amazing amount of work. Though I have to admit neuroscience is not my cup of tea and I'm not exactly sure how the simple neurons can do something useful - but I followed the project to find out soon :-)


zakqwy wrote 09/12/2016 at 16:39 point

Thanks for the kind words, @jaromir.sukuba

You bring up an excellent point, and to be honest one I've struggled with for some time now. When @NeuroJoe and I started our collaboration almost two years ago, the fundamental decision was made _not_ to focus on making the platform as sophisticated and close to biological reality as possible; that means we aren't concentrating on important concepts such as neurotransmitters, neuroplasticity, backpropagation, and so forth. Rather, our model is designed to engage a younger group of students in _basic_ neuron and neuroscience concepts, such as action potential thresholds and dendritic weighting. In other words, we're balancing sophistication with cost and usability, something that doesn't come naturally to me (as you may have noticed in some of the highly tangential logs buried within this project).

As we start to have Real Customers--we've sold a few dozen prototype kits at this point to various institutional partners--we'll solidify the fundamentals of the platform and hopefully start building out more sophisticated features as time and capability allow. So yeah, stay tuned, I suppose  :-)


jaromir.sukuba wrote 09/14/2016 at 20:33 point

Again, I must admit I'm not very familiar with neuroscience, so perhaps my questions will be a bit trivial, but perhaps I'm not the only one wondering:

* If the neurons of your project are not very close to reality (I fully understand this decision - we just can't have it all at once), how far is it from reality? I know it is hard to quantify, so another question:

* Some animals do have low neuron count, like Caenorhabditis elegans has 302 of them (I'm pretty sure you mentioned in logs). Is it possible - having 302 pieces of Neurobytes (that is a lot of hardware, but achievable) - to "emulate" its behavior? Or the advanced features of neurons are needed to "emulate" the animal?

* How does one real neuron "know" it's part of memory, decision system or sense processing, and how does it know what to do with inputs? Are real neurons "preprogrammed" somehow by nature for particular functions, or are they all the same?

* Could you add some sensors and actuators to bring inputs/outputs to the neurons? I think of "emulating" senses and muscles.


zakqwy wrote 09/15/2016 at 15:11 point

No worries @jaromir.sukuba, these are great questions. Not trivial at all, they get at the core compromise we have to make to build a commercial product. I'm responding to each question in the same starred format, so hopefully that keeps this wall of text somewhat clear.

* One of the reasons neuroscience is usually taught at a higher level is that it's just not a simple subject; the internal function of a neuron is inherently complex, involving a number of electrochemical phenomena occurring simultaneously. For example, the 'membrane potential' of a neuron can be measured as a voltage potential across the cell membrane, and it's dependent on the relative concentration (i.e. inside and outside the cell) of several different ions: calcium, sodium, potassium, and chloride. Each of these ions travels through ion channels permeable to that particular ion, meaning the total membrane potential is dependent on many different specific mechanisms. Since it's common practice to quantify the total membrane potential as a single voltage value (rather than figuring out individual ionic concentration differentials), we do the same in our platform--membrane potential is represented by a single integer. In the end, we've adjusted things like our decay algorithm so the waveform is similar to a _typical_ neuron; the easiest way to see this is to compare our NeuroBytes oscilloscope trace photo:

... to actual lab-recorded neuron membrane potential traces, such as this one:


The fast pulses shown in the last figure are emulated using our Izhikevich-based analog input mode (not yet hooked up to the 'scope, so this is from an early calculation spreadsheet):

So again, certainly not _perfect_, but close enough to give students an accurate picture of what's going on inside a neuron. Without the 'scope the membrane potential is simply represented by LED color, so exact waveform accuracy isn't a huge deal.

* The work folks like OpenWorm ( have done to translate C. elegans' neural structure into an easy-to-understand and open-source project is truly amazing. We've gotten this question a few times in the last several years and unfortunately my answer hasn't changed; it would be _amazing_ to fully emulate the worm (especially now that I'm a few weeks from having 500 v0.91 boards in hand), but I just haven't had a chance to explore the possibility. My _guess_ is that at least some of the neurons they have mapped are dramatically different from the standard NeuroBytes firmware, meaning they would need to be reprogrammed with a custom firmware set. Additionally, I'll bet some of the cells have more than 5 dendritic connections, so a 'mega NeuroBytes board' would likely need to be built that has a huge number of inputs. Not a huge deal, would just require a new PCB design and a higher I/O count microcontroller (or some clever multiplexing). TL;DR: C. elegans is on the long list of cool projects, but is not a priority currently.

* From my understanding, memory is formed by (a) structural network changes (i.e. new axon-to-dendrite connections forming and old ones decaying); and (b) individual dendritic weighting changing as connections are strengthened and weakened. I'll defer to an actual neuroscientist ( @NeuroJoe, want to jump in on this? ) regarding the initial state of a newly formed neuron. The current firmware keeps weighting constant (but different for each dendrite), but the physical network is easy to modify by a student. The next firmware iteration will also have a simple method implemented for changing individual weightings on the fly to 'tune' a neural circuit; however, at this point we haven't gone down the path of learning, neuroplasticity, backprop, etc.

* New input and output methods are on our short list. On the input side, I've developed a littleBits-->NeuroBytes adapter that allows one to use the full range of littleBits analog sensors (light, flex, sound, etc); however, currently this implementation requires a specific firmware flash. Again, the next firmware iteration will integrate this operating mode into the standard runtime and include some degree of adjustment; the lack of 'zero' and 'span' settings made a lot of the LB analog sensors fairly useless as they didn't exhibit a wide enough voltage swing to produce a useful (i.e. changing) output. For outputs, the servo operating mode is pretty well established but we're considering audible and linear actuation schemes too. The servo mode actually works pretty well with continuous rotation servos (see NeuroBuggy).

Make sense? Apologies for the text wall. Feel free to reply again on this thread if you have more questions or want additional clarification.


jaromir.sukuba wrote 09/22/2016 at 09:59 point

Thank you very much for the response; it contains a lot of information and leaves me with little to ask. The text wall is absolutely appropriate, there is not much sense in scientific haiku ;-)

Apart from your response, I did my homework and studied a bit about neurons and the nervous system - that's why the late reply. I found a few books in my home library (my father worked in the medical field) related to this topic too. I must admit some things start to make sense now, but the deeper concepts are still somehow blurry. Perhaps that's why I'm not a neuroscientist, but a hardware designer/programmer :-)

I feel like I'm going to stay at this level of understanding for now and watch how the project turns out. The amount of work put into this is really amazing.


Ric Johnson wrote 05/13/2016 at 12:29 point

How close are you to modelling C. elegans with NeuroBytes? Is this far away or a logical step in the near future?


zakqwy wrote 05/13/2016 at 12:50 point

Up until now we haven't had the 302 neurons needed to model the worm, but that will change on our next production run. I also haven't closely studied any of the specific requirements for that model: for example, I'm not sure if a simple integrate-and-fire device can accurately model the whole organism, and I'm guessing at least a few neurons would need more than 5 dendrites.

Having said all of that, the OpenWorm guys made brief contact with us when we started showing off v0.4 and wanted to know the same thing. Right now our priority is to get the many ducks in a row for scaling production beyond toaster oven quantities (i.e. low hundreds) so we can sell a few kits and really prove out our business case. However, our next model will move beyond the patellar reflex; we're going to build out a simplified modular invertebrate motion platform based around the repeating elements in C. elegans and others. So, stay tuned!

Thanks for the comment! Any suggestions to get us closer to modeling the worm are welcome!


maehem wrote 12/19/2015 at 20:23 point

Cool PCB design!   As for the googly eyes, I believe it was Dan at OSH Park who was affixing googly eyes to everything.


Peter McCloud wrote 12/19/2015 at 17:49 point

The new boards look really slick. Congrats on the NSF funding!


zakqwy wrote 12/19/2015 at 17:53 point

Thanks @Peter



Peter McCloud wrote 08/27/2015 at 00:47 point

Congrats on becoming a best product finalist. Good luck winning the best product prize. Keep up the great work!


zakqwy wrote 08/27/2015 at 02:58 point

Thanks @Peter!!!


Jarrett wrote 04/09/2015 at 18:13 point

More research material:

Harvard free online neuroscience courses. At least parts I and II discuss how neurons work.


zakqwy wrote 04/09/2015 at 18:18 point

Thanks! I've been picking my way through an introductory neuroscience textbook, but a free open class would be a great additional resource. I'll give the Harvard course a look; I've never done a MOOC but this seems like a good starting point.


Dom wrote 01/07/2015 at 13:12 point

Perceptrons.. I like.

Looking forward to some data :]


Paul Bristow wrote 01/05/2015 at 16:39 point

Hi, would an affordable open source hardware neural network be of any use to you guys?


zakqwy wrote 01/05/2015 at 17:08 point

Hi Paul--definitely an interesting project, but it's not terribly relevant here. Having discrete neuron elements is a key part of this effort, and the BrainCard stuffs a bunch of them on a single chip. Either way, thanks for posting.


Stryker295 wrote 12/23/2014 at 03:40 point

Is there a video of the SketchUP plugin in action, by any chance?


zakqwy wrote 12/23/2014 at 04:49 point

There isn't, sadly. I'll ping Andrew and see if he has a clip of the program running. Or better--source code to post!


Stryker295 wrote 12/25/2014 at 01:40 point

Awesome! You mentioned the plugin and I only just went looking for it but couldn't find a link, ah well :P


AltMarcxs wrote 12/15/2014 at 18:21 point
You probably know about this:
I had a thought about many STM32s at 120 MHz/96 KB ($4.72/pce) running a 32-bit version of the above on one board.
But now with the ODROID-C1 at $38 for a quad-core 1.5 GHz / 1 GB, I've got another goal.


zakqwy wrote 12/15/2014 at 19:00 point
I didn't come across that paper in my research--thanks for posting, very interesting to see how they implemented complex floating point functions using extremely basic microcontrollers. The PIC18F45J10 seems a bit more sophisticated than the ATtiny44A; they're a bit more than twice the price in a QFP configuration at 100+ quantities. Still amazing though, since it looks like their ultimate goal is to emulate an entire network on a single chip!

My project is probably a bit more constrained by cost than you have in mind for your STM32 project; each Neuron's entire BOM cost including external accessories is under $4.72. The intention isn't to make any individual element terribly complex; instead, each Neuron should be minimally capable but extremely cheap and easy to use/interconnect on the fly.


K.C. Lee wrote 09/22/2016 at 10:30 point

Or you can use the cheap STM32F030F4 at $0.44 a piece from China.  That's what I am using on my HaD projects this year.  Only a few times or so slower in integer math, but at 1/10 the 120 MHz price (it doesn't have an FPU or a single-cycle multiplier).  Still beats the heck out of 8-bit chips.


zakqwy wrote 09/23/2016 at 13:00 point

It's something we've been considering, @K.C. Lee. I do enough 16-bit math on the ATtiny that I'm ready to graduate to something more advanced. I picked up a tube of 'em the other day and have been putzing around with libopencm3; it seems like a good FOSS option.


zakqwy wrote 11/10/2014 at 15:12 point
Thanks Bruce! Your project has some great background info--particularly related to membrane potential and recovery parameters. It looks like I need to study the Izhikevich model.


Stryker295 wrote 11/10/2014 at 04:36 point
This is definitely interesting! I'm curious to see where it goes.


zakqwy wrote 11/10/2014 at 12:20 point
Thanks! It's been a fun project; right now I'm in the documentation catch-up phase, so you'll probably see a flurry of updates that cover the system background.


Stryker295 wrote 12/08/2014 at 02:49 point
Random question that just popped into my head: Could this be easily ported to run so that instead of individual uCs emulating the neurons, individual pixels on a computer screen could fill in for that?

Just kinda envisioning a version of this written in javascript or something that could be run in-browser, or perhaps even downloaded and run locally. Random thoughts!


zakqwy wrote 12/08/2014 at 14:46 point
Great point. The system totally could be emulated--that's actually what Andrew's program does, although his software was written as a Ruby-based plugin for Sketchup. I need to get him to post his repo so folks can play around with it.

We were keen on having a physical toy, something one could easily manipulate on a tabletop, away from a computer or smartphone. We considered implementing some type of centralized system, maybe using serially addressable LEDs and a main controller, but realized that changing the physical layout of the system would be cumbersome--every time you moved axons about, you'd need to copy those changes into software so everything gets displayed properly. Plus, you'd be tied in with a computer.

One interesting side effect of completely localized control, combined with the fact that I skipped crystal oscillators to save cost: each microcontroller's clock rate is slightly different, so Neuron behavior has an element of pseudo-randomness. This could be emulated in software too, of course, but it makes the toy feel a bit more organic.

