Build your own nervous system!

Similar projects worth following
NeuroBytes are small modular electronic neuron simulators that can be freely connected to form complex and biologically representative neural circuits. The NeuroBytes platform is currently in its billionth prototype generation, with approximately 1500 individual elements built to date, along with numerous accessories that help constructed networks interface with the real world. Joe and Zach formed NeuroTinker, LLC on 4/15/2015 to commercialize the concept, and received funding from the National Science Foundation under a Phase I (+II!) SBIR grant.

NeuroBytes is an Open Source Hardware project, with all hardware and firmware released under CC-BY-SA 4.0. Design files (hardware, software, firmware, etc) are available here:

Thanks for checking out NeuroBytes!

This Project Details section is current as of 9/25/2017. To see the previous edition, follow this link.

[Above: 3+ years of NeuroBytes evolution. The new boards have different connectors, different microcontrollers, different graphics, different LEDs, different communication protocols, and ... well, different just-about-everything. Below: the current NeuroBytes ecosystem as rendered from the original PCB designs, all of which have been prototyped and some of which have been produced in 100+ board quantities. Left to Right: Interneuron, Tonic Neuron, Motor Neuron, Touch Sensory Neuron, Pressure Sensory Neuron, Rod Photoreceptor, Battery Pack. Not shown: Vestibular System, Cochlea, NID, Braitenberg Vehicle chassis, Patellar Reflex model, etc...]

NeuroBytes® are open-source electronic neuron simulators designed to help students understand basic neuroscience. Each module can be freely connected with others to form complex and biologically representative neural networks. On-board reverse-mounted RGB LEDs indicate current state: excited, inhibited, firing, learning, etc. The boards are based on the STM32L0 ARM Cortex M0+ microcontroller and the libopencm3 hardware library, and communicate via a novel board-to-board networking protocol that allows central data collection and global message broadcasting.

[img: NeuroTinker co-founders Joe (left) and Zach (right) at San Diego Maker Faire 2015]

Since its inception in mid-2012 and hardware development starting in February of 2014, this project has gone through many iterations and spawned the formation of NeuroTinker, LLC, a for-profit company primarily funded via Phase I and Phase II SBIR grants from the National Science Foundation. We are currently on track to commercially launch the product via a crowdfunding campaign before the end of 2017, and will have products for sale on the general market in early 2018.

Open Source Hardware

NeuroBytes is Open Source Hardware, as defined by OSHWA and standardized under their Certification Program (US000024). Our design files (C code for firmware and KiCAD files for hardware) are released under the terms of GPLv3 as detailed in our license file.


NeuroBytes GitHub Repository. The firmware and hardware files for this project are released under GPLv3 and the latest version will always live in the linked GitHub organization. Individual boards are listed under their respective names (Interneuron, Motor Neuron, etc).

NeuroTinker company site. This is where we share additional information related to the NeuroBytes product line, such as operating instructions, kit purchase links, and social media account info. We have a public forum on the site that sees occasional use, and a spot to sign up for infrequent email newsletters that make every effort to consolidate important info into one location.

This project page. Yes, the project page you are currently reading. New project logs will appear somewhat frequently and tend to provide a good up-to-the-minute record of technical challenges and developments; conversely, if you want an exhaustive history of NeuroBytes you can start at the beginning. We've had a few great conversations in the log page comment sections (along with the main comment section), so don't hesitate to provide your input.

[img: jumbo-sized and functional NeuroBytes v0.91 board built for World Maker Faire 2016.]

6mm x 6mm castellated daughterboard for mounting a standard common-anode 0606 RGB LED in place of the fancy reverse-mount SunLED version we prefer.

Zip Archive - 217.87 kB - 05/04/2018 at 17:31



NeuroBeast v2 springy feet, ready to accept shoes. Use PLA, 0.2mm layer height, 20% infill, no supports or raft. Released under the terms of the GPL v3, as detailed in the LICENSE file.

Standard Tessellated Geometry - 246.37 kB - 05/27/2016 at 16:37



NeuroBeast v2 8-legged chassis *.stl file for printing. Use PLA, 0.2mm layer height, 20% infill, no supports or raft. Cams will likely need a bit of trimming to work reliably. Released under the terms of the GPL v3, as detailed in the LICENSE file.

Standard Tessellated Geometry - 712.97 kB - 05/27/2016 at 16:37



All NeuroBytes firmware and hardware are released under GPL v3.0. Read it!

plain - 34.32 kB - 05/15/2016 at 20:00


Izhikevich model.ods

Spreadsheet used to model Izhikevich's neuron dynamics, both in floating point and integer format.

spreadsheet - 159.10 kB - 01/28/2016 at 20:26


  • 1 × v0.91 board 0.062" FR4-TG170, 1 oz Cu, double-sided, ENIG finish, matte green solder mask, white silkscreen. Tab routed and 4x panelized. See github/zakqwy/neurobytes for Gerbers and KiCad files.
  • 1 × D1, RGB LED, bottom mount, 4-PLCC SunLED XZMDKCBDDG45S-9
  • 1 × D2, Schottky diode, SC-79 Panasonic DB2S30800L or equal
  • 1 × IC1, 5VDC LDO, SOT-23-5 STMicroelectronics LD2980ABM50TR or equal
  • 1 × IC2, ATtiny88, 28-VQFN Atmel ATTINY88-MMHR

View all 15 components

  • it's okay to be paranoid about single-source components

    zakqwy05/04/2018 at 17:56 0 comments

    @Davor recently posted a request for an alternative to the SunLED reverse-mount RGB LED we used on nearly all NeuroBytes boards. @Jarod White actually had the same thought when he first joined the company, and I created a 6mm x 6mm castellated daughterboard that allows one to reverse-mount a common-anode 0606 RGB LED (I used the LTST-C19HE1WT from Lite-On) in place of the specified device. I had to dig this one out of the archive; I only tested it on an old-logo-design ARM prototype:

    The boards look a bit different but the LED effect is the same. And I included a sweet concentric circle silkscreen pattern on the back for good measure. I think I did forget a polarity alignment mark though.

    Design files for the daughterboard (KiCad and Gerbers) are in the Files section. Uploading the design kicks back an error on OSHpark as the boards are just below 0.25" x 0.25", but it's simple enough to pattern the design in KiCad so several are fabricated at once. And now that the SunLED devices appear to be out of stock (at least on Digi-Key), this design may become ever more relevant.

  • Inter-neuron comms, the Network Interface Device...

    zakqwy03/22/2018 at 22:59 0 comments

    ... and other cool stuff @Jarod White came up with but hasn't had time to talk about. Sorry to jump the gun, friend, but feel free to comment if you have anything to add.

    Revisiting the Oscilloscope

    Sometime in early 2016, I created the NeuroBytes Oscilloscope, a prototype device that allowed one to view the real-time membrane potential of a connected NeuroBytes board:

    I built two Oscilloscopes, one for me (below) and one for @NeuroJoe (above). The devices, based around the #Teensy 3.0 & 3.1 & 3.2 & 3.5 & 3.6 with a portrait-style 320x240 LCD, worked with ATtiny88-based NeuroBytes v0.8. The boards ran modified firmware that bitbanged UART data via one of the dendrite connections. Note the baud rate indication on the LCD in the image above; the bit-banging wasn't carefully clocked, so I tweaked the Teensy's UART speed to get good data (in this case, at 765 baud).
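    The bit-bang itself is just a standard 8N1 serial frame driven by delay loops. The v0.8 firmware isn't excerpted here, but the framing can be sketched on a host machine with an array standing in for the dendrite pin (the pin handling, baud timing, and delays are all omitted or assumed):

    ```c
    #include <stdint.h>
    #include <stdio.h>

    #define FRAME_BITS 10  /* 8N1: start bit + 8 data bits + stop bit */

    /* Record the line level for each bit period into an array instead of
     * toggling a real pin; on hardware each entry would be one delay-timed
     * write to the dendrite GPIO. */
    static void uart_tx_byte(uint8_t byte, uint8_t out[FRAME_BITS]) {
        int n = 0;
        out[n++] = 0;                       /* start bit: line pulled low */
        for (int i = 0; i < 8; i++)
            out[n++] = (byte >> i) & 1;     /* data bits, LSB first */
        out[n++] = 1;                       /* stop bit: line idles high */
    }

    int main(void) {
        uint8_t frame[FRAME_BITS];
        uart_tx_byte(0xA5, frame);
        for (int i = 0; i < FRAME_BITS; i++)
            printf("%d", frame[i]);         /* the waveform, one level per bit period */
        printf("\n");                       /* prints: 0101001011 */
        return 0;
    }
    ```

    Since the sender's delay loop sets the effective baud rate, any drift in it shows up as exactly the kind of non-standard rate (765 baud) the Teensy had to be tuned to.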

    We didn't anticipate the reception these devices would get from potential customers. At Maker Faires, on college campuses, and in high school classrooms, the consensus was that this was our missing piece. Viewing real-time membrane potential allowed users to fully grasp the meaning of the LED color on each NeuroBytes board. Students immediately picked up concepts like temporal and spatial summation, dendritic weighting, action potential thresholds... the list went on. The Oscilloscope had the makings of the NeuroBytes 'killer app'.

    Platform Change

    After NeuroBytes v0.91 (the green boards), we decided to change microcontroller platforms from the ATtiny88 to the STM32L0. Part of this was performance-driven; our decay algorithm at the time made use of 16-bit variables on an 8-bit micro, something that could cause issues with our use of pin-change interrupts if we weren't careful. And the ATtiny88 lacked three independent timer outputs, meaning the RGB LED had to be PWM'd manually. This led to all sorts of code optimization tangents that never really eliminated LED flicker and significantly limited algorithm complexity.

    [above, swinging a long-exposure camera at NeuroBytes v0.4 (left) and v0.8 (right). Flicker got better, but 160 Hz still ain't good enough for an RGB LED running at 10% brightness.]
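    As a rough sanity check on that 160 Hz figure: with 8-bit software PWM, the frame rate is clock / (steps × cycles-per-tick). The clock speed and per-tick cycle count below are illustrative guesses, not measured firmware numbers:

    ```c
    #include <stdio.h>

    int main(void) {
        const double f_cpu  = 8e6;   /* assumed 8 MHz ATtiny88 clock */
        const double steps  = 256;   /* 8-bit PWM resolution */
        const double cycles = 195;   /* illustrative cycles spent per PWM tick */
        /* every tick must service all three LED channels in the main loop */
        printf("refresh ~= %.0f Hz\n", f_cpu / (steps * cycles));
        return 0;
    }
    ```

    With hardware timer compare outputs, the per-tick work disappears and the same resolution runs at whatever rate the timer clocks, which is why three independent timer channels mattered.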

    Mostly, the decision to switch microcontroller platforms was driven by cost. Say what you will about the Microchip acquisition of Atmel; all I know is that around that time, ATtiny88 prices doubled and suddenly the 32-bit L0 was the budget option. Goodbye avrdude, hello st-link and libopencm3.

    When I designed the first NeuroBytes boards based around this new processor, I wanted to build in oscilloscope functionality from the start. I also wanted to ditch pogo connections for programming so end users could more easily reflash boards. And we wanted a dedicated and unique (i.e. not 4-pin) power connection for each board -- students were getting confused by the notion of plugging a battery pack into an axon or a dendrite ("but I thought neurons were unidirectional, why does the power connection not matter... ?"). In any case, I added on a 7-pin JST GH connector for power, programming, and a dedicated SPI port for the 'scope:

    We eventually ran into issues with the 7-pin JST GH connector; the plastic webbing on the top of the connector spanned far enough that it was easy to damage. The clear answer was to ditch the SPI NSS and MOSI lines and move to a 5-pin connector.

    Jarod has an idea

    The new oscilloscope concept, as I planned it at least (with 4 channels), would have worked something like this:

    Black wires are 'scope data, red cables are NeuroBytes impulses. Problem: cables are expensive and tend to get out of control when you have a lot of them in a small space. Jarod thought we could do it like this:
    The NeuroBytes would send data along the same cables as they originally sent simple pulses...
    Read more »

  • Happy New Year, KS goal achieved, and we failed EMC testing.

    zakqwy01/15/2018 at 21:30 1 comment

    Yaay 2018!

    First some good news: we made our Kickstarter goal by a comfortable margin ($34k of a $25k target). Thank you to all of our supporters, especially those who found us via the timely Hackaday blog post. We aren't immediately cutting a PO to manufacture the products (for reasons I cover below), but our plan is to do a ~$50k manufacturing run -- in other words, if you missed the KS campaign we'll have tons of inventory left to sell. And we're fortunate that we don't need Kickstarter money to pay our salaries or cover the costs of testing. We are lucky -- most don't have that luxury.

    Earlier this month, Jarod and I spent the day at a local compliance testing facility. Our products -- the NID, the battery pack, and all the NeuroBytes boards -- don't have any intentional RF sources on board, so the radiated emissions testing under FCC 15.109(g):2018 and ICES-003:2016 is called 'unintentional radiator' testing. Before I discuss results, a few pictures from the 10m anechoic chamber:

    above, the 10m anechoic chamber entrance. Of note: the inner door handle is plastic. You can also see the low-frequency antenna mounted on its automated rising boom. This test is a farfield measurement, so the antenna is 10m from the target and measurements are taken with horizontal and vertical polarities from a variety of heights.

    above, a close-up of the low-frequency antenna. Note the bizarre conductive foam anechoic panels (they are probably 600mm square and extend back into the wall 1m), and the expensive fiber-optic lights.

    above, obligatory high-frequency antenna selfie. We didn't use this one since our equipment didn't have any active radiators (so we swept up to 1 GHz).

    above, Jarod stands by the high frequency antenna. The table on the round floor piece holds the NeuroBytes network (barely visible) and rotates continuously during testing. We stashed the NID's tablet in the floor so it (hopefully) wouldn't interfere with results.

    above, our chosen NeuroBytes network configuration for compliance testing. After consulting with the lab and a number of other sources we determined that a representative network with a 'typical' amount of traffic would suffice; testing boards individually would be cost prohibitive and, more importantly, not representative of how NeuroBytes are actually used.

    above, part of the analysis setup at the lab -- several shielded cameras recording the DUT (Device Under Test) along with a screen from their insanely fancy Agilent spectrum analyzer (whose model number I forgot to jot down). 

    Yeah, it was a neat experience, and the tech who worked with us for the 2-hour radiated emissions test was helpful and well-informed. Then we failed the test! Results:
    Above, a full NeuroBytes network test with the NID attached and the tablet graphing real-time data. This graph shows raw cumulative test data and we didn't actually fail in all the spots shown; after running through the quasi-peak detector we found that the first big hump (~30 MHz - 40 MHz) is the one we need to worry about.

    We got the result above after a ~10 minute test, so we had plenty of time to try various experiments to narrow down the problem. I was concerned with rise time on the inter-neuron communication lines causing EMI spikes; our tech thought the USB cable may have been the issue. We didn't have any better-shielded or ferrite-equipped cables with us, so he tried snapping a giant 'suitcase ferrite' onto the USB cable:

    Above, maybe we're on the right track going after the USB cable? The 32 MHz peak is down quite a bit, but the 39 MHz peak jumped a few dBuV/m...

    Not shown here, we also tried pulling both the continuous rotation servo and the micro-servo off the network; even though they're tiny I thought we might be seeing some commutation noise. No change.

    Next, we ditched the tablet and USB cable entirely. Unfortunately this meant we didn't have data running between the NID and the first NeuroBytes board, but we wanted to be sure the...
    Read more »

  • Kickstarter is live!

    zakqwy11/16/2017 at 20:27 0 comments

    Yes, the day is here. We've worked on this project for damn near 5 years (well, 3 years that are documented here, at least) and it's finally time to see if anyone actually wants to buy NeuroBytes. If you have kept up with my logs, hopefully you're confident in our ability to deliver products in a timely manner; we've sorted through the manufacturing side of things and when push comes to shove, our products really aren't all that complicated. I should also be clear about our goals: we want to get our products into the hands of real paying customers that will get angry and break things, and plow as much profit as possible into building inventory so we can make a big splash at NSTA's national meeting next year.

    I guess I don't really expect a lot of readers to be interested in actually backing our campaign -- I didn't start publishing logs here with that intent! In fact, if you're interested in NeuroBytes, I hope you head to our Github repo ( ), grab our KiCad files, and spin up your own boards. Really, the QFNs are easy to solder by hand since they don't have ground pads.

    However, if you have parents, relatives, students, or anyone else that you think might be interested in learning more about neuroscience in a hands-on way, I do ask that you pass our campaign on to them. Joe and I are new to this crowdfunding thing (and PR/marketing in general), so any help is appreciated!

    [up next: less of this commercial spam and more interesting technical posts about things like our new Network Interface Device board, I promise]

  • Getting ready to lunch. Er, launch! Launch. On Kickstarter.

    zakqwy10/29/2017 at 17:17 0 comments

    [above, our NeuroBuggy Kit. Designed for building Braitenberg Vehicles and playing around with simple neural network-controlled robotics.]

    Yes, the day is finally near at hand -- we're going to launch a Kickstarter campaign in the coming weeks to fund our first major production run. Joe and I (and Jarod, and Jill, and Andrew, and Pat, and Mel, and Cecilia, and a great many other people) have been working on this project for a number of years and it's time to push the concept into the real world.

    We're starting with five kits; the NeuroBuggy Kit shown above, along with three physiology kits (the Knee Jerk Reflex Kit, the Eye Kit, and the Skin Kit), and one Advanced Kit that includes a whole bucket of NeuroBytes. We are also offering a few other reward options -- a sticker, a 'random board only', a standalone Network Interface Device, and two classroom-scale sets with curricula.

    Expect at least one more log update pushing our KS campaign, but I'll also get back to the technical side soon enough. Jarod is getting close to a working NID prototype (I just ordered boards!), and its functionality is probably what has excited and driven us most in the past year.

    So. If you are interested in following our KS launch, I suggest heading to the NeuroTinker website -- -- and signing up for email updates on our page so you know when the campaign goes live. We're tentatively planning for 11/7, but... there are still many things to be done...

  • Another documentation update . . .

    zakqwy09/25/2017 at 21:05 0 comments

    Time for another Project Details update. I guess these happen annually now? In any case, the previous text is below... this log mostly exists to preserve the 10/10/2016 copy:

    Thanks for checking out NeuroBytes!

    This Project Details section is current as of 10/10/2016. To see the previous edition, follow this link.

    [img: NeuroBytes v0.91 boards getting programmed and tested, October 2016]

    NeuroBytes® [we did register the name and the NeuroTinker logo, as discussed here] are open-source electronic neuron simulators designed to help students understand basic neuroscience. Each modular board includes five inputs (or dendrites) along with two outputs (its axon terminal) that allow easy connection with other identical boards. A rear-mounted RGB LED allows learners to quickly visualize various characteristics of each neuron, including membrane potential, operating mode, and firing rate. Additionally, analog sensors can be used for advanced biological modeling using our littleBits interface adapter and suitable analog input Bits (such as the light/dark sensor). Outputs can be connected to "muscles": 9g hobby servos driven through a NeuroBytes board switched to Motor Neuron mode.

    [img: NeuroTinker co-founders Joe (left) and Zach (right) at San Diego Maker Faire 2015]

    Since its inception in mid-2012 and hardware development starting in February of 2014, this project has gone through ten iterations and spawned the formation of NeuroTinker, LLC, a for-profit company primarily funded via a Phase I SBIR grant from the National Science Foundation. As of October 2016, we have submitted a Phase II application under the same prompt, joined an edtech-focused accelerator called EDSi, run a MN-based manufacturing trial that resulted in prototype sales into both secondary and post-secondary education markets, and generally spent a great deal of time preparing the products and accessories for commercial sale.

    Open Source Hardware

    NeuroBytes is Open Source Hardware, as defined by OSHWA and standardized under their Certification Program (US000024). Our design files (C code for firmware and KiCAD files for hardware) are released under the terms of GPLv3 as detailed in our license file.


    NeuroBytes GitHub Repository. The firmware and hardware files for this project are released under GPLv3 and the latest version will always live in the linked GitHub repo.

    NeuroTinker company site. This is where we share additional information related to the NeuroBytes product line, such as operating instructions, kit purchase links, and social media account info. We have a public forum on the site that sees occasional use, and a spot to sign up for infrequent email newsletters that make every effort to consolidate important info into one location.

    This project page. Yes, the project page you are currently reading. New project logs will appear somewhat frequently and tend to provide a good up-to-the-minute record of technical challenges and developments; conversely, if you want an exhaustive history of NeuroBytes you can start at the beginning. We've had a few great conversations in the log page comment sections (along with the main comment section), so don't hesitate to provide your input.

    [img: jumbo-sized and functional NeuroBytes board built for World Maker Faire 2016.]

  • Product Update!

    zakqwy09/19/2017 at 22:38 0 comments

    A few months ago, I mentioned that we were preparing to embark on a prototyping sprint. While prototyping will forever be a continuous and fluid process, at this point the fast-setting cement that is hardware development has solidified enough to share in a more formal manner than overly stylized Instagram posts: in two(ish) months we are launching a crowdfunding campaign (I know... 'ugh'. Agreed. I promise to keep my project log spam to a minimum) with the following nearly finalized boards:


    The Interneuron board operates as an integrate-and-fire neuron simulator, similar to the previous (v0.4, v0.8, v0.91, etc) NeuroBytes boards. Beyond the ARM upgrade (STM32L0 + libopencm3), these boards have a more balanced dendrite-to-axon ratio (4:3) which helps a great deal when building complex circuits. The graphics have changed quite a bit as well -- we're moving away from our previous logo-based design to something a bit more physiologically accurate, so the dendritic tree is quite a bit more spindly on this board. The learning mode features continue to grow more sophisticated; our latest iteration features potentiation and depression, both of which are triggered by switching modes with the on-board switch. And most importantly, these boards (along with the others in this post) no longer send simple voltage spikes to indicate a firing event; instead, they communicate via JarodNet, a novel protocol developed by @Jarod White that really deserves a post of its own.
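    The production firmware (and JarodNet) isn't reproduced here, but the integrate-and-fire core can be sketched as a tiny integer model; the threshold, leak ratio, and input weights below are made up for illustration:

    ```c
    #include <stdio.h>

    #define THRESHOLD 1000  /* firing threshold, arbitrary units */
    #define REST      0     /* resting potential */
    #define LEAK_NUM  49    /* per-tick decay toward rest: v = v * 49 / 50 */
    #define LEAK_DEN  50

    typedef struct { int v; } lif_t;

    /* Add the weighted dendrite input, fire and reset if the membrane
     * potential crosses threshold, otherwise leak back toward rest. */
    static int lif_tick(lif_t *n, int weighted_input) {
        n->v += weighted_input;
        if (n->v >= THRESHOLD) {
            n->v = REST;
            return 1;  /* fired: on a real board, pulse the axons and flash the LED */
        }
        n->v = REST + (n->v - REST) * LEAK_NUM / LEAK_DEN;
        return 0;
    }

    int main(void) {
        lif_t n = { REST };
        /* one sub-threshold input decays away; a second close behind sums and fires */
        printf("%d", lif_tick(&n, 600));
        printf("%d\n", lif_tick(&n, 600));  /* prints: 01 */
        return 0;
    }
    ```

    The same structure makes temporal summation visible: spaced-out inputs leak away, while closely spaced ones stack up and cross threshold.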

    Touch Sensory Neuron

    The Touch Sensory Neuron (previously Touch Sensor, previously^2 a tiny snap-action switch that attached to standard NeuroBytes) is pretty simple -- it fires immediately whenever the switch is pressed. From a pure cost perspective this board could be simplified a bit (or integrated with the Interneuron), but separating mechanoreceptors from interneurons is critical from a biology education point of view. The board graphics also show a few new concepts that can be taught in the classroom -- myelination (although this board doesn't operate any faster than others) and unipolar neuron physiology.

    Motor Neuron

    The Motor Neuron is currently the only output device (beyond on-board LEDs) in our ecosystem. Initially designed for our patellar reflex demonstration, the Motor Neuron is now also used to drive continuous rotation servos for building Braitenberg Vehicles (again, this deserves its own post). These boards have three equally-weighted dendrites and on-board connections for two servos, so we no longer need fussy and easy-to-lose JST-to-servo adapters. The ID switch is used to change between CR and STD servo mode.

    Rod Photoreceptor

    The Rod Photoreceptor emulates its biological analogue by causing downstream NeuroBytes boards to fire at a rate inversely proportional to the light intensity at the sensor. In the interest of building more nifty circuits, we also provided a not-quite-biologically-accurate 'light output' that is directly proportional to light intensity. The boards use a Broadcom ambient light sensor, which is super slick because it integrates a photocell with a logarithmic-response amplifier circuit -- that means it responds to light like the human eye, and has the effect of giving us a few extra bits on the STM32L0's ADC. Crucially, the boards also have easy-to-use zero and span buttons so they can be quickly calibrated to ambient classroom conditions. And the top and bottom curves on the boards match up, so they can be stacked with the sensors quite close to each other:
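    The firmware's actual rate curve and calibration storage aren't shown in this log, but the zero/span idea is simple clamp-and-scale; all the constants below are stand-ins:

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Calibration counts captured via the zero/span buttons (values invented) */
    static uint16_t cal_zero = 200;   /* 12-bit ADC reading in darkness */
    static uint16_t cal_span = 3800;  /* ADC reading at "full" room light */

    #define RATE_MAX_HZ 20  /* darkest -> fastest firing (rod behavior) */
    #define RATE_MIN_HZ 1

    /* Clamp the raw reading to the calibrated window, then scale so the
     * firing rate falls linearly as light intensity rises. */
    static uint32_t firing_rate_hz(uint16_t adc) {
        if (adc <= cal_zero) return RATE_MAX_HZ;
        if (adc >= cal_span) return RATE_MIN_HZ;
        uint32_t frac = (uint32_t)(adc - cal_zero) * 100 / (cal_span - cal_zero);
        return RATE_MAX_HZ - (RATE_MAX_HZ - RATE_MIN_HZ) * frac / 100;
    }

    int main(void) {
        printf("dark: %u Hz, bright: %u Hz\n",
               (unsigned)firing_rate_hz(150),
               (unsigned)firing_rate_hz(4000));  /* prints: dark: 20 Hz, bright: 1 Hz */
        return 0;
    }
    ```

    Pressing zero in a dark room and span under the classroom lights just rewrites `cal_zero` and `cal_span`, so the full firing-rate range maps onto whatever light levels the room actually has.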

    Pressure Sensory Neuron

    The Pressure Sensory Neuron is another type of mechanoreceptor that fires action potentials at a rate proportional to the pressure applied to the on-board force sensitive resistor. This is a great board for testing networks quickly, and it has quite a bit of range-ability -- the FSR combined with the STM32L0's 12-bit ADC means a light touch and a hard press both produce noticeable changes. In the future we'll probably integrate some kind of...

    Read more »

  • Making Izhikevich Neurons Fast on the STM32

    Patrick Yeon07/09/2017 at 05:29 0 comments

    Most of this post is going to be down in the embedded software nitty-gritty, but I think I'll start you off with a little video. Here's a NeuroBytes board on my bench emulating a "chattering" neuron:

    Having written and verified an integer-based model of the neuron (as I've described in my previous post), I now need to check that it's still accurate on actual hardware and see that it can run at an acceptable speed. To do this, I split my code into a main loop and a library that does all of the neuron work, then wrote a second main file to set up all of the hardware peripherals and run a timing loop on the microcontroller:

    #include <libopencm3/stm32/rcc.h>
    #include <libopencm3/stm32/gpio.h>
    #include "./izhi.h"
    #define FLOATPIN GPIO1
    #define FIXEDPIN GPIO3
    int main(void) {
        gpio_mode_setup(GPIOA, GPIO_MODE_OUTPUT, GPIO_PUPD_NONE, GPIO15);
        gpio_mode_setup(GPIOB, GPIO_MODE_OUTPUT, GPIO_PUPD_NONE, GPIO5);
        gpio_clear(GPIOA, GPIO15);
        gpio_clear(GPIOB, GPIO5);
        fneuron_t spiky_f;
        ineuron_t spiky_i;
        RS(&spiky_f);    // initialize both as "regular spiking" neurons
        RS_i(&spiky_i);
        for (int i = 0; i < 5000; i++) {
            if (i < 100) {
                step_f(&spiky_f, 0, 0.1);
                step_i(&spiky_i, 0, 10);
            } else {
                gpio_set(GPIOA, GPIO15);
                step_f(&spiky_f, 10, 0.1);
                gpio_clear(GPIOA, GPIO15);
                gpio_set(GPIOB, GPIO5);
                step_i(&spiky_i, 10 * spiky_i.scale, 10);
                gpio_clear(GPIOB, GPIO5);
            }
        }
        return 0;
    }
    I also added a simple Makefile to keep track of the build details:

    ARM_PREFIX = arm-none-eabi-
    OPENCM3_DIR = ./libopencm3
    CC = gcc
    LD = gcc
    RM = rm
    OBJCOPY = objcopy
    ARM_ARCH_FLAGS = -mthumb -mcpu=cortex-m0plus -msoft-float
    ARM_CFLAGS = $(ARM_ARCH_FLAGS) -fno-common -ffunction-sections -fdata-sections
    ARM_LDLIBS = -lopencm3_stm32l0
    ARM_LDSCRIPT = $(OPENCM3_DIR)/lib/stm32/l0/stm32l0xx8.ld
    ARM_LDFLAGS = -L$(OPENCM3_DIR)/lib --static -nostartfiles -T$(ARM_LDSCRIPT)

    host: izhi.c host.c
    	$(CC) izhi.c host.c -o host.o

    stm.bin: izhi.c stm.c
    	$(ARM_PREFIX)$(CC) $(ARM_CFLAGS) -c izhi.c -o izhi.o
    	$(ARM_PREFIX)$(CC) $(ARM_CFLAGS) -c stm.c -o stm.o
    	$(ARM_PREFIX)$(LD) $(ARM_LDFLAGS) izhi.o stm.o $(ARM_LDLIBS) -o stm.elf
    	$(ARM_PREFIX)$(OBJCOPY) -Obinary stm.elf stm.bin
    	$(ARM_PREFIX)$(OBJCOPY) -Oihex stm.elf stm.hex

    clean:
    	$(RM) *.o *.elf *.map *.bin

    With the code running on the native hardware, I fired up my trusty Saleae Logic analyzer to watch pins A15 and B5 to see the timing of my `step_f` and `step_i` loops, respectively.

    It looks like the fixed-point implementation runs about 4x faster: ~0.4ms per loop vs. ~1.67ms for the floating-point version. This surprised me; I actually expected the floating-point one to be much, much worse! As a quick check, I re-compiled with `typedef int16_t fixed_t;` to see if 16-bit integers gave any improvement (I expected not, but especially with performance issues it's important to test one's assumptions), and the integer runtime went down to about 0.32ms per `step_i` loop. Of course, the behaviour would be completely incorrect because of the integer overflow I was struggling with when trying to develop a 16-bit version, but it's a useful data point that a series of 16-bit operations takes about 80% as long as a matching series of 32-bit operations.

    For completeness, I checked the two most obvious compiler optimization settings: `-Os` (optimize for size with speedups that tend not to increase size) and `-O3` (optimize for speed). These led to floating and fixed point loop times of 1.54ms/0.4ms and 1.6ms/0.37ms, so not really huge gains to be had there. This time I'm not surprised; the work being done is pretty straightforward and maps pretty easily to assembly code.

    So how fast do I need to get going? Well, Zach didn't exactly give me a spec to hit, but in his log on implementing this for v07 he says he needs to update the LEDs every 30us and has the model calculations broken up into 7 steps of no more than 29us each (there's also steps for reading...

    Read more »

  • Porting the Izhikevich Behaviour to the STM32

    Patrick Yeon06/17/2017 at 21:18 0 comments

    As Zach teased on the touch slider update, there's been work happening behind the scenes to implement the Izhikevich neuron model on the newer NeuroBytes boards. I may as well introduce myself: hi, I'm Patrick! By day I'm an electrical engineer, and I was drawn to help out on this project because it reminds me of Valentino Braitenberg's Vehicles, a book that helped kick-start my interest in robotics. I figured I would keep a log of this work for any budding programmers who want a peek into the day-to-day of an embedded software engineer.

    Straight from the source, we have a model of the neuron where

    v' = 0.04v**2 + 5v + 140 - u + I
    u' = a(bv - u)
    if v >= 30, then {v = c, u = u + d}

    which I implemented in C on my host machine as

    #include <stdint.h>
    #include <stdio.h>
    typedef float float_t;
    typedef struct {
        float_t a, b, c, d;
        float_t potential, recovery;
    } fneuron_t;
    static void RS(fneuron_t *neuron) {
        // create a "regular spiking" floating point neuron
        neuron->a = 0.02;
        neuron->b = 0.2;
        neuron->c = -65;
        neuron->d = 2;
        neuron->potential = neuron->recovery = 0;
    }
    static void step_f(fneuron_t *neuron, float_t synapse, float_t ms) {
        // step a neuron through ms milliseconds with synapse input
        //   if you don't have a good reason to do otherwise, keep ms between 0.1
        //   and 1.0
        if (neuron->potential >= 30) {
            neuron->potential = neuron->c;
            neuron->recovery += neuron->d;
        }
        float_t v = neuron->potential;
        float_t u = neuron->recovery;
        neuron->potential = v + ms * (0.04 * v * v + 5 * v + 140 - u + synapse);
        neuron->recovery = u + ms * (neuron->a * (neuron->b * v - u));
    }
    int main(void) {
        fneuron_t spiky;
        RS(&spiky);    // initialize before stepping
        for (int i = 0; i < 2000; i++) {
            if (i < 100) {
                step_f(&spiky, 0, 0.1);
            } else {
                step_f(&spiky, 10, 0.1);
            }
            printf("%f %f\n", i * 0.1, spiky.potential);
        }
        return 0;
    }
    I compile and run this with

    gcc izhi.c -o izhi
    ./izhi > rs.dat

    I use gnuplot to display this quickly, by starting it up and using the command

    plot 'rs.dat' with lines

    The output looks more or less like the outputs from the paper, at least to my eye.

    The next step was to implement the easiest fixed-point version I could think of, to see how well its output aligns with the floating-point version. The reason to do this is that floats mask a lot of complexity (their dynamic range protects me from rounding errors, overflow, and underflow, for example) that becomes my problem to deal with when I work with fixed-point arithmetic. Here is the added fixed-point math and an updated main function:

    typedef int16_t fixed_t;
    #define FSCALE 320
    typedef struct {
        // using 1/a, 1/b because a and b are small fractions
        fixed_t a_inv, b_inv, c, d;
        fixed_t potential, recovery;
    } ineuron_t;
    static void RS_i(ineuron_t *neuron) {
        neuron->a_inv = 50;
        neuron->b_inv = 5;
        neuron->c = -65 * FSCALE;
        neuron->d = 2 * FSCALE;
        neuron->potential = neuron->recovery = 0;
    }
    static void step_i(ineuron_t *neuron, fixed_t synapse, fixed_t fracms) {
        // step a neuron by 1/fracms milliseconds. synapse input must be scaled
        //  before being passed to this function.
        if (neuron->potential >= 30 * FSCALE) {
            neuron->potential = neuron->c;
            neuron->recovery += neuron->d;
        }
        fixed_t v = neuron->potential;
        fixed_t u = neuron->recovery;
        neuron->potential = v + ((v * v) / FSCALE / 25 + 5 * v
                                 + 140 * FSCALE - u + synapse) / fracms;
        neuron->recovery = u + ((v / neuron->b_inv - u) / neuron->a_inv) / fracms;
    }
    int main(void) {
        fneuron_t spiky_f;
        ineuron_t spiky_i;
        RS(&spiky_f);    // floating-point reference neuron
        RS_i(&spiky_i);  // fixed-point neuron under test
        for (int i = 0; i < 5000; i++) {
            if (i < 100) {
                step_f(&spiky_f, 0, 0.1);
                step_i(&spiky_i, 0, 10);
            } else {
                step_f(&spiky_f, 10, 0.1);
                step_i(&spiky_i, 10 * FSCALE, 10);
            }
            printf("%f %f %f\n", i * 0.1, spiky_f.potential,
                   (float_t)(spiky_i.potential) / FSCALE);
        }
        return 0;
    }

    I'd like to highlight a few habits that usually make my life easier:

    typedef int16_t fixed_t;
    #define FSCALE 320

    I took an initial guess that I'd be using 16-bit...

    Read more »

  • Free (well, PCB-track-only) Touch Sliders!

    zakqwy 05/23/2017 at 20:42 • 0 comments

      One of the many NeuroBytes boards we are currently developing is the Tonic Neuron. This board is sometimes referred to as the Izhikevich Neuron due to the origin of its algorithmic inspiration (and ensuing porting effort, first detailed here and here and continuing elsewhere, which is a discussion for another log post...). The Tonic Neuron is useful because it fires spontaneously, allowing the user to inject periodic signals into a larger NeuroBytes network. In addition to modelling actual tonic neurons in the body, these boards provide a compact (compared to ring oscillators) 'pacemaker' for robotics experiments such as the NeuroBuggy and the Invertebrate Locomotion Model.

      In an effort to reduce the BOM price, assembly steps, physical size, and general clunkiness of a potentiometer-based board, I spun up an experimental PCB to examine the possibility of using a linear touch slider as the user input for varying the Tonic Neuron's pulse rate:

      The slider itself is roughly 20mm long, and connects to a pair of GPIOs on the STM32L. Otherwise, the pins float entirely -- even the ground plane is separated by 2mm and constructed of a low-density grid, all in an effort to reduce parasitic capacitance to ground. When a user touches one of the pads, the capacitance to ground increases; since this effect is related to the contact area on the pad, using the triangular design shown above means the capacitance increase varies linearly as the user swipes their finger along the control. I should mention here that the design of these pads (and the underlying concept generally) can be found in many places, including ST's Touch Sense Design Guide. Other manufacturers have similar white papers; just keep in mind that they're usually written around a touch sensing peripheral, which my cheap-o-edition chips certainly do not include. It's okay, we don't need a fancy peripheral to handle touch input, especially if we aren't putting a barrier in front of the PCB itself:

      We are dealing with quite low capacitance here, to the point that connecting my 15 pF oscilloscope probes to de-tented vias dramatically changes the circuit's response. Measuring this change with the microcontroller is actually quite simple, and is explained nicely on the Arduino implementation website. Rather than using the unrolled loop method described in that code, I made use of the TIM21 input capture peripheral as follows:

      1. Set Touch Sensor 0 as an input and activate the pulldown resistor.
      2. Start TIM21 at clock speed and tell it to stop counting when the Touch Sensor 0 pin goes high.
      3. Activate the Touch Sensor 0 pullup resistor.
      4. Wait a few cycles to ensure that the pin went high.
      5. Record the TIM21 counter value.
      6. Repeat steps 1-5 for Touch Sensor 1.
      7. The touched location on the strip will be proportional to the difference between the two counter values, zeroed in the center.

      In practice, with the pullup values and the parasitic/body capacitance my setup produces, I found the total time differential to be around 1.5 microseconds. At the current (relatively low) processor clock rate, that gives me ~3 bits of position resolution, which should be more than adequate for our purposes. I suspect adding external components (particularly larger pullups/pulldowns triggered by other GPIOs) could increase the time differential a bit and allow one to eke out more resolution without a faster clock, but that would require more BOM lines!

      The code is here if you'd like to take a gander or use it in your own project (GPL v3); we're using libopencm3 along with a few hacks to get the TIM21 peripheral up, which we'll PR to the main project at some point once we have the library updated. All of the touch slider stuff is pretty well self-contained in main.c under the functions get_touch() and get_slider_position().

View all 77 project logs

  • Step 1

    Get your hands on some NeuroBytes v04 boards.

    You can grab the gerber files from the GitHub repo, and either etch the boards yourself or send them off to a place like OSHpark or DirtyPCBs for fabrication. I also have a number of individual boards left over from our production run, since we had a panel fail E-testing with a single problem (meaning 31 elements were still good). Drop me a line and I'll send a few your way, as we're pretty much done making v04 devices at this point.

  • Step 2

    Obtain all parts listed in the 'Components' section of this page.

    • Make sure you get the ATtiny44A model that comes in the 14-pin SOIC package.
    • You might be able to substitute a different RGB LED, but make sure it has the same pinout as the cheap units I used.
    • The passives are all pretty run-of-the-mill; you could probably swap different 0603 pulldown resistors in if you have something else on hand. The other resistors are specific to the various LED channels, so make sure they're correct.
    • The larger filtering capacitor only seems to be necessary if you're directly powering servos using the Motor NeuroBytes firmware.
    • Make sure you get the correct connector headers; the TE Connectivity devices I specified use 2mm spacing, which will seem small if you're used to 0.1" stuff.
  • Step 3

    Apply solder paste.

    We follow RoHS guidelines (the bare boards, even at the prototype level, were always destined for a classroom, so avoiding lead seemed prudent); as such, we used Chip Quik no-clean lead-free paste from a syringe. You can use pretty much anything here; just make sure your board of choice is rated for the required reflow temperature.

    Applying solder paste with a trimmed-down matchstick seems to work pretty well. I generally run a line of paste along each side of the microcontroller, letting the solder mask work its magic to avoid bridges. The toughest part seems to be the LED; its leadless nature means that if the quantity of solder on each of the four pads isn't consistent, you might miss one of the terminals and end up with an RG (or RB or GB) device.

    Or you could be smart and make a damn stencil. I hear kapton stencils are cheap. If you do this let me know and I will send you mad Internet kudos.

View all 9 instructions


Davor wrote 05/04/2018 at 11:12 point

One more part-related question, this time regarding the JST connectors. Might I ask, what made you decide on those specifically, so I can try to find a similar one on Mouser that fits the requirements? I'm having trouble sourcing that part too. 

zakqwy wrote 05/04/2018 at 16:18 point

A few things went into the decision:

- Cost. Our Interneuron board needs 7 connectors (8 counting power), so saving cents on each connector goes a long way. The GH is on the low end compared to other options such as Molex.

- Ease of use. I pretty much tested every 1mm - 1.5mm pitch connector I could get my hands on, and the GH wins for usability. The locking tab is easy for non-electronics folks to understand, and the connector slides out quite easily when desired. A lot of connectors in this size range are friction lock, which students tend to break.

- Size. Again, in the interest of saving money, we wanted to make the PCBs as small as possible. In most cases the width of the connector (given the number of axon and dendrite connections we wanted) dictates the overall PCB size, and the GH was one of the narrower options we found.

- Current capacity/voltage drop. The GH can handle 26 AWG wires, while many other connectors in this size range top out at 28 AWG. Every bit counts when a lot of devices are daisy-chained.

- Longevity. These types of connectors are almost never designed to be user-facing, and none of the big manufacturers will rate them beyond 25-50 insertion cycles. I tested the GH well past this and found its long-term connection resistance changed less than most other designs.

Sorry you're having trouble finding them. Post a comment if you continue to have issues and I'll drop a handful in the mail.

Davor wrote 05/03/2018 at 22:45 point

Great project! I'd like to build a few, but I can't seem to find a XZMDKCBDDG45S-9 RGB LED with reverse gullwing legs anywhere except Digikey (I'm in Europe so Mouser is my go-to). Is there an alternative? How well would mounting a regular PLCC-4 RGB LED upside down work? 

zakqwy wrote 05/03/2018 at 23:05 point

Hey @Davor -- we actually made a tiny daughterboard that uses a standard 0606 RGB LED with castellated edges. This board gets soldered in upside-down -- it's the reason for the 6mm x 6mm clearance around the LED hole. I don't think the KiCad files for that board are in any of the repos so I'll dig around and get back to you when I find 'em..

Davor wrote 05/04/2018 at 10:24 point

Hey thanks for the quick reply, that sounds excellent!

zakqwy wrote 05/04/2018 at 17:57 point

Just uploaded the design files and added a project log about the castellated daughterboard design:

Michael Barton-Sweeney wrote 03/22/2018 at 23:33 point

Great project!

Trent Sterling wrote 04/10/2017 at 18:02 point

Wow! Awesome project! I've never seen such a wicked PCB design before!

zakqwy wrote 04/10/2017 at 18:22 point

Thanks! Inkscape + KiCad is a killer combo!

Trent Sterling wrote 04/10/2017 at 18:24 point

Eventually I'll make it that far! Thanks for the tool suggestions!

Pure Engineering wrote 09/14/2016 at 08:28 point

Nice Work. I have a similar project but more sensors.

You should check out the connector system I'm using to make the modules; it's simpler than crimping wires to connect everything together. I would love to see if somehow you made a compatible neuron module. Let me know if you are interested.

zakqwy wrote 09/14/2016 at 13:16 point

very cool stuff! it looks like your platform is intended for fast prototyping of sophisticated IoT systems and sensor networks; a bit different from our goals but still quite interesting. I grabbed the datasheet for your edge connector, it's something we have considered but not thoroughly investigated. definitely appealing as the on-board headers are a BOM cost driver for us.

Compatibility with your platform would be neat, but based on the sophistication of your inter-board communication protocol i'm guessing it would be a challenge. I'll keep that in mind for the future, but at this point we've got a lot of other stuff on our plate as we continue to work towards commercialization.

zakqwy wrote 09/14/2016 at 15:46 point

So one thought--I read through the AVX card edge connector datasheet you linked to in #PURE modules and they're only spec'd for five insertion cycles. How have you gotten around this? Did you do your own testing and determine their rating was conservative, or did you design around an assumed contact resistance growth rate? I've run into the same thing with the JST GH platform I've opted to use, so I'm curious to know how you solved the problem.

Pure Engineering wrote 09/14/2016 at 19:29 point

I have tried out the connector. It gives a solid connection for the first 10 or so insertions; after about 100+ it gets loose, but it still maintains an electrical connection. I just wouldn't use the connector as a mechanical support.

The connector is low cost, so by replacing the part you can keep going.
The PCB does develop some scratching over time, but even 100+ insertions don't cut through the copper.
I somewhat compare it to old-school NES cartridges, so I'm guessing after a couple of years there might be some issues if you keep reusing the same connector, but swapping in a new one restores it.

Also, I think their 5-insertion spec is about maintaining a 2.5 A current capability. Since in this case we are typically doing I/O, or a few mA for power, we are good.

jaromir.sukuba wrote 09/12/2016 at 10:11 point

I read all the project logs just now. Amazing amount of work. Though I have to admit neuroscience is not my cup of coffee and I'm not exactly sure how the simple neurons can do something useful - but I followed the project to find out soon :-)

zakqwy wrote 09/12/2016 at 16:39 point

Thanks for the kind words, @jaromir.sukuba

You bring up an excellent point, and to be honest one I've struggled with for some time now. When @NeuroJoe and I started our collaboration almost two years ago, the fundamental decision was made _not_ to focus on making the platform as sophisticated and close to biological reality as possible; that means we aren't concentrating on important concepts such as neurotransmitters, neuroplasticity, backpropagation, and so forth. Rather, our model is designed to engage a younger group of students in _basic_ neuron and neuroscience concepts, such as action potential thresholds, dendritic weighting, and so forth. In other words, we're balancing sophistication with cost and usability, something that I don't do automatically (as you may have noticed in some of the highly tangential logs buried within this project).

As we start to have Real Customers--we've sold a few dozen prototype kits at this point to various institutional partners--we'll solidify the fundamentals of the platform and hopefully start building out more sophisticated features as time and capability allow. So yeah, stay tuned, I suppose  :-)

jaromir.sukuba wrote 09/14/2016 at 20:33 point

Again, I must admit I'm not very familiar with neuroscience, so perhaps my questions will be a bit trivial, but perhaps I'm not the only one wondering:

* If the neurons of your project are not very close to reality (I fully understand this decision - we just can't have it all at once), how far is it from reality? I know it is hard to quantify, so another question:

* Some animals have a low neuron count - Caenorhabditis elegans has 302 of them (I'm pretty sure you mentioned it in the logs). Is it possible - having 302 pieces of NeuroBytes (that is a lot of hardware, but achievable) - to "emulate" its behavior? Or are the advanced features of neurons needed to "emulate" the animal?

* How does one real neuron "know" it's part of memory, a decision system, or sense processing, and how does it know what to do with its inputs? Are real neurons "preprogrammed" somehow by nature for a particular function, or are they all the same?

* Could you add some sensors and actuators to bring inputs/outputs to the neurons? I think of "emulating" senses and muscles.

zakqwy wrote 09/15/2016 at 15:11 point

No worries @jaromir.sukuba, these are great questions. Not trivial at all, they get at the core compromise we have to make to build a commercial product. I'm responding to each question you starred in like format, so hopefully that keeps this wall of text somewhat clear.

* One of the reasons neuroscience is usually taught at a higher level is that it's just not a simple subject; the internal function of a neuron is inherently complex, involving a number of electrochemical phenomena occurring simultaneously. For example, the 'membrane potential' of a neuron can be measured as a voltage potential across the cell membrane, and it's dependent on the relative concentration (i.e. inside and outside the cell) of several different ions: calcium, sodium, potassium, and chloride. Each of these ions travels through ion channels that are permeable to that particular ion, meaning the total membrane potential depends on many different specific mechanisms. Since it's common practice to quantify the total membrane potential as a single voltage value (rather than figuring out individual ionic concentration differentials), we do the same in our platform--membrane potential is represented by a single integer. In the end, we've adjusted things like our decay algorithm so the waveform is similar to a _typical_ neuron; the easiest way to see this is to compare our NeuroBytes oscilloscope trace photo:

... to actual lab-recorded neuron membrane potential traces, such as this one:


The fast pulses shown in the last figure are emulated using our Izhikevich-based analog input mode (not yet hooked up to the 'scope, so this is from an early calculation spreadsheet):

So again, certainly not _perfect_, but close enough to give students an accurate picture of what's going on inside a neuron. Without the 'scope the membrane potential is simply represented by LED color, so exact waveform accuracy isn't a huge deal.

* The work folks like OpenWorm ( have done to translate C. elegans' neural structure into an easy-to-understand and open-source project is truly amazing. We've gotten this question a few times in the last several years and unfortunately my answer hasn't changed; it would be _amazing_ to fully emulate the worm (especially now that I'm a few weeks from having 500 v0.91 boards in hand), but I just haven't had a chance to explore the possibility. My _guess_ is that at least some of the neurons they have mapped are dramatically different from the standard NeuroBytes firmware, meaning they would need to be reprogrammed with a custom firmware set. Additionally, I'll bet some of the cells have more than 5 dendritic connections, so a 'mega NeuroBytes board' would likely need to be built that has a huge number of inputs. Not a huge deal, would just require a new PCB design and a higher I/O count microcontroller (or some clever multiplexing). TL;DR: C. elegans is on the long list of cool projects, but is not a priority currently.

* From my understanding, memory is formed by (a) structural network changes (i.e. new axon-to-dendrite connections forming and old ones decaying); and (b) individual dendritic weighting changing as connections are strengthened and weakened. I'll defer to an actual neuroscientist ( @NeuroJoe, want to jump in on this? ) regarding the initial state of a newly formed neuron. The current firmware keeps weighting constant (but different for each dendrite), but the physical network is easy to modify by a student. The next firmware iteration will also have a simple method implemented for changing individual weightings on the fly to 'tune' a neural circuit; however, at this point we haven't gone down the path of learning, neuroplasticity, backprop, etc.

* New input and output methods are on our short list. On the input side, I've developed a littleBits-->NeuroBytes adapter that allows one to use the full range of littleBits analog sensors (light, flex, sound, etc); however, currently this implementation requires a specific firmware flash. Again, the next firmware iteration will integrate this operating mode into the standard runtime and include some degree of adjustment; the lack of 'zero' and 'span' settings made a lot of the LB analog sensors fairly useless as they didn't exhibit a wide enough voltage swing to produce a useful (i.e. changing) output. For outputs, the servo operating mode is pretty well established but we're considering audible and linear actuation schemes too. The servo mode actually works pretty well with continuous rotation servos (see NeuroBuggy).

Make sense? Apologies for the text wall. Feel free to reply again on this thread if you have more questions or want additional clarification.

jaromir.sukuba wrote 09/22/2016 at 09:59 point

Thank you very much for the response; it contains a lot of information and leaves me with little to ask. The text wall is absolutely appropriate - there is not much sense in scientific haiku ;-)

Apart from your response, I did my homework and studied a bit about neurons and the nervous system - that's why the late reply. I also found a few books related to this topic in my home library (my father worked in the medical field). I must admit some things start to make sense now, but the deeper concepts are still somewhat blurry. Perhaps that's why I'm a hardware designer/programmer and not a neuroscientist :-)

I feel like I'm going to stay at this level of understanding for now and watch you how the project turns out. The amount of work put into this is really amazing.

Ric Johnson wrote 05/13/2016 at 12:29 point

How close are you to modelling C. elegans with NeuroBytes? Is this far away or a logical step in the near future?

zakqwy wrote 05/13/2016 at 12:50 point

Up until now we haven't had the 302 neurons needed to model the worm, but that will change on our next production run. I also haven't closely studied any of the specific requirements for that model: for example, I'm not sure if a simple integrate-and-fire device can accurately model the whole organism, and I'm guessing at least a few neurons would need more than 5 dendrites.

Having said all of that, the OpenWorm guys made brief contact with us when we started showing off v0.4 and wanted to know the same thing. Right now our priority is to get the many ducks in a row for scaling production beyond toaster oven quantities (i.e. low hundreds) so we can sell a few kits and really prove out our business case. However, our next model will move beyond the patellar reflex; we're going to build out a simplified modular invertebrate motion platform based around the repeating elements in C. Elegans and others. So, stay tuned!

Thanks for the comment! Any suggestions to get us closer to modeling the worm are welcome!

maehem wrote 12/19/2015 at 20:23 point

Cool PCB design! As for the googly eyes, I believe it was Dan at OSH Park who was affixing googly eyes to everything.

Peter McCloud wrote 12/19/2015 at 17:49 point

The new boards look really slick. Congrats on the NSF funding!

zakqwy wrote 12/19/2015 at 17:53 point

Thanks @Peter


Peter McCloud wrote 08/27/2015 at 00:47 point

Congrats on becoming a best product finalist. Good luck winning the best product prize. Keep up the great work!

zakqwy wrote 08/27/2015 at 02:58 point

Thanks @Peter!!!

Jarrett wrote 04/09/2015 at 18:13 point

More research material:

Harvard free online neuroscience courses. At least part i and part ii discuss how neurons work.

zakqwy wrote 04/09/2015 at 18:18 point

Thanks! I've been picking my way through an introductory neuroscience textbook, but a free open class would be a great additional resource. I'll give the Harvard course a look; I've never done a MOOC but this seems like a good starting point.

Dom wrote 01/07/2015 at 13:12 point

Perceptrons.. I like.

Looking forward to some data :]

Paul Bristow wrote 01/05/2015 at 16:39 point

Hi, would an affordable open source hardware neural network be of any use to you guys?

zakqwy wrote 01/05/2015 at 17:08 point

Hi Paul--definitely an interesting project, but it's not terribly relevant here. Having discrete neuron elements is a key part of this effort, and the BrainCard stuffs a bunch of them on a single chip. Either way, thanks for posting.

Stryker295 wrote 12/23/2014 at 03:40 point

Is there a video of the SketchUP plugin in action, by any chance?

zakqwy wrote 12/23/2014 at 04:49 point

There isn't, sadly. I'll ping Andrew and see if he has a clip of the program running. Or better--source code to post!

Stryker295 wrote 12/25/2014 at 01:40 point

Awesome! You mentioned the plugin and I only just went looking for it but couldn't find a link, ah well :P

AltMarcxs wrote 12/15/2014 at 18:21 point
You probably know about this:
I had a thought about many STM32s at 120 MHz/96 KB ($4.72/pc) running a 32-bit version of the above on one board.
But now with the ODROID-C1 at $38 for a quad-core 1.5 GHz / 1 GB, I've got another goal.

zakqwy wrote 12/15/2014 at 19:00 point
I didn't come across that paper in my research--thanks for posting, very interesting to see how they implemented complex floating point functions using extremely basic microcontrollers. The PIC18F45J10 seems a bit more sophisticated than the ATtiny44A; it's a bit more than twice the price in a QFP package at 100+ quantities. Still amazing though, since it looks like their ultimate goal is to emulate an entire network on a single chip!

My project is probably a bit more constrained by cost than you have in mind for your STM32 project; each Neuron's entire BOM cost including external accessories is under $4.72. The intention isn't to make any individual element terribly complex; instead, each Neuron should be minimally capable but extremely cheap and easy to use/interconnect on the fly.

K.C. Lee wrote 09/22/2016 at 10:30 point

Or you can use the cheap STM32F030F4 at $0.44 a piece from China. That's what I am using on my HaD projects this year. It's only a few times slower in integer math, but at 1/10 the 120 MHz price (no FPU or single-cycle multiplier). Still beats the heck out of 8-bit chips.

zakqwy wrote 09/23/2016 at 13:00 point

It's something we've been considering, @K.C. Lee. I do enough 16-bit math on the ATtiny that I'm ready to graduate to something more advanced. I picked up a tube of 'em the other day and have been putzing around with libopencm3; it seems like a good FOSS option.

zakqwy wrote 11/10/2014 at 15:12 point
Thanks Bruce! Your project has some great background info--particularly related to membrane potential and recovery parameters. It looks like I need to study the Izhikevich model.

Stryker295 wrote 11/10/2014 at 04:36 point
This is definitely interesting! I'm curious to see where it goes.

zakqwy wrote 11/10/2014 at 12:20 point
Thanks! It's been a fun project; right now I'm in the documentation catch-up phase, so you'll probably see a flurry of updates that cover the system background.

Stryker295 wrote 12/08/2014 at 02:49 point
Random question that just popped into my head: Could this be easily ported to run so that instead of individual uCs emulating the neurons, individual pixels on a computer screen could fill in for that?

Just kinda envisioning a version of this written in javascript or something that could be run in-browser, or perhaps even downloaded and run locally. Random thoughts!

zakqwy wrote 12/08/2014 at 14:46 point
Great point. The system totally could be emulated--that's actually what Andrew's program does, although his software was written as a Ruby-based plugin for Sketchup. I need to get him to post his repo so folks can play around with it.

We were keen on having a physical toy, something one could easily manipulate on a tabletop, away from a computer or smartphone. We considered implementing some type of centralized system, maybe using serially addressable LEDs and a main controller, but realized that changing the physical layout of the system would be cumbersome--every time you moved axons about, you'd need to copy those changes into software so everything gets displayed properly. Plus, you'd be tied in with a computer.

One interesting side effect of completely localized control, combined with the fact that I skipped crystal oscillators to save cost: each microcontroller's clock rate is slightly different, so Neuron behavior has an element of pseudo-randomness. This could be emulated in software too, of course, but it makes the toy feel a bit more organic.
