Day 1-3

A project log for DIY Space Grade Cyber Space Suit

AKA - Cyince

Chuck Glasser 08/23/2016 at 06:08

Let's see, 5 x 7 is 35 days. Better make the best of them! Today was consumed by wrapping up the AmpToken-II PCB using KiCad. I really like KiCad. This AM I managed to get the Java program FreeRouting up and running thanks to the efforts of this brilliant fellow. I've worked so hard over the years to avoid learning Java, but NetBeans looks nice. Maybe I should reconsider my bias.

The AmpToken-II board, an ADS1299-based 8-channel EEG amplifier, is almost done. Really tricky layout, lots of 0201 components. Thankfully, the vacuum pickup on the SG-CNC should allow for easy placement, "in theory and when it's running". I've simply got to get it to the board shop soon or I'll be in big doo-doo.

This system is designed for flex circuits. For now, because it is considerably cheaper, the optimal solution is to use conventional IDC connectors. As a result, at least for a short time, the electrode count for the head will be slightly less than the usual 350.

The system supporting the VR goggles, sans goggles, was made in 1996. I call it the Mark 96. My first attempt at building an EEG helmet was in 1970. It's called, strangely enough, the Mark 70.

The Mark 16, that would be this year, in its final operational form consists of an array of signal-processing nodes. Each node is an individual switch within the switched fabric, consisting of a PIC32MZ processor butted against an Artix XC7A35T FPGA. Each node includes 1 PCIe port and up to 7 additional slower-speed (< 20 MHz) sensor ports. Given the overall signal flow, and the engineering effort involved, it's far easier to use the PIC32MZ for the initial signal processing. The PIC32MZ grabs the data, typically over SPI, converts it to 32-bit floats, formats the data into the outgoing packet, and kicks it out to the TX buffer in the FPGA.

The Mark 16 makes use of the USB-C connector. You know, the connector that does 40 Gb/s, which is of course deep into the microwave region. Is that COOL or what? In the most direct path the data goes from the sensor to the memory space of a GPU, certainly within the time of a single cycle of the data sample clock. I've not worked out exactly how fast the signalling process is, but I would estimate that it takes around 1 µs. The point is that there are no protocols, no network interfaces, no network stack; none of that exists.

The system state is simply a part of the PCIe memory space. Usually it's more effective to copy that data into the GPU's memory space. That will be a happy day.

Big day tomorrow. The PMOD-to-Cyince PCB interposers are arriving from China tomorrow. Yay! That means I'll be able to start streaming data via a variety of FPGA dev systems. I'll start with the already-running BeagleBone Black (BBB) married to a ValentFX Spartan 6. Next in line will be the Artix 7 by Digilent, and then finally the Snickerdoodle, with dual ARM Cortex-A9s alongside the FPGA fabric! Last are the fabric nodes made up of the PIC32MZ/XC7A35Ts.

What I've learned recently is that while compiling directly on the embedded processor is great and very effective, it may in fact be better to use a cross compiler and keep an image for the embedded processor on the host. On that topic, I'm not there yet.

And then of course there is the device tree. For that you're going to want to use the Xilinx SDK.

I feel like this guy.