10/06/2016 at 22:02
I've got all the necessary parts for the lollipop interface now, so I've gone from this...
to this, with 9V battery for scale:
The quality of the boards (made in China) is really nice. I used a conventional soldering iron, as there aren't too many parts, and the sizes are manageable. I used a file to round off the corners of the PCB and get rid of the sharp edges, and cleaned off the solder flux with brake cleaner (not sure what's in this, but it smells like it should be banned).
On the BLE interface connector, the microcontroller MOSI, MISO, SCK and /RESET pins are accessible, and can be used to program it. I wrote a quick bit of code to flash one LED, and it lives! I noticed a mistake I'd made in the circuit - I had thought that port C was a full 8-bit I/O port, but the top two bits can only be used as ADC inputs. This meant that two mux control lines weren't accessible.
Luckily I could connect them to the SDA and SCL lines, which I had reserved for communication with the Raspberry Pi. I hadn't decided whether to use I2C or the UART, so made both SDA/SCL and TX/RX available on the interface pins. The required data transfer rate is rather low, and the UART will suffice, so the SDA/SCL lines can be repurposed as general purpose I/O to control the two missing mux control lines. I added a couple of wires to the top of the board to make the connections and no track cutting was necessary. I've updated the schematic appropriately.
I've also made a connection cable for the Raspberry Pi, which plugs into GPIO pins 1-10 with a short length of IDC ribbon cable. The cable connects the 5V, 3.3V, ground, TX and RX lines from the Pi to the lollipop interface board. A 9V battery clip provides the power connector for the lollipop 3-18V CMOS supply.
A momentary pushbutton switch provides a way to shut down the Pi when running headless. An LED and resistor in series connect the 3.3V line to the Pi's GPIO4 pin, pulling the pin up to 3.3V, and the push switch is connected between this pin and ground. Pressing the switch lights the LED, and by pulling the GPIO pin down to ground it can also be detected by the software. If necessary, the software could also turn on the LED when the switch is not pressed, by driving the pin low; I need to avoid driving the pin high, though, to prevent a possible short to ground via the button.
Here's a mockup image of how the lollipop board might look with the BLE interface fitted (not intended for the current prototype, but to be supported in the future):
The next steps are to write the microcontroller software, and then get the Raspberry Pi talking to the lollipop interface via the UART connection.
10/02/2016 at 20:55
Whilst waiting for the PCBs to arrive, I started on the software that will run on the Pi. This needs to read a sequence of still frames from the camera, convert them to a low resolution and then send them to the interface lollipop. For now I'll concentrate on the reading from the camera, and leave the interfacing part until later.
The final prototype will run headless, but during development I wanted a basic on-screen user interface that will allow the selection of some test patterns, and the ability to view the processed camera image in real time. As I'll be writing the software in C, I decided to use the very nice OpenVG library from Anthony Starks, which offers a quick way to draw text and graphics on the Pi independent of X:
The images will be captured by using raspistill in signal mode. Every time we send SIGUSR1 to the raspistill process, it will save an image from the camera. The following techniques are used to improve the speed:
- The -bm flag selects burst mode, where exposure parameters are only set once, at the beginning of capture
- The -e flag is used to save the image in .bmp format, for quick reading without any CPU-based decompression
- To save in monochrome, we use "-cfx 128:128"
- The preview image is sized and positioned to fit in the allocated space on screen
- The saved image dimensions are set to 128x128 pixels
- The image is saved into a RAM disk and not on to the SD card (each saved frame is deleted as it's processed)
Using these techniques, the software can easily process 20 frames per second on a Raspberry Pi 2B, including the simple conversion down to 14x14 pixels. The limiting factor becomes the screen redrawing time. To improve the speed over the ajstarks library functions, the software implements a faster rectangle plotting function that reuses an existing path object, rather than creating and destroying a path for each plot. Similarly, the greyscale colours are cached and used repeatedly.
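The conversion from the 128x128 capture down to 14x14 amounts to block averaging. A minimal sketch of the idea (the function and constant names here are mine, not necessarily those used in the repo; since 128 doesn't divide evenly by 14, the block boundaries are simply rounded):

```c
#include <stdint.h>

#define SRC 128
#define DST 14

/* Average each source block that maps onto one destination pixel.
 * Block boundaries are computed as y*SRC/DST, so the 128 source rows
 * are divided as evenly as possible among the 14 output rows. */
static void downsample(const uint8_t src[SRC][SRC], uint8_t dst[DST][DST])
{
    for (int dy = 0; dy < DST; dy++) {
        for (int dx = 0; dx < DST; dx++) {
            int y0 = dy * SRC / DST, y1 = (dy + 1) * SRC / DST;
            int x0 = dx * SRC / DST, x1 = (dx + 1) * SRC / DST;
            unsigned sum = 0;
            for (int y = y0; y < y1; y++)
                for (int x = x0; x < x1; x++)
                    sum += src[y][x];
            dst[dy][dx] = (uint8_t)(sum / ((y1 - y0) * (x1 - x0)));
        }
    }
}
```

Averaging (rather than just picking one source pixel per output pixel) avoids losing small bright features between sample points.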
Rather than using system calls to identify the raspistill process, the software forks at the beginning and the child process becomes raspistill, with fork() itself returning the child's PID to the parent.
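The fork-and-exec arrangement can be sketched as below. Error handling is kept minimal, and the raspistill arguments shown in the comment are illustrative rather than copied from the repo:

```c
#include <signal.h>
#include <sys/types.h>
#include <unistd.h>

/* Fork, turn the child into the given command, and return the child's
 * PID to the parent - fork() hands it over directly, no IPC needed. */
static pid_t spawn(char *const argv[])
{
    pid_t pid = fork();
    if (pid == 0) {            /* child: become the camera process */
        execvp(argv[0], argv);
        _exit(127);            /* only reached if exec fails */
    }
    return pid;                /* parent: child's PID, or -1 on error */
}

/* Ask raspistill (running in signal mode) to capture one frame. */
static int trigger_capture(pid_t pid)
{
    return kill(pid, SIGUSR1);
}
```

In the real program, spawn() would be called with something like { "raspistill", "-s", "-bm", "-e", "bmp", "-cfx", "128:128", ... } with the output file on the RAM disk, and trigger_capture() called once per frame.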
I've defined several test patterns in the software, which can be individually selected as an alternative to the camera image. There is also a basic function to stretch the contrast of the image, or to convert it to black/white only. The output window shows in real time the data that will be sent to the lollipop interface.
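The contrast stretch and black/white conversion are simple per-pixel operations. A sketch under my own naming (the actual functions in the repo may differ):

```c
#include <stddef.h>
#include <stdint.h>

/* Linearly remap the darkest pixel to 0 and the brightest to 255. */
static void stretch_contrast(uint8_t *px, size_t n)
{
    uint8_t lo = 255, hi = 0;
    for (size_t i = 0; i < n; i++) {
        if (px[i] < lo) lo = px[i];
        if (px[i] > hi) hi = px[i];
    }
    if (hi == lo)
        return;                /* flat image: nothing to stretch */
    for (size_t i = 0; i < n; i++)
        px[i] = (uint8_t)((px[i] - lo) * 255 / (hi - lo));
}

/* Convert to black/white only, around a fixed threshold. */
static void to_black_white(uint8_t *px, size_t n, uint8_t threshold)
{
    for (size_t i = 0; i < n; i++)
        px[i] = (px[i] >= threshold) ? 255 : 0;
}
```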
With the software running on the Pi, the screen output looks like the screenshot below, with some pretty pictures at the top, then the various test patterns that can be selected, and on the bottom the live preview from the camera and the actual output that will be sent to the interface lollipop:
Test patterns 6 and 7 are animated, with 6 ramping up in brightness and 7 rotating around the centre point.
The C source and Makefile are available on github (MIT license):
...and now the PCBs have arrived!
09/28/2016 at 21:20
So far the biggest part of the project has been the layout of the interface PCB, and I began working on this a couple of weeks before starting to enter the project details and logs. The layout is now complete! Even as a beginner, I found Eagle quite straightforward to use, and it was easy to produce a nice layout. The resulting board is about 80x30mm in size, which I think is fine for the prototype. I managed to pack the components reasonably tightly, but I only populated one side of the board, so there might be room for a bit of future size optimisation (although there's plenty of routing on the reverse side).
I've reserved space to stack the Adafruit BLE module on top of the board without increasing the footprint. I've tried to keep the layout neat and tidy, and added some labelling on the silkscreen layer to identify the pinouts. I also added a small drawing of a platypus - a mammal that can sense electric fields!
Here's the final (prototype!) board layout:
To make the gerber files I went back to the Sparkfun tutorial mentioned earlier, using their suggested CAM file:
I wanted to check the gerber files in a tool other than Eagle, as a sanity check. I found gerbv, an open-source gerber viewer that works really well:
I used this to check particularly that the solder mask around the vias was dimensioned correctly to produce the annular rings.
I plan to make all the design and manufacturing files freely available, but first I want to make sure that it actually works. I've placed an order for 5 boards from pcbway in China. I don't have any experience of this supplier (or any supplier, in fact), but the European manufacturers seem to be quite expensive if you want a short lead time, and to get boards reasonably quickly it's cheaper to get them from China, with airmail shipping.
I chose lead-free HASL surface finish, which is cheap. ENIG (gold) would be more appropriate for the sense matrix, but I wanted to keep the cost down for these first boards, as I don't know yet whether there are any mistakes in the design that would make the board unusable. So there's no gold finish for now.
I ordered all the SMD parts from RS Components in the UK, and found you can even buy 0805 capacitors in quantities of 5! I did check that I could easily get all the required parts in the intended packages before I started the PCB layout, to avoid designing for something that was unobtainable.
With the sense lollipop components and boards on the way, it's time to think about the supporting software on the Raspberry Pi.
09/27/2016 at 20:07
After drawing the lollipop interface schematic in Eagle, I started with the PCB layout. As a beginner, I found the following tutorials from Sparkfun very helpful, to get to know the basics for both through-hole and SMD layout:
The challenge here was to make the board as small as possible. After drawing a few vias with standard 0.6 mm drill hole diameter, I decided that a matrix size of 14x14 pixels was optimal to avoid making the board too wide. The routing of signals from the vias is a consideration as well as the space required for the vias themselves. This number of pixels should be more than sufficient to create a usable image.
In order to make the required layout for the electrodes, I used the polygon tool to create a rectangular strip of copper for each column, on the reverse side of the board, based on a 0.025 inch grid. After placing all the vias within the copper strips (but not connected to them), I made the rows by joining the vias vertically with tracks on the front side of the board.
To gain the necessary clearances for the outer ring around each via, I set the design rules to specify a track-to-via clearance of 8 mil, and a stop mask clearance of 16 mil. Since the stop mask opening extends 16 mil beyond the via pad while the surrounding copper keeps an 8 mil gap from it, this produces an exposed annular ring of copper 8 mil thick around each via. When it comes to the production of the PCB, these dimensions need to be respected for the board to function as intended.
After drawing the complete matrix, the detail of the electrode construction looks like this:
I began to place the other components. After trying the autorouter I realised that there would be no chance of making a compact layout unless the routing was done manually. Starting with the drivers for the rows and columns, I came up with a fairly compact routing, using both sides of the board. The other parts are in approximate positions with the routing for them still to do. The board is starting to take shape:
09/21/2016 at 19:10
To recap, the lollipop interface circuit will consist of:
- ATMega328P microcontroller
- MC14555P CMOS high-side row multiplexer
- 74LS156D open-collector column multiplexer
The CMOS part will have a variable supply voltage. To allow it to remain compatible with the control signals from the microcontroller, I'll use a buffer IC to perform level conversion. By using a part with open-collector drivers, the outputs can be pulled up to the CMOS voltage:
- 74LS07 hex non-inverting buffer with open collector outputs
With the addition of pullup resistors, a few decoupling capacitors and a crystal for the microcontroller to allow accurate timing, it should be possible to design a relatively small board, given the low number of components.
I would also like the option of a Bluetooth interface, to allow pairing with a smartphone. The SPI version of the Adafruit BLE module looks suitable, so I'll try to support this. This is a 3V module, so it makes sense to run the microcontroller at 3V as well - this won't cause any problems with the control signals for the 5V 74LS parts, as a quick look at the datasheets shows that the levels remain compatible.
Although I have some experience of building on stripboard/veroboard, I've never designed my own PCB before. So it will be a challenge to make a well laid-out board while learning how to use the software tools.
After doing a bit of research, I decided to try Eagle, mainly due to its ubiquity and the availability of a Linux version. I can use the freeware licence with this project, as it's non-commercial and the board size will be below the imposed limits.
Here's the completed circuit diagram, as drawn in Eagle:
09/18/2016 at 20:30
To determine a suitable size for the interface lollipop, I made some samples with cardboard. I decided that the part in contact with the tongue needs to be no larger than 40x30mm, to fit comfortably inside the mouth. But for the initial prototype, it will be difficult to include the sense matrix and the required driving electronics in such a small area. By making the board slightly longer, but still 30mm in width, there should be enough space. It will stick out of the mouth, but this will give something to hold on to and won't look too awkward.
I decided to design the driving circuitry for a sense matrix resolution of 16x16 pixels. I'm not sure yet whether this number of pixels will fit into the available area, but this is the goal.
To drive the sense matrix I chose the ATMega328P 8-bit microcontroller, because of its versatility and ease of use. It's available in a TQFP package with 20 I/O pins, plus SPI and UART, and is straightforward to program in assembler or C. It's also widely available at low cost, and can be flashed using a simple parallel cable.
To drive the 16 rows and 16 columns, I will need to multiplex the limited number of I/O lines. I intend to use standard logic parts to do this. For the high side (rows), I will use a CMOS part, MC14555P, a dual 1-of-4 mux. By using two of these ICs, I can use 6 I/O lines of the microcontroller to individually select 1 of 16 rows. I can set the row voltage to anything within the allowed CMOS supply range, i.e. from 3 to 18V, simply by varying the supply voltage to the mux ICs.
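One plausible way to get 16 rows out of 6 lines is to share the two select inputs across all four mux halves and use four enable lines to pick which half drives its output. This is an illustration of the decoding, not necessarily how the board is actually wired:

```c
#include <stdint.h>

/* Hypothetical decoding of a row number (0-15) onto 6 control lines:
 * two shared select lines (A, B) plus a one-hot enable for one of the
 * four mux halves across the two dual 1-of-4 ICs. */
struct row_select {
    uint8_t ab;         /* shared select lines: row & 3              */
    uint8_t enable;     /* one-hot enable for mux half (row >> 2)    */
};

static struct row_select decode_row(uint8_t row)
{
    struct row_select s;
    s.ab     = row & 0x03;
    s.enable = (uint8_t)(1u << (row >> 2));
    return s;
}
```

Whether the enables are active-high or active-low in practice depends on the chosen parts and wiring, so the polarity here is arbitrary.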
For the low side (columns), I will use a similar dual 1-of-4 mux, but with open collector outputs. A suitable part is the 74LS156. When an output is high, it will float, rather than being driven to the 5V supply voltage of the 74LS156, so no current will be drawn through the sense matrix. When low, the column will be pulled down to ground and the pixel in the intersecting active row will see a voltage across it.
To test out the concept, I built a circuit on breadboard using a high side CMOS part and a low side open-collector 74LS part. The parts used were a 4049 (inverters) and 74LS03 (open-collector NANDs) as I had these to hand. The 74LS was powered by a 7805 regulator, and I used a 741 op amp as a voltage follower to generate a variable supply for the 4049. This let me set the CMOS voltage between 3V and 12V or so when connected to a 16V plug-in supply. I held the high and low output wires against my tongue, about 1mm apart.
I found that this setup worked quite nicely, and by adjusting the CMOS voltage I could set the intensity of the stimulation. At 3V it wasn't noticeable, with the effect starting at about 4-5V. This will depend on the amount of moisture present. Around 6V-7V was best, with 8V being high and 9V rather too high for comfort.
So the concept of using standard logic parts to drive the rows and columns seems to work, and the use of CMOS high side and open-collector low side allows the stimulation voltage to be easily controlled. Now we need to connect these to the microcontroller.
09/15/2016 at 20:41
The interface lollipop needs an array of contacts that will stimulate the surface of the tongue. How will this be implemented? I plan to use standard PCB features without any exotic small dimensions, to allow for cheap and easy manufacturing.
To avoid having an enormous number of lines to control each "pixel" individually, I will make an array of rows and columns, where each row and each column has a separate driver. Only one row and one column will be driven at a time. The intersection of the active row and active column will switch on a particular "pixel".
I plan to implement this using a 2-sided PCB with vias for each "pixel". Each column of vias will be connected vertically on the non-contact side of the board. On the contact side, horizontal rows of copper will surround each row of vias, but the vias will be isolated from the copper by a thin annular gap. By selecting the appropriate diameter for the soldermask removal around each via, a copper ring will be exposed around each via. The soldermask will cover the rest of the contact side of the board.
The vias (each column) will be driven high when selected, and the copper rings (each row) will be pulled low. The voltage across the gap from via to ring will stimulate the tongue (the voltage will be quite low!).
When a particular row and column are driven, there will also be a voltage drop between all the vias in the column, and all the rings in the row. But the distance between them will be much greater than the small separation on the intersecting "pixel". My hope is that the surrounding "low level" pixel stimulation will be negligible, but it remains to be seen how effective this method is. There is plenty of scope for adjusting the voltage and the pulse duration to an appropriate level to get a good result at the target pixel, while limiting the spurious effect along the associated row and column.
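Scanning the matrix then amounts to visiting each pixel in turn and holding its row/column pair active for a time proportional to the desired intensity. A rough sketch of that loop, with the hardware access stubbed out (select_row, select_column and hold_us are placeholders for the real mux/timing code, and the counter exists only so the sketch can be exercised):

```c
#include <stdint.h>

#define ROWS 16
#define COLS 16

/* Placeholder hardware hooks - in real firmware these would set the
 * mux select lines and busy-wait for the given time. The counter is
 * just instrumentation for this sketch. */
static unsigned column_selects;
static void select_row(int r)    { (void)r; }
static void select_column(int c) { (void)c; column_selects++; }
static void hold_us(unsigned us) { (void)us; }

/* Drive one full frame: each pixel's on-time is scaled by its
 * brightness, which also keeps the spurious stimulation along the
 * active row and column brief. */
static void scan_frame(const uint8_t frame[ROWS][COLS], unsigned max_us)
{
    for (int r = 0; r < ROWS; r++) {
        select_row(r);
        for (int c = 0; c < COLS; c++) {
            if (frame[r][c] == 0)
                continue;          /* leave dark pixels unpowered */
            select_column(c);
            hold_us(max_us * frame[r][c] / 255u);
        }
    }
}
```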