
Tiny Book Review on TinyML by Pete Warden & Daniel Situnayake

maxmax wrote 04/23/2020 at 18:40 • 7 min read

On the one hand, I feel lucky to have been born where and when I was, which was in Sheffield, Yorkshire (God's own county), England, in 1957. Due to my good fortune, I got to see the 1960s firsthand, albeit through the eyes of a young lad. I also got to see the very first episode of Doctor Who in 1963, the first humans land on the Moon in 1969, the first microprocessor-based home computers in the 1970s, the first IBM PC in 1981, and the introduction of the internet and World Wide Web to the general public in 1993.


I'll be 63 this year, which means I'll be celebrating the 21st anniversary of the 21st anniversary of my 21st birthday, and that's not something you get to say too often. Of course, on the basis that it's always nice to have something to look forward to, I'll be celebrating my 100th birthday next year if we count in octal (base 8).

Generally speaking, I have few regrets. Having said this, I wish I knew more about digital signal processing (DSP), but I fear the math is beyond the capabilities of my poor old noggin. When I graduated high school and commenced working on my degree in 1975, the engineering department at the university was in possession of only an analog computer. There was a digital computer in another building, but this was shared across all of the university's departments.

We had to create our programs in FORTRAN, capture them on punched cards, and hand-carry the deck of cards to the guardians of the machine, to be added to the run schedule at some indeterminate time in the future. The typical debug cycle ("Missing comma on line 2") took a week to resolve, so the best we could hope for was to get one simple program to work each semester. Creating a DSP program simply wouldn't have been feasible, even if the students (or the lecturers) had a clue what the DSP acronym stood for.

More recently, I've observed the rise of artificial intelligence (AI), machine learning (ML), and deep learning (DL) (see also What the FAQ are AI, ANNs, ML, DL, and DNNs?). I'm amazed by what I hear about the high-end cloud-based AI systems created using tools like Google's TensorFlow. I've also been blown away by machine-vision AI applications that can perform object detection and recognition. In addition to things like face detection, some systems can determine age, gender, and emotion ("There's a 98% probability that 25-year-old man is not happy I'm looking at him!").

I've also been interested to see the rise of new companies and devices that are especially targeted at AI applications. Just a couple of weeks ago, for example, a new company called Perceive emerged from stealth mode. The folks at Perceive claim to have reinvented neural network mathematics using information theory, and they've created a chip called Ergo (which is a Latin word meaning "therefore").


Ergo delivers over 4 TOPS (tera operations per second) peak performance at less than 1/10 watt peak power. Do the math and that works out to more than 40 TOPS per watt. I don't care what anyone says, that's a lot of TOPS.

Wading through the bumf, we discover that Ergo can run artificial neural networks (ANNs) with in excess of 100 million weights and a size exceeding 400 MB. Furthermore, it can run multiple networks concurrently, so one network can be detecting and identifying objects, another can be homing in on faces, while a third is processing sound.

Now, all of this is very exciting, but I fear creating AI applications to run on something of Ergo's caliber is beyond my humble capabilities. My first glimmering of hope was when I was exposed to the concept of the NanoEdge AI Studio from Cartesiam (see also Any Embedded Developer Can Create AI/ML Systems).

The idea here is that NanoEdge AI Studio starts by asking you a series of questions, including what sort of processor you intend to run on (choices are Arm Cortex-M0, M0+, M3, M4, and M7), how much RAM you wish to devote to your AI/ML solution (it can generate solutions that require only 4 KB to 16 KB of RAM), and the number and types of sensors you wish to use. NanoEdge AI Studio then generates the AI model that you integrate into your own code. Furthermore, it trains the model for you using real-world data from your sensors. How sweet is that?

The idea of a small AI system running on a relatively low-end microcontroller unit (MCU) falls under the category of TinyML (“tiny machine learning”). I only recently learned that this is a thing. It even has its own industry gathering in the form of the TinyML Summit (see also TinyML Packs a Punch).

In fact, it was my becoming aware of the TinyML concept that also introduced me to the recently published book TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers, which was authored by Pete Warden & Daniel Situnayake. The paperback version of this little beauty is available on Amazon Prime for $33, and let me say right here and now that it's worth every penny.

The book commences by teaching the fundamental concepts of machine learning along with the deep learning workflow: decide on a goal, collect a dataset, design a model architecture (on a PC), train the model (on a PC), convert the model (to run on your MCU), run inference, and evaluate and troubleshoot the results. I learned so much in the first three chapters (only 28 pages) that I was imbued with a new-found confidence.
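
To give you a flavor of what this workflow looks like in practice, here's a little sketch I put together myself in the spirit of the book's sine-wave "hello world" example (I should note this is not a listing from the book -- it's just the general idea expressed using standard TensorFlow and Keras calls): a toy model is trained on the host PC and then converted into a TensorFlow Lite file that's ready to be embedded in an MCU project.

```python
# A toy example of the PC-side workflow: collect (synthesize) data, design a
# small model, train it, and convert it to TensorFlow Lite format for an MCU.
# (My own sketch in the spirit of the book's sine-wave example, not a listing
# copied from the book.)
import numpy as np
import tensorflow as tf

# Collect a dataset -- here, synthetic samples of y = sin(x).
x_train = np.random.uniform(0, 2 * np.pi, (1000, 1)).astype(np.float32)
y_train = np.sin(x_train)

# Design a model architecture small enough for a microcontroller.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(1,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Train the model on the PC.
model.compile(optimizer="adam", loss="mse")
model.fit(x_train, y_train, epochs=100, batch_size=16, verbose=0)

# Convert the trained model to a TensorFlow Lite flatbuffer for the MCU.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional size optimization
tflite_model = converter.convert()

with open("sine_model.tflite", "wb") as f:
    f.write(tflite_model)
```

On the MCU side, the resulting .tflite file is typically turned into a C array and compiled into the project, where the TensorFlow Lite for Microcontrollers runtime performs the inference step.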

In addition to a host PC that you use to build and train your model, you also need a suitable MCU development board on which to deploy your TinyML system. If you wish to do something interesting, you're going to need a board that has a microphone, accelerometers, other sensors, and the ability to attach a camera, but which board should you choose?

It turns out that the authors worked with the folks at chip manufacturer Ambiq and the guys and gals at SparkFun to produce the $15 SparkFun Edge Board. In addition to its MCU, this board features two MEMS microphones, a 3-axis accelerometer, and a connector to interface to a camera. Sad to relate, the reviews thus far have been "so-so," but I believe a new revision of this board is in the works.

Fortunately, the authors also recommend other boards that will run some, or all, of the experiments described in the book. One board that runs all the experiments is the Arduino Nano 33 BLE, which costs only $26 on Amazon Prime. In addition to its MCU, this little scamp boasts a microphone; a 9-axis inertial sensor; temperature, humidity, and barometric pressure sensors; and a gesture, proximity, light color, and light intensity sensor.

Now, I should remind you that I'm a hardware design engineer by trade. Although I can write rudimentary C/C++ code, my software skills are not of the highest order. I've also heard dreadful tales about the complexity inherent in creating ANNs. Thus, if the truth be told, had you asked me before I read this book, I would have said that creating even a rudimentary AI/ML/DL model would be beyond my capabilities. However, it turns out that some really clever people have created some amazingly sophisticated tools and environments that remove most of the complexity as seen by a clueless user like your humble narrator.

All of the software you need to create, train, convert, and run your TinyML applications can be accessed for free as described in the book. The authors walk you through everything step-by-step as you create, train, run, and refine your models to perform tasks such as wake-word detection, person detection, and a magic wand that can be used to "cast spells" by waving it in different ways.
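
For what it's worth, before you flash anything onto a board, you can also sanity-check a converted model on your PC using TensorFlow Lite's standard Python interpreter. The little sketch below is mine rather than the book's (and it continues the toy sine model from my earlier sketch), but it shows the general idea:

```python
# Sanity-check the converted .tflite model on the host PC before deploying it
# to the MCU. (My own sketch continuing the toy sine example above, not a
# listing from the book.)
import numpy as np
import tensorflow as tf

# Load the flatbuffer produced by the earlier conversion step.
interpreter = tf.lite.Interpreter(model_path="sine_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a single test value through the model and read back the prediction.
x_test = np.array([[1.57]], dtype=np.float32)  # ~pi/2, so sin(x) should be ~1.0
interpreter.set_tensor(input_details[0]["index"], x_test)
interpreter.invoke()
y_pred = interpreter.get_tensor(output_details[0]["index"])

print("Model prediction:", y_pred[0][0])
```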

On the bright side, I love all of the technologies that are heading our way. I fervently believe that, in the not-so-distant future, we will have headsets that combine AI with augmented reality (AR), diminished reality (DR), and augmented virtuality (AV), and that the result will change the way we interface with our systems, the world, and each other (see also What the FAQ are VR, MR, AR, DR, AV, and HR?).

On the downside, I must admit that I was starting to fear being left behind. I've had visions of sitting in a rocking chair on the front porch in my dotage watching the world go by -- only a user of technology, no longer a creator (sad face). But turn that frown upside down into a smile, because there's promise for the old dog yet. It turns out that, against all expectations, when armed with a book like TinyML, you can teach an old Max new tricks!

