I always liked toy robots and electronic pets, but they were rarely actually smart. Most just made random noises and sounds as a crude facsimile of "intelligence". If you were lucky, you might come across one that could respond to sound from another.
So I figured: since I've got the skillset to at least try to make my own, why not do that?
Just bringing in a wide-angle camera and using OpenCV by itself would allow for something eclipsing those toys. It is also a great opportunity to try my hand at managing a complex project by dividing its components into chunks via a messaging system like ROS2.
The first step to get anywhere is getting ROS2 and the camera working. Ideally that would have been a matter of some apt-gets, except there were a few complications:
- ROS2 is native to Ubuntu LTS.
- The latest LTS is Ubuntu 22.04.
- The Camera Module 3 is a new module from 2023 and requires the latest camera library and kernel drivers. Otherwise it ain't detected.
- Only the new Debian-based Raspberry Pi OS and Ubuntu 23.10 ship these.
- So no native ROS2 binaries exist for a distro that supports the camera...
This means I don't get an easy "apt install ros-iron" and call it a day. Either I need to get the driver working on the older distro, wait till May 2024 for a new Ubuntu LTS + ROS2 release, or get ROS2 working on the newer distro via alternative means. Since the driver is meant for a significantly newer Linux kernel and I know nothing about drivers, the first option is pretty much off the table. I've got to find a different way to get ROS2.
There are two ways to do this: either I build it myself, or I use a Docker container.
I would prefer not to wrestle with Docker just yet, even though it is the easiest way to get things running. If I can run something without any overhead, I'd rather do that. So I figured building ROS2 would be worth a try. I can always migrate to a newer distro with Docker later and see if it has any impact.
Let me pre-empt this by saying I came to regret this decision, and I urge anyone to just use 24.04 LTS + ROS2 Jazzy (if available by then) or use Docker to run it on Raspberry Pi OS. The following is more for the curious/brave. It took weeks to get things compiled and working on a Zero 2 (memory constraints). NO guarantee the following even works!!!
How to BUILD ROS2 on RPi Ubuntu Server 23.10:
This assumes a freshly flashed OS, booted into for the first time and accessed via SSH.
Step 1) Prep the Distro.
First things first: Ubuntu doesn't start with swap space, which isn't a good thing on a tiny Pi like the Zero 2. We've got to get that fixed, preferably before the kernel starts to throw a fit and the Pi freezes. First we allocate about 2 GB to a swapfile, give it permissions and then make it a swap.
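Those steps look roughly like this. A sketch, not gospel: the path /swapfile and the 2 GB size are my assumptions, so adjust to taste.

```shell
# Allocate a 2 GB swapfile (path and size are placeholders)
sudo fallocate -l 2G /swapfile
# Restrict permissions so only root can touch it (mkswap insists on this)
sudo chmod 600 /swapfile
# Format it as swap and turn it on
sudo mkswap /swapfile
sudo swapon /swapfile
# Optional: add it to fstab so it survives reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```

Afterwards `free -h` should show the swap line sitting at roughly 2.0Gi.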
This should hopefully get everything that is needed. If some parts don't install, note them down for later checking. Some Python parts may not want to install via the apt approach. Often you can force-install them anyway via "pip install PACKAGE --break-system-packages". That flag tells pip to ignore the system package manager and do it anyway.
Step 3) Prep the source and get the ROS dependencies
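For reference, the upstream source-build procedure from the ROS2 documentation boils down to roughly the following. Treat it as a sketch: the workspace path is my choice, and the exact skip-keys list shifts between releases, so check the docs for the release you're building.

```shell
# Fetch the ROS2 Iron sources into a workspace
mkdir -p ~/ros2_iron/src
cd ~/ros2_iron
vcs import --input https://raw.githubusercontent.com/ros2/ros2/iron/ros2.repos src

# Resolve and install the system dependencies with rosdep
sudo rosdep init
rosdep update
rosdep install --from-paths src --ignore-src -y \
  --skip-keys "fastcdr rti-connext-dds-6.0.1 urdfdom_headers"
```

On a Zero 2 the later colcon build is where the memory pain starts, which is exactly why the swap from Step 1 matters.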
First things first: figuring out some things to look at and experiment with.
The idea isn't the most complex. No actual mobility is needed; it just needs to be able to emote. For that, the ability to move its head should be enough, along with some OLED eyes. I came up with an idea involving six servos: three for the head movement, one for the neck (to tilt forward/backward) and two for the ears.
The overall skeletal structure would be akin to your typical Egyptian cat statue posture. The body holds the logic, with ribbon cables going to the various components. The outer shell is mounted with screws onto the skeletal structure. The initial aim is for a low-poly look, because I think it would make my life a little easier and not look ghastly for a first try.
The only thing I am not certain about is how to deal with the camera. I am dead set on using a wide-angle one. It gives a very wide field of view (about 120 degrees), which makes it ill-suited for photos but great for detection. Mounting it in the head would be the obvious way, but the wide view may be enough for a torso mount. Something to try out first.
This will take a bit more than your average microcontroller to pull off. So a single-board computer is the obvious choice, and there the Raspberry Pi family stands unrivaled. Plus they apparently aren't made out of unobtainium anymore.
Gonna try to get the job done with a Zero 2 first. It is the smallest, and the lack of USB and Ethernet is of no concern for this project. The only limitation is memory: at 512 MB it doesn't have much room, but by going headless it should be possible to keep memory use low enough.
To handle some of the more specific I/O, such as controlling the servos, an RP2040 is to be paired up as a co-controller. An advantage is that the Raspberry Pi itself can program the RP2040 by bit-banging an SWD interface.
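As a sketch of what that looks like in practice: the Pico getting-started guide does this with a Pi-hosted OpenOCD build that bit-bangs SWD over two GPIO pins. The filename `firmware.elf` here is a placeholder for whatever you built for the RP2040.

```shell
# Flash an RP2040 over bit-banged SWD from the Pi's GPIO header.
# Requires the Raspberry Pi-enabled OpenOCD build from the Pico guide.
openocd -f interface/raspberrypi-swd.cfg -f target/rp2040.cfg \
        -c "program firmware.elf verify reset exit"
```

Handy side effect: the animatronic can reflash its own co-controller without a USB cable in the loop.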
The main form of input will be a camera. Ideally the Camera Module 3 Wide, as its wide field of view is great for detection tasks.
Other inputs will be figured out as I go along.
A project of this scope is kind of annoying to do in a single program: so many components that need individual testing, calibration and constant iteration. Since the project is already planned around a single-board computer with a regular operating system, it makes sense to use its ability to handle task scheduling and divide the project over multiple individual programs that together make up the animatronic's software.
To let the processes operate in tandem, a method of inter-process communication is needed that lets them pass data to each other. E.g. the vision program tells the behaviour agent that a face was spotted, the behaviour agent then tells the animation program to track it, and so on. A simple Subscriber/Publisher or Request/Reply message system should do the trick.
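The pattern itself is tiny. As an illustration only (none of these names come from ROS2 or ZeroMQ; the topic strings are made up), a publish/subscribe bus can be sketched in a few lines of Python:

```python
from collections import defaultdict

class MessageBus:
    """Minimal in-process publish/subscribe bus (illustration only)."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback to run for every message on `topic`."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Deliver `message` to every subscriber of `topic`."""
        for callback in self._subscribers[topic]:
            callback(message)

# Hypothetical wiring: the vision node announces a face,
# the behaviour node collects the event.
bus = MessageBus()
events = []
bus.subscribe("vision/face_detected", events.append)
bus.publish("vision/face_detected", {"x": 120, "y": 80})
# events now holds [{"x": 120, "y": 80}]
```

The real systems below do the same thing, except the publisher and subscriber live in separate processes and discovery/transport is handled for you.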
Ideally I would use the Robot Operating System 2 (ROS2), which was built specifically for this kind of task. It gives any C++/Python program run through it access to a Data Distribution Service (DDS) that lets the programs discover each other and pass messages around. It is, however, not the easiest to get working, as binaries are mainly maintained for specific Ubuntu releases. On anything else you've got to build it yourself.
Alternatively, if ROS2 proves difficult to get operational (you never know with embedded Linux), ZeroMQ can act as a fallback. Much like ROS2 it lets processes pass messages to each other, but instead of a DDS it uses more conventional transports: TCP between devices, or Unix domain sockets (the ipc:// transport) between local processes. It is lightweight, and although the ipc:// endpoint shows up as a file on disk, that file is just a socket handle, so the messages themselves don't grind away at the flash storage. Still, ROS2 remains my first choice.
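A minimal ZeroMQ PUB/SUB pair looks like this, assuming pyzmq is installed (`pip install pyzmq`). Both ends live in one script here purely to show the handshake; the port and topic names are my inventions.

```python
import time
import zmq  # pyzmq, assumed to be installed

ctx = zmq.Context.instance()

# Publisher side (e.g. the vision process)
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://127.0.0.1:5556")

# Subscriber side (e.g. the behaviour agent)
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:5556")
sub.setsockopt(zmq.SUBSCRIBE, b"vision")  # prefix-based topic filter
sub.setsockopt(zmq.RCVTIMEO, 2000)        # error out instead of blocking forever

# PUB/SUB has a "slow joiner" quirk: give the SUB a moment to connect,
# or the first message is silently dropped.
time.sleep(0.3)

pub.send_multipart([b"vision", b"face_detected"])
topic, payload = sub.recv_multipart()
```

On the actual robot each side would run in its own process, and the tcp:// endpoint could be swapped for an ipc:// one between local programs.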