
Concept

A project log for ROSFOX

A silly desktop animatronic using an rPi, OpenCV and ROS2

FoxHood, 12/06/2023 at 19:15

First things first: trying to figure out some things to look at and experiment with.

Mechanical

The idea isn't the most complex. No actual mobility is needed; it just has to be able to emote, for which the ability to move its head should be enough, along with some OLED eyes. I came up with an idea involving six servos: three for head movement, one for the neck (to tilt forward/backwards) and two for the ears.

The overall skeletal structure would be akin to your typical Egyptian cat statue posture. The body holds the logic, with ribbon cables going to the various components. The outer shell is mounted with screws onto the skeletal structure. The initial aim is a low-poly look, because I think it would make my life a little easier and not look ghastly for a first try.

The only thing I am not certain of is how to deal with the camera. I am dead set on using a wide-angle camera: the very wide field of view (about 120 degrees) makes them ill-suited for photos, but great for detection. Mounting it in the head would be the obvious way, but the wide view may be enough for a torso mount. Something to try out first.

Hardware

This will take a bit more than your average microcontroller to pull off, so a single-board computer is the obvious choice, and there the Raspberry Pi family stands unrivaled. Plus they apparently aren't made out of unobtanium anymore.

Gonna try to get the job done with a Zero 2 first. It is the smallest, and the lack of USB and Ethernet is of no concern for this project. The only limitation is memory: at 512 MB it doesn't have that much room, but by going headless it should be possible to keep memory use low enough.

To handle some of the more specific I/O, such as controlling the servos, an RP2040 is to be paired up as a co-controller. An advantage is that the Raspberry Pi itself can program the RP2040 by bitbanging an SWD interface.
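As a reference point, a minimal sketch of kicking off such a flash from the Pi with Python, assuming an OpenOCD build that includes the raspberrypi-swd interface (the workflow described in the Pico getting-started guide); the firmware filename is just a placeholder.

```python
# Sketch: flash the RP2040 co-controller over bitbanged SWD from the Pi,
# by shelling out to OpenOCD. Assumes OpenOCD was built with the
# raspberrypi-swd interface; "firmware.elf" is a placeholder.
import subprocess

subprocess.run(
    [
        "openocd",
        "-f", "interface/raspberrypi-swd.cfg",
        "-f", "target/rp2040.cfg",
        "-c", "program firmware.elf verify reset exit",
    ],
    check=True,  # raise if OpenOCD reports a failure
)
```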

The main form of input will be a camera. Ideally the Camera Module 3 Wide is used, as its wide field of view is great for detection tasks.
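As a first sanity check, a minimal OpenCV detection sketch, assuming the camera is exposed as V4L2 device 0 (e.g. through the libcamera compatibility layer) and using the stock Haar cascade that ships with opencv-python:

```python
# Sketch: spot faces on the wide-angle camera feed with OpenCV.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)  # assumes the camera shows up as /dev/video0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces):
        print(f"spotted {len(faces)} face(s)")

cap.release()
```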

Other inputs will be figured out as I go along.

Software

A project of this scope is kind of annoying to do as a single program: so many components need individual testing, calibration and constant iteration. Since the project is already planned around a single-board computer with a regular operating system, it makes sense to use its ability to handle task scheduling and divide the project over multiple individual programs that together make up the animatronic's software.

To let the processes operate in tandem, a method of inter-process communication is needed that lets them pass data to each other: e.g. the vision program tells the behavior agent that a face was spotted, the behavior agent then tells the animation program to start tracking, etc. A simple Publisher/Subscriber or Request/Reply message system should do the trick.

Ideally I would use the Robot Operating System 2 (ROS2), which was designed specifically for this kind of task. It gives any C++/Python program run through it access to a Data Distribution Service (DDS) that lets the processes discover each other and pass messages back and forth. It is however not the easiest to get working, as it is maintained specifically for the Ubuntu operating system; for anything else you have to build it yourself.
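For a taste of what that looks like, a minimal rclpy publisher; the node name, topic and message type are made up for illustration:

```python
# Sketch: a bare-bones ROS2 node publishing "face spotted" events.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class VisionNode(Node):
    def __init__(self):
        super().__init__('vision')
        self.pub = self.create_publisher(String, 'face_events', 10)
        self.timer = self.create_timer(0.5, self.tick)  # stand-in for a detection loop

    def tick(self):
        msg = String()
        msg.data = 'face_spotted'
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(VisionNode())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```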

Alternatively, if ROS2 proves difficult to get operational (you never know with embedded Linux), ZeroMQ can act as a fallback. Much like ROS2 it lets processes pass messages to each other, but instead of a DDS it uses more conventional transports: TCP between devices, or a Unix domain socket between local processes. It is lightweight too; the ipc:// socket does show up as a file, but that file is only an address, so the traffic stays in kernel memory and there are no constant flash writes to worry about.
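A rough sketch of that pattern with pyzmq; the socket path and message text are placeholders, and each function would live in its own process:

```python
# Sketch: ZeroMQ PUB/SUB over the ipc:// transport (a Unix domain socket).
import time
import zmq

def vision_publisher():
    # e.g. the vision program announcing detections
    ctx = zmq.Context()
    pub = ctx.socket(zmq.PUB)
    pub.bind("ipc:///tmp/rosfox-events")
    while True:
        pub.send_string("face_spotted")
        time.sleep(0.5)

def behavior_subscriber():
    # e.g. the behavior agent reacting to them
    ctx = zmq.Context()
    sub = ctx.socket(zmq.SUB)
    sub.connect("ipc:///tmp/rosfox-events")
    sub.setsockopt_string(zmq.SUBSCRIBE, "")  # no topic filter
    while True:
        print("event:", sub.recv_string())
```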

Should be enough to get a start.
