I started this project around November 2018.
My initial intention was just to build a cheap RC tracked robot for the kids, with Wall-E as an inspiration for the overall look.
Then I thought about actuating the arms, fingers, and head.
Then I thought about giving him sight.
Now I think about giving him some level of perception and autonomy... Not sure where/when this adventure is going to take me!
Note: I don't have much background in IP management. I'd gladly share my creations, but I often build on others' work (characters, models, code, ...), so if I infringe the IP rights of Pixar/Disney or any other party, please let me know and I will withdraw any offending content.
Wall-E's software took me quite some time to develop. In fact, as anticipated, it took much longer than the 3D design, mechanical assembly, or electronics.
Development was not linear. A couple of times, frustrated by bugs or unreliability, I threw it all away and rebuilt it piece by piece, so that I could debug the interactions between all parts of the software and assess the reliability of each part. Now, while it is not 100% reliable, I am pretty satisfied with how it works.
You can browse (copy, modify, reuse, etc.) the software, which is available at:
The sequence diagram below presents an overview of the control and communication loop between the three software entities:
* the ESP8266 microcontroller,
* the ESP32-Cam microcontroller,
* the web browser running the remote-control page.
The ESP8266 runs several "cooperative tasks" in parallel (programmed in a non-blocking style).
* During start-up, the ESP8266 shows a typical Wall-E boot screen on the OLED screen and briefly moves the actuators. This serves both as a self-test and as an audible notification when an unintended reset occurs.
The ESP32-Cam is independent from the ESP8266. When a websocket client is connected, it takes pictures at up to 24 Hz and sends them to the client.
The code was developed using PlatformIO. It should also compile in the Arduino IDE, probably with a few modifications (library paths, sketch filename, etc.)
There are still a few features in the dream backlog that I'm not sure when, or if, I can get working: filtering the IMU data to compute a heading, and interacting with the user based on the video stream (e.g. follow a face, mimic a pose, ...), so I won't say the project is complete. However, it's fully functional, and the kids like to play with it, so I would say "mission already accomplished!"
The ESP32-Cam is connected to the power rail only.
The Wemos D1 ESP8266 controls a single I²C bus, on which it commands:

* an OLED screen (I actually replaced the Wemos OLED shield with a slightly bigger 0.96" model),
* a Wemos motor shield, to interface the geared motors for the tank tracks,
* a PCA9685-based 16-channel PWM controller, to interface all the servos,
* a GY-80 IMU, to measure Wall-E's orientation (the Madgwick orientation filter is not functional yet).
The same 5V power rail, derived from a 2x18650 battery + controller pack (not shown below), powers the actuators (servos, geared motors) as well as the sensors and both microcontrollers. The ESP8266 and ESP32-Cam both have onboard 3.3V regulators, which I count on to compensate for the voltage drop that probably occurs when the motors and servos draw too much current. I initially powered everything from a single 18650 battery and experienced spurious resets (probably triggered by the brown-out detector). With a 2x18650 module, the system is much more stable.
The unused servo headers of the PWM controller make for a handy power rail connector, where I plug in the battery module, ESP32-Cam, motor shield power input, etc.
The ESP32-Cam is a fantastic way to add video streaming at low cost.
However, the example video application is somewhat unstable: the video stream usually stops after only a few seconds of streaming. I thus had to rewrite the video server.
The video stream is much more reliable this way.
Next step: I plan on using PoseNet (running in the browser through tf.js, not on the ESP32!) to identify people in front of Wall-E and support some physical interaction with them, e.g. turning the head towards their face, or using gestures to tell Wall-E to move.
I have added an ESP32-Cam module on the front panel... See that little black hole on his torso? Now Wall-E can stream video to any web browser over the local network!
The camera mount point is a bit too high, though, or the viewing angle too narrow, for obstacle avoidance: in the picture above, only the top half of the right hand is visible in the camera feed. I'll have to try a fish-eye lens.
I could not yet get the 0.95" color OLED screen to work on the ESP32. Maybe the pin assignments conflict with the camera module. I'll try again later with a simpler black-and-white I²C model.
I also hope that all those modules (especially the servos) don't draw too much current from the poor 18650 battery module.
Anyway, this camera module opens up a lot of new opportunities... First I shall integrate the video feedback into the remote control webpage. Then I think I should learn how to use ROS to perform monocular SLAM.
Additionally, I think a minor redesign of the head might allow hiding another ESP32-Cam module inside it, providing another video stream directly from the eye (that is, one that follows the head pan/tilt movements).
Now that I have 3D-modelled the electronics module stack, it seems it can fit in the back half of the body, behind the motors. Not represented here is the mess of wires connecting the interface modules with the motors, servos, and power supply. Anyway, there's still some room left in the front half. Maybe I can add a 0.95" OLED display (the greyish cutout in the torso), and a speaker with a WAV/MP3 module?
Additionally, I'm not fully satisfied with the accuracy of my hand-sawn plywood parts, so I intend to rebuild the whole body out of laser-cut plywood or MDF. If I use yellow-tinted MDF instead of plywood, I could probably burn additional aesthetic details onto the surface.
I have no experience in laser cutting and engraving, though. Can anyone give me some advice on how to design and assemble a beautiful body?