
Ground Zero

A project log for Swarmesh NYU Shanghai

Scalable swarm robot platform using ESP32 MESH capabilities and custom IR location

Rodolfo, 05/13/2019 at 06:13

The very first conversation about this project happened at my office back in 2018, when a couple of students reached out asking if I could support a robotics club. Since then we have been working on different concepts, running activities to include others in our small world, and exploring where to go from there.

This project page on Hackaday is our internal space to reflect on our findings and on the interesting experiments we have been carrying out. We started our journey into what became the Swarm Robot Research Team by doing a literature review and exploring the different technologies available.

Before even starting development, we did an extensive literature review of what other universities have been working on. Even though our research group sits within an undergraduate program, it was valuable to learn what other people have developed. The most noteworthy examples of robot swarms we found were:

Rice's R-One 

Harvard's Kilobot

Stuttgart's Jasmine Swarm

Platforms we tested:

- Raspberry Pi with OpenCV

- ArUco markers transmitted by radio to a micro:bit

- ESP32 with ESP-IDF as dev environment

- ESP32 with Arduino as dev environment

- ESP32 with MicroPython

Even setting up the Raspberry Pi to work with the NYU Shanghai WiFi was a struggle. We were able to configure it thanks to Jack B. Du's tech documentation. Even then, for the Raspberry Pi 3 and certain WiFi dongles, the drivers were a challenge to set up.

The experience with ESP-IDF for the ESP32 was not as smooth as we had wished. The errors differed across the operating systems we used, and installing all the dependencies was a challenge in itself. After a whole day of debugging, we were finally able to blink an LED on each ESP32. This was a clear obstacle for our development, since we would be forced to become familiar with the environment first.

On the contrary, we already had expertise with the Arduino IDE. Setting up the ESP32 through the board manager took just a matter of minutes, and we quickly started coding a web server and driving a set of RGB smart LEDs with the FastLED library.

It was also interesting to find that there were already WiFi mesh implementations (easyMesh and painlessMesh). These could satisfy the scalability requirement that WiFi alone could not, should we decide to use the ESP32.

The next step with the ESP32 is to add it to a mobile platform. We are in the process of wiring the ESP32 to Plobot, because it has many of the sensors we need for the project.

Besides the communication between the different robots being scalable, knowing each robot's position relative to the others is critical for the project. We replicated the Kilobot technique of reflecting infrared light to calculate distance.
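The idea behind the reflected-IR approach can be sketched in a few lines: received intensity falls off roughly with the inverse square of distance, so a single calibration reading at a known distance is enough to invert the model. This is only an illustrative sketch, not our firmware; the `CALIBRATION_K` constant is a hypothetical value standing in for a real sensor measurement.

```python
import math

# Assumption: intensity ~ k / d**2. CALIBRATION_K is a hypothetical
# constant, e.g. an ADC reading of 900 measured at a reference
# distance of 1 cm.
CALIBRATION_K = 900.0


def distance_from_intensity(intensity):
    """Estimate distance (cm) from a reflected-IR intensity reading."""
    if intensity <= 0:
        raise ValueError("intensity must be positive")
    return math.sqrt(CALIBRATION_K / intensity)


# At the calibration intensity, the estimate is the reference distance:
print(distance_from_intensity(900.0))  # 1.0 (cm)
# A much weaker reading implies a farther robot:
print(distance_from_intensity(9.0))    # 10.0 (cm)
```

In practice the Kilobot authors use a calibration table rather than a pure inverse-square fit, since real sensors saturate at close range; the inverse-square model is only the first approximation.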

We also explored using computer vision to detect positions and, from there, calculate the distances. This worked well on a PC (Mac mini, Windows, MacBook Pro). The only issues were that the algorithm had to be calibrated for the specific camera, and that we had to handle the logistics of hanging a webcam from the ceiling plus getting an extra-long (5 m) USB cable.
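Once the overhead camera has detected each marker, turning pixel positions into inter-robot distances is simple geometry. A minimal sketch, assuming calibration has already yielded a meters-per-pixel scale for the fixed camera (the scale and coordinates below are hypothetical examples, not our measured values):

```python
import math

# Assumption: after calibrating the overhead camera, one scale factor
# converts pixel distances to metres. 0.002 m/px is a made-up example.
METERS_PER_PIXEL = 0.002


def pairwise_distance(p1, p2, scale=METERS_PER_PIXEL):
    """Distance in metres between two marker centres given in pixels."""
    dx = p1[0] - p2[0]
    dy = p1[1] - p2[1]
    return math.hypot(dx, dy) * scale


# Hypothetical marker centres for two robots (pixel coordinates):
robot_a = (100, 200)
robot_b = (400, 600)
print(pairwise_distance(robot_a, robot_b))  # 1.0 (m)
```

A single scale factor only holds when the camera looks straight down and lens distortion has been corrected; otherwise the full camera matrix from the calibration step has to be applied first.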

In contrast to the x86 setups, getting it running on a Raspberry Pi was a stretch. Managing the dependencies on the ARM architecture was the first hurdle: we had to compile OpenCV ourselves with the specific modules, only to discover afterwards that the frame rate was poor. More work is needed here.

