Swarmesh 2nd Iteration: Week 3

A project log for Swarmesh NYU Shanghai

Scalable swarm robot platform using ESP32 MESH capabilities and custom IR location

nh8157 06/22/2020 at 06:15

The past week was mostly spent on housekeeping for the work done so far, and on looking ahead to decide what to do with the platform we have built.

Hardware:

Following the pattern of week 2, we received PCB v1.0.1 on Monday and assembled it immediately for testing. We kept our fingers crossed, hoping that everything (or at least most of it) would work. As usual, though, we ran into a number of setbacks.

Figure 1: PCB v1.0.1

Figure 2: Robot v1.0.1

  1. The FT232RL FTDI USB-to-serial adapter could not program the ESP32 module. After several tests and some probing, we found the mistake was an easy but rather foolish fix: one of the S8050 transistors was misconnected.
  2. The FT232RL adapter has 6 pins: DTR, RX, TX, VCC, CTS, and GND. The ESP32 devkit documentation calls for RTS instead of CTS, so we soldered a wire to the adapter's RTS pad and connected it to the programmer header in place of the CTS pin.
  3. During the tests, we also learned that we should never use the FTDI's 3.3V output as a power source for the ESP32: it cannot supply enough current, as we confirmed with a multimeter.

With these three fixes, the programmer connected to the ESP32 successfully, and programs uploaded without any issues.

One inconvenience remains: to reset the ESP32, we kept manually touching the ends of a cable to the appropriate pins. To avoid this, the next version will also get a reset button.

We then tested each component on the PCB individually, and all of them worked except the IMU, for which we only insert 4 of its 8 pins (VCC, GND, SCL, SDA) into the I2C header. Surely it is another small mistake, but for now we cannot upload programs to the ESP32 while the IMU is inserted.
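As a first debugging step once uploads work again, a minimal I2C scanner sketch can at least tell us whether the IMU acknowledges on the bus. This is a generic Arduino-core sketch for the ESP32, not code from our repository, and the SDA/SCL pin numbers are assumptions that must match our PCB routing.

```cpp
// Generic I2C bus scanner (Arduino core for ESP32), for checking
// whether the IMU responds at all. SDA/SCL pins are assumptions.
#include <Wire.h>

const int SDA_PIN = 21;  // assumed; must match the PCB routing
const int SCL_PIN = 22;  // assumed; must match the PCB routing

void setup() {
  Serial.begin(115200);
  Wire.begin(SDA_PIN, SCL_PIN);
}

void loop() {
  Serial.println("Scanning I2C bus...");
  for (uint8_t addr = 1; addr < 127; addr++) {
    Wire.beginTransmission(addr);
    if (Wire.endTransmission() == 0) {  // 0 means the device ACKed
      Serial.printf("Device found at 0x%02X\n", addr);
    }
  }
  delay(5000);  // rescan every five seconds
}
```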

We have started on the next version, PCB v1.0.2. Besides the changes above, we will also replace the connectors for sensor modules with footprints for the actual sensor components (TCRT5000L, IR emitters, IR receivers).

While refining the details of component placement, we also stress-tested the functions that draw the most from the 3.3V rail. We turned on Wi-Fi, ran the motors, flashed the LEDs, and inserted the IR modules. With all of these running simultaneously, we ran the robot until the battery was empty, recharged it for a fixed amount of time, and measured the resulting run time. So far we have repeated this cycle three times, with the following results:

10 min charging, 20 min running

60 min charging, 100 min running

60 min charging, 105 min running

We will run more tests of this sort, but the pattern so far is that the run time is roughly twice the charging time, with some fluctuation as the charging time increases (to be confirmed).

Next week we will finalize PCB v1.0.2; hopefully, we will be able to integrate the software we have written with the hardware by the end of the week.

Software:

Last week, we cleaned up the software by modularizing the code, splitting the essential parts into classes in an object-oriented design.

We arranged the robot's code along the lines of its functionality, dividing it into Locomotion, Communication, and Tasks classes. Each class has public functions exposed as its interface, with the details documented in the corresponding header file. On top of these fundamental classes, we defined another class, Robot, which holds the essential properties of a robot, such as its ID, current position, and battery level. It also aggregates instances of the classes above and exposes interfaces that are easy to understand, roughly as sketched below.
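To make the structure concrete, here is a trimmed-down header sketch of the layout just described. The member names are illustrative examples, not our actual interfaces.

```cpp
// Illustrative sketch of the class layout; the member names are
// examples only, not the project's final interfaces.
#include <string>

class Locomotion {
 public:
  void move(float distance_mm);  // drive straight a given distance
  void turn(float angle_deg);    // rotate in place
};

class Communication {
 public:
  void connectMesh();            // join the ESP32 mesh network
  void send(const std::string &msg);
  std::string receive();
};

class Tasks {
 public:
  void assign(int taskId);
  void run();
};

// Robot aggregates the modules above and holds the shared state.
class Robot {
 public:
  int id;
  float x, y;          // current position in world coordinates
  float batteryLevel;

  Locomotion locomotion;
  Communication communication;
  Tasks tasks;
};
```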

We also used the simulation software Webots to simulate our project. In Webots, we created a rectangular testing area with the same dimensions as our physical one, about 3 m by 2.3 m. Given the workload, and the hexagonal shape of our robots, which is not easy to model, we did not recreate the robot exactly at this first stage; we only matched its size and its wheels (for locomotion) to the physical robots.

The trickiest part of the simulation was implementing and adapting our algorithms, since our real implementation differs from what Webots provides in several ways.

  1. In the real implementation, a server on the local network sends every robot essential information such as its world coordinates, heading, and destination. We cannot create such a server in Webots, so we use a Supervisor with an Emitter (included in Webots) to send the same information instead.
  2. The JSON libraries we use in the real implementation, for easy abstraction and manipulation of that information, are not available in Webots, so for now we pack the information into a plain string in our own standard format.
  3. There are no ArUco markers in Webots, so we cannot give the robots their world coordinates that way. We therefore added a GPS component (included in Webots) so that each robot consistently knows its world coordinates.
  4. We cannot give the robots their heading either, so we added a Compass component (included in Webots). We spent a lot of time dealing with the Compass values, since they must be converted according to the reference direction the Compass is configured with, and we had to change our robots' action protocol accordingly.
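A rough sketch of what the robot-side controller looks like under these substitutions is below. It uses the Webots C++ API; the device names ("gps", "compass", "receiver") and the axis convention for the heading are assumptions that depend on how the world file is set up.

```cpp
// Sketch of a robot-side Webots controller (C++ API): receives the
// Supervisor's string messages and reads GPS/Compass in place of the
// ArUco-based localization. Device names are assumptions and must
// match the names used in the .wbt world file.
#include <webots/Robot.hpp>
#include <webots/GPS.hpp>
#include <webots/Compass.hpp>
#include <webots/Receiver.hpp>
#include <cmath>
#include <string>

using namespace webots;

int main() {
  Robot robot;
  const int timeStep = (int)robot.getBasicTimeStep();

  GPS *gps = robot.getGPS("gps");
  Compass *compass = robot.getCompass("compass");
  Receiver *receiver = robot.getReceiver("receiver");
  gps->enable(timeStep);
  compass->enable(timeStep);
  receiver->enable(timeStep);

  while (robot.step(timeStep) != -1) {
    // World coordinates, standing in for the ArUco localization.
    const double *pos = gps->getValues();

    // The Compass returns a vector pointing to the world's north;
    // converting it to a heading angle depends on the world's axis
    // convention, which is where most of our translation work went.
    const double *north = compass->getValues();
    double heading = atan2(north[0], north[2]);  // radians

    // Messages from the Supervisor's Emitter arrive as raw bytes; we
    // parse our plain-string format instead of JSON (assuming the
    // Supervisor sends null-terminated strings).
    while (receiver->getQueueLength() > 0) {
      std::string msg(static_cast<const char *>(receiver->getData()));
      // ... parse id, destination, etc. from msg here ...
      receiver->nextPacket();
    }

    (void)pos; (void)heading;  // used by the navigation code, omitted here
  }
  return 0;
}
```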

After completing the simulation, we ran it several times. Collisions aside, the results are promising and closely match our real-world tests. As a next step, we plan to greatly reduce the errors in the robots' travel distance and heading, since we want the simulation to run under ideal conditions that minimize all the errors involved. We also plan to model our robots more faithfully in size, appearance, and functionality (especially the components).

What’s next?

Over the past week, we have also been thinking about the next step. It would certainly be cool to develop our own path-finding and collision-avoidance algorithms and implement them on our robots, but as a group of undergraduates with limited time, resources, and knowledge, that is hard to achieve. So we tried changing our perspective and asked what we can contribute to academia, specifically within the scope of swarm robotics. Below are our conclusions after some discussion.

  1. Focus on the field of Human-Robot Interaction, implementing applications such as patrolling and rescue on our robots.
  2. Use the platform as a testbed for algorithms developed by others: implement them on the robots, compare real-world data with simulation data, and thereby validate the algorithms.
