
A few more findings and how the basic "coding" mode works

A project log for Custom control app for Mind Designer Robot

This project is about reverse-engineering the BLE communications protocol and getting a custom app working

adria.junyent-ferre • 12/28/2019 at 19:51

I captured a new communication log, this time using the "coding" mode in the app, which lets you define a sequence of movements to draw a shape and then press the "play" button to let the robot run through the sequence. I tried a few different sequences: one where the robot moves forward by 5 cm (the default distance) and then turns right by 90 degrees, one where it turns 45 degrees, one where it moves forward by just 1 cm, and one that combines moving forward, turning right, moving backward and then turning left.

I opened the log in Wireshark and found a couple of tricks that made inspecting it much easier. First, I filtered the view to show only the messages sent from my phone to the robot, using the filter "bluetooth.dst ==" followed by the robot's MAC address. I also created a custom column showing "btatt.value", which displays the content of each message in hexadecimal. Finally, I set a few colouring rules to mark some special messages that I will describe later.

This is the result:

In brief, each "step" in the sequence was sent as a separate message. These messages were longer than those I saw the other day when analysing the real-time mode, but the format is pretty straightforward once the message is read as an ASCII string:

move forward by 5 cm: "0001 0005"

turn 90 degrees to the right: "0003 0090"

move backwards by 5 cm: "0002 0005"

turn 45 degrees to the left: "0004 0045"

So basically, the first four characters identify the specific command, and the second set of four characters, separated by a space (0x20), gives the distance in cm or the angle in degrees.
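This format is easy to reproduce in code. Here is a minimal sketch of an encoder for the observed step messages; the command names and the helper itself are my own naming, not anything from the app:

```python
# Command codes observed in the coding-mode log (first four ASCII characters).
COMMANDS = {
    "forward": "0001",   # argument = distance in cm
    "backward": "0002",  # argument = distance in cm
    "right": "0003",     # argument = angle in degrees
    "left": "0004",      # argument = angle in degrees
}

def encode_step(command: str, value: int) -> bytes:
    """Build the ASCII payload for one coding-mode step:
    four-character command code, a space (0x20), four-digit argument."""
    return f"{COMMANDS[command]} {value:04d}".encode("ascii")

print(encode_step("forward", 5))  # b'0001 0005'
print(encode_step("right", 90))   # b'0003 0090'
```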

Obviously, the app must tell the robot where the sequence starts and where the sequence ends. This is done using the following messages:

start of sequence: "1100"

end of sequence: "1101"

These two messages are very similar to the "stop" command I found in the real-time mode, which reads "1000" as a string. Similarly, the real-time commands for forward, right turn, backward and left turn are identical to the first four characters of the same commands in the "coding" mode (i.e. "0001", "0003", "0002" and "0004").

When testing all this by sending the messages by hand with nRF Connect, I also realised that timing is important: the robot may disregard the sequence if there is a big delay between messages. I don't know exactly how long that delay can be, but it must be a few seconds at most.
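Putting the pieces together, a full coding-mode exchange (here, a 5 cm square) can be sketched as follows. The `send` callback is a placeholder for whatever BLE write mechanism you use (by hand in nRF Connect, or a library); the short delay reflects the timing constraint above:

```python
import time

# A 5 cm square: four repetitions of "forward 5 cm, turn right 90 degrees",
# bracketed by the "1100" start and "1101" end markers seen in the log.
square = (
    ["1100"]
    + ["0001 0005", "0003 0090"] * 4
    + ["1101"]
)

def send_sequence(steps, send, delay_s=0.2):
    """Send each step promptly; long gaps between messages
    (more than a few seconds) make the robot drop the sequence."""
    for step in steps:
        send(step.encode("ascii"))
        time.sleep(delay_s)
```

With a real BLE stack, `send` would be a write to the robot's command characteristic; here it is left abstract on purpose, since I only know the payloads, not the library you would pick.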

Finally, when looking at the communications log, I noticed once again that the app sent messages that made the robot play an audio sample saying something along the lines of "I'm going to start drawing". This seems irrelevant to the functionality of the robot: you can get it to follow a sequence without making it play any audio sample. Still, I spent some time building a list of the robot's audio samples by sending slightly modified commands. The results are quite surprising: many samples say the same thing in slightly different words. I suppose they tried to make the robot sound less repetitive. Anyway, here is the list of audio samples I found. I wrote the descriptions in Spanish because that is the version of the robot I have (it is available in English, Italian and Spanish):

30 31 30 30 20 30 30 30 31 hola!
30 31 30 30 20 30 30 30 32 hey!
30 31 30 30 20 30 30 30 33 ganas de dibujar
30 31 30 30 20 30 30 30 34 hola! que te apetece hacer hoy yo tengo muchas ganas de dibujar
30 31 30 30 20 30 30 30 35 te apetece programarme?
30 31 30 30 20 30 30 30 36 listos para programar?
30 31 30 30 20 30 30 30 37 aprendamos a programar y a dibujar juntos
30 31 30 30 20 30 30 30 38 dibuja y programa conmigo
30 31 30 30 20 30 30 30 39 que forma geometrica queremos dibujar?
30 31 30 30 20 30 30 30 3A dibujemos juntos
30 31 30 30 20 30 30 30 3B formas geometricas
30 31 30 30 20 30 30 30 3C figuras geometricas
30 31 30 30 20 30 30 30 3D ilustraciones geometricas
30 31 30 30 20 30 30 30 3E imagenes
30 31 30 30 20 30 30 30 3F figuras complejas
30 31 30 30 20 30 30 30 40 tracemos formas
30 31 30 30 20 30 30 30 41 que quieres hacerme dibujar?
30 31 30 30 20 30 30 30 42 que dibujamos?
30 31 30 30 20 30 30 30 43 ponme sobre una hoja de papel e introduce el rotulador
30 31 30 30 20 30 30 30 44 que te apetece hacer hoy
30 31 30 30 20 30 30 30 45 estoy listo para empezar
30 31 30 30 20 30 30 30 46 de acuerdo
30 31 30 30 20 30 30 30 47 entendido
30 31 30 30 20 30 30 30 48 empecemos
30 31 30 30 20 30 30 30 49 venga vamos a jugar
30 31 30 30 20 30 30 30 4A en espera de instrucciones
30 31 30 30 20 30 30 30 4B cargando
30 31 30 30 20 30 30 30 4C pulsa ok para comenzar
30 31 30 30 20 30 30 30 4D cuando termines pulsa ok
30 31 30 30 20 30 30 30 4E donde tengo que ir?
30 31 30 30 20 30 30 30 4F atencion!
30 31 30 30 20 30 30 30 50 [yawning]
30 31 30 30 20 30 30 30 51 venga
30 31 30 30 20 30 30 30 52 arranquemos
30 31 30 30 20 30 30 30 53 volvamos a empezar
30 31 30 30 20 30 30 30 54 sobrecarga de memoria
30 31 30 30 20 30 30 30 55 crey!
30 31 30 30 20 30 30 30 56 yo tengo muchas ganas de dibujar
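The audio commands above are listed as raw bytes, but decoding them as ASCII shows they follow the same text format as the motion commands: a "0100" code, a space, and a counter that increments through the sample list. A quick check:

```python
# Decode one raw audio-command payload from the table above.
raw = "30 31 30 30 20 30 30 30 31"  # first entry ("hola!")
text = bytes(int(b, 16) for b in raw.split()).decode("ascii")
print(text)  # → 0100 0001
```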

Final thoughts and next steps: 

Reading the commands in hexadecimal made it hard to guess what was going on, but once I read them as ASCII the format became very obvious. I suppose it is handy for developers to use plain text when creating a custom protocol that isn't really subject to any serious constraints in terms of bandwidth, reliability, robustness, etc.

There are still a few things to figure out about the communications protocol. Specifically, I want to see what happens with the loops that the "advanced coding mode" lets you use. I suspect the phone app unrolls those loops, but I have to test it. I read somewhere that there is a limit of 40 steps for sequences entered by hand using the keys on the back of the robot, but I don't know what the limit is when programming from the Android app. If loops are unrolled, this may become a concern when trying to draw smooth shapes like circles and Bézier curves.
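To see why unrolling matters, here is a rough sketch (my own illustration, not anything from the app) of what a circle approximated by straight segments would cost in steps, given that the observed commands only carry integer centimetres and degrees:

```python
def circle_steps(n: int, perimeter_cm: int):
    """Unrolled move/turn steps for an n-sided polygon approximating a circle.
    Distances and angles are rounded to integers, matching the observed format."""
    side = max(1, round(perimeter_cm / n))  # integer cm per segment
    angle = round(360 / n)                  # integer degrees per turn
    steps = []
    for _ in range(n):
        steps.append(f"0001 {side:04d}")    # forward
        steps.append(f"0003 {angle:04d}")   # turn right
    return steps

# A modest 36-sided "circle" already unrolls to 72 steps,
# well past a 40-step limit if that limit also applies here.
print(len(circle_steps(36, 36)))  # → 72
```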

I also want to check whether the "freehand drawing mode" requires a different type of communication with the robot. I suspect the only difference is on the Android app side, since the freehand mode lets you draw shapes on the screen that are translated into motion commands for the robot, but I want to confirm that. The reason it might be different is that freehand drawing allows arbitrary lines that may require turns at non-integer angles, while the commands I found in the coding mode only accept integer distances in cm and integer angles in degrees. If the freehand mode used special commands, that would enable fancier drawings.

Finally, I want to note that I haven't implemented a custom app yet; I've kept using nRF Connect for all my tests. Still, I will soon give the custom app a quick try. The easiest way to implement such an app is MIT App Inventor. I used it in my Electric Wheelchair project and it worked like a charm, so most likely I will stick to the same method this time.

Stay tuned for further updates!
