It's been a little while since I posted an update here, but I have been doing some work in the meantime, slowed down a bit by illness, decorating, and the new 8 GB Raspberry Pi 4 getting released.
Let's start with that:
ESP32 development on Pi 4
<tangent>
I've been keen for quite some time on using Arm-powered machines for as much as I can. Back in 2013 I bought the first Samsung Chromebook, which used a dual-core Cortex-A15 Samsung Exynos chip, and I used that as my main laptop until I eventually bought an Intel-based ThinkPad at the end of 2016. My main machine at my day job is an Arm-powered HP Envy x2 with a Snapdragon 835, but at home I've been using an i5-2500K desktop for the past 10 years. I've been itching to replace at least some of that machine's time with something Arm-based, but the devices just haven't existed. They're all too underpowered (Cortex-A53 cores), too low on RAM (less than 8 GB), or too pricey (the $4000 Ampere eMAG desktop). I came very close to picking up a HoneyComb LX2K, but before I actually did, the 8 GB Pi 4 was announced (somewhat out of nowhere) - so that was a no-brainer: a usable amount of horsepower (4 x Cortex-A72), with an excellent software ecosystem, for less than £100. I bought one on release day.
So, since then I've been leaving the i5 desktop turned off and using the Pi 4 almost exclusively. I've installed tuptime to see how much time I use each machine for, and I expect I'll do some more blogging about my experiences over on my blog at https://blog.usedbytes.com.
Anyway, how is any of that relevant? Well, this ESP32 project is my primary project at the moment, so I need to be able to work on it on the Pi 4.
</tangent>
Setting up ESP-IDF development on the Pi 4 took some messing around. I'm running the new 64-bit Raspberry Pi OS image, which means it's a native arm64 system. ESP-IDF doesn't have any support for arm64, so some hoops need jumping through.
Building Toolchains
Espressif do provide builds of their tools for 32-bit Arm, which we can use. They also provide source code for all of them. I started out by building the Xtensa toolchain using Espressif's crosstool-ng fork, and their decent instructions: https://docs.espressif.com/projects/esp-idf/en/latest/esp32/get-started/linux-setup-scratch.html
It took about 2 hours on the Pi 4, building on an SD card, which to be honest isn't too bad (my desktop would probably have taken half that, and I'm sure a fast, high-core-count x86-64 machine like an AMD Threadripper could do it in minutes).
That turned out to be only the first step. When I tried to run the ESP-IDF install.sh script, it started complaining about the other tools I was missing. Frustratingly, the first one was xtensa-esp32s2-elf-gcc - you need a whole separate toolchain for the new "S2" version of the ESP32, even if you aren't planning to target it!
Anyway, I set up crosstool-ng to build that toolchain too, and left it another two hours.
Installing 32-bit Arm prebuilts
After that, you need the binutils for the ULP (ultra-low-power) coprocessor, as well as an openocd build - and I didn't feel much like building those from scratch, so I turned to Espressif's 32-bit Arm prebuilt packages.
There are some threads on the Espressif forums about modifying the "tools.json" file to add an entry for arm64 which refers to the "linux-armel" binaries, but I just went the manual route of downloading the packages and putting them on my $PATH.
The next issue: when you try to run any of the arm32 components, you see the unhelpful error:
$ openocd
-bash: /home/pi/sources/esp/openocd-esp32/bin/openocd: No such file or directory
I've done enough working with Arm boards to know that this usually indicates a problem with the dynamic linker/loader. The reason the message is so unhelpful is because the program really can't do anything much until it opens the loader - and that's failing.
We can use the "file" utility to get a look at what's up:
$ file `which openocd`
/home/pi/sources/esp/openocd-esp32/bin/openocd: ELF 32-bit LSB pie executable, ARM, EABI5 version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.3, for GNU/Linux 3.2.0, BuildID[sha1]=f229b58b5b2556ebc80d4e826b8fe1c1e8e229ae, stripped
Here's the relevant bit:
interpreter /lib/ld-linux.so.3
And:
$ ls /lib/ld-linux.so.3
ls: cannot access '/lib/ld-linux.so.3': No such file or directory
Fine. So the loader doesn't exist. This is because the application was built for a single-architecture (not multiarch) 32-bit Arm system, where the loader is simply called /lib/ld-linux.so.3. However, this Pi 4 is a 64-bit multiarch system, with two supported architectures: AArch64 (64-bit arm) and "armhf" (which is basically 32-bit Arm). So we have these two loaders:
$ ls -1 /lib/ld-linux*
/lib/ld-linux-aarch64.so.1
/lib/ld-linux-armhf.so.3
The armhf loader should work OK (I'm not 100% sure whether there can be problems related to hard versus soft float?), so we can just set up a symbolic link pointing the loader path the application expects at the armhf loader, and we're in business:
$ sudo ln -s /lib/ld-linux-armhf.so.3 /lib/ld-linux.so.3
$ openocd -v
Open On-Chip Debugger v0.10.0-esp32-20200526 (2020-05-26-09:28)
Licensed under GNU GPL v2
For bug reports, read http://openocd.org/doc/doxygen/bugs.html
Next, we need to persuade the ESP-IDF scripts to use our local tool installations instead of trying to download their own. The relevant script is "export.sh", which is the thing you run to get the ESP tools added to your environment. It calls idf_tools.py, which handles fetching and loading tools, and that has a "--prefer-system" argument to make it use already-installed tools instead of installing its own.
I didn't find a way to pass this to the export.sh script, so I just patched it:
$ git diff
diff --git a/export.sh b/export.sh
index 9be1b0f5d..db4d492ad 100644
--- a/export.sh
+++ b/export.sh
@@ -76,7 +76,7 @@ function idf_export_main() {
# Call idf_tools.py to export tool paths
export IDF_TOOLS_EXPORT_CMD=${IDF_PATH}/export.sh
export IDF_TOOLS_INSTALL_CMD=${IDF_PATH}/install.sh
- idf_exports=$(${IDF_PATH}/tools/idf_tools.py export) || return 1
+ idf_exports=$(${IDF_PATH}/tools/idf_tools.py export --prefer-system) || return 1
eval "${idf_exports}"
echo "Checking if Python packages are up to date..."
The last part is making sure we have the appropriate Python virtual environment set up. This would normally be done by the "install.sh" script, but we can't run that due to the unsupported architecture. Thankfully, idf_tools.py does all the heavy lifting, and you can ask it to set up only the Python environment. Oh, but you need to make Python 3 your default interpreter first (why oh why does the Pi Foundation still ship Python 2.7 by default in 2020?), and you'll also need g++ for one of the Python dependencies:
$ sudo apt-get install python3 python3-pip python3-setuptools
$ sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 10
$ sudo apt install g++
$ tools/idf_tools.py install-python-env
And now, finally we can successfully export and run idf.py. It warns that a few tool versions aren't exactly what it prefers, but that should be harmless:
$ . ./export.sh
Setting IDF_PATH to '/home/pi/sources/esp/esp-idf'
Adding ESP-IDF tools to PATH...
WARNING: using an unsupported version of tool cmake found in PATH: 3.13.4
WARNING: using an unsupported version of tool openocd-esp32 found in PATH: v0.10.0-esp32-20200526
WARNING: using an unsupported version of tool ninja found in PATH: 1.8.2
Checking if Python packages are up to date...
Python requirements from /home/pi/sources/esp/esp-idf/requirements.txt are satisfied.
Added the following directories to PATH:
  /home/pi/sources/esp/esp-idf/components/esptool_py/esptool
  /home/pi/sources/esp/esp-idf/components/espcoredump
  /home/pi/sources/esp/esp-idf/components/partition_table
  /home/pi/sources/esp/esp-idf/components/app_update
  /home/pi/.espressif/python_env/idf4.2_py3.7_env/bin
  /home/pi/sources/esp/esp-idf/tools
Done! You can now compile ESP-IDF projects.
Go to the project directory and run:
  idf.py build
Back to business: Adding "Services"
I was unsure for a while how to proceed with the software part of the project from an architecture point of view (not sure that's a good thing, given my day job is as a software architect, but heigh ho).
My microcontroller projects are generally simpler, bare-metal, interrupt-driven affairs, but the ESP32 is a much more complex system: an RTOS, multiple cores, and a pretty complete C library with filesystems, sockets and all sorts - more akin to a full Linux system than a typical microcontroller. In this kind of environment, multiple tasks seem like the way to go, so I've written a simple message-based service architecture, where I have multiple tasks or "services" which can talk to each other by sending messages across queues.
The interface to the service manager looks like so:
struct service *service_register(const char *name, void (*fn)(void *self),
                                 UBaseType_t priority, uint32_t stack_depth);
struct service *service_lookup(const char *name);

int service_send_message_from_isr(struct service *service,
                                  const struct service_message *smsg,
                                  BaseType_t *xHigherPriorityTaskWoken);
int service_send_message(struct service *service,
                         const struct service_message *smsg,
                         TickType_t timeout);
int service_receive_message(struct service *service,
                            struct service_message *smsg,
                            TickType_t timeout);

int service_stop(struct service *service);
int service_start(struct service *service);
int service_pause(struct service *service);
int service_resume(struct service *service);

void service_sync(const struct service *service);

// For use by service "fn" routines only
void service_ack(struct service *service);
And a basic service implementation would look like:
static void simple_service_fn(void *param)
{
    struct service *service = (struct service *)param;

    while (1) {
        struct service_message smsg;

        if (service_receive_message(service, &smsg, portMAX_DELAY)) {
            continue;
        }

        switch (smsg.cmd) {
        case SERVICE_CMD_STOP:
            // Do "stop" stuff
            break;
        case SERVICE_CMD_PAUSE:
            // Do "pause" stuff
            break;
        case SERVICE_CMD_START:
            // Do "start" stuff
            break;
        case SERVICE_CMD_RESUME:
            // Do "resume" stuff
            break;
        case SIMPLE_SERVICE_CUSTOM_CMD:
            // Do whatever the custom command is meant to
            break;
        default:
            // Unknown command
            break;
        }

        // Acknowledge the command
        service_ack(service);
    }
}
The lifecycle is similar to what Android uses. Start and Stop are intended to be infrequent, fully initialising/de-initialising whatever the service is managing. Pause and Resume should be lighter weight - just temporarily stopping activities, to be resumed later.
I've implemented four services:
- Main service
  - Basically the main task loop
  - Implemented as a service to allow it to receive messages from the other services
- GPS service
  - Manages the GPS module
  - Start - powers up the module and configures message/rate parameters
  - Stop - powers down the module
  - Pause - not implemented yet, but will put the module to sleep without powering it off
  - It can publish "GPS locked" and "GPS position" events to other services
- Accel service
  - Samples the accelerometer on a timer
  - Calculates a variance value per second (which will form the basis of a movement detector)
  - The accelerometer powers up and down instantly, so there's no distinction between Start/Stop and Pause/Resume
- PMIC service
  - Handles the AXP192 power management IC
  - Allows other services to request voltage rails
  - Monitors the IRQ/power button
  - Will add support for battery monitoring etc.
I expect to be adding at least a few more services:
- Logging service
  - Handle writing the track data to files
- Network service
  - Keep an eye out for network connections and upload files
- LoRaWAN service
  - TTN Mapper and GPS tracking
Next steps
Now I need to tidy up the actual logging. I think I've decided to use the "FIT" file format used by Garmin devices. It's a format with a detailed open specification: https://www.thisisant.com/resources/fit-sdk/, but not such wide support in tools. I spent some effort adding file-writing support to a Go library: https://github.com/tormoder/fit, and to be honest I quite like it as a format.
So, next step will be to add support for creating FIT files on the device, containing GPS and accelerometer data.
I'm a bit worried about the file size. I've only got around 1-2 MB of flash available, which I estimate will give an hour or two of logging time with high-rate accelerometer data. I want the full accelerometer data to begin with, so I can use it to work out good road-quality metrics - perhaps I can get the data rate down once I've done some analysis.
I'm also looking at compression - perhaps heatshrink: https://github.com/atomicobject/heatshrink, which should knock around 30% off the file sizes.
As always, the code is on GitHub: https://github.com/usedbytes/tbeam