• Technical Product Management with a distributed team; Or, Product Plate Spinning.

    Susan Allen · 8 hours ago

    Sometimes I refer to my job as “Chief Plate Spinner”. There’s something about the comical yet determined spectacle of spinning plates that feels like product management. FieldKit is certainly a technical cross-media product with multiple moving parts. With a lean team of 8 distributed across Los Angeles, Portland, OR, New York, and Connecticut, we strive to keep our particular plates spinning by continually building on good habits, optimizing workflows, and getting a regular dose of face time.

    I’m a big fan of starting as you mean to go on. At project kick-off, we took the time to set the tone as the whole team gathered in LA to align on FieldKit’s purpose, value, and success criteria. Over the next two days, we hashed out our hopes and fears for the product, strategized about team roles and risk management, and aligned on a clear shared vision.

    We tackled the big-ticket items first, roughing out an overarching schedule for the year – what the major moments might look like for hardware and software design and production, deployments, and community development. This long-term forecast contained key milestones, a prioritized deliverables list, and a critical path for key work streams. This gave us a macro mechanism to monitor progress, anticipate upcoming time and resource crunches, and pivot as necessary. Much of running a project like FieldKit is setting the ideal pace: knowing when to push to meet critical milestones versus when to regroup and course-correct.

    With those guideposts in place, we set up a regular rhythm of more detailed systems and processes that would support the work and facilitate the right kinds of conversations. That’s a fancy way of saying we put regular meetings in the calendar, set up an internal wiki with key product documentation, and stood up a task-tracking system.

    The aim of the macro/micro approach was to facilitate productive communication, catch potential issues as early as possible and drive decision-making. We have a ton of ambition, even for a lean mean fighting team! So a concern was potential “analysis paralysis” and a lack of timely decision-making. While one remedy – the “Oh, sh*t!” method – was to visualize just how much we were trying to do and how little time we had left, another was to simply structure regular meetings less around discussions and notes, and more around decisions and actions.

    At the heart of the job is greasing the wheels to keep things moving. While sometimes that means pointing nervously to schedules and facilitating decisions, it also means trusting your team with the expertise and responsibility they hold to deliver on the product vision. Especially with the highly technical aspects of electrical engineering, software programming, and 3D modelling needed to make FieldKit a reality. Other times that means leaning into lean – beyond job titles and technicalities – to hybrid moments that will get the job done. When we shot the FieldKit prototype film, Lauren our product designer was the talent, I jumped in to help direct, and our executive sponsor Shah was the director of photography!

    What remains crucial – especially for a remote team – is coming together for key in-person check-point meetings to not only hold each other accountable, but more importantly to build rapport, celebrate the wins, and galvanize our passion for the work. It’s a privilege to be working on such an important and optimistic product for the world with such a kick-ass team, and sometimes the best way to remember that is to raise a real life glass!

  • From Sensor to Sensorium

    jer.blprnt · 09/12/2019 at 00:35

    An abstraction of data from a FieldKit station in Peru, monitoring water conditions in the Amazon basin.

    Five years ago, I met a neuroscientist who was wearing a strange, vibrating vest. The device, hidden under his clothing, took real-time data from the stock market and sent it to an array of quietly buzzing pads that were in direct contact with his skin. The vest was an experiment in sensory substitution: the idea was that if he wore the vest for long enough, his brain would accept the stock market stimulus as sensory data and he'd begin to actually hear or see the data. He would, in effect, gain a new sense.

    It's a really cool idea. Scott Novich and his then advisor David Eagleman have gone on to start a new company marketing the vest and other similar technologies to people with hearing and vision loss, and to budding cyborgs. I'm writing about it here because it's a good reminder that our own experience of the world isn't so much about the sensors (our eyes and ears and taste buds) as it is about the sensorium, the whole brain-and-body system that gives us the ability to understand the things that are around us.

    Much of the focus of environmental sensing has been on gathering data. We deploy our humidity sensors and anemometers and geophones to turn real-world conditions into numbers, which we dutifully write to SD cards and hard drives and file into SQLite databases. Less attention has been paid to what happens next. How do we recognize important patterns in the data? How do we share these findings with others, and how do we make impactful visual narratives to tell data stories to the wider public?

    In our FieldKit design sessions, we're asking questions about how data from sensors might lead to real value for individuals and communities, regardless of technical ability.

    We've spent much of the year at FieldKit designing interfaces and workflows that let users easily visualize their data, without having to write code or learn complicated tools. Core to our data platform is the capability to make comparisons, between data from separate sensors or from different time periods, and then to share these comparisons with others. A user in North Carolina might notice that water levels in the Tuckasegee are rising faster than they did for the same period in the previous year; by sharing this finding with collaborators and comparing their data to information from other FieldKit stations along the river, they are able to put their discovery into context. FieldKit also makes downloading and sharing data easy, so people can perform more detailed analysis, post their findings to social media, share the data with a governmental monitoring agency, or use it to make a sculpture, a performance, or a poem.

    FieldKit's mission is to break down the existing barriers around environmental sensing. This means lowering the cost of sensors, but it also means empowering a wide range of individuals and communities (not just the usual suspects) to discover and tell the stories that are encoded in the data they collect. It means that we're not only making sensors, but also designing an entire sensorium, one that writes people into the full process of collecting, analyzing, and telling the stories of environmental data.

  • The value of FieldKit in the Real World

    Shah Selbe · 09/10/2019 at 19:28

    People often ask us what we are trying to achieve in building FieldKit. There are a lot of sensor products out there and a ton of tutorials around building home-brew dataloggers. We kept realizing that those solutions didn't meet the needs of the users we encounter at Conservify in the conservation, ecology, environmental, and education spaces. Those needs pushed us to:

    1. Create something with thoughtfully designed ease of use that gives people ownership over their own data.
    2. Leverage recent innovations in technology to build something that doesn't break the bank, or at least something that is accessible to the majority of environmentally curious folks out there.
    3. Build a tool for scientifically rigorous monitoring that doesn't tie you to unnecessarily proprietary formats or clunky software tools. 

    These things drove us to keep trying to build FieldKit for use in the greater world, and to open-source our designs to allow others to modify, build on, and take ownership over the future of this platform.

    That means building something that doesn't stop at the hardware. It means understanding the users' needs the whole way through. It means thinking about what happens to FieldKit data if Conservify were to disappear. It means thinking about what hardware would look like if it were deeply modular at its core. It means thinking through how data is visualized. It means thinking through how others build on FieldKit as a stepping stone. All of these things are concepts that we have been deeply considering over the last two years. Particularly as we consider the "whole product experience."

    There will be more on that soon, but I wanted to share a quick vision of what that experience would look like in the following video: 

    I am also proud to announce that FieldKit was chosen as one of the 20 finalists in the 2019 Hackaday Prize. It is a huge honor to be in this pool of very interesting and capable projects. The FieldKit team is excited about this opportunity and is looking forward to sharing FieldKit with the folks at Supercon this fall.

  • Designing Lower

    Bradley Gawthrop · 08/24/2019 at 01:33

    There are not a lot of surprises here, but the power system is surprisingly hard to simplify. It seems simple enough (battery, solar panel, USB power), but edge cases pile up quickly! In any event, there's a battery charging IC (MCP7384) and a Maxim 'fuel gauge' style monitor (MAX17055) and their support circuitry.

    The Wi-Fi and GPS duties are handled by a pair of venerable modules (ATWINC1500 and FGPMMOPA6H). "Venerable" here is the word I use instead of "a bit long in the tooth," and they're the things most likely to be replaced in the near future. Each of them has a dedicated voltage regulator, and there's a third for all the stuff up on the Upper board. Hard experience has taught us things don't always turn off when asked nicely, so having command of their regulators is a nice added measure of control.

    Not all potential radios are good candidates to live on this board. LoRa and cellular radios, and any future wacky development (satellite?), are provided for with a pair of FPC connectors supplying power and the usual signaling buses (I2C, SPI) for future-proofing. More on this when we come to the LoRa radio board!

    Speaking of connectors, the Radio board also plays host to the Backplane into which the modules are connected, so a horizontal board-to-board connector is on the bottom, again breaking out the needed power and signals. Different configurations for modules make separating this board a good hedge. 

    As a final trick up its sleeve, we're now working a single module footprint onto the back of Radio for small deployments in unusual enclosure situations (say, stuffed in a PVC pipe.)

    That was a quick one, but buckle your seatbelts, because next time we're talking about Backplane, which is simple, and designing a standard for modules, which is not.

  • An Interlude on Debugging

    Jacob Lewallen · 08/22/2019 at 20:09

    I spent a lot of my waking hours staring at a screen like this:

    I wanted to throw together a quick post to talk about some debugging-related things I’ve found myself doing more and more that I think others might benefit from. I’m just gonna rattle them off:

    1. I exclusively load and run firmware from inside GDB. In fact, I now spend all my waking development time inside GDB. Prior to this development cycle I would rely heavily on uploading new firmware over USB, leaning on that functionality being provided by the standard Arduino bootloader. This was annoying in some cases, especially when dealing with hard faults and other serious bugs.
    2. We use Segger’s J-Link programmers, which I think are fantastic. They are scriptable and very predictable and reliable. Their only downside is that you need easy access to a programming connector on your hardware, which Bradley kindly provided for me. Being able to step through and over code and inspect memory is invaluable. I should mention that the Grand Central M4 [1] board by Adafruit comes with this header ready to go, which was awesome before we got our own hardware up.
    3. The JLink hardware includes some pretty cool additional features. My favorite so far is the RTT (“real-time transfer”) functionality [2]. Prior to this I would use a dedicated debug UART or USB CDC communications to view the console output from the firmware. Unfortunately, we were clean out of SERCOM peripherals (!) on the new hardware and so had to get creative. It was right around then that I discovered the RTT stuff. Basically, you dedicate a region of your own RAM to hold circular buffers, and the JLink programmer is able to find this region and slurp the data for viewing on your host computer. It’s highly configurable and very fast. I’m a huge fan. In fact, I employ a similar linking trick as above to ensure these buffers are in the same location across all binaries so that the logs are seamless across binary transitions, say from the bootloader to the main firmware (a minimal usage sketch follows this list).
    4. I write my own custom gdb commands now, typically in Python, for repetitive actions. Especially for things like the build, load, and run flow that I do hundreds of times a day.
    5. Some GDB commands I like that took me a while to discover:

      n, s, ni, si: next, step, next instruction, step instruction. Many of us know these.

      finish: run until execution returns from the executing function.

      b *<address>: break on a specific instruction; very handy for skipping over loops, for example.

      disassemble /m: disassemble the current function, with source lines interleaved.

      p/x, p/t: most people know p/print; these format suffixes print in hexadecimal, binary, etc.

      x/32x, x/32: dump memory at the specified location. Like p, this is a rich command worth looking up.
    6. I’m a big fan of the custom GDB dashboards that are out there, specifically this one [3]. There are so many, and you can learn a lot about what’s possible with GDB by reading their source code and tweaking them to your liking.
    7. I can't stress enough how useful Matt Godbolt's Compiler Explorer is - https://godbolt.org/ It's a tool I use all the time to get insight into the instructions my compiler is producing, especially when experimenting with C++. You'll also learn so much about the things going on behind the scenes.
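
    As promised above, here's roughly what logging over RTT looks like. A minimal sketch, assuming SEGGER's RTT sources (SEGGER_RTT.c and SEGGER_RTT_printf.c) are compiled into the firmware; the helper function is hypothetical, not FieldKit code:

    #include <stdint.h>
    #include "SEGGER_RTT.h"

    void log_boot_banner(uint32_t version) {
        /* Up-buffer 0 is the default channel the J-Link host tools read. */
        SEGGER_RTT_WriteString(0, "FieldKit booting\r\n");
        SEGGER_RTT_printf(0, "firmware version: %u\r\n", (unsigned)version);
    }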

    I have a habit of learning just enough of a tool to get by and then halting my learning there. For tools like GDB this is horrible because of the sheer volume of utility lurking behind the common ways people use the tool. I'm trying to do better :)

    [1] https://www.adafruit.com/product/4064

    [2] https://www.segger.com/products/debug-probes/j-link/technology/about-real-time-transfer/

    [3] https://github.com/cyrus-and/gdb-dashboard

  • (Position) Independent Code

    Jacob Lewallen · 08/22/2019 at 20:05

    Disclaimer: This post is highly technical and on a complicated subject. I learned enough to "get things working," so I’ll probably get some things wrong; as always, criticism is welcome.

    All of the above work is actually in preparation for phase two, which is the ability to load and run custom module firmware and extra versions of the main application firmware. In the short term, we fully intend to just bake our module firmware right into the main firmware, for simplicity. Long term, though, this won’t work. Especially as we get into users providing their own drivers for custom modules and the like.

    Anybody that has gotten creative with firmware has inevitably ended up in this quagmire. My first foray was while I was trying to keep two separate versions of firmware around during upgrades so that the earlier version was available as a backup. This simply won’t work without some effort.

    The problem comes down to knowing where data and code are during runtime. You see, when your code is linked, the locations of variables, functions, and instructions used in loops and conditions are typically fixed in the binary as absolute addresses.

    #include <stdio.h>

    void say_hello_world(const char *message, int a, int b, int c) {
        printf("Hello, world! %s", message);
    }

    void test() {
        /* This call is lowered to argument setup per the calling convention
           plus a jump whose target address the linker fixes at link time. */
        say_hello_world("How are you?", 34, 12, 76);
    }
    


    When you call a function the compiler turns the call into assembly instructions for passing arguments and receiving return values as per the architecture’s calling convention and then a jump to the absolute address of the callee. During link time the linker knows the memory regions the code is intended to run in (from the linker script). Typically this means the code expects to find itself loaded at some memory address and so that offset is baked into the absolute addresses used when calling functions as well as to finding global variables.

    In many typical cases the running code is offset by the memory set aside for the bootloader. So if the bootloader occupies 0x0000 to 0x4000 then the application firmware is expecting to be loaded at 0x4000 and so all jumps and access will be offset from there. This means that loading firmware at a different location in memory will cause all those absolute addresses to be horribly wrong!
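
    To make that concrete, here's a hedged sketch of how a Cortex-M bootloader typically hands control to an application linked at a fixed offset. The 0x4000 base matches the example above; the names and details are illustrative, not FieldKit's actual bootloader:

    #include <stdint.h>

    #define APP_BASE 0x4000u  /* illustrative: flash just past the bootloader */

    /* The application's vector table sits at its load address: entry 0 holds
       the initial stack pointer, entry 1 the address of its Reset_Handler.
       Both are absolute values baked in at link time, which is why the app
       only works when loaded where the linker expected it. A real bootloader
       would also disable interrupts and update VTOR before jumping. */
    static void jump_to_application(void) {
        const uint32_t *vectors = (const uint32_t *)APP_BASE;
        uint32_t stack_pointer = vectors[0];
        uint32_t reset_handler = vectors[1];

        __asm volatile("msr msp, %0" : : "r"(stack_pointer));
        ((void (*)(void))reset_handler)();
    }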

    So, how do we fix this?

    Thankfully this is a problem that’s had a solution for a long time. Under GCC, the first trick is to enable position-independent code [1] using the -fpic compiler flag. This tells gcc to avoid using absolute addresses and to include link-time information that decouples the binary from a predetermined location in memory. My solution involved combining this with a few other flags, and so two different mechanisms come into play:

    1. Relative addressing. By combining the above flag with -mpic-data-is-text-relative, many of the addressing issues go away. This flag tells the compiler that the data section is always at a predictable location relative to the code section, so relative addressing can be used to find symbols in that section. Most jumps to functions will also be relative by virtue of using -fpic. This leaves the location of global variables as the problem.
    2. Being able to find data relative to the code segment gets us close, but we’re still falling short because the running code needs to be able to find mutable variables that live in RAM. This is important because it’s very easy for two separate binaries to try to occupy the same region of memory for their variables, so relative addressing wouldn’t really help in that situation. Instead, the linker introduces a Global Offset Table [2]: a table that contains the addresses of global variables (a minimal relocation sketch follows this list). (See “The Fundamental Theorem of Software Engineering” [3])
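
    To illustrate the idea (and only the idea: this is not FieldKit's actual loader, and the linker symbol names here are assumptions), a loader can fix up the GOT at load time by adding the load offset to every entry:

    #include <stdint.h>

    /* Assumed linker-script symbols bracketing the Global Offset Table. */
    extern uint32_t __got_start__;
    extern uint32_t __got_end__;

    /* Patch each GOT entry by the difference between the address the binary
       was linked for and the address it actually landed at. Afterwards, code
       that reads globals through the GOT finds them in the right place. */
    static void relocate_got(uint32_t linked_base, uint32_t loaded_base) {
        uint32_t offset = loaded_base - linked_base;
        for (uint32_t *entry = &__got_start__; entry < &__got_end__; entry++) {
            *entry += offset;
        }
    }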

    After compilation the compiler will have wrapped global variable accesses in an indirect...

    Read more »

  • Clever Title About Memory

    Jacob Lewallen · 08/22/2019 at 19:28

    A few of the following posts will require a basic understanding of the memory layout used by the microcontrollers we're building FieldKit on top of. For many software developers this will be old news, but I wanted to take the time to make sure we're on the same... page.

    Basically, memory is laid out in specific regions from a functional/hardware perspective and then further subdivided by software. So, let's start with some common diagrams:

    Hardware Memory Layout

    Many of us have seen this; I'll just give a quick summary:

    • Code: This is where program code and static data are stored. This memory changes very infrequently. Understanding that data can also live here is important. The data that lives here can't be changed without jumping through significant hoops, though. One common piece of data is the default vector table that's used by the hardware to kick off the execution of software.
    • SRAM: Ah glorious RAM. This is where the state of our software is stored during its execution. It's a treasured and valuable resource and the one most significantly imposed upon by software. By the way, you can run instructions from SRAM, at least on our architecture.
    • Peripherals: The physical hardware and how the software interacts with it is mapped to memory in this region. It's how the hardware makes itself available and is, hopefully, abstracted behind an intuitive software layer. We'll be talking the least about this region.
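
    As a tiny illustration of that last region (assuming nothing about FieldKit's actual parts), "talking" to a peripheral is just a volatile read or write at a fixed address. On Cortex-M chips the canonical regions are code at 0x00000000, SRAM at 0x20000000, and peripherals at 0x40000000, though exact maps vary by chip:

    #include <stdint.h>

    /* A made-up status register somewhere in the peripheral region. */
    #define HYPOTHETICAL_STATUS_REG (*(volatile uint32_t *)0x40001000u)

    uint32_t read_status(void) {
        /* volatile forces a real bus access on every read. */
        return HYPOTHETICAL_STATUS_REG;
    }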

    Now, software imposes its own structure on memory, too. When our firmware first runs, part of its initial task is to set up and initialize various parts of RAM, establishing a layout like this:

    RAM Layout

    This is where things are getting interesting!

    • Data: Our software contains variables and data that begin with a known value and will be modified over time. This is where that data lives. One of the first tasks of the software is to copy this data from the read-only Code memory to SRAM. Not only that, but to a place in SRAM where the executable expects that data to be! This caveat will be important later.
    • BSS: Some variables don't have an interesting initial value and just begin as 0. Rather than storing a huge block of 0's, we save space by just remembering how big this region needs to be and initializing that area to 0. The name BSS is a throwback to the early days of computing [1].
    • Heap: This is where memory dynamically allocated by the software comes from. It's the first region that changes size, typically starting at 0 at the start of execution. Notice that this region grows upward over time and can also shrink. Calls to malloc/free pull memory from here, as do C++'s new/delete.
    • Stack: This region stores local variables, function parameters, and return addresses. It grows downward, towards the heap. It also plays a big role in handling interrupts in that certain state is pushed onto the stack prior to calling those routines. It will come up later when we discuss multi-tasking.
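
    Here's a hedged sketch of the startup work described above, using symbol names common in GNU linker scripts (FieldKit's actual startup code and symbol names may differ):

    #include <stdint.h>

    /* Assumed linker-provided symbols; exact names vary by linker script. */
    extern uint32_t __etext;                       /* .data's initial values, in flash */
    extern uint32_t __data_start__, __data_end__;  /* .data's home in SRAM */
    extern uint32_t __bss_start__, __bss_end__;    /* .bss's home in SRAM */

    extern int main(void);

    void Reset_Handler(void) {
        /* Copy initialized variables from flash to the SRAM addresses the
           executable expects them at. */
        const uint32_t *src = &__etext;
        for (uint32_t *dst = &__data_start__; dst < &__data_end__; ) {
            *dst++ = *src++;
        }
        /* Zero the BSS region. */
        for (uint32_t *dst = &__bss_start__; dst < &__bss_end__; ) {
            *dst++ = 0;
        }
        main();
    }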

    Now that we're... aligned, we can move onto more interesting subjects :)

  • Softer Side of Darwin

    Jacob Lewallen · 08/22/2019 at 18:22

    As Bradley mentioned in his debut post, sometimes you gotta build something to really understand how to build that thing the right way. Unfortunately, it's not at all uncommon with software to find that the version that was written as a prototype to get user feedback and prove a concept ends up being the animal that you take with you into production. All software developers are familiar with the terror of feeling like they're building an airplane while it's flying.


    During the 2018 field season I maintained a fairly long-running list of things that I wanted from the firmware that weren't critical enough to implement but would be incredibly useful as the platform matured. The two most important were:

    1. A very slick and flexible firmware upgrade process that was as end-user friendly as possible - from the beginning. The latter part is very important, because this means we could distribute stations to friends and family and be testing and supporting them. I started this work and mentioned it in early blog posts, but there were still some significant gaps, most notably in how the firmware was compiled and run that made certain upgrades tricky. Also, I desperately wanted the ability to upgrade from our mobile app.
    2. Actual tasks/threads as part of a minimal real-time operating system. A lot of work and effort was expended with the first version of the firmware trying to make tasks non-blocking and asynchronous so that multiple systems could be “active” and cooperate while running concurrently. This made many things more complex than necessary and therefore, more brittle.

    A side effect of deciding to overhaul the microcontroller situation was an opportunity to make significant improvements to the firmware. I can’t stress enough how happy I was the first time I ran code on the SAMD51 and saw that I had more than 256k of RAM at my disposal. Life on the SAMD21 was getting extremely cramped and I didn’t feel like I had the room to do the kinds of things I really wanted. Not just from a RAM perspective. The SAMD21 has 256k of FLASH memory and with the architectural changes around modules this simply wouldn’t be enough to work comfortably going forward.

    Because the modules no longer had their own microcontrollers, the firmware to drive those had to live on the core SAMD51.

    On that chip we have 1MB of flash memory, ignoring the QSPI memory Bradley added. This was more than enough room to store additional module firmware for our first modules.

    In the beginning, the module firmware would simply be compiled into the core firmware as a kind of super-binary. Long term, this would need to be handled in a more flexible way, for a few reasons:

    1. FieldKit is a modular system that is intended to grow beyond the offering we are able to anticipate, so the firmware handling situation needs to allow firmware from sources other than Conservify and the initial FieldKit team.
    2. Upgrading module firmware needed to be possible, independent of the core firmware.
    3. Modularity means that over time it will become impossible for all firmware for all available modules to live on a single device, so juggling and maintaining that firmware becomes the responsibility of the core firmware preinstalled on the hardware.

    So, while I waited patiently for the new hardware I started to lay the foundation for these things. I spent a considerable amount of time at home writing some small, focused libraries to deliver on this extra functionality. Oh, and yes testing on Adafruit's Grand Central M4 :)

  • Designing Upper, and the dreaded 'Swamp Finger'

    Bradley Gawthrop · 07/29/2019 at 21:07

    With a general architecture established, Upper seemed like the obvious choice for a first build. One of the first things to think about was Swamp Finger. 

    FieldKit stations go to unfriendly places. They typically live in water-resistant enclosures, but when those enclosures do get opened, the conditions may well be less than ideal. The hand reaching in to probe at the delicate electronic guts may well be dirty, wet, or clumsy. Jacob coined the term "swamp finger" to describe the hazards of living in these environments. This immediately ruled out otherwise attractive options like capacitive sense 'buttons', but it also led to one of the more striking design decisions of the process, which was to place components only on the inside of the sandwich made by Upper and Lower, where no swamp finger is apt to roam.

    This somewhat complicated our desire to add a screen. Previous FieldKit hardware attempted to indicate all needful data with LEDs. This is not ideal in the field, or for that matter, for battery consumption, so the plan was to go with one of the ubiquitous I2C OLED displays which have been in so many projects in the last few years. Since we were only populating the invisible side of the board, this meant reverse-mounting the OLED display and cutting a window in the PCB so that it could be seen from the other side.

    Then it was just a question of choosing the right tools for the job:

    Storage
    Once upon a time, we relied entirely on the SD card for storing the data gathered by FieldKit stations when deployed. The price is right, but as anybody involved in #badgelife can tell you, SD cards are The Worst and probably not to be trusted for mission-critical jobs of that kind. Darwin needed on-board memory, so we put in four slots for high capacity SPI NAND flash. We typically use 2Gb chips, giving us a 2-8Gb installed range, depending upon need. The SD card is now for backup and contingency offload if the radios fail.

    Microcontroller
    The ubiquitous ATSAMD21 series had served us well, but we were entirely saturated for pins and memory. That said, we didn't want to leave the Atmel SAM Cortex line entirely, as community support for and understanding of the chips is very good, so we ended up with the biggest of the line, the ATSAMD51 in a 128-pin TQFP package. Bring me all your pins and RAM!

    In support of the ATSAMD51 we installed a fairly hefty QSPI flash chip for bootloader duties. 

    RTC
    In theory, we could have used the internal RTC on the ATSAMD51 for RTC duties, but prior experience made me slightly gun-shy on that point, so we used an external RTC and supplied a supercapacitor to serve as its 'battery' backup. Previously, we've used CR2032 cells for this, but the irony of making a conservation-oriented product with primary lithium batteries struck us as a little hard to justify. Since this RTC also had clock out, we used it to pump a clock into the microcontroller and only needed the one crystal.

    MISC

    2.54mm pitch box headers seemed like a natural choice for mezzanine connections between Upper and Radio. They are ubiquitous, durable, and tall (the GPS module is a big boy). 

    What about a programming header? Well, we decided not to have one. All the pins required for programming are on the mezzanine connector. Batch-programming jigs were always in the roadmap anyway, and for in-circuit debugging, well, Jigs Ahoy!

    In the interest of monitoring and predicting battery life in cold conditions, we added an inexpensive temperature sensor as well. Thought it looked cute, might delete later.

    Next up : Radio!

  • Identify Yourself, Firmware!

    Jacob Lewallen · 07/29/2019 at 20:36

    It all began with a common firmware header. This header is at the beginning of every binary our build system produces and contains metadata about that particular binary. Information like the timestamp of the build, the hash of the binary, the git commit of the source tree, the binary’s size and, critically, symbol information.

    There are a few ways to do this. The quick and dirty way is just to concatenate the header and the generated binary. This approach would work but leaves a little to be desired, especially when compared to "The Right Way". In this situation, the right way is to include that firmware header as an actual symbol declared in the source and to carry it through the entire build process. So this means I got to spend a lot of time learning about linking and linker scripts.

    Linking in modern C/C++ is incredibly complex, and this project was a good introduction that prepared me for future functionality; this is by no means an exhaustive description of the process. Our build chain looks something like this:

    1. Compile *.c, *.cpp, and *.s to object files.
    2. Archive grouped source files into static libraries.
    3. Link those libraries together to form an ELF file.
    4. Run that ELF file through a custom tool [1], generating an “FKB-ELF” file (FKB is FK Binary)
    5. Developers can then use that generated ELF with gdb or dump a binary using objdump.

    To get our headers working, it all starts with a declaration.

    Typically, the very first chunk of data in your firmware binary (for a Cortex-M chip) is the ISR vector table. This table starts with the initial stack pointer value, followed by a table of pointers to the functions for handling various IRQs. This is where the hardware finds your Reset_Handler function, which is the first function to be invoked.

    Executable files are composed of multiple sections, or segments. Each of these has a special purpose. For example, executable instructions are stored in .text segments. If you refer back to the post on Memory, data is stored in a .data segment, and there's also a .bss segment, though it's not present in the binary and just managed so that we can determine its size. In all major compilers you can override the section/segment that variables and functions are kept in.

    What I wanted was for the FKB header to occupy the leading bytes of the final binary, before the vectors table so that the bootloader and other tools could find them. This is very easily done by assigning variables to custom sections in the source, in my case using gcc’s __attribute__((section())) mechanism. So, the header is declared like so:

    __attribute__((section(".fkb.header")))
    const fkb_header_t fkb_header = {
      .signature = FKB_HEADER_SIGNATURE(),
      .version = 1,
      .size = sizeof(fkb_header_t),
      /* Etc */
    };
    
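    For context, here's a rough guess at what a header like this might hold, inferred purely from the metadata listed earlier (timestamp, hash, git commit, size); the real FieldKit definition surely differs:

    #include <stdint.h>

    /* Speculative layout, for illustration only. */
    typedef struct fkb_header_t {
        uint8_t  signature[4];   /* magic bytes that tools scan for */
        uint32_t version;        /* header format version */
        uint32_t size;           /* sizeof(fkb_header_t) */
        uint32_t binary_size;    /* size of the binary, patched after linking */
        uint32_t timestamp;      /* build time */
        uint8_t  hash[32];       /* hash of the final binary, patched after linking */
        char     name[64];       /* build name / git commit */
    } fkb_header_t;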

    The linker script then places this section before the ISR vectors, being sure to maintain the alignment the hardware expects on that table.

    .data.fkb.header : {
        KEEP(*(.fkb.header))
        . = ALIGN(0x1000);
    } > FLASH
    
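    With the header at a known location in front of the vector table, the bootloader can sanity-check an image before jumping to it. A sketch reusing the speculative fkb_header_t above (not FieldKit's actual validation):

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical check: confirm the magic bytes and the self-reported
       size before trusting anything else in the image. */
    static int fkb_header_plausible(const fkb_header_t *header) {
        static const uint8_t expected[4] = { 'F', 'K', 'B', '!' };  /* made-up magic */
        return memcmp(header->signature, expected, sizeof(expected)) == 0
            && header->size == sizeof(fkb_header_t);
    }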

    I should mention that the header as compiled is basically empty and filled with default values. I wanted to be able to customize this header after compilation. Especially because certain things become tricky if you try to inline the header values during compilation, like how do you include the hash for the final binary? Catch-22 town. I also knew that there would be other steps that would have to be performed after linking, which we’ll get to later.

    Next, enter our custom firmware tool. I wrote this tool in Python using the libraries provided by the LIEF project [2]. This is a library for manipulating ELF files and it has been great. With this library it was very easy to open up the fresh ELF file, find the section I was looking for and replace...

    Read more »