Remote sensor platform for Conservation, Science, and Education.
With a general architecture established, Upper seemed like the obvious choice for a first build. One of the first things to think about was Swamp Finger.
FieldKit stations go to unfriendly places. They typically live in water-resistant enclosures, but when those enclosures do get opened, the conditions may well be less than ideal. The hand reaching in to probe at the delicate electronic guts may well be dirty, wet, or clumsy. Jacob coined the term "swamp finger" to describe the hazards of living in these environments. This immediately ruled out otherwise attractive options like capacitive sense 'buttons', but it also led to one of the more striking design decisions of the process: placing components only on the inside of the sandwich made by Upper and Lower, where no swamp finger is apt to roam.
This somewhat complicated our desire to add a screen. Previous FieldKit hardware attempted to indicate all needful data with LEDs. This is not ideal in the field, or for that matter, for battery consumption, so the plan was to go with one of the ubiquitous I2C OLED displays which have been in so many projects in the last few years. Since we were only populating the invisible side of the board, this meant reverse-mounting the OLED display and cutting a window in the PCB so that it could be seen from the other side.
Then it was just a question of choosing the right tools for the job:
Once upon a time, we relied entirely on the SD card for storing the data gathered by FieldKit stations when deployed. The price is right, but as anybody involved in #badgelife can tell you, SD cards are The Worst and probably not to be trusted for mission-critical jobs of that kind. Darwin needed on-board memory, so we put in four slots for high capacity SPI NAND flash. We typically use 2Gb chips, giving us a 2-8Gb installed range, depending upon need. The SD card is now for backup and contingency offload if the radios fail.
The ubiquitous ATSAMD21 series had served us well, but we were entirely saturated for pins and memory. That said, we didn't want to leave the Atmel SAM Cortex-M line entirely, as community support for and understanding of these chips is very good, so we ended up with the biggest of the line, the ATSAMD51 in a 128-pin TQFP package. Bring me all your pins and RAM!
In support of the ATSAMD51 we installed a fairly hefty QSPI flash chip for bootloader duties.
In theory, we could have used the internal RTC on the ATSAMD51 for RTC duties, but prior experience made me slightly gun-shy on that point, so we used an external RTC and supplied a supercapacitor to serve as its 'battery' backup. Previously, we've used CR2032 cells for this, but the irony of making a conservation-oriented product with primary lithium batteries struck us as a little hard to justify. Since this RTC also had a clock output, we used it to pump a clock into the microcontroller and only needed the one crystal.
2.54mm pitch box headers seemed like a natural choice for mezzanine connections between Upper and Radio. They are ubiquitous, durable, and tall (the GPS module is a big boy).
What about a programming header? Well, we decided not to have one. All the pins required for programming are on the mezzanine connector. Batch-programming jigs were always in the roadmap anyway, and for in-circuit debugging, well, Jigs Ahoy!
In the interest of monitoring and predicting battery life in cold conditions, we added an inexpensive temperature sensor as well. Thought it looked cute, might delete later.
Next up: Radio!
As machine learning becomes more and more ubiquitous, developing smart devices and IoT applications “on the edge” should become dramatically easier. Given this, I was interested in exploring applications of machine learning in conservation biology, specifically animal recognition for camera trap images.
Camera traps, used by environmentalists, filmmakers, and researchers, are deployed into the wild to monitor animals. Triggered by infrared sensors, camera traps can detect movement and autonomously capture photos and videos of animals. This is both a boon and a curse for environmental monitoring, because image results can vary widely, ranging from false positives to shots worthy of National Geographic. Similarly, researchers may collect an overabundance of image data or none at all. The variability in camera trap results makes image processing and deployment management an incredibly tedious process.
Using object detection and image classification models, we can make animal monitoring much more efficient. The range of efficiency, however, depends on how machine learning is applied. In this article, I will provide a brief introduction into two possible avenues to explore.
But before jumping in, I would like to first cover the limitations of machine learning for image processing. While it would be immensely convenient to use machine learning to identify species all over the world, there are simply too many species for one model to cover. The most practical application of machine learning would be to build an object detection model to identify the presence of animals in an image, and then complement the object detection model with a regional, geographically-specific image classification model. The results of object detection models for animals have been proven to be quite robust, reducing the time spent sifting through false positives. Furthermore, image classification models trained on smaller subsets of animals have also provided accurate results.
So, the first and most straightforward method of increasing animal recognition efficiency is simply to run camera trap images through an object detection and an image classification model once all images are collected. In my time perusing the internet for available models, I found an open-sourced animal object detection model built on InceptionNet by Microsoft’s AI for Earth project.
Unfortunately, I was unable to find any open-sourced image classification models, because such models lack generalizability. There have been multiple research efforts examining the accuracy of animal classification, and results have shown that a model trained for a specific location will perform significantly worse when applied to a new geographic region. An example may be found here.
Most of these models were built to research image classification accuracy and were trained for specific locations, such as the American Midwest. When applied to a new location, image classification accuracy decreases significantly. If you are interested in training your own image classification model, you may find valuable datasets on Lila Science, a library of environment-related datasets. I would recommend the Caltech Camera Traps or Snapshot Serengeti datasets.
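The two-stage approach described above (a general animal detector to filter out false triggers, followed by a region-specific classifier) can be sketched as a simple pipeline. The models here are stand-in callables of my own invention, not any specific published model:

```python
# Hypothetical two-stage camera trap pipeline: a general animal detector
# filters empty frames, then a regional classifier labels the survivors.
# The model-loading details are omitted; models are plain callables here.

def process_camera_trap_images(images, detect, classify, threshold=0.8):
    """Filter empty frames with a detector, then label the rest.

    detect(image) -> confidence that an animal is present (0.0-1.0)
    classify(image) -> (species_label, confidence)
    """
    results = []
    for image in images:
        confidence = detect(image)
        if confidence < threshold:
            continue  # likely a false trigger (wind, rain, shadows)
        species, score = classify(image)
        results.append({"image": image, "species": species, "score": score})
    return results
```

The payoff of this split is that the detector can be generic and reused everywhere, while only the (much smaller) classifier needs retraining per region.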
The second and more ambitious method is to build a camera trap “on the edge.” There are currently no camera traps that offer on-device machine learning, but it is quite easy to build a nascent one using accessible hardware.
To test this possibility, I simply hacked a Google AIY vision kit, which comes with a Raspberry Pi, Vision Bonnet (Google’s ML processing board), Pi Camera, and other hardware pieces to tie it all together. To make it a real camera trap, I added my own PIR sensor that would trigger the camera to take an image and process it with a model.
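The trigger-and-infer loop described above can be sketched roughly as follows. The `pir`, `camera`, and `model` objects are hypothetical stand-ins for the real GPIO, Pi Camera, and Vision Bonnet interfaces, so this is an illustration of the flow rather than working AIY code:

```python
# Sketch of a PIR-triggered camera trap loop. The pir, camera, and model
# objects are assumed interfaces, not actual AIY vision kit APIs.

import time

def camera_trap_loop(pir, camera, model, max_captures=None):
    captures = []
    while max_captures is None or len(captures) < max_captures:
        if pir.motion_detected():          # PIR sensor fires on movement
            image = camera.capture()       # grab a still frame
            label = model.infer(image)     # run on-device inference
            captures.append((image, label))
            time.sleep(2)                  # debounce: let the animal move on
        else:
            time.sleep(0.1)                # poll interval
    return captures
```

In a real deployment `max_captures` would be `None` and the loop would run forever on battery, writing labeled captures to storage instead of returning them.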
On the software side, there were already a number of built-in TensorFlow models to work with.
The current assortment of available scientific tools for environmental monitoring, field science, and data-driven storytelling is too expensive, unnecessarily proprietary, and not user friendly.
At FieldKit, we want to create an environmental monitoring platform that’s accessible to everyone – that is, low cost and available to experts and amateurs alike. The FieldKit app plays a critical role in setting up the physical environmental sensors. These two elements have to feel like complementary parts of the same overall FieldKit experience. Our goal is to create a seamless hardware-app experience that is engaging and reassuring for all our users.
The Design Process
To create a seamless user experience, we followed the design-thinking process of user and landscape research, problem definition, and solution design and testing through iterative prototyping:
User Interviews: Empathize and understand users’ needs and pain points
Competitive Landscape Research: Research of the environmental monitoring industry
Designing and Prototyping: Creating the user flow, wireframes, and prototypes
To better understand what features would be useful in the software, we first had to understand the users. Who are our users, and what were their current behaviors, needs, and concerns? How might our findings help us design an experience that brings them value?
FieldKit User Personas
Overall, we conducted 18 user interviews across a variety of target user types – field scientists, educators, citizen scientists, and environmental justice advocates. During the interviewing process we discovered that these roles generally interact with FieldKit at different levels – as a data collector, a data author, or a data consumer. For example, we learned that field scientists were not actually going out into the field to deploy sensors and collect data; their graduate students were doing that. The field scientists were the data authors, setting up the project parameters for the graduate students (data collectors). This discovery focused our efforts to subsequently interview graduate students in order to create a new persona and gain further insight.
We found that while the 5 user personas had unique behaviors, they all had similar needs. The main concerns were ease of use and easy data visualization. We took user feedback a step further by collecting data through an online survey. The survey included questions about their role, which sensors are most interesting, and which features were important.
We gained the following insights from the online survey:
Competitive Research & Journey Mapping
Understanding the current environmental monitoring product landscape was an important step in defining FieldKit’s unique value proposition and informing the user experience design. What were the available products out there? What were their limitations, and how might that understanding reveal opportunities for the FieldKit experience?
Although there were a number of consumer products on the market, it was tough to find a direct competitor offering the type of scientific-grade environmental sensors and easy-to-use user experience that we're aiming for with FieldKit. It thus benefited us to look beyond the environmental monitoring space.
If you've ever learned a new skill by doing, you're familiar with the experience of working your way through a project to completion only to immediately want to start over with all the understanding you've gained in the process. Building hardware and deploying it for scientific fieldwork is a very similar crucible. The process of building and assembling tens of remote instrumentation stations will tell you where the unexpected time sinks and pain points are. Deploying them will reveal where the hardware is fragile, inconvenient, or counterintuitive. All the while, you'll be coming up with features you wish it had.
When I joined the FieldKit project, the general shape of our future hardware was already starting to emerge through this process. Essential as it was, the current hardware couldn't take the project where it needed to go. A new generation of hardware was going to be required, and since you should never miss an opportunity to give a project a good internal code name which makes you feel like a secret agent, I dubbed this next step in FieldKit evolution 'Darwin.'
Up until Darwin, a FieldKit station typically consisted of two boards. The "Core" board handled radio communication, GPS, data logging, and power management. The Module handled tending the sensors, whatever they were in a given situation.
So, in an ideal world, what do we want?
It might be useful to point out that our goal has never been to land big venture capital and have 20,000 units of FieldKit produced in China for a few bucks apiece, so the pressures that cause many an engineer to condense everything into a single compact board for easy mass production don't really apply here. That said, breaking the FieldKit system into separate PCBs does actually improve the manufacturing equation, by turning the slowest-evolving part of the system into something that can be produced by contract manufacturers in larger quantities, without committing us to producing large numbers of the parts that evolve faster. I'll be writing more on our manufacturing philosophy later.
So, if you work your way through this list assuming that you're going to end up with several boards and connect them with board-to-board means rather than cables, you end up with an architecture that looks more like this:
Now in an ideal world, Power Management, which is always basically the same, might have ended up on the Upper board, but there were good mechanical engineering reasons not to go that way. (Curse you Meatspace, ruining my fantastic theoretical diagrams!)
IRL, that ended up looking like this:
There were lots of decisions to be made at each of these boards, so I'm going to write them up individually in a series of posts starting tomorrow.
Pretty much every sensor deployment we've done has been to remote areas with little or no connectivity. It can take days to reach some locations, either off-roading through unforgiving terrain, boating in over crocodile-infested waters, or hiking over rocks, ice, and snow. Sometimes we've been able to get status over satellite, but the bandwidth and power budget usually mean that the truly useful status and diagnostic information is left sitting idly on disk until the station can be visited again physically. It's stressful setting up a station and then leaving the poor thing behind, hoping that nothing was forgotten and that enough testing was done.
Over the last few months our efforts have largely revolved around some work we're doing with WCS and FIU in the Amazon jungle. Most of the stations there have been of the breed we're used to, left on their own to fend for themselves. Lately we got word that a future site would have WiFi, which for us is a pretty unique opportunity for a few reasons. First, we'll be able to get higher fidelity diagnostic information and data from these stations. In addition, given the right preparation, we'll be able to service the firmware on these stations remotely.
Being able to remotely upgrade firmware is a feature I've been wanting for a while. Given the state of the FieldKit project, we've never really had a reason to expend the effort on the feature, though. This recent news was a great opportunity to justify that initial groundwork.
Now that the feature is implemented and being tested, I wanted to write up a post going over what the feature took. So, get ready, this is a software heavy post.
At a high level, the basic premise is that the station would periodically check with our servers to see if there is new firmware available. If there is, the firmware is downloaded and then stored in the Serial Flash chip. Once completed and verified, the MCU sets a flag in memory indicating the self-flash should be done and then restarts itself. At startup our custom bootloader checks for this flag, and if set will reprogram the MCU's flash memory from the binary in the external flash chip.
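The handshake above can be sketched in Python, with dictionaries standing in for the external flash chip and the non-volatile flag memory. Names like `UPGRADE_PENDING` and the SHA-256 verification step are illustrative assumptions; the real firmware is C++ running on the MCU:

```python
# Simplified sketch of the self-flash handshake. Dictionaries stand in for
# non-volatile state; the flag name and hash scheme are assumptions.

import hashlib

UPGRADE_PENDING = 0x1  # hypothetical flag value checked by the bootloader

def stage_firmware(external_flash, nvm_flags, binary, expected_sha256):
    """Store new firmware in external flash, verify it, then flag the
    bootloader to reprogram the MCU on the next restart."""
    external_flash["firmware"] = binary
    # Verify the stored copy before committing to a reflash.
    digest = hashlib.sha256(external_flash["firmware"]).hexdigest()
    if digest != expected_sha256:
        raise ValueError("firmware image failed verification")
    nvm_flags["self_flash"] = UPGRADE_PENDING  # bootloader checks this at boot
    return True

def bootloader_check(external_flash, nvm_flags):
    """What the custom bootloader does at startup: if the flag is set,
    clear it and hand back the image to program into MCU flash."""
    if nvm_flags.get("self_flash") == UPGRADE_PENDING:
        nvm_flags["self_flash"] = 0
        return external_flash["firmware"]
    return None  # no flag set: boot the existing application
```

The key design point is that the verification happens against the copy in external flash, so a corrupted download can never trigger the reflash path.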
When remotely upgrading module firmware, the process is very similar. The Core module (the one with the WiFi) will check to see if any of the attached modules' firmware is outdated, downloading binaries as necessary. That binary is then transferred to the module over I2C and verified, and the module restarts itself in a similar fashion.
This is one area where our decision to include serial flash memory as a standard "Module" feature paid off. This process would have been much more awkward otherwise.
It's important to us that all of the work we do fit comfortably within the OSS/OSH ecosystem that's evolved from Arduino and similar platforms. This feature represents our largest deviation from that ecosystem so far, though it's still possible to use our code and hardware with standard bootloaders and simply forgo this functionality in your own projects.
Most "maker" focused development boards in the Arduino ecosystem come pre-installed with a bootloader of some kind. This is a small program, usually less than 8k or so, that runs before application code and provides friendlier ways of programming the MCU. For example:
Now would be a good time to mention that all of our boards use the ATSAMD21G18 chip, the same one found on the Arduino Zero boards and the Feather M0 line. So most of what's here applies to them and other Cortex-M chips.
Botswana's Okavango Delta is one of the most incredible places on this planet. Named a UNESCO World Heritage site for its biological diversity, the Delta is a pristine habitat for all the charismatic megafauna that sub-Saharan Africa is known for: elephants, hippos, lions, giraffes, and more. It is one of the most incredible places that I have ever been, and the need to monitor and protect it has never been more pressing. It was in this magical place that FieldKit was born, with support from the National Geographic Society.
FieldKit was inspired by a collaboration between National Geographic Explorers Shah Selbe, Steve Boyes, and Jer Thorp. Steve was conducting biodiversity surveys of the delta from canoes year after year, in the same old ways that scientists have done for decades (if not longer). While working in the field in Botswana, Angola, and Namibia, the team realized that there were few good open source hardware and software tools that met the specific needs of field research – not only in sensor technology but also in ways to organize and visualize the data. Responding to this need, Shah and Jer began to prototype software and hardware solutions and field-tested these approaches from 2014 to 2017.
We wanted to share the science and the story behind the expedition in real time, so anyone could join in and provide insight or support. By turning Into The Okavango (ITO) into a live-data expedition, we have been able to bring thousands of people along with us on expeditions in the Okavango Delta (including an astronaut following along from the International Space Station). We collected, stored, and shared 40 million open data points and continuously measured ‘the heartbeat’ of this crucial ecosystem through large-scale open source sensor systems.
This experience we had with ITO was transformative, and it made us realize that we should bring these same capabilities to anyone anywhere in the world by giving them a publicly available, fully featured ITO of their own. The lessons learned and understanding that came from years of continuous field use allowed us to architect FieldKit in a way that can be scaled and expanded across various users, regardless of how much they know about engineering and computer science. Scientists have already embraced social media and blogs to share their expeditions with the world visually, but there wasn't a good tool out there for them to do that same sharing scientifically. @Jacob Lewallen has been helping with the hardware and software development on a volunteer basis since the beginning, and stepped in as FieldKit's Principal Engineer at Conservify in 2017.
We already have additional working partnerships with scientists to use FieldKit in their efforts, which include:
In our previous post, Shah outlined our most recent project with the Tropical Rivers Lab at Florida International University and gave a high-level overview of the work we've started with them. I wanted to take some time to talk about how valuable this real-world work is in providing feedback on the FieldKit ecosystem.
It's inevitable in field work that you'll find yourself working with conditions or constraints that you didn't anticipate, even more so when you're the one who designed the hardware you're using. Ever since my first field experience, I've tried my best to keep meticulous notes on the issues I encounter, their solutions, and ways they could be avoided or mitigated in the future. These notes are incredibly valuable compared to in-lab testing of hardware and physical enclosures, for several important reasons:
I'm going to quickly go over some of the changes we've made recently, directly due to the efforts of preparing these stations for the field.
Board sizes evolved haphazardly until recently. I've begun rounding them to easy-to-remember dimensions.
Oh, connectors. Connectors have been one of my largest frustrations. We've recently begun incorporating Molex connectors on our boards to link modules to the Core board. The connector is a 5-position one, carrying I2C, power, and Vbus (unused for now). This went so well that I wanted to use them for even more things, not just the Core/Module connection. We decided to increase our use of these connectors in these ways:
So far we've been using side-entry USB and JST connectors exclusively. Somewhere around the third board I realized that vertical-entry USB would be far more useful in situations where the enclosure is an off-the-shelf box. Side entry, especially on the ends of boards, requires internal space to be able to use the connector. Otherwise, you've got to lift the board out of the enclosure to insert a USB cable when flashing or charging. The same goes for the JST connectors we used for batteries, though to a lesser extent, because they tend to be more maneuverable than USB cables. Right now we're testing hybrid USB and JST footprints that allow either part to be soldered on. We'll report back on how successful this is, especially with regard to the frankenprint we're using for the USB.
FieldKit is designed to work with many different scientific applications, across various deployment scenarios and geographic locations. That requirement came from the type of work we do at Conservify, which can vary from partner to partner. One goal for this Residency at the Supplyframe DesignLab is to move us toward a product-line focus instead of the constant one-off projects that we have been doing time after time. We want to build something useful for the scientific community that allows anyone, at any level of scientific knowledge, to go out and start learning about the world. We wanted to standardize the tools so we could focus on the projects in the field and on getting the critical data necessary for the scientists and conservationists.
One of our implementation partners for FieldKit is a project called Ciencia Ciudadana para la Amazonia (Citizen Science for the Amazon) that is run by the Wildlife Conservation Society through support by the Moore Foundation. This past week we had a milestone for the project: we were sending out the first prototypes of the environmental sensors that will be used in the Ecuadorian Amazon. This milestone kept us very busy this last week, so much of our 0x03 residency was focused on getting the hardware and software ready for deployment. These included:
The photos above were of a few of the systems in their final integration, before we left them in Miami for the trip to Ecuador. Later project logs will cover the construction of these and some of the changes that we will undertake from the lessons learned over the last few weeks.
The plan was to take this hardware out to Quito (Ecuador) for the 2018 AQUATROP conference, where the team behind the Amazonia project had a large scientific representation. There was a showcase where the FieldKit sensors were deployed and many additional implementations were discussed with future partners.
Our partner for this first deployment (seen in the photo above) is the Tropical Rivers Lab at Florida International University. Every Conservify project partners with scientists (at a university, government, or NGO, or even citizen scientists) to handle the specific scientific questions around the technology we develop. This can be things like where to deploy, what to measure, and the specific requirements around the data we need. Getting the scientific question correct is fundamental to the technology development pathway.
Over the course of this project, we will outline some of the other work we are doing with other FieldKit partners and the expeditions that we go on during the development and deployment.
Posting this slightly late project log from last week as we prepare for some upcoming tasks for FieldKit. We have quite a bit happening, both in the handheld device and in some of our upcoming deployments in the Amazon Rainforest with the Wildlife Conservation Society. More on both of those in the coming weeks.
We sent the panels for the handheld version out to a New Zealand-based PCB supplier that Dan had worked with in the past. These panels cover both the main MCU board and the sensor board. They are separated (as Jacob mentioned in an earlier post) so we can mount the sensor board close to the enclosure for more accurate readings. Those sensors measure temperature, humidity, ambient sound, and ambient light.
Jacob has been pulling together the component orders for the pick-and-place machine, which has been an interesting exercise. We have only hand-built PCBs before so our quantities were much lower. Once you start looking into reels and trays of components, the numbers can really add up. Fortunately the common providers like Mouser and Digikey have smaller sub-reels that would be perfect for the runs we plan to do here at the DesignLab. We are still finding the right balance on quantities since we are not in full production yet.
For all the other sensor boards (water quality, weather, etc.), we are still ordering the trusty purple boards from our friends at OSH Park. They have been fantastic supporters of FieldKit since the Open Hardware Summit last year.
Product Design Ideation
I have been working a lot on what the look and feel of FieldKit should be as we get into the enclosure design. Jacob and I both come from engineering backgrounds, so some of this industrial/product design stuff is new to us, and we were really excited about the opportunities that would come out of the DesignLab residency. I already had a fantastic chat with Majenta and Giovanni about best practices around designing a product like this. The plan is to start with the handheld version (what we frequently call the "Naturalist") and design some solid 3D prints of potential enclosure designs that we can touch and feel. Using those, I will ask for feedback and we can discuss which one meets the feel we are seeking for FieldKit in general. As these are to be used out in the field, we want them to have an outdoorsy vibe (much like the stuff that Best Made does so well):
We also want the product to follow the branding that we have built for Conservify and started to develop alongside FieldKit partners Office for Creative Research (which, tragically, is no longer around) and the National Geographic Society. For those who haven't been following along for long, FieldKit came out of work we had done with OCR since 2014 as part of the Okavango Wilderness Project. That project came out of a collaboration between a few National Geographic Explorers to bring live data from the field during a biodiversity survey expedition in Botswana's Okavango Delta (more here and here). The branding and color schemes (as currently stands) looks like this:
Finally, there are some functional characteristics that we want for this handheld version. These are:
Happy Monday everybody, hope you’re ready for another update! Things were fairly quiet this week as Shah was in Washington DC for the National Geographic Society Explorer festival. My focus was on the preparation for scaling up the manufacture of our FieldKit Naturalist boards.
We have two goals on this front. The first is the assembly of a panel of boards using the pick and place machine at the DesignLab and the second is having the capability of sending away for assembled boards from a 3rd party. In order to be ready to send away for a panel of PCBs, a quick design review was necessary.
This was fairly straightforward, as we have some working prototypes that I hand-assembled in our lab, so we have pretty high confidence in our current board design. We did make some changes, though:
In ramping up for the pick and place, we need to begin ordering pick-and-place-friendly parts: reels and trays. We try to keep all of our BOM information in our schematic files. This means that going into this process, each part already had a supplier name and supplier part number, typically from Mouser. Because many assemblers have their own preferred suppliers for basic parts like passives, Dan suggested we keep manufacturer details in the schematic as well.
One of the things I love most about Kicad is how scriptable things are. It was easy to export a CSV of the parts and then fill in MFN (Manufacturer Name) and MFP (Manufacturer Part) there and then update the fields in Kicad from that. We also have scripts that go over all of our boards and their parts and look for parts with out-of-sync manufacturer and supplier information. We want the authority to be the schematic, but with several boards keeping the details consistent can be time consuming. Scripts help with that.
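A minimal sketch of that round trip, assuming the parts have already been exported from Kicad into simple dictionaries (the field names and the catalog shape are my own illustration, not the actual scripts):

```python
# Illustration of the BOM workflow described above: dump part fields to CSV
# for filling in MFN/MFP, and flag rows whose manufacturer part disagrees
# with what a supplier catalog says. Field names are assumptions.

import csv
import io

FIELDS = ["Ref", "Supplier", "SupplierPart", "MFN", "MFP"]

def export_bom_csv(parts):
    """parts: list of dicts keyed by FIELDS, one per schematic part."""
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(parts)
    return out.getvalue()

def find_out_of_sync(parts, catalog):
    """Flag parts whose supplier part number maps to a different
    manufacturer part than the one recorded in the schematic."""
    issues = []
    for part in parts:
        expected = catalog.get(part["SupplierPart"])
        if expected and expected != part["MFP"]:
            issues.append((part["Ref"], part["MFP"], expected))
    return issues
```

Keeping the schematic as the single authority and letting scripts do the cross-checking is what makes this manageable across several boards.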
Soon, I hope to write a project log about our “grouped spread and place” script.