Early on in the development of FieldKit, we had a meeting to discuss the project's goals, in as broad a sense as possible. We also wanted to look closely at what we thought were the project's values: the things that would define how the project existed as an entity in the real, human world. We ended up with five: Reliable, Legible, Open, Responsible, and Accessible.
Reliable means that FieldKit is not disposable tech. We want you to use our tools for years and decades. It also means that the data from FieldKit is certified to be accurate and scientifically valid.
Legible means that data from FieldKit should be easily read and understood by anyone, not just people with training in science and math. Likewise, our documentation and instructions should be readable by everyone, not just English speakers.
Open means that we release our hardware plans and software code, but also that we share our processes and plans, our philosophies and values. We open ourselves to collaboration and critique.
Responsible means that FieldKit products are built and shipped using environmentally responsible methods, and that we teach users how to deploy responsibly. It means that we have a plan for recycling our hardware. It also means that we take our users' privacy and their data rights seriously.
Accessible means that the entire project - hardware, software, community - is open to everyone. Specifically, this means people who have historically been excluded from tech projects like FieldKit, including women, people of colour, LGBTQ+ people, the elderly, disabled people, and users from the global south.
As we've been building the three core parts of FK (the hardware, the software platform, and the community), we've tried to keep these five pillars of the project in clear view. At every step, small or large, we can ask ourselves how well we're adhering to these values. If a new interface feature seems really cool, but gets in the way of accessibility, we might revisit or even discard it. The same goes for hardware development: if there's a high environmental cost for a certain part or process, we'll work to find better alternatives.
Responsible is the value that I've thought the most about. I've been involved in tech for twenty years, and in that time have had a front-row seat to the damage that (mostly) well-intentioned products have inflicted on our lives, and on the environment around us. Our political systems are being hacked, and our landfills are filling up with toxic waste in the form of discarded phones and tablets.
In the last two years, I've become specifically aware of how much harm data systems have caused - from invasive ad-tech to facial recognition, the collection of data and its operationalization continues to put people and ecosystems at risk. We've become very good at envisioning the benefits of data collection, but not good at all at imagining its possible risks.
In building FieldKit, we've focused on two paths toward data responsibility: mitigating potential harm and educating users. The former means building our data systems on secure protocols, and giving users control over how their data might be shared and to whom. The latter means asking questions about possible environmental and social harms of data collection before a FieldKit user heads out into the real world to deploy:
- Have you spoken to local wildlife experts to ensure your FieldKit station won't disrupt the ecosystem you're putting it in?
- What is your plan for retrieving and reusing/recycling your station?
- What indigenous territory are you deploying your station in and do they have rights to the data you're collecting under data sovereignty claims?
- Are there local schools or community centers where you might bring the results from your station back to the people who live where you're collecting data?
- Are there other parties who might be interested in using this data? How do their interests differ from yours? Will making this data public cause potential harm?
As we head toward a public release of FieldKit in 2020, we'll be continuing to build on these questions. More broadly, we'll be working on ways to evaluate how well we're doing as a team to stick to our values. A big part of this is speaking these things aloud, so that our community can hold us accountable if and when we stray out of bounds.
(Photo from D.J. Patil - Ethics + Data Science)