Desktop Warehouse

Reducing waste by not just storing objects, but remembering and generating information about them.

When our high tech devices die, all or most of the resources and effort that went into their production are lost. Either they are buried in a landfill, or they are recycled. If they are recycled, usually the task is accomplished by melting the tech down to its constituent materials, discarding all that is then useless, and reselling the rest as raw material.

The impact this has on our world is awful. Many are working hard to make scalable technologies that will address this problem in the larger world. But I would like to develop technologies that address this problem on the scale of individual hackers and makers - the people who both use and develop the technologies that risk ending up as polluting waste.



The aim of this project is to develop a suite of tools that make it possible for individuals and organizations (like hackerspaces) to efficiently reuse high-tech components in new creations, without first spending energy to reduce them to raw materials and send them through the production cycle again.

Included in this scope are tools and systems that:

Store arbitrary objects efficiently so that they can be called upon when needed

Automatically learn information about objects that they are exposed to in order to facilitate reuse

Make objects and the information connected to them available to wider networks of people, so that they can more rapidly find an efficient reuse



There are many potential tools that would address the purpose laid out above. However, I'm interested in the following specific ideas:

  • A scalable system for efficiently storing small objects and the information related to them. Many hardware hackers accumulate thousands of parts, which usually end up as a large disorganized mess. Even when the needed materials are on hand, new ones are bought because the old ones have been forgotten in the chaos. A solution to this problem is a modular, digitized object store: items are submitted, the system automatically attaches information to them, and then they are stored in a space-efficient manner. When a part is needed, it is searched for and selected in a digital interface, then automatically brought to the hands of the user by the system.
  • Cheap sensors for collecting information about arbitrary objects. An example is a 3D scanner that could be placed at the entrance of the storage system explained above in order to obtain a 3D scan of every object entering storage. The information from such sensors would be tagged to the objects it pertains to, so that it can be used by other systems and by the end user.
  • Machine intelligence for automatically tagging objects with relevant information. Such systems could read the part number off an IC and attach the corresponding datasheet, identify the category an object belongs to (screw, circuit board, resistor, etc.), collect information relevant to a particular object class (the type and thread of a screw, for example), or partially or fully reverse engineer a circuit or schematic from a PCB.

  • First Test of Focus Scanner Image

    Owen Trueblood 07/15/2014 at 21:12 0 comments

  • Focusing Mechanism Redesign

    Owen Trueblood 07/14/2014 at 19:02 0 comments

    Now I'm using a USB microscope and physically changing the focal point with a stepper motor driving a large laser-cut gear. Next I'll hook an EasyDriver I have lying around to an Arduino and start capturing some test focal stacks. Using those I should be able to finish the software that converts focal stacks to 3D models. A less immediate concern is packaging the scanner more nicely so that I can orient and position it precisely.


    This scanner will be at the heart of a sorting machine.

  • Test of Scanner Hardware

    Owen Trueblood 06/22/2014 at 22:49 1 comment

    I've been working a lot on the software for converting a focus stack into a depth map. Writing in Clojure and testing with focus stack images found online, I managed to produce images representing depth maps of the objects in the photographs. But the method I'm currently using to calculate the depth of each pixel is to take the layer of greatest focus as the depth value, which has so far produced poor results. The depth maps are noisy and inaccurate.
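    The layer-of-greatest-focus method can be sketched in a few lines. This is an illustrative Python/NumPy version, not the project's Clojure code, and the focus measure (a squared discrete Laplacian) is my assumption about what a reasonable per-pixel sharpness score looks like:

```python
# Naive depth-from-focus: for each pixel, pick the stack layer where a
# local focus (sharpness) measure peaks. Illustrative sketch only; the
# squared-Laplacian focus measure is an assumption, not the project's code.
import numpy as np

def focus_measure(image):
    """Squared discrete Laplacian as a simple per-pixel sharpness score.
    Edges wrap around via np.roll, which is fine for a sketch."""
    lap = (-4.0 * image
           + np.roll(image, 1, axis=0) + np.roll(image, -1, axis=0)
           + np.roll(image, 1, axis=1) + np.roll(image, -1, axis=1))
    return lap ** 2

def depth_map(stack):
    """stack: (n_layers, h, w) grayscale focus stack -> (h, w) layer indices."""
    scores = np.stack([focus_measure(layer) for layer in stack])
    return np.argmax(scores, axis=0)
```

    As the log notes, taking a hard argmax per pixel is noisy: any layer whose sharpness score spikes from sensor noise wins outright, with no smoothing across the stack.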

    My plan is actually to use a different approach, where a Gaussian curve is fit to the focus values in each pixel's stack and the depth is taken as the position of the curve's peak. However, after implementing that algorithm I found it to be far too slow. It would have taken two weeks to process the simple focus stack I was testing with, and I expect to need it for focus stacks 100x to 1000x thicker. I tried again with an approximation that takes the log of the values in each pixel's stack and fits a parabola. It was much faster but still too slow, and the results were very poor. I believe the results were poor because of the small number of images in the test focus stack, but the speed of the algorithm is a much bigger issue.
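    One way the log-parabola approximation could be made fast enough is to vectorize it across all pixels at once: take each pixel's peak layer, then fit a parabola through the log of the three focus values around that peak in a single array operation, and read the depth off the parabola's vertex. This is a hedged NumPy sketch under my own assumptions (function name, the `scores` input layout), not the project's implementation:

```python
# Vectorized log-parabola depth refinement: because log(Gaussian) is a
# parabola, a three-point parabolic fit around each pixel's peak recovers
# a fractional (sub-layer) depth. All pixels are processed in one pass.
import numpy as np

def refined_depth(scores, eps=1e-12):
    """scores: (n_layers, h, w) focus measures -> (h, w) fractional depths."""
    n = scores.shape[0]
    peak = np.argmax(scores, axis=0)
    # Clamp so every peak has both neighbors inside the stack.
    i = np.clip(peak, 1, n - 2)
    rows, cols = np.indices(peak.shape)
    lo  = np.log(scores[i - 1, rows, cols] + eps)
    mid = np.log(scores[i,     rows, cols] + eps)
    hi  = np.log(scores[i + 1, rows, cols] + eps)
    # Vertex of the parabola through (i-1, lo), (i, mid), (i+1, hi).
    denom = lo - 2.0 * mid + hi
    safe = np.where(np.abs(denom) > eps, denom, 1.0)
    offset = np.where(np.abs(denom) > eps, 0.5 * (lo - hi) / safe, 0.0)
    return i + offset
```

    When the focus profile really is Gaussian, its log is exactly parabolic, so the vertex lands on the true peak; with few, noisy layers the three samples stop resembling a parabola, which matches the poor results described above.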

    I wanted to try the parabola fit approximation on higher quality data to see if that would improve the results by giving me focus stacks actually resembling Gaussian curves. So I set up the scanner hardware that I built to do a "dumb" run. The motor controller still isn't ready for action, but I can still apply power directly and move the z stage. I was planning to run the motor slowly while shooting a video, and then extract the frames from the video to produce a reasonable-quality focus stack for testing. But while I was fiddling with the machine I realized that it has a fatal flaw.

    In order for the naive algorithm that I'm implementing to work, each pixel must stay correlated to a specific point on the object being scanned. So the flaw should be obvious: when the camera moves the correspondence between each pixel and its point on the object is lost. The projection that the camera makes should be orthographic instead of perspective, or the position of the focal plane should be changed in the optics instead of by physically moving the object to be scanned.

    I am annoyed that I didn't catch this flaw while working out the design or doing research. But the fix is obvious. I'll switch to using a stepper motor to change the focus of the webcam directly.

  • DFF Mechanics Completed

    Owen Trueblood 06/04/2014 at 18:40 0 comments

    It took many hours of modelling in SolidWorks and running to the laser cutter and back, but finally the Z axis for the depth-from-focus 3D scanner is completed. Mainly the work was designing parts to adapt components together, like the camera board to the remnants of the micromanipulator, the knob of the Newport 443 Series linear stage to a gear, and the geared motor also to the stage. Activating the motor moves the apparatus up and down beautifully, so I can move on to designing, building, and writing firmware for a motor controller to let me control the scanner mechanics from a computer.

Mike Maluk wrote 10/23/2015 at 05:07 point

Love the idea, can't wait to see it mature!

Mike Szczys wrote 06/05/2014 at 15:48 point
Wow, cool concept! Thanks for entering it in The Hackaday Prize.

You have a good start. I can't wait to hear how the device captures, stores, and shares the data about the components that it salvages.

Owen Trueblood wrote 06/05/2014 at 17:57 point
Thanks. Updates coming soon, and regularly.

PointyOintment wrote 06/05/2014 at 04:11 point
Great minds think alike! I have a huge collection of random electronic and mechanical parts and assemblies, and over the past few months I've been thinking about making an inventory system to keep track of them all. A few weeks ago I thought of a machine that automatically disassembles them into their components for easy reuse and looks up info on the components. Just the other day I had the idea to 3D scan all objects both to facilitate efficient packing and to just have the extra knowledge about each item, to make it easier to reuse them. I'm not ready to really begin working on my system yet, but I'll be watching your implementation with great interest.
