What, Why, and How (roughly)

A project log for Vision-Based Grasp Learning for Prosthetics

Building an intelligent, highly-functioning hand prosthetic on the cheap using the power of deep learning.

Stephanie Stoll 08/26/2017 at 18:09


The aim of this project is to develop a hardware and software system capable of semi-automatic grasping of objects. For this, a prototype hand and a software control system are being developed. The central idea is that by giving the hand the ability to determine the best grasping strategy for an object, the human control signal can be simplified significantly.

Ultimately, I want to provide a fully functioning, low-cost prosthetic hand that can make intelligent grasping choices using only a binary (open/close) input signal from myo-electric sensors attached to the amputee's arm muscles.


There are some really great prosthetics available today that can significantly improve an amputee's life. However, sophisticated solutions still come at a high development and production cost. To make robotic hands and prosthetics affordable, a growing number of hobbyists and engineers have made it their mission to take findings from academia and industry and use them to develop low-cost, functional robotic hands and prosthetics. With the advent of consumer 3D printing, low-cost PCB manufacture, and off-the-shelf components, this has become increasingly feasible. However, most affordable hands still need to be controlled manually, either via a PC interface, a controller, or muscle sensors, with the latter only offering rough control. I want to change that by developing a low-cost but effective robotic hand prototype that has the intelligence to make its own grasp choices depending on the object of interest and its position relative to the hand.


There are several challenges to developing low-cost yet effective hand prosthetics:

The human hand is highly complex and versatile in its dexterity. Approximating the human hand's range of motion and sensing abilities requires a tremendous amount of development time, expertise across varied disciplines, and resources. As this is a one-woman show, I decided to reduce the human hand to its basic components and use manufacturing technologies such as 3D printing to ensure quick development cycles and reproducibility.

The number of actuators used directly influences the hand prototype's range of motion, and the number and quality of sensors determine the quality and quantity of data that can be collected and used to control the hand. When keeping cost as low as possible, actuators and sensors are the components most likely to suffer in quality. To get rich, valuable data at low cost, I decided to rely heavily on visual information from a wrist-mounted camera, which is used to collect the data for building the hand's control software. Potentiometer feedback from the hand's actuators serves as a low-cost substitute for tactile sensors.
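To illustrate how actuator potentiometers can stand in for tactile sensors, here is a minimal sketch: if the measured finger position stops tracking the commanded position while the motor keeps driving, the finger has probably met resistance, i.e. touched the object. The function name, ADC units, and threshold below are illustrative assumptions, not the project's actual code.

```python
def contact_detected(commanded, measured, lag_threshold=15):
    """Flag contact when the potentiometer reading lags the commanded
    position by more than `lag_threshold` ADC counts (hypothetical units)."""
    return (commanded - measured) > lag_threshold

# Example: the motor is commanded to 512 counts but the pot only reads
# 480 -- the finger is being blocked, so we treat this as contact.
print(contact_detected(512, 480))  # True  (lag of 32 counts)
print(contact_detected(512, 505))  # False (finger still tracking)
```

In practice the threshold would need tuning per finger to account for gearbox backlash and sensor noise.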

Grasping is a highly complex task, and its success depends on multiple factors that vary with the specific situation and conditions, such as the shape and position of the object and the characteristics of the hand. Finding a heuristic approach that caters to all possible settings seems impossible. However, deep learning makes it possible to develop a trainable, data-driven control system capable of making grasp choices while only requiring one external control signal (open or close).
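The shape of such a data-driven controller can be sketched as follows: a network maps a wrist-camera image to one of a few predefined grasp types, and the binary open/close signal merely triggers execution. Everything here is an assumption for illustration; the grasp labels, image size, and weights are placeholders (random and untrained), standing in for a trained deep network.

```python
import numpy as np

# Hypothetical grasp vocabulary -- the real system's classes may differ.
GRASPS = ["power", "pinch", "tripod", "lateral"]

rng = np.random.default_rng(0)
# A single random linear layer as a stand-in for a trained CNN.
W = rng.standard_normal((64 * 64, len(GRASPS))) * 0.01

def choose_grasp(image):
    """image: 64x64 grayscale array in [0, 1]. Returns a grasp label."""
    logits = image.reshape(-1) @ W
    probs = np.exp(logits - logits.max())   # softmax over grasp types
    probs /= probs.sum()
    return GRASPS[int(np.argmax(probs))]

frame = rng.random((64, 64))  # stand-in for a wrist-camera frame
print(choose_grasp(frame))    # untrained weights, so the choice is arbitrary
```

The point of the sketch is the interface: the amputee supplies only "close", and the vision pipeline decides *how* to close.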