AR Breadboarding

AR typically relies on markers, but given the predictable structure of a breadboard's features, it should be possible to track one without a marker.

Use a webcam to overlay tags on a live feed of a breadboard and display the feed on a PC monitor.
Language - Python, using OpenCV
Status -
Got the feed,
Ran Harris corner detection and SURF to find keypoints,
Needs calibration
Next - Parse the keypoints and map them to the breadboard
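In the actual pipeline the corner detection would be OpenCV's cv2.cornerHarris on the grayscale webcam frame (and SURF needs opencv-contrib). As a minimal illustration of what the detector computes, here is a pure-NumPy sketch of the Harris response on a synthetic frame; the image, the 3x3 window and the k value are illustrative assumptions, not the project's parameters.

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response R = det(M) - k * trace(M)^2 per pixel."""
    Ix = np.gradient(img, axis=1)  # horizontal gradient
    Iy = np.gradient(img, axis=0)  # vertical gradient

    def box3(a):
        # 3x3 box filter: average of the nine shifted copies
        out = np.zeros_like(a)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out / 9.0

    # entries of the smoothed structure tensor M
    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# synthetic stand-in for a webcam frame: a bright square on dark background
frame = np.zeros((40, 40))
frame[10:30, 10:30] = 1.0
R = harris_response(frame)
print(R[10, 10] > 0, R[10, 20] < 0)  # corner scores high, straight edge low
```

Corners (where both eigenvalues of M are large) give a positive response; straight edges give a negative one, which is what makes the breadboard's grid of hole corners stand out.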

The other things that need to be done are:

1. Color thresholding, to detect wires.

2. Finding the region covered by electronic components.

3. 3D-ization (recovering the board's pose in 3D).
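Step 1 (color thresholding) amounts to a per-channel range test; with OpenCV this would typically be cv2.inRange on an HSV image, but the NumPy sketch below on a synthetic BGR frame shows the idea. The color bounds are hypothetical placeholders, not calibrated values.

```python
import numpy as np

def color_mask(img, lo, hi):
    """Boolean mask of pixels whose every channel lies in [lo, hi]."""
    lo, hi = np.asarray(lo), np.asarray(hi)
    return np.all((img >= lo) & (img <= hi), axis=-1)

# synthetic BGR frame: grey background with a horizontal "red wire" stripe
frame = np.full((20, 20, 3), 128, dtype=np.uint8)
frame[8:12, :] = (0, 0, 255)  # pure red in BGR order

# hypothetical BGR bounds for a red wire
mask = color_mask(frame, lo=(0, 0, 200), hi=(80, 80, 255))
print(mask.sum())  # → 80 "wire" pixels (4 rows x 20 columns)
```

In practice thresholding in HSV is more robust to lighting than raw BGR bounds, since hue stays comparatively stable as brightness changes.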

  • Update

    arun.mukundan - 10/25/2015 at 07:08

    To whomsoever it may concern,

    The project turned out to be a bit tough, so I joined a PhD program in Computer Vision. The method I would like to experiment with is bag-of-words. I will therefore try to present this as hypothesis, experiment and observation. Also, rather than focus on this particular application, I would like to take a more exploratory approach, so that anyone wanting to implement a computer vision project has a theoretical template on which to base their application.
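For context, the core of bag-of-words is quantizing each local descriptor against a learned vocabulary of "visual words" and histogramming the assignments. A minimal sketch with a toy vocabulary follows; the centers and descriptors here are random placeholders, not trained data (real vocabularies come from k-means over training descriptors).

```python
import numpy as np

rng = np.random.default_rng(0)

# toy "vocabulary": k visual words in 128-D descriptor space
# (in practice these are k-means centers learned from training descriptors)
k, d = 5, 128
vocab = rng.normal(size=(k, d))

# toy descriptors for one image (stand-in for SIFT output)
desc = rng.normal(size=(30, d))

# assign each descriptor to its nearest visual word (Euclidean distance)
dists = np.linalg.norm(desc[:, None, :] - vocab[None, :, :], axis=-1)
words = dists.argmin(axis=1)

# the bag-of-words representation: a normalized k-bin histogram
bow = np.bincount(words, minlength=k).astype(float)
bow /= bow.sum()  # normalize so images with different keypoint counts compare
print(bow.shape, bow.sum())
```

The resulting fixed-length histogram is what makes variable numbers of keypoints per image comparable, e.g. for matching a live frame against stored breadboard views.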

    Hypothesis :

    (Originally) Given a model of the breadboard, we can track it without using any markers. This is vague, and with regard to tracking alone, it is already done quite well by state-of-the-art tracking applications, so I will modify it a bit.

    (New-explorative?) Try many alternate pipelines to create a robust application using computer vision. (Still vague, but my kind of vague)

    Experiment & observations :

    The application is broken down as follows:

    Real Life --> Data --> Analysis --> Inference --> Output

    A log so far for each step; forgive the lack of detail.

    • Real-life : We are using a single webcam. The field of view is < >*. The focus is < >*. We have a breadboard, which may be of many types and appearances. The scene will contain desk clutter, and there may be occlusion from hands, instruments, components, wires, etc.
    • Data : We capture a 2-D projection of the scene, in BGR. For pre-processing, we have many images of a simplified instance, containing only the breadboard from different perspectives. Images with a hand holding the breadboard are also included, as an incremental step toward the actual use case.
    • Analysis : (So far, only training.) The elements of both the data and the inferences need representations. Let the image be denoted x. Then we have transformations Ti which act on x. Further, let the composition of such Ti-s be denoted f, such that Y = f(x). The result Y is a set of points that may be of interest (f is thus the SIFT detector). Let Z = g(Y), where Z is the set of descriptors for the points in Y. The following is a plot of all the descriptors in the training images, visualized in 2 dimensions by reducing each 128-dimensional descriptor to a 2-dimensional point using PCA.
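The PCA projection described above can be sketched as follows; random stand-in descriptors replace real SIFT output, with most variance deliberately placed in the first axes so the projection has something to find.

```python
import numpy as np

def pca_2d(X):
    """Project the rows of X onto the top-2 principal components."""
    Xc = X - X.mean(axis=0)               # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T                  # coordinates in the top-2 PC basis

# stand-in for SIFT descriptors: 200 points in 128-D, with variance
# decreasing across dimensions so the leading components dominate
rng = np.random.default_rng(1)
desc = rng.normal(size=(200, 128)) * np.linspace(3.0, 0.1, 128)

pts2d = pca_2d(desc)
print(pts2d.shape)  # each 128-D descriptor is now a 2-D point, ready to plot
```

Because SVD returns singular values in decreasing order, the first output coordinate carries at least as much variance as the second, which is exactly what makes a 2-D scatter of the descriptors informative.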

    -----> Will post some theory, links, codes, data and images soon...

    * To be ascertained

arun.mukundan wrote 04/02/2014 at 16:17 point
Hey man.
Yeah, that's the plan. Check out this earlier post on the same thing...
I'm trying to do it with a webcam, without any marker, while the above was done with a tablet, where the marker tells the tablet where to look.
I'll try to post some code once I get something working :P


Eric Evenchick wrote 04/02/2014 at 14:17 point
Is the plan to overlay components on a breadboard to show how to build the circuit? I know nothing about AR, so this sounds pretty nifty!

