In this post, I'll explain why computer vision matters for a pick and place machine and how we plan to implement it. I'll also cover the current state of our computer vision system and where we're going from here.
In the picture above, you can see a Raspberry Pi camera looking at itself in a mirror, so that it can see the nozzle and any part that might be hanging off of it. We can use computer vision to calculate and adjust the part offset and rotation, to make up for the fact that it's impossible for us to pick a part from a component feeder with enough precision. We can also use computer vision to calculate the offset of the nozzle to the camera, not only at zero degrees of nozzle rotation, but at other rotations too, which means we can calibrate out the wobble of the cheap Luer lock syringe tip that we use. The mirror technique has some challenges though, so we will be offering a traditional upward-looking vision system, via a second camera, in the weeks/months to come.
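To make that concrete, here's a minimal sketch of the kind of OpenCV processing involved in measuring a part's offset and rotation: threshold the camera frame, find the part's silhouette, and fit a rotated rectangle to it. The file name, threshold value, and pixels-per-mm factor are placeholders rather than our actual pipeline, and the contour unpacking assumes OpenCV 4.x.

```python
# Sketch: measure a part's offset and rotation from a camera frame.
# Assumes the part silhouette stands out against the background.
import cv2

frame = cv2.imread("nozzle_view.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(frame, 100, 255, cv2.THRESH_BINARY)

# Largest external contour is assumed to be the part.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
part = max(contours, key=cv2.contourArea)
(cx, cy), (w, h), angle = cv2.minAreaRect(part)

# Offset of the part center from the image center, in pixels.
img_h, img_w = frame.shape
dx_px, dy_px = cx - img_w / 2.0, cy - img_h / 2.0

# Convert pixels to millimeters with a calibrated scale factor (placeholder).
PX_PER_MM = 40.0
print(f"offset: ({dx_px / PX_PER_MM:+.3f}, {dy_px / PX_PER_MM:+.3f}) mm, "
      f"rotation: {angle:.2f} deg")
```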
There are a few different types of vision on an SMT pick and place machine:
- Downward-looking vision - Mounted on the end effector, its job is to look down at SMT parts and fiducials. We use the downward-looking camera to perform drag-feed operations on the component feeders, and to recognize fiducials on the circuit boards.
- Upward-looking vision - Usually mounted stationary on the machine, pointed up. The end effector moves over the camera to check the part's alignment on the nozzle. Adjustments are made, and the part goes off to be placed on the board with a corrected angle and offset (a small sketch of that correction follows this list).
- Flying vision - This is where the fun starts. If you mount a camera ON the end effector that's capable of swinging down and looking UP at the bottom of the part, then you can do upward-looking vision without moving the part over to a particular spot. This means you can center your parts via a CV-calculated offset, ON THE FLY. We think flying vision is awesome, and we hope to support it in the very near future.
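Once the upward-looking (or flying) camera has measured how far off-center and off-angle the part sits on the nozzle, the placement move gets corrected. Here's a rough sketch of that correction under one simple convention (offset measured in machine XY with the nozzle at 0 degrees, angles counter-clockwise); the function and sign conventions are illustrative, not our actual motion-control code.

```python
# Sketch: apply a camera-measured part offset/angle error to a placement.
# Assumes (dx, dy) was measured with the nozzle at 0 degrees, in machine XY.
import math

def corrected_placement(target_x, target_y, target_angle_deg, dx, dy, angle_err_deg):
    # Rotate the nozzle a little extra to cancel the part's angular error.
    nozzle_angle = target_angle_deg - angle_err_deg

    # The measured offset rides along as the nozzle rotates, so rotate the
    # offset vector by the nozzle rotation and subtract it from the target.
    theta = math.radians(nozzle_angle)
    rotated_dx = dx * math.cos(theta) - dy * math.sin(theta)
    rotated_dy = dx * math.sin(theta) + dy * math.cos(theta)
    return target_x - rotated_dx, target_y - rotated_dy, nozzle_angle

# Example: part is picked 0.2 mm off in X and skewed 3 degrees on the nozzle.
print(corrected_placement(50.0, 25.0, 90.0, 0.2, 0.0, 3.0))
```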
This post will focus mostly on the upward and downward vision. We won't be able to get the flying vision done before the HaD contest deadline :(
We accomplish downward vision by attaching a standard Raspberry Pi camera to the end effector. We chose the Raspberry Pi camera over something more common, like a USB pen camera or webcam, for the following reasons:
- Raspberry Pi camera gives us hardware control over important things like shutter speed (see the exposure-locking sketch after this list).
- Raspberry Pi camera is offered in an Infrared version. Infrared is actually pretty great for computer vision. We're currently investigating this.
- Raspberry Pi camera is 5 megapixels and has very low distortion
- Raspberry Pi camera is reasonably cheap at ~$30 USD.
- Lastly, if we spec this camera, it gives all of the beta testers / developers a common platform to start from, rather than everyone hooking up whatever they can find. That makes for a much more consistent platform to develop on.
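As an example of that hardware control, here's roughly how the picamera Python library can lock exposure and white balance so that frames stay consistent between captures. The resolution matches the 5 MP sensor, but the ISO value and file name are just placeholders:

```python
# Sketch: lock the Pi camera's exposure and white balance so that CV
# results don't drift between captures as the auto algorithms adapt.
from time import sleep
import picamera

with picamera.PiCamera(resolution=(2592, 1944)) as camera:
    camera.iso = 100
    sleep(2)                               # let auto-exposure settle first
    camera.shutter_speed = camera.exposure_speed
    camera.exposure_mode = 'off'           # freeze the exposure gains
    gains = camera.awb_gains
    camera.awb_mode = 'off'
    camera.awb_gains = gains               # freeze white balance
    camera.capture('nozzle_view.png')
```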
This is all great for a prototype, when development time really matters. Later on, we'd like to offer cheaper cameras based on cheaper image sensors, like the OV2640 or OV9650, similar to what OpenMV uses :D We'll need some time to fully integrate such a camera into whatever single-board Linux computer we end up using for the commercial version(s).
Here's an example video of another DIY machine performing upward-looking vision:
We will be adding something like this to our machine in the coming weeks/months.
We can also use the upward-looking vision to calibrate the nozzle-to-camera offset and the nozzle wobble of our Luer-lock system. I'll hopefully update this post in a few days/weeks with how we accomplish that. I'm trying to get some other stuff done this weekend to meet some upcoming deadlines (the July 20 video submission for the HaD contest, and I'm also trying to have the machine operational before OSCON on July 22-23).
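In the meantime, here's a rough idea of how that calibration could work in principle: detect the nozzle tip center at a series of nozzle rotations, then fit a circle to the detected points. The circle's center is the rotation axis relative to the camera (the nozzle-to-camera offset), and each point's deviation from that center is the wobble to compensate at that angle. The detections below are synthesized placeholders, not real measurements.

```python
# Sketch: calibrate nozzle-to-camera offset and tip wobble (runout) by
# fitting a circle to tip positions detected at several nozzle rotations.
import numpy as np

def fit_circle(points):
    """Least-squares circle fit; points is an (N, 2) array of (x, y)."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    return cx, cy, np.sqrt(c + cx ** 2 + cy ** 2)

# Placeholder "detections": points on a circle plus a little noise, standing
# in for tip centers the vision pipeline would find at 0, 30, ..., 330 deg.
rng = np.random.default_rng(0)
angles = np.radians(np.arange(0, 360, 30))
tips = np.column_stack([101.0 + 2.5 * np.cos(angles),
                        100.5 + 2.5 * np.sin(angles)])
tips += rng.normal(0.0, 0.1, tips.shape)

cx, cy, runout = fit_circle(tips)
# (cx, cy) is the rotation axis in camera pixels; subtracting it from each
# detection gives the per-angle wobble correction.
print(f"rotation axis at ({cx:.2f}, {cy:.2f}) px, runout {runout:.2f} px")
```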