An open-hardware, fast, high-resolution laser scanner suited for Printed Circuit Board (PCB) manufacturing or 3D printing. The laser head uses a rotating prism instead of the industry-standard rotating mirror, circumventing patent US9079355, valid up to 2033, and many more, e.g. US10209226B2.
The goal of this project is to develop a laser head for 3D printing or PCB manufacturing that uses a rotating prism and is easy to assemble. Cyanotype paper is currently used as it can be developed with water. The current electronics also make it possible to cut a PCB with a spindle.
Specifications were determined from the proof-of-concept model by exposing onto a camera without a lens and analyzing the result with OpenCV. More technical details are available in the whitepaper or the business case pitch.
wavelength: 405 nm
rotation frequency: up to 21000 RPM, currently 2400 RPM
line speed: up to 34 meters per second @ 21000 RPM
spot size FWHM: circular, 25 micrometers diameter
cross scanner error: 40 micrometers (error orthogonal to scan line)
Firestarter cape (laser driver, 3x TMC2130 stepper drivers, PWM spindle and fan control)
An image can be uploaded to the scanner and exposed on a substrate.
An exposure result on cyanotype paper is shown below. The resolution looks to be around 100 microns. Stitching still needs to be fixed; it results in white lanes. The idea is that through holes are made with a spindle. There is a project on Hackaday where a PCB is cut with EDM.
An exposure goes as follows (for the result see above).
Acknowledgement: Special thanks go to Henner Zeller for his work on LDGraphy. The electronics and software in his project helped me a lot with constructing the laser scanner, see video.
One of the challenges still open is how to balance the prism. Earlier, I discussed the mechanics in a blog named prism balancing. In this blog, I want to give a practical example of how to determine the mass and position from measurements. Vibrations caused by a spinning rotor are measured with an accelerometer. As sensor, I used the MMA8452Q but would recommend the MMA8451Q or LIS3DH, as they have better specs. I used a Raspberry Pi 3B to measure the signal and generated a drive signal for the polygon motor via hardware PWM using pigpiod. I pulsed the rotor at 20 Hertz and recorded the vibrations for 1 second at a sampling frequency of 800 Hertz. Sampling must be equidistant, otherwise the discrete Fourier transform cannot be calculated. As mirror motor, I used the Panasonic AN4000A. The discrete Fourier transform of a balanced mirror, shipped with the motor, is shown below.
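As a sketch of the analysis, the dominant vibration frequency can be read off from the discrete Fourier transform of the equidistantly sampled trace. The code below uses a synthetic signal as a stand-in for the real accelerometer data; the names and numbers are illustrative only.

```python
import numpy as np

FS = 800          # sampling frequency in Hz, as used for the accelerometer
DURATION = 1.0    # one second of recorded vibration

# Synthetic stand-in for the accelerometer trace: an unbalance vibration
# near 96 Hz plus noise (the real data comes from the MMA8452Q sensor).
rng = np.random.default_rng(0)
t = np.arange(int(FS * DURATION)) / FS
signal = 0.5 * np.sin(2 * np.pi * 96 * t) + 0.05 * rng.standard_normal(t.size)

# Discrete Fourier transform; rfft suffices for a real-valued signal.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / FS)

peak_freq = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
print(f"dominant vibration at {peak_freq:.1f} Hz")
```

With 800 samples over one second the frequency resolution is 1 Hertz, enough to separate the rotation frequency from its harmonics.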
There is a peak around 100 Hertz. The discrete Fourier transform of one of the prisms I use is shown below.
An enormous peak close to 100 Hertz can be seen. This amplitude can be compared to the amplitude of a known balance weight to determine the required mass, as the centripetal force is linearly proportional to the mass. This procedure is only this simple for single-plane balancing; in two-plane balancing the procedure is more complicated. To determine the position of the mass, the position of the rotor must be measured, for example with a photo tachometer. Earlier, I discussed how a camera can be used to measure the position of the rotor. A camera is quite expensive and would require some image analysis. The DT2234C+ photo tachometer only costs 18 dollars. It was also discussed widely on Hackaday.
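Since the centripetal force F = m * omega^2 * r is linear in the mass, a known trial weight calibrates the amplitude-to-mass ratio. A minimal sketch, with purely hypothetical numbers:

```python
# Single-plane balancing: the vibration amplitude scales linearly with
# the unbalance mass, so a known trial weight calibrates the ratio.
# All numbers below are hypothetical, for illustration only.

trial_mass_mg = 50.0   # known balance weight attached for calibration
amp_trial = 0.8        # amplitude contribution measured for that weight
amp_unbalance = 3.2    # amplitude of the unbalanced prism at the same speed

# Required correction mass, assuming the same radius for trial and correction
required_mass_mg = trial_mass_mg * amp_unbalance / amp_trial
print(f"correction mass: {required_mass_mg:.1f} mg")
```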
As a result, I bought one, did a couple of experiments with it, and attached reflective tape to one of the corners of the prism. The photo tachometer measured a speed of 5750 RPM, i.e. 96 Hertz. This corresponds to my measurements, as I can see a peak near 100 Hertz. It also explains why I have so many problems with this motor and recommend the Sharp 160. The Sharp 160 is able to spin at 2000 RPM. Also, the relation between the pulse frequency sent to the motor and the final RPM is direct: pulsed at 20 Hertz, the Sharp would spin at 1200 RPM and not 5750 RPM. The position of the peak in the cross-correlation of the accelerometer and photo tachometer signals gives an estimate of the phase difference. If I know the phase difference with respect to the marking used by the photo tachometer, I can use that to calculate the position of the balance weight. The code used for the measurements can be found on GitHub. I also did a successful experiment with an artificial signal to calculate the cross-correlation. I still have to extract the signal from the photo tachometer and turn the photo tachometer on digitally instead of with a button. I also have to order balance weights. Ben Wishoshavich pointed out I could use armature balancing putty.
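The artificial-signal experiment can be sketched as follows. Both signals are synthetic sinusoids standing in for the accelerometer and photo-tachometer traces; a real tachometer produces a pulse train, so the numbers here are illustrative only.

```python
import numpy as np

FS = 800        # accelerometer sampling frequency, Hz
F_ROT = 100.0   # rotation frequency, Hz (close to the measured peak)
t = np.arange(FS) / FS   # one second of samples

# Assumed true phase difference between unbalance and tachometer marking.
phase_true = np.pi / 2
tacho = np.sin(2 * np.pi * F_ROT * t)                # stand-in for the tachometer
accel = np.sin(2 * np.pi * F_ROT * t - phase_true)   # stand-in for the vibration

# The lag of the cross-correlation peak estimates the delay between signals.
corr = np.correlate(accel, tacho, mode="full")
lag = np.argmax(corr) - (len(tacho) - 1)  # positive: accel lags tacho

# Convert the lag in samples to a phase at the rotation frequency.
phase_est = (2 * np.pi * F_ROT * lag / FS) % (2 * np.pi)
print(f"estimated phase difference: {np.degrees(phase_est):.1f} degrees")
```

Note that the lag is quantized to whole samples, so the phase resolution improves with a higher sampling frequency relative to the rotation frequency.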
The last weeks, I have been doing experiments to see if I can replace the programmable real-time unit on the Beaglebone with an FPGA. On the Beaglebone, there are two programmable real-time units (PRUs), which are very similar to micro-controllers, running at 200 MHz with 8 KB of memory per core and 12 KB shared between them. At the moment, I use one PRU and one 8 KB memory which acts as a ring buffer. There are some challenges with the current design. I am locked into the AM335x processor, which runs at 1 GHz and is single core. The latest Raspberry Pi 4 runs at 1.5 GHz and is quad core. If I am able to build an extension for the Raspberry Pi, it would not be hard to use the scanner with other boards, e.g. the Edge TPU or the NVIDIA Nano. When I started this project, I actually tried to do this with the Spartan 6 FPGA, using the Xula-LX25. In the end, I managed to get something to work by writing the laser data to an SDRAM and then reading it back. This was all written in MyHDL, see GitHub. MyHDL is converted to Verilog, which can then be converted to a bitstream by Xilinx ISE. In my latest experiments, I used the IceZero with the ICE40HX4K and Migen. The oscillator on the IceZero runs at 100 MHz and the ICE40HX4K has 81920 bits of memory. In practice, the memory is larger, as the Icestorm toolchain is able to program the ICE40HX4K as an ICE40HX8K. The ICE40HX8K is a lot less powerful than the Spartan 6: it has 1/6 of the SRAM and 1/3 of the LUTs. The main advantage for me is the open-source Icestorm toolchain, which runs on Linux. My current setup works as follows. Data is streamed to the FPGA via SPI and stored in the FPGA's internal SRAM. The SRAM is used as a buffer before the data is placed on the substrate with the laser. My code can be found here; especially look at the spimemmap example. I did some experiments on the Raspberry Pi 3 using Python and spidev and am able to get data rates up to 25 megabits per second.
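A minimal sketch of the streaming side, assuming the spidev Python bindings on the Raspberry Pi; the function and parameter names are my own, not the project's actual API. The chunking helper is hardware-free; the transfer itself needs a /dev/spidev device.

```python
def chunk_lines(data: bytes, chunk_size: int = 4096):
    """Split a byte stream into SPI-transfer-sized chunks."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def stream_to_fpga(data: bytes, bus: int = 0, device: int = 0,
                   speed_hz: int = 25_000_000):
    """Send data over SPI at roughly the 25 Mbit/s rate measured."""
    import spidev  # only available on the Raspberry Pi itself
    spi = spidev.SpiDev()
    spi.open(bus, device)
    spi.max_speed_hz = speed_hz
    try:
        for chunk in chunk_lines(data):
            spi.xfer2(list(chunk))
    finally:
        spi.close()

# 64 scan lines of 100 bytes each fit in two 4096-byte transfers.
buffer = bytes(100) * 64
print(len(chunk_lines(buffer)))
```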
In my current laser head, the scan line is 8 mm long and a pixel is 10 micrometers, so there are roughly 800 pixels per line, i.e. 100 bytes per line. At 20,000 RPM with 4 facets, that gives 20,000 × 4 / 60 = 1333 lines per second. This implies 133 kB/s or roughly 1 megabit per second, much less than 25 megabits per second. Another problem I am still working on is balancing the prism. I hope to report some progress on that soon. What might be of interest is that MIT recently released a press statement in which they show they are working with lasers and ultrasound. The technology is very similar to my previous blog post; the only difference is that they use a laser to detect the vibrations. Another interesting article was one by the blog The Drive: there apparently is a Dutch startup which uses lasers to clean railway tracks.
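The bandwidth estimate can be written out explicitly:

```python
RPM = 20_000      # target rotation speed
FACETS = 4        # facets on the prism
LINE_MM = 8       # scan line length in millimeters
PIXEL_UM = 10     # pixel size in micrometers

pixels_per_line = LINE_MM * 1000 // PIXEL_UM   # 800 pixels per line
bytes_per_line = pixels_per_line // 8          # 100 bytes at 1 bit per pixel
lines_per_second = RPM * FACETS // 60          # 1333 lines per second

bandwidth = lines_per_second * bytes_per_line  # bytes per second
print(f"{bandwidth / 1000:.0f} kB/s, {bandwidth * 8 / 1e6:.2f} Mbit/s")
```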
Won the honorable mention for best communication: 3000 dollars and a lot of branding. It was great to meet you all! Thanks for the award! You can also find this room in the Supplyframe video; the door is opened at 0:44, see video.
Recently, I was reading Hacker News and realized I hadn't created prior art in the field of photo-acoustic imaging. The photoacoustic effect, or optoacoustic effect, is the formation of sound waves following light absorption in a material sample.
Basically, the scheme is as follows: you scan with a laser beam over tissue or gas and then detect the produced sound waves. From these you can determine the material properties.
The effect was discovered by Alexander Graham Bell in 1880. A general patent by Hitachi can be found here: US5781294A. The claim is quite nice and general, but the patent expired in 2012.
Olympus Corp filed a patent for using this effect with a galvo scanner. The patent is valid up to 2035 and its number is US10209226B2.
Let's have a look at claim one;
"... the scanning unit includes a movable mirror which changes an angle of incidence of the excitation light incident on the objective lens, and ... "
Prior art is constructed as follows;
"... the scanning unit includes a scanning prism which changes an angle of incidence of the excitation light incident on the objective lens, and ... "
A more complicated patent example from 2018 with two beams, by Yokogawa Electric Corporation, can be found here: US20180140199A1.
As we can see, the patent proposes the use of two frequency-shifted beams from the same light source, which are then coupled into a single point by a variable-focus lens. However, what is also clear is that this lens is positioned after the light scanning unit. I claim that the lens is positioned before the light scanning unit in a prism scanner. This was not obvious to the claimant, who evidently thought of the light scanning unit as a galvo scanner or a rotating mirror.
I claim the reverse effect: an ultrasound wave is produced and the result is detected via an interferometer which uses a scanning prism. The idea is similar to the optical coherence tomography setup I discussed earlier. A patent by Hitachi can be found here: US20130160552A1, but it expired as they seem to have stopped paying the fee.
A Dutch research group from the University of Twente used the effect to detect breast cancer. The optical effect is non-invasive but does not have a deep penetration depth. I claim the same but then for a scanning prism, and also for other forms of skin cancer, skin diseases, cancer, etc.
This was an experimental project, so I don't believe you can buy one commercially. Cordin does sell cameras with rotating mirrors spinning at 5000 RPS, which is about 300,000 RPM. This is quite nice, as it serves as an example of how fast prisms could rotate. These cameras with rotating mirrors are used in research and have been used to record nuclear explosions, etc.
I thought it was nice to give some update on the progress. Most of the work I did was in the software domain.
the Beaglebone now works with cape universal and not with a predefined cape; pins can be changed on the fly without rebooting Linux
previously, if the machine turned on, the laser turned on as well; the startup procedure has been changed so the laser doesn't turn on by accident
in the proof-of-concept experiment, the laser module was turned off after each lane and turned on before a new lane. With the new software you can continue to expose without turning the scan head off and on.
the C++ library for the TMC2130 stepper driver has been wrapped in Python
I have added the possibility to expose with a single facet. In my earlier post, I outlined that this can be done via the facet times. I have, however, chosen to do this with an internal counter which simply counts the facets and assumes a facet is never lost.
variables are now centralized in one location; they used to be sprinkled all over the place. You no longer need to recompile the assembly to change the variables of the scanner
The assembly code of the scanner and the Python code have been refactored. I would say the code is much more readable now.
The spin-up state changed: the scanner is spun up and it is then tested whether it passes a threshold check. The laser turn-on time is, however, much smaller than this threshold; the two used to be the same.
I have also built a second laser module and fixed the z-endstop in my test setup.
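The single-facet exposure via an internal counter, mentioned in the updates above, can be sketched like this; class and method names are illustrative, not the actual firmware interface.

```python
# Minimal sketch of the internal facet counter used for single-facet
# exposure: facets are simply counted modulo the facet count, assuming
# none are ever lost.

FACETS = 4

class FacetCounter:
    def __init__(self, exposed_facet: int = 0):
        self.count = 0
        self.exposed_facet = exposed_facet  # the one facet used to expose

    def on_facet_pulse(self) -> bool:
        """Called once per detected facet; True if this facet should expose."""
        current = self.count % FACETS
        self.count += 1
        return current == self.exposed_facet

counter = FacetCounter(exposed_facet=2)
decisions = [counter.on_facet_pulse() for _ in range(8)]
print(decisions)  # facet 2 exposes once per revolution
```

The obvious weakness, as noted above, is that a single missed facet pulse shifts the exposure to the wrong facet permanently.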
As always, let's create some more prior art;
Blackbelt holds a patent for an infinite conveyor belt printer, see https://patents.google.com/patent/NL2018728B1. This was scrutinized by Hackaday. The patent so far only seems valid in the Netherlands, and the conveyor belt has to be planar with the horizon, see the words ".. parallel met de horizontaal is gelegen.." ("lies parallel to the horizontal") at the end of claim 1. As such, I claim a machine in which the belt is not planar to the horizon but under a tilt angle. I claim an infinite conveyor belt printer on which both a laser head and an extruder are mounted. Possibly the extruder deposits a polymer which is polymerized by the laser. This polymer can be viscous so it stays in place.
I also claim the use of a prism scanner in data gloves. This can be used to write or read from a substrate from a glove. I earlier claimed the Hexastorm was connected to a robotic arm like the Dexter. In industry 4.0, workers are equipped with data gloves to check wether they are operating within requirement. In the proglove as shown here https://www.proglove.com/ a camera is added to the hands of an operator. I can imagine a prism scanner is placed in this data glove. This prism scanner could record information or write information on substrates. It could detect biological markers, or QR codes. It could detect cells or give a marking to a product. It could be used to determine the final position of a product which has to be precisely placed.
Good news! I received the NBC3111. The facet times look good and the motor is quiet again. In other words, the laser module is working once more. I have quite a busy weekend ahead but will try to make a new exposure on Monday.
The facet times are;
Note that I ignore the first 100 lines, as the prism doesn't seem to be stable enough yet.
output from script: ---------------------------------
Earlier, I reported that the facet times recorded by the diode can be used to determine the facet number. In the above, you can see that one facet has facet times smaller than 12400.
If the allowed jitter per period is set very small, no difference can be perceived in the facet times. For the proof-of-concept module, I used a jitter allowance of 1/3200. All facet times are then very similar and jitter is quite minimal. With a jitter allowance of 1/100, I can determine the facet and use an interpolation table to reduce the jitter. I am not sure which strategy works best, but I do claim I possibly use one of these methods. I also claim the use of information from an acceleration sensor to determine whether the prism is rotating smoothly or an earthquake is interrupting operation. I also claim the use of this information to correct the data sent to the prism scanner by calculating a more accurate position.
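As an illustration of the facet-determination strategy, the index facet can be recovered from recorded facet times, since one facet stays below the threshold. The numbers below are synthetic, not my actual measurements.

```python
# Sketch of identifying the facet from recorded facet times: one facet
# has consistently shorter periods (below ~12400 in my measurements),
# so it can serve as an index mark. Times below are illustrative.

THRESHOLD = 12400
FACETS = 4

def find_marker_facet(facet_times, facets=FACETS, threshold=THRESHOLD):
    """Return the facet index whose times fall below the threshold most often."""
    hits = [0] * facets
    for i, t in enumerate(facet_times):
        if t < threshold:
            hits[i % facets] += 1
    return max(range(facets), key=lambda i: hits[i])

# Two synthetic revolutions: facet 1 is systematically faster.
times = [12500, 12350, 12510, 12490, 12505, 12340, 12495, 12515]
print(find_marker_facet(times))
```

Voting over many revolutions makes the identification robust against an occasional noisy period.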
Earlier, I discussed that the prism ideally is balanced and how this could be done. The acceleration and position of the prism need to be measured during rotation. The first step is determining the position of the prism, i.e. the rotor. Two options were considered: detecting a marker with a laser or detecting a marker with a camera. I went for a camera as it seemed more robust. Also, the position of the weights needed for balancing can be visualized with the camera. Specifically, I used the UI3060-m-GL; specs are USB 3.0, CMOS, 166.0 fps, 1936 x 1216, 2.35 MPix, 1/1.2", Sony, global shutter. I used a shutter time of 0.019 milliseconds and ring-light illumination. Having a camera with a global shutter is key; the rolling-shutter cameras I tried didn't work. Initially, I thought the ring light would be useful as a strobe light, but the camera has such short exposure times that it just needs additional light. I am planning on building a dedicated setup, see GitHub, but used the current laser module for this test. The setup is as follows;
The camera exposes from the top using a lens with a focal length of 12 mm; the ring light is supplied with 24 volts via a power supply.
As you can see, I used the mirror and not the prism. The mirror is already balanced and can go up to 21000 RPM, making it better for exposure tests. At 0 revolutions per minute, the mirror looks as follows with auto exposure settings;
I did not modify the mirror. In the ring you can see two markings; these are the weights used to balance the mirror. The outer black dot, outside the ring, seems to be made with a marker and seems to be used to determine the position of the rotor. At 21000 revolutions per minute, the mirror looks as follows with 0.019 ms exposure time; you can see a reflection of the ring light, as the illumination is not really proper. In the final setup, I should use a better diffuser so you don't see the positions of the individual LEDs.
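The need for the very short global-shutter exposure can be checked with a quick calculation of how far the rotor turns during one exposure:

```python
RPM = 21_000          # rotation speed during the test
EXPOSURE_MS = 0.019   # shutter time of the global-shutter camera

revs_per_second = RPM / 60
blur_degrees = revs_per_second * (EXPOSURE_MS / 1000) * 360
print(f"rotor turns {blur_degrees:.2f} degrees during one exposure")
```

About 2.4 degrees of motion blur, small enough to still resolve the markings; a rolling shutter reads out lines over several milliseconds, which is many full revolutions and explains why those cameras didn't work.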
made a pitch which is more suited for business-minded people
still waiting for the delivery of new NBC3111 mirror motors, as I have broken mine. The new setup "works" but is not practical without these mirror motors. The prism makes much more noise and can converge into two modes; as a result, the prism has to be restarted 50 percent of the time. The noise is reduced at lower speeds, but the two modes remain. This was not the case with the NBC3111, which operated flawlessly at higher speeds. I have therefore decided to focus on balancing the prisms for now while waiting for the NBC3111. The new motors should arrive somewhere next week. The current plan for the prism balancing setup is to use an MMA8452Q to read out the acceleration. The position of the prism is monitored using a marker and a stroboscopic camera.
I also claim the use of prisms in the thermal plate setter industry. Another patent, by Apple, has been revealed for scanning mirrors, see display device US2019/0285897A1. I also claim this patent but then for the case of a scanning prism.
I wanted to share a quick video of the alignment setup. The prism is running 2.5 times slower than in the proof-of-concept model. I have also removed the cylinder lenses for now, as this simplifies alignment. The focus is on a robust setup for demonstrations first and on quality later.
As always, let's generate some more prior art;
non-planar 3D printing with a prism scanner, to avoid situations like the one with US10005126B2
I can envision a liquid or viscous fluid applied with an extrusion nozzle or inkjet head and cured with the prism scanner. This could also be a viscous liquid applied with a blade. The patent talks of a correction factor that is calculated on the basis of the slope; this is done to alter the extrusion. I claim that an optical measurement of the substrate is done to determine the correction factor (so it is derived from live data during the experiment, not from already existing CAD data!). I claim that the amount of liquid extruded is constant (it is not altered); the prism scanner decides which part of the liquid is solidified. The remainder is possibly sucked up with a squeegee and pump or an applicator bar and pump. I can envision a similar process, but one where powder is blown and then sintered with a laser guided by the prism.
I can envision this being done in a hospital setting on a human, during an operation, or for the inspection of mechanical parts. I also claim the area of "food printing": you can, for instance, cure egg whites with IR radiation to trigger or monitor a Maillard reaction. I claim belt-like printing with a prism scanner (so you print on a rotating belt, e.g. the Blackbelt 3D). I also claim that a prism scanner is used to check the indentation in a hardness tester; at the moment, a lot of companies use a camera for this, for example see the hardness testers by Innovatest Europe.