Beginnings and purpose

   I built another soldering iron in the past and I have been improving its firmware ever since.

https://hackaday.io/project/94905-hakko-revenge

   This soldering station has the following sensors, which I plan to keep on this new iteration and use in the new firmware algorithm:

 - tip temperature
 - PCB temperature (also giving the initial ambient temperature)
 - input voltage measurement
 - internal timer

   The pictures I uploaded to the gallery show the actual prototype I'm experimenting with. It uses the PCB from the Hakko Revenge project, and once I have some measurable results with the firmware, I plan to move ahead and design a 2020 PCB version for this one.

   Since I use a T12 (or TS100) tip, which can heat up really quickly, I was able to achieve good heating times. This previous project of mine goes from room temperature (25°C) to soldering temperature (above 320°C) in about 6 seconds. When it heats up from the standby temperature (190°C) to the soldering temperature, it only takes about 3 seconds. Of course, this can only happen when I power it at higher voltages, between 24 and 29 V.

   Fast, fast, fast! I always wanted to make it faster and smaller, and since I am redesigning everything now, I figured this is a good time to come up with a new approach. As you can see from these recent pictures, I also added a 0.96'' OLED display, and it looks way better now.


Why can't I make it faster the conventional way?

This type of T12 tip may be small and low-mass, but it has the drawback of having the thermocouple connected in series with the heating element. This means I have to power the heating element blindly for a while, then disconnect the power source and perform the temperature measurement on the same two wires. As a result, the heating element can't stay on continuously; it has to be driven through a PWM cycle that cannot exceed 60 or 70% in some cases. Roughly 30% of the time goes to reading the tip, so the heating element gets powered that much less.
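To make the constraint concrete, here is a rough sketch of one heat/read period. All the names below are placeholders, not the actual firmware API, and the exact timings vary:

```cpp
#include <stdint.h>

// Placeholder hardware hooks, not the real firmware API.
void heaterOn();                  // MOSFET gate high
void heaterOff();                 // MOSFET gate low
void delayMs(uint16_t ms);        // timer-based delay
int16_t readThermocoupleC();      // ADC read on the shared wires

constexpr uint16_t kPeriodMs = 100; // one heat/read period
constexpr uint16_t kOnMs     = 70;  // at most ~70% duty, per above
constexpr uint16_t kSettleMs = 2;   // a couple of ms for the ringing to die

// One conventional control period: heat blindly, wait out the noise,
// then read the tip on the same two wires.
int16_t oneControlPeriod()
{
    heaterOn();
    delayMs(kOnMs);                       // blind heating window
    heaterOff();
    delayMs(kSettleMs);                   // let the line quiet down
    int16_t tipC = readThermocoupleC();   // measurement window
    delayMs(kPeriodMs - kOnMs - kSettleMs);
    return tipC;                          // feed into the control loop
}
```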

To make it even faster than it currently is, I experimented with the PWM and with shrinking the thermocouple reading window, so I could expand the time frame for powering the tip.

   It doesn't work. I cannot shrink the sensor's reading window: if I do the reading right after power has been applied to the lines, I get all sorts of inductive noise bouncing around the circuit. So I really have to wait a couple of ms for the line to quiet down before I can read the thermocouple. I even tried to set up the PWM module to drive the heating element automatically at different frequencies and to only interrupt this PWM less often to perform the reading. Same result, and if I read less often, the tip's temperature becomes bouncier and less accurate. So this was not a solution.


Solution: May the Force be with you

The solution I'm thinking of is to power the heating element at 100% PWM, blindly, for an undefined period of time at startup. <grin> So not only do I have a sports car that can do 0 to 60 in 3 seconds, but the driver is also blindfolded. How can I power the heating element for the right amount of time without reading the temperature sensor even once during this operation, and still stop at the proper temperature? Moreover, how can I do that at different input voltages or at different room temperatures?

   This is where the machine learning part comes in. I am planning to have the microcontroller sample different heating times at different input voltages and then, based on this data, work out a linear regression and estimate the necessary heating time. Then it should subtract the measured ambient temperature from the nominal 25°C and add this Δ°C difference to the target estimate.
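The regression itself is just ordinary least squares over the sample table, small enough for a microcontroller. A minimal sketch, where the Sample struct and the names are mine, not the final firmware layout:

```cpp
#include <cstddef>

struct Sample { float volts; float seconds; };

// Fit seconds = a + b * volts over the table by ordinary least
// squares, then evaluate at the present input voltage.
float estimateHeatTime(const Sample* s, size_t n, float volts)
{
    float sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (size_t i = 0; i < n; ++i) {
        sx  += s[i].volts;
        sy  += s[i].seconds;
        sxx += s[i].volts * s[i].volts;
        sxy += s[i].volts * s[i].seconds;
    }
    float b = (n * sxy - sx * sy) / (n * sxx - sx * sx); // slope
    float a = (sy - b * sx) / n;                         // intercept
    return a + b * volts;                                // prediction
}
```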

How it works

I have to give an example first to make the idea clearer. I will keep a memory stack of 10 or 20 rows for data acquisition. These rows will initially be filled with average values that I will calculate beforehand. The first heat-up curve will be based on these initial values and might not be very accurate. Then, each time the user picks up the iron from the stand and it heats from the standby temperature to the soldering temperature, the controller will handle this the normal way, sampling the temperature as it should. However, it will also measure and integrate the total time the heating element spends fully powered. If the system were ideal and the temperature reads didn't bleed off too much heat, this total heating time would be something like:
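t_heat ≈ m · c · (T_target − T_start) / (U² / R)

in other words, the energy needed (tip mass × specific heat × temperature interval) divided by the electrical power the heater draws at input voltage U and heater resistance R. This is just the idealized energy balance; the real curve will bend as losses grow with temperature.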

Based on this measurement made during normal operation, it can then write a first sample into the table. This row contains the total heating time for that temperature interval together with the input voltage, and it pushes the existing data up the stack.
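Reusing the Sample struct from the regression sketch above, the table and its push-up behavior could look like this (sizes and names are assumptions):

```cpp
constexpr unsigned kRows = 10;   // "10 or 20 rows", per the plan

Sample table[kRows];             // pre-filled with averaged defaults

// Insert a new (voltage, heating time) measurement; older rows are
// shifted upward and the oldest eventually falls off the top.
void pushSample(float volts, float seconds)
{
    for (unsigned i = kRows - 1; i > 0; --i)
        table[i] = table[i - 1];       // shift older rows upward
    table[0] = { volts, seconds };     // newest measurement enters here
}
```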

   Each time the user picks up the soldering iron from the stand, the machine learns another heating curve characteristic of that specific voltage. It trains itself through this supervised learning.

   Then, when the soldering station power-cycles and starts heating up from room temperature, the microcontroller can compute the regression and estimate the necessary heating time. It can then blindly drive the heating element at full power up to the user-set temperature, achieving the fastest possible heat-up.
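Putting the pieces together, the blind cold start could look roughly like this, reusing estimateHeatTime from the earlier sketch. Scaling the learned time by the temperature span is how I picture handling both the extrapolation and the ambient Δ°C correction; the names are assumptions:

```cpp
// Blind cold-start estimate. The regression is learned over the
// standby -> soldering span (190 C upward), so the cold-start time is
// extrapolated by scaling to the full ambient -> target span. Using
// the measured ambient instead of the nominal 25 C bakes in the
// delta correction.
float coldStartSeconds(const Sample* table, size_t n,
                       float volts, float ambientC, float targetC)
{
    const float kStandbyC = 190.0f;
    float learned = estimateHeatTime(table, n, volts); // 190 -> target
    float perDegree = learned / (targetC - kStandbyC); // seconds per degree
    return perDegree * (targetC - ambientC);           // ambient -> target
}
```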

Algorithm limits

Although I can be fairly sure I can adjust the final target temperature based on the Δ°C ambient deviation, I expect that my heating curve is not actually linear. I still plan on using a linear estimate with this regression, and I don't know how far off that could be.

Another issue is that my learning samples can only be taken between the standby temperature and the soldering temperature (190-400°C). Based on this data, I then have to estimate the heating time from room temperature to the soldering temperature (25-400°C). So the estimated time falls outside the sampled range, and I'm kind of stretching the machine learning rules here by extrapolating.

I use C++ for the coding, which adds some limitations compared to Python: there are no ready-made regression libraries to lean on here, so the math has to be written and optimized by hand.

Hardware limits

I am using an MSP430G2553, which has only a 16 MHz, 16-bit processor. For this linear regression it should be OK, given that I will always run a limited number of calculations over the 10 or 20 table values. The flash is already about 70% full, though, and I will have to see how to optimize the entire code to fit everything in.
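If the float routines end up costing too much of the remaining flash (software floating point isn't free on the G2553), the slope can be computed in 32-bit integer math instead. A sketch, with units picked so the sums stay inside 32 bits; the struct and the scaling are my assumptions:

```cpp
#include <stdint.h>

struct RowI { uint16_t millivolts; uint16_t heatMs; };

// Slope of heating time vs. supply voltage, in ms per volt.
// Whole-volt resolution keeps every intermediate sum well inside
// 32 bits for 10..20 rows of 24..29 V and a few seconds each.
int32_t slopeMsPerVolt(const RowI* r, uint8_t n)
{
    int32_t sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (uint8_t i = 0; i < n; ++i) {
        int32_t x = r[i].millivolts / 1000;   // whole volts, 24..29
        int32_t y = r[i].heatMs;
        sx += x;  sy += y;
        sxx += x * x;  sxy += x * y;
    }
    int32_t den = n * sxx - sx * sx;
    if (den == 0) return 0;                   // all samples at one voltage
    return (n * sxy - sx * sy) / den;
}
```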

   When I talk about hardware limits, I am also thinking of the tip itself and how many of these fast heating cycles it can actually take without cracking. I will have to see about that; I guess only tests can tell.

How cool is it that every lightsaber is customized to its master?

 - One of the best features of this soldering iron will be its reaction speed. I expect that one day I will pick it up from the stand and it will be ready to solder in just the time it takes my hand to move from the stand to the PCB. All the other classical soldering stations on the market take minutes to heat from room temperature to ready.

 - Accuracy. Like any iron with a T12 or TS100 tip, it will be precise, avoiding the temperature gradient between the internal heating element and the actual tip of the iron. It will display on the screen exactly what you have at the tip.

 - Since it's based on machine learning, it will have heating times and calculated curves characteristic of that specific user: their ambient temperature, their external power brick's voltage and their tip type. So if the user mainly uses a T12-B instead of a T12-C, the learned heating curve will carry that flavor. The saber will learn its master's hand and the master will learn how the tool behaves, until the two shall become one.