Charger MOSFETs and how to drive them

A project log for LiFePO4wered/ESP32

ESP32 IoT core board with flexible power and flexible communications

Patrick Van Oosterwijck • 08/22/2019 at 19:43 • 0 Comments

Gate drive level concerns

I had unexpected difficulty finding good power MOSFETs to implement the charge converter.  My desire for a wide 5 to 28 V input voltage range turned out to make life difficult here.  It's easy enough to find MOSFETs with high Vdss breakdown ratings.  It's also easy enough to find MOSFETs with low Vgs thresholds.  But finding both requirements combined in a single part turned out to be harder than expected.

A cursory glance at part specs might suggest otherwise.  There are plenty of parts listed as having a low threshold voltage, but usually that threshold is specified at silly low drain currents such as 250 µA or 1 mA.  I need a part that is fully on, with low RdsON, at the minimum microcontroller voltage of 1.71 V.

The reason is that I want to keep the charge circuit as simple as possible.  In the custom project that inspired this charger's implementation, I was using a boost converter to generate 5V gate drive voltage and a MIC4600 gate driver chip to drive the MOSFETs.  For the LiFePO4wered/ESP32 I want to do away with those parts and drive the converter MOSFETs directly from the microcontroller, if at all possible.

Keeping these requirements in mind, while also keeping cost and size down and paying attention to Safe Operating Area (SOA) so I don't have MOSFETs bursting into flames at high input voltages this time, the best option I have found is the DMN3020UFDF.

With a Vdss breakdown voltage of 30V, a maximum RdsON of 40 milliohm and drain current of 10 A specified for a Vgs of 1.8 V, plus the best looking SOA graph I could find, it looks like this part should work well to implement an efficient charger without needing to add a gate voltage booster or separate gate driver chip.

Switching speed concerns

At least when it comes to gate drive voltage level that is.  What I'm still worried about is the switching speed I can achieve when driving the MOSFETs directly from the micro.  After all, the function of a gate driver like the MIC4600 is not only to provide the right gate voltage levels, but also to provide powerful enough drivers to quickly charge and discharge the power MOSFET's gate capacitance.

The DMN3020UFDF specifies a total gate charge of 15 nC at 4.5 V Vgs and 27 nC at 8 V Vgs.  This seems to scale roughly linearly (which makes sense, since Q = C * V), so I'm assuming about 12 nC of gate charge at 3.6 V and 6.7 nC at 2 V.  I will be driving the gates from high-drive-capable pads of the microcontroller.  These are specified at 20 mA (0.5 V drop) for a 3.6 V supply and 10 mA for supplies below 2.7 V.  That gives an estimated switching time of 600 to 670 ns.  That is... slow.  Compare this to the typical switching time of about 15 ns for the MIC4600 gate driver.
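The estimate above is just charge divided by current.  Here's that back-of-the-envelope math as a small sketch, using the datasheet and GPIO numbers quoted above and the same linear Qg-versus-Vgs scaling assumption:

```python
# Back-of-the-envelope switching time: t = Qg / Idrive.
# QG_AT_4V5 is from the DMN3020UFDF datasheet; the linear scaling of
# gate charge with Vgs is the same assumption made in the text.

QG_AT_4V5 = 15e-9   # total gate charge at Vgs = 4.5 V

def gate_charge(vgs):
    """Estimated total gate charge at a given Vgs, assuming Q scales with V."""
    return QG_AT_4V5 * vgs / 4.5

def switching_time(vgs, i_drive):
    """Time to move the full gate charge at a constant drive current."""
    return gate_charge(vgs) / i_drive

# 3.6 V supply: pad rated for 20 mA; below 2.7 V: only 10 mA
t_3v6 = switching_time(3.6, 20e-3)
t_2v0 = switching_time(2.0, 10e-3)

print(f"t @ 3.6 V / 20 mA: {t_3v6 * 1e9:.0f} ns")  # 600 ns
print(f"t @ 2.0 V / 10 mA: {t_2v0 * 1e9:.0f} ns")  # 667 ns
```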

The problem with slow switching is twofold.  First, it directly limits the switching frequency, because obviously you want on and off times that are significantly longer than the transitions between them.  Second, the switching transitions are when the MOSFET dissipates the most power as heat.  When fully off, the voltage across a MOSFET is high but no current flows, so there's no power loss.  When fully on, the current through the MOSFET is high but the voltage across it is really low (due to the low RdsON), so power loss is low.  It is during a switch between on and off that significant values for both voltage across and current through the MOSFET are present at the same time, causing power loss (P = V * I) that is dissipated as heat.

Heat causes circuit problems and reduces efficiency.  The way to minimize this loss is to switch less often (less time spent in switching transitions).  But a lower switching frequency demands a larger inductor value, which will either have to become physically larger and more expensive, or will have higher resistance, again reducing efficiency.
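The frequency-versus-inductance trade-off can be sketched with the usual ripple equation, assuming a simple buck topology.  The 28 V input, 3.6 V output and 0.5 A ripple target below are placeholder assumptions for illustration, not design values from this project:

```python
# Buck-converter inductor ripple equation:
# L = Vout * (1 - Vout/Vin) / (f_sw * delta_I)
# All operating-point numbers here are assumptions for illustration.

def required_inductance(v_in, v_out, f_sw, di_ripple):
    """Inductance (H) needed to hold ripple current to di_ripple amps."""
    return v_out * (1 - v_out / v_in) / (f_sw * di_ripple)

for f in (50e3, 100e3, 250e3, 500e3):
    L = required_inductance(28, 3.6, f, 0.5)
    print(f"{f/1e3:>5.0f} kHz -> {L*1e6:6.1f} uH")
```

Halving the frequency doubles the inductance you need for the same ripple, and with it the size or the winding resistance of the part.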

For a given physical size of inductor, there's an optimal point where losses in the MOSFETs balance losses in the inductor for maximum efficiency.  But without actually building it and iterating, I don't think there's a good way to find this optimal point, or even to find out whether it's "good enough".  It may be better to reconsider the "drive the MOSFETs directly from the micro" idea if a small, low cost piece of silicon that can switch them faster can be found.

Of course, the analysis above is a rough estimate based on what I can glean from the datasheets.  For instance, the micro likely charges the gate capacitance with higher current than 20 mA, at least initially.  The drive current numbers come from the spec for how much voltage is dropped across the GPIO driver.  The 0.5 V drop at 10 mA implies a worst case drive resistance of about 50 ohm at 2.7 V and below, and combined with the 1304 pF gate capacitance, that gives a time constant of roughly 65 ns, or a few hundred nanoseconds for a full gate swing.  Dropping 0.5 V from the gate voltage will also not turn on the MOSFET as hard, which will increase RdsON.  Looking at the spec, this will be worst at cold temperatures when the Vgs threshold is highest, but luckily the extra dissipation should warm the MOSFET up, so it's self-correcting.
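As a quick sanity check of the RC picture, treating the GPIO pad as a fixed resistor charging the gate's input capacitance:

```python
import math

# RC model of the GPIO pad charging the gate: the worst-case driver
# resistance follows from the 0.5 V drop spec at 10 mA (Vdd < 2.7 V),
# and the input capacitance is the datasheet figure quoted above.

R_DRV = 0.5 / 10e-3    # ~50 ohm worst-case driver resistance
C_ISS = 1304e-12       # gate input capacitance

tau = R_DRV * C_ISS
print(f"tau = {tau * 1e9:.0f} ns")   # ~65 ns

def time_to_fraction(frac):
    """Time for an RC charge to reach a given fraction of the rail."""
    return -tau * math.log(1 - frac)

print(f"95% of rail: {time_to_fraction(0.95) * 1e9:.0f} ns")  # ~195 ns
print(f"99% of rail: {time_to_fraction(0.99) * 1e9:.0f} ns")  # ~300 ns
```

This ignores the Miller plateau, which holds the gate near the threshold while the drain swings, so the real transition is likely slower than the plain RC curve suggests.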

Another concern is that the substantial switching current to charge and discharge the MOSFET gate capacitances from the micro is going to affect the micro's power supply, possibly affecting stability or at least affecting the accuracy of minute ADC readings.  The power supply around the micro will have to be carefully designed and decoupled to minimize such issues.

Bottom line is that I need to actually test if and how this works before I commit to the design.  I have ordered a dev board for the micro and MOSFETs, and some bench testing will have to clarify if it all works well enough or if a separate gate driver will be required.