Lepton 3.5 Thermal Imaging Camera

Documenting my experiments with the FLIR Lepton 3.5 thermal imaging camera.

I bought a FLIR Lepton 3.5 mounted on the Pure Engineering breakout after using a friend's thermal imaging camera to analyze heat generation on a PCB I designed. The Lepton 3.5 can output absolute temperature (radiometric) data at 160x120 pixel resolution. I decided a thermal imaging camera would be a good tool to have and chose to build my own. I have started with a Teensy 3.2 based test platform but eventually want to turn a Beaglebone Black with a 7" LCD cape into a full-featured, networked camera, using the PRUs to handle the real-time video SPI feed from the Lepton. I hope the documentation in this project is helpful to others who might also want to play with these amazing devices.

The long-term goal is to create a capable thermal imaging camera, matching some of the features of high-end commercial products, using the Beaglebone Black and a 7" LCD cape as the platform.  However, once I started reading the documentation and playing with the various demo codebases, it became obvious that I'd need a simpler platform to learn how to use the Lepton module.  It is a very capable device with a moderately complex interface, both firmware and hardware.  Although the device has good default settings, I found that enabling some features wasn't well documented and that the video SPI interface (VoSPI) was challenging to implement due to its real-time constraints.


There are a lot of other great projects online to help get going with the FLIR sensors.  Pure Engineering is to be commended for making these devices available to makers and provides a wealth of code examples, many designed to work with the previous Lepton models.  Max Ritter's DIY Thermocam is probably the most mature and well known.  He has done a great job and I pored over his code.  Damien Walsh's Leptonic is also really well done and works with the Lepton 3.5 as well.  Both Max and Damien were very gracious when I sent them various questions.

I decided to follow Max's lead and build a test platform using a Teensy 3.5 that I had (selected for the multiple SPI interfaces and copious RAM).  Unfortunately after soldering the Teensy to a Sparkfun breakout board I stressed the processor BGA package and made the board unreliable.  So I replaced it with a Teensy 3.2 hoping it would have enough resources to successfully interface to the Lepton.  It does, barely, and the next project log describes the test platform hardware.

teensy_schematic.pdf

Test platform schematic

Adobe Portable Document Format - 26.67 kB - 07/10/2018 at 19:11



  • Pi Zero experiments + app experiments

    Dan Julio • 08/24/2018 at 07:03

    I finally got around to trying the Lepton on a Pi Zero.  Not a good outcome, but an interesting result.  The interrupt-driven version of leptonic failed miserably; it could not keep up with the VoSPI data.  However, Damien's original version did work partially: it could get data but constantly lost sync and frequently returned garbled data.  I didn't look at the SPI bus on a scope to make sure there was nothing funny going on there, but the Pi Zero doesn't seem well matched to the Lepton 3, at least for user-space programs.  No doubt a kernel driver would work, but that's beyond me at the moment.

    I have also been hacking around at the application level to make sure that the cross-platform application development tool I use (xojo) is up to the task and could be integrated with the zeromq messaging library, since it makes working with sockets very simple.  Much more success there.  Here's a test app running on my Mac and also on a Beaglebone Black, both simultaneously getting a feed from the leptonic server on the Raspberry Pi 3 at the full 9 fps from the Lepton.  I think I'm ready to try to use the PRUs as a VoSPI engine.
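
    On the consumer side, the plumbing is not much more than a ZeroMQ subscriber.  Below is a minimal sketch of that piece; it assumes the server publishes each frame as a single message on a PUB socket, and the endpoint and frame format are placeholders rather than leptonic's actual protocol.

    #include <zmq.h>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    int main() {
      // Placeholder endpoint - substitute the leptonic server's actual address and port
      const char *endpoint = "tcp://raspberrypi.local:5555";

      void *ctx = zmq_ctx_new();
      void *sub = zmq_socket(ctx, ZMQ_SUB);
      zmq_connect(sub, endpoint);
      zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "", 0);   // subscribe to every message

      // Assume one message per frame: 160x120 pixels, 16 bits each
      std::vector<uint8_t> frame(160 * 120 * 2);
      for (int i = 0; i < 100; i++) {              // grab 100 frames then exit
        int n = zmq_recv(sub, frame.data(), frame.size(), 0);
        if (n > 0) {
          printf("received frame, %d bytes\n", n);
          // ...hand the frame off to the display code here...
        }
      }

      zmq_close(sub);
      zmq_ctx_term(ctx);
      return 0;
    }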

  • Working with a Pi

    Dan Julio • 08/17/2018 at 16:12

    Ultimately I'd like to create a solution that comprises a Linux daemon communicating with the camera and a socket-based interface to a display application for local display, as well as a web server for remote display.  The Beaglebone Black solution will make use of the VSYNC signal to synchronize transfers.  Before tackling the BBB's PRU coding I think it will be a good idea to get the daemon interface running.  Since this architecture is like Damien Walsh's leptonic project, it made sense to play around with his code on a Raspberry Pi.  His code uses a thread to constantly read the VoSPI interface and, even on a Pi 3, sometimes has trouble remaining synchronized because of user process scheduling.  I decided to port his C server to use VSYNC and a user-space interrupt handler to see if this might be more reliable.

    Some testing showed that the read system call resulted in a fast SPI transfer, so the main technical issue seemed to be implementing a fast user-space interrupt handler.  I started with Gordon Henderson's wiringPi library because I had experience with it.  The result was strange.  Latency between the VSYNC interrupt and execution of the ISR routine was low only the first time the process was run after booting.  Latency was too high for all subsequent runs and the routine could not get a segment's worth of data before the next interrupt.  I tried several mitigation strategies such as renicing the process to the highest priority and binding it to CPU 0 (which seems to be on the front line of handling interrupts) but nothing changed the behavior.  After too many hours of piddling around I decided to try PIGPIO, which led to much better and repeatable results.  I'm not sure what is happening beneath the hood but, at least for interrupts, this library gives great results.  I can get a reliable stream of frames at or near the maximum 9 Hz on the Pi 3.  Someday I'd like to see how it performs on a lower performance board like the Pi Zero.
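
    The shape of the PIGPIO approach is simple: register a callback on the VSYNC edge and do one blocking SPI read of a segment inside it.  Below is a minimal sketch of that idea, not the actual port - the GPIO number, SPI channel, and segment handling are placeholders.

    #include <pigpio.h>
    #include <stdint.h>
    #include <stdio.h>

    // Placeholder wiring - use whatever GPIO/SPI channel your breakout is connected to
    #define VSYNC_GPIO       17
    #define LEP_SPI_CHAN     0
    #define LEP_SPI_SPEED    16000000
    #define PACKET_SIZE      164
    #define PACKETS_PER_SEG  60

    static int spiHandle;
    static char segBuf[PACKET_SIZE * PACKETS_PER_SEG];

    // pigpio calls this (in its own thread) shortly after each VSYNC edge
    static void vsyncISR(int gpio, int level, uint32_t tick)
    {
      (void)gpio; (void)level; (void)tick;   // unused here

      // Pull in a segment's worth of packets in one SPI transfer
      if (spiRead(spiHandle, segBuf, sizeof(segBuf)) == (int)sizeof(segBuf)) {
        // ...check packet IDs / segment number and queue valid data here...
      }
    }

    int main(void)
    {
      if (gpioInitialise() < 0) return 1;

      spiHandle = spiOpen(LEP_SPI_CHAN, LEP_SPI_SPEED, 3);    // Lepton VoSPI is SPI mode 3
      if (spiHandle < 0) { gpioTerminate(); return 1; }

      gpioSetMode(VSYNC_GPIO, PI_INPUT);
      gpioSetISRFunc(VSYNC_GPIO, RISING_EDGE, 0, vsyncISR);   // 0 = no timeout

      time_sleep(30);   // let the ISR collect frames for a while

      gpioSetISRFunc(VSYNC_GPIO, RISING_EDGE, 0, NULL);       // cancel the ISR
      spiClose(spiHandle);
      gpioTerminate();
      return 0;
    }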

    I uploaded my ported version of the leptonic server to github in case anyone wants to see how the ISR was implemented.  It can also enable AGC for better images.

    Next up is to decide on a socket-based protocol for sending commands to the lepton (for example to tell it where to sample temperature in the image when AGC is running) and for sending complete frames to consumers such as the display application or web server.

  • Accuracy testing

    Dan Julio • 07/21/2018 at 21:47

    Determining the accuracy of temperature readings made with the camera is a bit tricky.  FLIR's documentation indicates accuracy varies with ambient temperature, scene temperature and object emissivity.  They seem to do their characterization at 25°C against a 35°C blackbody.  They claim a typical accuracy of +/- 5°C or 5% and worst-case accuracies of up to +/- 8°C, depending on conditions.  Ambient temperature seems to make a large impact (the Lepton measures its internal temperature but I'm not sure if external temperature affects its accuracy).  My reading about this class of device also indicates that the emissivity of the object being measured impacts accuracy, although it's not clear whether adjustments need to be made to the default parameters other than when enclosing the Lepton behind a lens of some kind.

    Wanting to see how accurate my Lepton is, I hacked lep_test6 into lep_test9 (in the github repository), adding the ability to change the emissivity in the RAD Flux Linear parameters as well as to compare the output of the spot meter function (which can average a specified set of pixels at a specified location in the image) with the output pixel data.  The spot meter can be used to get temperature when AGC is enabled and the pixel data does not contain actual temperature values.
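
    For anyone wanting to do the same, the gist is to read the RAD Flux Linear parameters, rewrite the scene emissivity field, and write them back; the spot meter result comes back in Kelvin x 100.  The sketch below assumes the ported LeptonSDKEmb32OEM keeps FLIR's function, structure and field names (check the actual headers) and that the port has already been opened.

    #include "LEPTON_SDK.h"
    #include "LEPTON_RAD.h"

    extern LEP_CAMERA_PORT_DESC_T portDesc;   // assumed opened elsewhere with LEP_OpenPort()

    // Set scene emissivity; the parameter is scaled so 8192 = 100%, e.g. 0.95 -> 7782
    void setEmissivity(float emissivity) {
      LEP_RAD_FLUX_LINEAR_PARAMS_T params;

      if (LEP_GetRadFluxLinearParams(&portDesc, &params) == LEP_OK) {
        params.sceneEmissivity = (LEP_UINT16)(emissivity * 8192.0f);
        LEP_SetRadFluxLinearParams(&portDesc, params);
      }
    }

    // Read the spot meter (the average of its configured ROI) in degrees C
    float readSpotMeterC() {
      LEP_RAD_SPOTMETER_OBJ_KELVIN_T spot;

      if (LEP_GetRadSpotmeterObjInKelvinX100(&portDesc, &spot) == LEP_OK) {
        return (spot.radSpotmeterValue / 100.0f) - 273.15f;
      }
      return -999.0f;   // error sentinel
    }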


    I then attempted to compare the temperature output from the Lepton with other temperature measuring devices for a variety of materials and object temperatures (being a hot summer day, I didn't get much opportunity to play with the ambient temperature).  I would call this very amateur science...

    Short summary: I found the device fairly accurate without having to change the default emissivity setting (with one exception) and the spot meter works as advertised.  Emissivity investigation will have to wait for another day.

    Longer Description + some data

    The test setup included a 4-channel Dallas DS18B20 temperature probe I made years ago, an Agilent multimeter with a type K thermocouple and a home-made IR thermometer based on the Melexis MLX90614 sensor.  I attempted to use either one DS18B20 or the Agilent probe to capture the ambient temperature and then as many sensors as possible to also read the object temperature.  Basic claimed accuracy (over the temperature ranges I used) is +/- 0.5°C for the DS18B20 sensors, 1% + 1°C for the Agilent sensor, and 0.5°C for the Melexis sensor.

    I held the Melexis sensor very close to the object since it has a very wide field of view and averages all the thermal energy in its scene.  Here it is measuring the temperature of a "blackbody" (electrical tape) on the side of a vase of ice water.

    Measuring temperature is hard...  There is the basic accuracy of the sensor itself and then how it interacts with the environment.  I saw quite a bit of variability for all sensors, especially the IR sensors.  The DS18B20 sensors tracked each other very well.  However, I think the lead temperature makes a big difference, so when their plastic cases were touching an object the temperatures recorded may have been wrong because the leads were at a slightly different temperature.  The thermocouple varied based on how it touched the object (although it has a very fast settling time).

    As can be seen from the data, the Lepton generally agreed with the other sensors.  I expected worse performance, partly because of the device specs and partly because I expected the surface emissivity to play a bigger role.

    Not included in the above table (a late add) was a measurement of a soldering iron set to 350°C.  The Lepton read 181°C and it wasn't until the emissivity setting was lowered to 35% that the temp was close (353°C).  I'm not sure why and would love to hear any thoughts...

  • Lepton AGC

    Dan Julio • 07/18/2018 at 03:23

      Most demo code, including my lepton_test6 sketch, uses a simple linear transformation from the raw 14-bit count or 16-bit radiometric temperature data to 8-bit values that are then transformed to RGB values via a lookup table (colormap).  To do this, the maximum delta between all pixels in the frame is computed and then each pixel value is scaled as follows:

        8-bit pixel value = (Pixel value - minimum pixel value) * 255 / maximum delta

      However this simple algorithm fails with scenes that contain both hot and cold regions because it tends to map most pixel values to either the maximum or minimum values resulting in images with little contrast between the two temperature extremes.
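
      In code, the transformation is just two passes over the frame.  The sketch below is a generic version of the idea rather than the actual lep_test6 code; the 256-entry RGB565 colormap table is assumed.

      // Scale 16-bit radiometric pixels to 8 bits, then look up display colors
      // frame: raw pixel values; colormap: 256-entry RGB565 lookup table (assumed)
      void renderFrame(const uint16_t *frame, int numPixels,
                       const uint16_t *colormap, uint16_t *outRGB565) {
        uint16_t minVal = 0xFFFF;
        uint16_t maxVal = 0;

        // Pass 1: find the minimum and maximum pixel values in the frame
        for (int i = 0; i < numPixels; i++) {
          if (frame[i] < minVal) minVal = frame[i];
          if (frame[i] > maxVal) maxVal = frame[i];
        }
        uint32_t delta = (maxVal > minVal) ? (uint32_t)(maxVal - minVal) : 1;  // avoid divide-by-zero

        // Pass 2: 8-bit value = (pixel - min) * 255 / delta, then colormap lookup
        for (int i = 0; i < numPixels; i++) {
          uint8_t v = (uint8_t)(((uint32_t)(frame[i] - minVal) * 255) / delta);
          outRGB565[i] = colormap[v];
        }
      }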

      The Leptons have an AGC (Automatic Gain Control) function that can be switched on to try to ameliorate the shortcomings of a simple linear transformation when the temperature data is to be displayed.  FLIR has an entire - slightly confusing - section (3.6) in the engineering data sheet describing their AGC implementation - actually two different forms of AGC they call histogram equalization (HEQ) and linear histogram stretch.  They claim improvements over traditional histogram techniques with their HEQ algorithm and include many parameters to tune the algorithm, but don't describe it deeply.  They don't describe their linear histogram stretch algorithm at all.

      Enabling AGC mode (and disabling radiometric calculations - something that took me a few weeks to figure out) generates 8-bit data out of the selected AGC algorithm.  There are many parameters, but they claim the default values should produce good results.  There is an additional, somewhat mysterious, AGC Calculation State Enable as well.  The description states: "This parameter controls the camera AGC calculations operations. If enabled, the current video histogram and AGC policy will be calculated for each input frame. If disabled, then no AGC calculations are performed and the current state of the ITT is preserved. For smooth AGC on/off operation, it is recommended to have this enabled."
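
      For reference, the configuration boils down to a handful of calls through the ported SDK.  This is a sketch that assumes the port keeps FLIR's function and enum names (verify against the headers); swap LEP_AGC_LINEAR for LEP_AGC_HEQ to get the linear histogram stretch mode.

      #include "LEPTON_SDK.h"
      #include "LEPTON_AGC.h"
      #include "LEPTON_RAD.h"

      // portDescPtr is assumed to point at a port already opened with LEP_OpenPort()
      void enableAgcHeq(LEP_CAMERA_PORT_DESC_T_PTR portDescPtr) {
        // Radiometric TLinear output must be off or the pixels stay 16-bit temperatures
        LEP_SetRadTLinearEnableState(portDescPtr, LEP_RAD_DISABLE);

        // Select the HEQ policy and turn AGC on; the Lepton then outputs 8-bit pixels
        LEP_SetAgcPolicy(portDescPtr, LEP_AGC_HEQ);
        LEP_SetAgcCalcEnableState(portDescPtr, LEP_AGC_ENABLE);
        LEP_SetAgcEnableState(portDescPtr, LEP_AGC_ENABLE);
      }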


      Since reading the AGC section I have been interested in how well their implementation performs.  I wrote a sketch, lepton_test8, that allows cycling between the original linear transformation and 4 variations of FLIR's AGC function.  They are:

      1. "AGC disabled" - Radiometric mode enabled, my code linearly transforms 16-bit data to 8-bit data.
      2. "AGC linear C" - AGC mode enabled, linear histogram stretch mode, AGC Calculation State enabled.  Lepton outputs 8-bit data.
      3. "AGC HEQ C" - AGC mode enabled, histogram equalization (HEQ) mode, AGC Calculation State enabled.  Lepton outputs 8-bit data.
      4. "AGC linear" - AGC mode enabled, linear histogram stretch mode, AGC Calculation State disabled.  Lepton outputs 8-bit data.
      5. "AGC HEQ" - AGC mode enabled, histogram equalization (HEQ) mode, AGC Calculation State disabled.  Lepton outputs 8-bit data.


      I then compared a scene with hot and cold components (a soldering iron and ice water with a room-temperature background) using the various modes and various color maps.  I also did some informal testing with less dynamic scenes.  The TL;DR summary is that the AGC modes generate distinctly better images in scenes with a large dynamic range.  They do only slightly better with more mono-temperature scenes.  I couldn't see much difference between the linear histogram stretch and HEQ modes.  I also couldn't see much difference with the AGC Calculation State enabled or disabled (although perhaps I saw artifacts left behind when the camera panned).

      Following are some pictures showing output with different modes.  You can see the soldering iron and glass in the first image.  The mode and current colormap are shown at the top of the camera's LCD display.

      A big difference with the "Iron Black" and "Golden" colormaps.


      The advantages of the built-in AGC modes were less pronounced when the temperature range of...


  • VoSPI

    Dan Julio • 07/13/2018 at 04:05

    Leptons use an output-only slave SPI interface called VoSPI (Video SPI) to output pixel data with a maximum 20 MHz clock.  It has the feeling of something that has evolved over time as FLIR added newer models.  It definitely does not feel like something that was designed from scratch.  The Lepton's basic unit of data transfer is a 164- or 244-byte packet.  The 164-byte packets contain 80 16-bit pixels (of which 8, 14 or 16 bits of data may be valid for the Lepton 3.5 depending on its operating mode).  The first 4 bytes are a 16-bit ID word and a 16-bit CRC word (which I have not, to date, attempted to use).  The 244-byte packets contain 80 24-bit pixels (8 bits each for R, G and B) plus the ID and CRC words.  The ID word carries a line number (0-59 or 0-62).

    The 80x60 pixel Leptons (2 and 2.5) output 60 packets per frame (or 63 packets if telemetry data is also included with the pixel data).  This turns out to be about 2 Mbits/second for a 100% dedicated interface.

    The 160x120 pixel Leptons (3 and 3.5) modify the protocol a bit in order to carry 4x the data.  They add a segment number to the ID word of packet 20.  Segment numbers 1-4 indicate that the entire set of packets contains a valid segment.  It definitely would have been easier (less buffering and easier processing) if the segment number were in the first packet.  This family of devices requires a minimum of 8+ Mbits/second, although I found that with the Teensy I had to use 16 or 20 MHz SPI clocks.  I noticed most of the other demo programs also use much higher SPI clock rates.

    All Leptons have a hard requirement that the host must get the data out within three lines of when it is generated in the Lepton or it will lose synchronization and be unable to output valid data.  All Leptons also output what are called "discard packets", indicated by a specific bit pattern in the ID word.  The host is to ignore these packets but keep reading for good data later.
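
    In practice the packet handling comes down to a few checks on the first two ID bytes of each packet.  The routine below is a sketch of that logic (not the actual test sketch code), assuming a raw 164-byte packet:

    // Inspect the ID word (first two bytes) of a received VoSPI packet.
    // Returns the packet (line) number, or -1 for a discard packet.
    // On the Lepton 3/3.5 the segment number (1-4) is only meaningful in packet 20.
    int parsePacketHeader(const uint8_t *packet, int *segment) {
      // Discard packets have 0xF in the low nibble of the first ID byte
      if ((packet[0] & 0x0F) == 0x0F) {
        return -1;
      }

      int packetNum = ((packet[0] & 0x0F) << 8) | packet[1];   // 12-bit packet number

      if (packetNum == 20) {
        *segment = (packet[0] >> 4) & 0x07;   // bits 14:12 of the ID word
      }
      return packetNum;
    }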

    In my testing I also found that until synchronized they may output nonsensical non-discard packets too.

    I wrote a quick Teensy sketch that enabled VSYNC and polled it until asserted.  It then read packets until it saw a complete segment, saw invalid data (an invalid line number), or exceeded a timeout (the maximum VSYNC period).  The program could usually sync with the Lepton within a handful of VSYNC periods.  The output from a typical run is shown below.  Once synced you can see a new valid frame every twelve VSYNC periods (or 1/3 of the internal frame rate).  These are the frames that get displayed.  Each line number (0-59) represents 160 bytes of pixel data.

    It is interesting to see that alternating segments have discard packets.  I don't have an explanation for this although I wonder if this has to do with the timing of my SPI bus (16 MHz) and the Lepton's internal processing rate.

    Getting this code running meant I could get valid data out of the device.


    The core of the code:

    void loop() {
      bool curVsync;
    
      // Sample the VSYNC signal coming from the Lepton's GPIO3
      curVsync = (digitalRead(pin_lepton_vsync) == HIGH);
      
      // Attempt to read a segment on every change of VSYNC (both edges)
      if (curVsync != prevVsync) {
        Serial.printf("%3d : ", counter);
        ProcessSegment();
        Serial.println();
        counter++;
      }
      prevVsync = curVsync;
      
      // After 106 attempts, pause until a character arrives on the serial port
      if (counter == 106) {
        WaitForChar();
        counter = 0;
      }
    }
    
    // Block until a character is received, then flush the serial input
    void WaitForChar() {
      while (!Serial.available()) {};
      while (Serial.available()) {
        (void) Serial.read();
      }
    }
    
    // Read packets until a complete segment is seen, invalid data appears,
    // or the maximum VSYNC period elapses
    void ProcessSegment() {
      uint32_t startUsec;
      bool done = false;
      
      line0count = 0;
      discardCount = 0;
      startUsec = micros();
      while (!done) {
        if (ProcessPacket()) {
          done = true;
        } else if (AbsDiff32u(startUsec, micros()) > LEP_MAX_FRAME_DELAY_USEC) {
         ...

  • Teensy Test Code

    Dan Julio • 07/12/2018 at 04:23

    Initially I connected the Lepton to a Raspberry Pi 3 and ran demos from the Group Gets repository.  However, many of those are written for the lower-resolution Lepton 2 family of modules.  I hacked a couple of them with less-than-stellar results because of the much larger dataflow from the Lepton 3.  I had more luck with Damien Walsh's leptonic, but even it would occasionally lose sync on a lightly loaded Pi.  This led to my decision to try to use the VSYNC output to make it easier to synchronize the VoSPI transfers to the Lepton's video engine.

    FLIR has a reasonable set of default settings.  For example, the Lepton 3.5 is ready to output radiometric data (with absolute temperature values for each pixel) immediately after booting.  However, changing any of the default settings (e.g. enabling VSYNC) requires using the I2C interface to access its command interface (IDD).  FLIR provides a C++ library designed to compile on 32- or 64-bit Linux machines, which I ended up porting to the Teensy Arduino environment.  With this I was able to enable VSYNC and, with the Teensy and Lepton on a proto board, start to figure out how to reliably get video data out of it.  It's a bit picky about the timing and data gets garbled if the host isn't able to keep up.
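
    For example, turning on VSYNC through the ported library looks roughly like the sketch below.  It assumes the port keeps FLIR's function and enum names (check the actual headers) and that the baud parameter is the I2C clock in kHz.

    #include "LEPTON_SDK.h"
    #include "LEPTON_OEM.h"

    LEP_CAMERA_PORT_DESC_T portDesc;

    void setupLepton() {
      // Open the I2C (TWI) command interface to the Lepton
      if (LEP_OpenPort(0, LEP_CCI_TWI, 400, &portDesc) == LEP_OK) {
        // Route the Lepton's GPIO3 pin to its VSYNC function
        LEP_SetOemGpioMode(&portDesc, LEP_OEM_GPIO_MODE_VSYNC);
      }
    }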

    The Lepton 3 and 3.5 output data in 4 segments, each one quarter of a complete frame.  The segment length may vary depending on whether the data is formatted as temperature or AGC-processed data, is 24-bit RGB colorized values, or includes additional telemetry information.  Each segment is comprised of a set of 60 (or more) 164-byte or 244-byte packets, including packets specifically designed to be discarded while the Lepton prepares the valid data and any optional telemetry data.  Although the Lepton's internal frame rate is about 26.3 Hz, because of government regulations it only outputs ~8.8 frames of video per second.  However, it generates data for all frames, leading to a VSYNC rate of ~105.3 Hz (4 segments/frame).  This means the host has to read, and process for validity, an entire segment after each VSYNC.  It took me a while before I could manage this on the Teensy.

    The host must resynchronize with the Lepton whenever it gets out of sync or it will receive only garbage data.  This requires idling the VoSPI interface for at least 186 mSec.  I found that it takes the Lepton a few VSYNC pulses to start outputting valid data.  When the host is in sync, the Lepton will output 4 consecutive good segments on 4 consecutive VSYNC pulses out of every 12 VSYNC pulses.  The other eight VSYNC pulses carry invalid segments (identified with a segment number of 0).  Because my test fixture uses a single SPI interface to read data from the Lepton and then write frame buffer data to the LCD display, the test sketches are only able to reach about half of the maximum frame rate (~4.4 frames/sec).

    I put the ported IDD library and three test sketches into a github repository.  The sketches also use the Adafruit ILI9341 and GFX libraries for the LCD module (although the code has its own routines to write pixel data to the display).

    lep_test6 - This sketch takes the default 16-bit absolute temperature (Kelvin * 100) value of each pixel and linearly scales the data to 8-bit values that can be run through a color map look-up table for display.  Because the data has absolute temperature values it is easy to display the temperature (currently without worrying about any real-world emissivity issues) of the center of the image.  Normally the scaling maps the data from the minimum and maximum in the frame.  However this sketch allows the user to select a few temperature ranges to scale the data in so the image does not change radically as different temperatures enter or exit the frame.

    The Lepton also has a more sophisticated AGC capability that is claimed to produce better...


  • Mods to PowerCell

    Dan Julio • 07/10/2018 at 19:17

    Forgot to add this image to the hardware description.  It's a close-up of the front board showing the mods made to the Sparkfun PowerCell board.

  • Test platform hardware

    Dan Julio • 07/10/2018 at 19:08

    Introduction

    The test platform hardware is pretty straightforward.  I used an Arduino shield style breakout board to make it easy to connect the Teensy to the display and imaging module.  Most connections between the Lepton module, LCD and Teensy are direct and documented in the image and included PDF schematic using the Arduino-style signal notation.  The additional circuitry is for power management and control buttons.  A LiPo charger/boost converter supplies 5V power to the various boards (the Teensy's 3.3 volt regulator drives the 3.3V rail).  It's controlled using a soft power switch: the button initially enables power, then the Teensy drives D7 to hold power on; D7 is released to shut power off.  A few modifications are made to various boards.

    Lepton Breakout Modification

    Pure Engineering provides an SMT pad on the back of their breakout for the Lepton's GPIO3 pin.  This pin may be used to output a 105.3 Hz VSYNC signal that indicates the start of a segment transmission (a segment is 1/4 of a full frame of data and requires at least 60 164- or 244-byte SPI transfers).  It is useful for synchronization; I use it as an interrupt on the Teensy to trigger a set of transfers.  I brought the signal out to a pin plugged into the same header that the rest of the breakout board's pins connect to.  I also removed the 2-pin power input (J3) that pokes out of the back of the breakout.  A pair of 4.7 kohm pull-up resistors to 3V3 is connected to the I2C signals.



    Teensy 3.2 Modifications

    I cut the trace between VIN and VUSB on the back of the Teensy.  I use VUSB as the input power to the charger and let the charger supply 5V that is fed back to the Teensy via its VIN to power it.  That way the board may be charged via either the Teensy USB connector or the charger USB connector.  The VUSB power line was connected to the spare Arduino expansion pin and routed to the front board containing the charger via the header.  I also added a wire connected to the Teensy Reset pad on the bottom of the board and routed to the Arduino RST header pin.  This allows the Reset button on the LCD shield to reset the Teensy.


    Power Circuitry
    The power circuitry is built around a now-obsolete Sparkfun PowerCell board.  However, I think one of their newer charger/boost converter boards (or something similar from another supplier) will work.  The main requirements are an active-high boost converter enable signal and very low quiescent current when the boost converter is disabled (so as not to discharge the battery).  The system takes less than 250 mA at 5V.  I removed the 10 kohm pullup R2 from the Sparkfun board to reduce quiescent current.  The enable is held low by the resistor divider until the button is pressed, enabling power.  As soon as the Teensy starts executing code it drives D7 high, which holds the enable high (the user has to hold the power button until D7 is high, which takes a few hundred mSec and is indicated by the LCD clearing).  The resistor divider driving A6 is used to sense whether or not the button is being pressed: it sees a higher voltage when the button is pressed.  The BAT54 diode is used to isolate the Teensy from Vbatt when the button is pressed.  Different diodes may be used but they should be low-Vf Schottky units (the diode I used has about 0.3V Vf).
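
    The Teensy's side of the soft power switch is only a few lines of code.  Below is a sketch of the idea, not the actual firmware; the analog threshold and divider ratio are placeholders that depend on the resistor values used.

    // Soft power control (Arduino-style pin names from the schematic)
    #define POWER_HOLD_PIN   7    // D7 - drive high to keep the boost converter enabled
    #define BUTTON_SENSE_PIN A6   // divider sees a higher voltage while the button is pressed
    #define BATT_SENSE_PIN   A7   // battery voltage through a MOSFET-switched divider

    void setup() {
      // Latch power on as soon as code starts running; until then the user holds the button
      pinMode(POWER_HOLD_PIN, OUTPUT);
      digitalWrite(POWER_HOLD_PIN, HIGH);

      // Wait for the power button to be released before watching it for shutdown
      while (analogRead(BUTTON_SENSE_PIN) > 600) {   // placeholder threshold
        delay(10);
      }
    }

    void loop() {
      // A later press shuts the system down: releasing D7 disables the boost converter
      if (analogRead(BUTTON_SENSE_PIN) > 600) {
        digitalWrite(POWER_HOLD_PIN, LOW);
      }

      // Battery voltage from the 10-bit ADC, scaled by an assumed 2:1 divider
      float battV = analogRead(BATT_SENSE_PIN) * (3.3f / 1023.0f) * 2.0f;
      (void)battV;   // displayed elsewhere in the real sketch

      delay(100);
    }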

    The battery voltage is sensed using another resistor divider feeding A7.  To reduce power consumption when the system is off, an n-channel MOSFET is used as a switch to keep current from flowing into A7 when the system is off.  Most of the n-channel MOSFET parameters are unimportant and a variety of devices may be used.  The most important parameter is Vgs since the control signals are 3.3V based.  Rds-on should also be fairly low, although with the resistors it may be even a few...




Discussions

Sophi Kravitz wrote 07/10/2018 at 17:21

hi Dan! This is a cool project! This is happening pretty soon: https://hackaday.io/contest/159585-battery-powered-radiometric-lepton-contest


Dan Julio wrote 07/10/2018 at 18:09

Hi Sophi!  I saw that.  I'm not sure I'll have anything for the Lepton 2.5 but I will contribute a port of FLIR's LeptonSDKEmb32OEM library for the Teensy which might be applicable to the ESP32.

