Assistance System for Vein Detection

Using NIR (near-infrared) illumination and real-time image processing, we can make veins more visible!

Medication that can't be absorbed through the gastrointestinal tract has to be injected intravenously – usually by a doctor, but for chronic diseases it can also be done by the patients themselves.
Our aim was to develop an assistive technology for venous puncture for diagnostic purposes or for medical drug administration. Easy reproduction and low costs are important criteria for development.
We built three prototypes, all running on the Raspberry Pi. Two of them differ in the camera system used – one works with the PiCam, the other with a modified webcam. The third is a mobile version which is more compact and can use a smartphone or a computer as a display. We also compared our project to a professional vein detection system that costs about €4,000, while ours would cost about €100 – and the image quality is comparable.
We are three pupils from Berlin, Germany: Elias (12), Lucie (16) and me (Myrijam, 16). All files are on GitHub :-)

A chronic disease which requires frequent venous puncture is hemophilia. In the blood clotting cascade, one or more essential enzymes are missing or are built in a non-functional form due to a DNA defect. This can cause severe bleeding into joints, muscles or inner organs and is potentially life-threatening if the residual activity of the clotting factors is below a few percent. Today, most of the clotting factors can be synthesized artificially through biomedical engineering (cell cultures with modified DNA produce them), but they still have to be administered externally – injected into the bloodstream by venous puncture. The veins used for medication are not always easy to recognize, and if the needle is not securely placed in the vein, it must be inserted again at another point. This is where our project comes into play: using NIR (near-infrared) illumination and real-time image processing, we can make the veins more visible, allowing easier access, less pain and more confidence for medical personnel and patients.

The veins are illuminated with IR light (950 nm) and the backscattered light is captured by the Raspberry Pi camera (the version without the IR filter). You can use old analogue film as a filter to block visible light and pass only IR light. The camera image is processed in several stages to get an improved distribution of light and dark parts of the image (multi-stage local adaptive histogram equalization). The reason for using near-IR illumination lies in the optical properties of human skin and in the absorbance spectrum of hemoglobin.
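The equalization idea can be sketched in a few lines. The following is a plain global histogram equalization in NumPy for illustration only; the actual prototype uses OpenCV's local adaptive variant (CLAHE), which computes the same kind of mapping per image tile:

```python
import numpy as np

def equalize(gray):
    """Global histogram equalization of an 8-bit grayscale image.

    Assumes the image is not completely uniform.
    """
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()                    # cumulative distribution
    cdf_min = cdf[cdf > 0][0]              # first occupied gray level
    # Spread the occupied gray levels over the full 0..255 range
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[gray]
```

With OpenCV the local adaptive version is `cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray)` – the parameter values here are placeholders to tune, not the ones from our script.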

After several tests (with IR light but also with thermography and different visible wavelengths), we first developed two prototypes for computer-assisted vein localization. One uses a 3D-printed case for the Pi and a 7-inch screen, the other is an add-on module for the pi-top CEED. With these steps we moved the development away from breadboards and proof-of-concept stages to concentrate more on image quality and user handling. The two also differ in the camera system used – one works with the PiCam, the other with a modified webcam. Both have their own pros and cons…

The Raspberry PiCam can be used without further modification. However, this camera has only a fixed focus and cannot adapt it to the scene automatically (only brightness etc.).

Another possibility is to modify a webcam by removing its IR blocking filter. It is a bit tricky, but we were able to reuse such a camera from my previous research project (“eye-controlled wheelchair”).

The Results:

Figure 9: Results of the filter stages

At the start of the program, the graphical user interface is constructed; in addition to the processed video stream, it shows sliders for parameters like brightness and filter adjustments. In a continuous loop, single frames are read from the camera and the filters are applied.

The first picture shows the imported camera image – the vein is already visible in the infrared light, as is the cannula, which stands in for the puncture. Fig. 2 shows the result of the grayscale conversion: since no color information is required, the data can be reduced to one third. The next step is to adjust the brightness distribution with an OpenCV filter; the result is a much clearer visual representation in Fig. 3. The next picture shows the result of the manual filter setting, in which brightness information below and above the threshold values is discarded and the selected brightness range is stretched over the entire range (0–255). The following filter converts the grayscale image into a false-color image, in which the relevant information is carried not by brightness but by color. As a result of discussions with medical professionals, we added the last filter stage, in which a depicted arm or a hand...
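The manual level-adjustment stage described above boils down to a linear contrast stretch. A minimal NumPy sketch (the threshold values 60 and 180 below are hypothetical slider positions, not the ones used in the prototype):

```python
import numpy as np

def stretch_levels(gray, lo, hi):
    """Discard brightness outside [lo, hi] and stretch that range to 0..255."""
    g = np.clip(gray.astype(np.float32), lo, hi)   # drop values outside the range
    return ((g - lo) / (hi - lo) * 255.0).astype(np.uint8)

frame_gray = np.array([[0, 60], [120, 200]], dtype=np.uint8)  # toy 2x2 "image"
stretched = stretch_levels(frame_gray, 60, 180)               # hypothetical thresholds
```

The false-color stage that follows is then just a palette lookup on this stretched image (e.g. `cv2.applyColorMap`).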



  • YouTube video!

    Myrijam2 days ago 0 comments

    Yes! We made it! We finally made a video of how to use our prototype! Here is the link to the video on YouTube. Hope you enjoy it :-)

    BTW, all needed files (3D files, schematics, code) are available on GitHub

  • Hurray, new filter working

    Myrijam2 days ago 0 comments

    For some strange reason, a detailed analysis of the problem helped 😉: any conversion of the filter results into a displayable picture should have transformed the image to 8-bit grayscale (uint8). To find out why this did not work, we printed out the arrays containing the filtered results:

    As you can see, the array contains very, very small values (on the order of 1e-6 down to 1e-23) – and if values between 0 and 1 are mapped to the range from 0 to 255, these very small numbers just stay zero.

    Just out of curiosity, we tried what happens if you simply multiply this array by a huge factor, e.g. 2500000 – and we got our filtered results 😊!
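A fixed factor like 2,500,000 works for one image but depends on the scene. A more robust alternative (our suggestion, not what the script originally did) is to normalize by the actual value range before converting to uint8:

```python
import numpy as np

def to_uint8(filtered):
    """Rescale a float filter result (however tiny its values) to 0..255."""
    f = filtered.astype(np.float64) - filtered.min()
    peak = f.max()
    if peak > 0:                 # avoid dividing by zero on an all-zero result
        f = f / peak * 255.0
    return f.astype(np.uint8)

# Values on the order of 1e-6, like those printed from the Frangi result above
tiny = np.array([[1e-23, 5e-7], [1e-6, 0.0]])
print(to_uint8(tiny).max())  # 255 – full contrast regardless of input scale
```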

    This image shows the original camera NIR image (upper left) and the first and second image-enhancement stages using overall and region-specific level adjustment (CLAHE filter). The last (lower right) picture shows the Frangi filter result, and you can see that only the major veins are left.

    Another try with the arm instead of the back of the hand delivers the same astounding results:

    The veins are clearly marked out, and the Frangi filter result could even be processed further with findContours etc., because only the veins are left…

    So, to make a long story short: yes, the Frangi filter is a major contribution to vein detection and could make future versions of our Veinfinder even more powerful.

    Thanks to Dr. Halimeh from the Coagulation Centre in Duisburg, we were able to compare an earlier prototype with a professional medical vein detection system – and even at that stage, both delivered comparable results concerning the detection depth of veins.

    The only drawback at the moment is processing speed, because the frangi filter really slows things down – so this is just the starting point for future analysis, coding and hacking 😉…

  • New features

    Myrijam2 days ago 0 comments

    Researching how to further improve our Venenfinder, we came across a filter that is used for detecting and isolating vessel-like structures (branches of trees, blood vessels): the Frangi filter. It is used, for example, to isolate the filigree structures of retinal blood vessels by looking for continuous edges.

    It is named after Alejandro F. Frangi, who developed this multiscale filter together with colleagues for vessel identification in 1998.

    Luckily there is a Python implementation 😊 – it is part of the scikit-image processing library, but you have to compile from source, since the Frangi filter was only introduced in version 0.13 and 0.12 is the latest you can get via apt-get install.

    As explained in the previous post, we simply could not get this filter to install/compile in the virtual environments, so we went for a clean install of Raspian Stretch and OpenCV 3.3 without any virtual environments to get the desired image processing libraries.

    We opted for the latest development version, 0.14. As described in the documentation, you need to run the following commands to install the dependencies, get the source code and compile it:

    sudo apt-get install python-matplotlib python-numpy python-pil python-scipy
    sudo apt-get install build-essential cython
    git clone
    pip install -e 

    If everything is working fine, you can test it in Python:


    Then try to import it and get the version number:

    >>> import skimage
    >>> skimage.__version__

    and your Pi should return:


    Then we tried the new filter with a simple static image, some leaflets and … it did not work ☹

    We always got a black image containing the frangi filter results, no matter what.

  • Software Update: Raspbian Stretch and OpenCV 3.3

    Myrijam2 days ago 0 comments

    Build log: adapting to the new Raspbian Stretch and OpenCV 3.3

    We started from scratch following mostly Adrian’s superb tutorial.

    In a nutshell, it is a couple of commands you need to fetch the necessary packages and install OpenCV (and later scikit-image) from source:

    sudo raspi-config

    Following the advice in the forum, we changed the swap size accordingly and rebooted afterwards – right after the reboot that expands the root FS. Then we followed Adrian's blog, but again did not install the virtual environments, because in all our other tests we could not get Python 3 to work with the scikit-image filters and cv2.

    Before doing anything else, change an important bit in the config to make compiling with 4 cores possible: edit /etc/dphys-swapfile, change CONF_SWAPSIZE to 2048 and reboot. You can then run make -j4 (tip from Stephen Hayes on Adrian's blog).
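If you prefer a one-liner over editing the file by hand, the same change can be scripted (a sketch for Raspbian's dphys-swapfile service; double-check the file before running):

```shell
# Raise swap so 'make -j4' doesn't starve the Pi of memory during the build
sudo sed -i 's/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=2048/' /etc/dphys-swapfile
sudo /etc/init.d/dphys-swapfile restart
# Set CONF_SWAPSIZE back to 100 after the build to spare the SD card
```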

    sudo apt-get update && sudo apt-get upgrade
    sudo apt-get install build-essential cmake pkg-config
    sudo apt-get install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev
    sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
    sudo apt-get install libxvidcore-dev libx264-dev
    sudo apt-get install libgtk2.0-dev libgtk-3-dev
    sudo apt-get install libatlas-base-dev gfortran
    sudo apt-get install python2.7-dev python3-dev
    cd ~
    wget -O
    wget -O
    sudo python
    pip install numpy


    cd ~/opencv-3.3.0/
    mkdir build
    cd build
    cmake \
        -D CMAKE_INSTALL_PREFIX=/usr/local \
        -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-3.3.0/modules \
        ..
    make -j4
    sudo make install
    sudo ldconfig

    Now you should be able to test that OpenCV is successfully installed:

    Start the Python interpreter by typing python, and your Pi should “answer” with:

    Python 2.7.13 (default, Jan 19 2017, 14:48:08)
    [GCC 6.3.0 20170124] on linux2
    Type "help", "copyright", "credits" or "license" for more information.

    Now check that OpenCV can be imported (nothing should happen, just a new line):

    >>> import cv2
    >>> cv2.__version__

    Your Pi should report version 3.3.0.

    Leave the Python interpreter with quit() – now we are finished 😊

    We verified that the latest version of the Venenfinder works with this version of OpenCV.

  • Putting it all together

    Myrijam2 days ago 0 comments

    In the latest posts, I described how we installed the buzzer and the switch and here is how we put it all together on the Raspberry!

    As mentioned before, we first tried them out on a prototyping board (breadboard) with a spare Raspberry Pi 3, and after we were sure everything was fine, we integrated both into the Venenfinder as an “embedded medical device”:


    Here we had to deal with a few issues – we wanted to “lock down” the Venenfinder a bit in terms of casing, so people who just want to use it for medical reasons won't have to worry about electronics, wires etc. Of course it is still open under the CC license! But since we did not plan for easy maintenance access, we unfortunately had to take our device apart a bit.

    The (active) buzzer fits in the casing quite well, and we even found a little space for the on/off push button. To save time we decided against 3D-printing the lid again with a precise opening for the switch and just drilled a hole instead (a real “life hack” 😉).

    And we integrated the code from the buzzer and on/off examples into the main Venenfinder program – which is updated in the GitHub repo along with the new schematics:


    Furthermore, we got the chance to present this new prototype at another hemophilia patient meeting in Berlin on October 15th, and several people tried it immediately, as you can see in the pictures. My brother Elias gave a little presentation in front of the entire audience and offered everyone the opportunity to test the device by imaging the veins of different people…

  • Switch off

    Myrijam10/14/2017 at 16:32 0 comments

    Since we always used to turn the Raspberry Pi off just by pulling the plug – which isn't good for the computer or the SD card's file system – we added a switch to the mobile version in the previous log that turns the Raspberry Pi on. Now we need a way to turn it off.

    Since it is running headless, one would otherwise have to ssh into it and run "sudo shutdown -h" – not a workable solution for people who just want to use the device as effortlessly as possible :-)

    We decided to use the same switch to turn the Raspberry Pi on as to shut it down safely to keep things simple. 

    In addition to the libraries we needed before (time, GPIO), we now need one to issue system calls like "sudo shutdown -h"...

    from subprocess import call    # for shutdown

    Since it is good practice to use named constants, we declare GPIO 3 to be the pin we use for shutdown:

    onoff_pin = 3

    Then we declare this GPIO to be an input, and since we are lazy and want to keep additional electronics to a minimum, we tell it to use the internal pull-up resistor, so the pin always has a defined level and is not "floating":

    GPIO.setup(onoff_pin, GPIO.IN, pull_up_down=GPIO.PUD_UP)    # define onoff_pin as input with internal pull-up

    When the on/off button is pressed, the system should notify the user of the shutdown – of course via the buzzer we integrated in the previous build log. It should beep for one second and then tell the Raspberry to shut down. We need to declare a function that is called when the switch is pressed, because we want it to be triggered by an interrupt (like the encoders) rather than polled in the main loop that does the image processing:

    def shutmedown(channel):
        GPIO.output(buzzerpin, True)                # Buzzer on
        time.sleep(1)                               # for 1 sec
        GPIO.output(buzzerpin, False)               # then off again
        GPIO.cleanup()                              # not really needed, we shut down now anyway
        call("sudo shutdown -h now", shell = True)  # this is the system call to shut down

    After declaring the function, we configure the interrupt that is triggered as soon as the button is pressed. Since we use the internal pull-up resistor, the state of our GPIO pin is always true – until the button is pressed. The level then changes from true (high = 3.3 V) to false (low = 0 V = GND), meaning we have a falling edge, a drop:

    GPIO.add_event_detect(onoff_pin, GPIO.FALLING, callback = shutmedown, bouncetime = 500) #interrupt falling edge to on/off-button, debounce by 500ms

    Debouncing is not critical here – we don't count presses, any press on the button will shut the Raspberry down – so the 500 ms value is uncritical.

    We tried it with a spare Raspberry Pi on a breadboard, and since it worked straight away, the next step is to integrate the buzzer and on/off key into our prototype.

  • Buzz, buzz, buzz

    Myrijam10/14/2017 at 16:00 0 comments

    In addition to the start switch, we added an active buzzer that beeps three times when the Raspberry Pi has booted. This is important because with no display attached you can't see when everything has finished loading.

    The buzzer is connected to GPIO pin 19 and to GND. To make life easier we chose an active buzzer, which doesn't need to be driven with a tone signal to generate a sound – it simply needs power for as long as you would like to hear the buzz.

    Again, this was tested on a different Raspberry Pi on a breadboard before installing it in the "ready" prototype. We only have this one mobile prototype and have promised to send it to a patient in need asap – so we need to make sure everything is fine.

    Here we need to add some code - first, we import some libraries:

    import RPi.GPIO as GPIO        #for the pins 
    import time                    #for delaying

     Next we define the GPIO pin for the buzzer:

    buzzerpin = 19                     # attach buzzer between pin19 and GND; check polarity!
    GPIO.setmode(GPIO.BCM)             # use BCM numbering scheme as before
    GPIO.setup(buzzerpin, GPIO.OUT)
    time.sleep(0.5)                    # give it a short rest

    As soon as everything is set up (camera etc.) we want the buzzer to make a short beeping sequence to notify the user that a connection to the streaming NIR video is now possible. We opted for a short 3x beeping sound:

    for i in range(3):                  # beep three times
        GPIO.output(buzzerpin, True)    # switch to high (= active buzzer on)
        time.sleep(0.2)                 # beep duration
        GPIO.output(buzzerpin, False)   # switch to low (= active buzzer off)
        time.sleep(0.2)                 # pause between beeps

    That was all – it worked astonishingly well ;-)

    Now we have a little snippet we can add to the main code later... just before the main loop starts. Next task will be the power-off routine...

  • Switch on :-)

    Myrijam10/14/2017 at 15:26 0 comments

    While using the Venenfinder, we realized there are some practical difficulties with our prototype, so we decided to make some updates!

    We wanted to add a switch to the mobile version that turns the Raspberry Pi on and off, because we always used to turn the Raspi off just by pulling the cable, which isn't good for the computer or the SD card's file system ;-)

    We didn't want to test these updates on our "ready" prototype – just in case something goes wrong – so we first tested the modifications with another Raspberry Pi.

    If the Raspberry Pi is powered via the micro-USB cable, it starts immediately as soon as power is applied. If you properly shut the computer down, you normally have to re-power it (unplug, replug) to boot up again. But there is an easy alternative: a switch that causes the Raspberry to start – it just needs to be connected to GPIO 3 (pin 5 in the physical numbering scheme) and GND.

    At least we no longer wear out our USB socket ;-)

    Our next plan is to use the same switch to shut the Raspberry Pi down safely instead of just pulling the plug. Since it is running headless, you would otherwise have to ssh into it and run "sudo shutdown -h" – not a workable solution for people who just want to use the device as effortlessly as possible. Oh, and we also want to add a buzzer that signals when the Raspberry has fully booted!

  • Mobile Version

    Myrijam08/13/2017 at 15:21 1 comment


    Most people have their smartphone nearby almost all the time, so in the ideal case no further costs arise for the "evaluation unit with screen", since the smartphone takes over this role. Older smartphones or keyboard handsets often have a weaker IR blocking filter (this varies between devices and manufacturers; the iPhone, for example, has a very strong IR blocking filter). They are not capable of running the optimization algorithms, but their video preview could already show veins more clearly under mere IR irradiation. In that case a very cost-effective solution would be possible (only IR LEDs are required). Another way to build a mobile variant is to connect the modified webcam to a smartphone using a USB On-The-Go (OTG) adapter. Not all smartphones support this – but it would be a way to bypass the built-in camera. You would then have an IR-sensitive camera with built-in lighting and the possibility to further optimize the video on the phone in software. We chose the second option.

    For the hardware you simply need to print the STL files with your 3D printer's software. The case is designed to fit the Raspberry Pi 3 with the three encoders attached, but you can adjust it to your needs.
    The encoders attach to GND and to GPIOs 20/21, 18/23 and 24/25. The IR LEDs used have a peak at 940 or 950 nm and require a 12 Ω resistor if you connect three of the LEDs in series. Connect three such series strings, each with its resistor, in parallel and you have an array of 3×3 LEDs, which fits into the casing designed for the reflectors.
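As a sanity check on the 12 Ω value: the resistor has to drop whatever the supply leaves over after the three LEDs. Assuming a 5 V supply and a forward voltage of roughly 1.35 V per 940/950 nm IR LED (both are assumptions – check your LED's datasheet), each string draws around 80 mA:

```python
supply_v = 5.0      # assumed 5 V rail
vf = 1.35           # assumed forward voltage of one 940/950 nm IR LED
n_series = 3        # LEDs per string
r_ohms = 12.0       # resistor value from the build

current = (supply_v - n_series * vf) / r_ohms   # Ohm's law on the leftover voltage
print(round(current * 1000), "mA per string")   # three strings triple the total draw
```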

    If you want to stream the processed image to your smartphone, TV or tablet, you either need to integrate the Raspberry into your local Wi-Fi network – or just start a new one. We don't want the user to have to deal with editing Wi-Fi settings in a terminal session.

    The veins are illuminated with IR light (950 nm) and the backscattered light is captured by the Raspberry Pi camera (the version without the IR filter). You can use old analogue film as a filter to block visible light and pass only IR light. The camera image is processed in several stages to get an improved distribution of light and dark parts of the image (histogram equalization). The reason for using near-IR illumination lies in the optical properties of human skin and in the absorbance spectrum of hemoglobin.

    The device was developed by us (code, illumination, 3D files, as well as numerous prototype tests and real-world tests in a hospital). In this tutorial we reference the following blogs that helped us develop this mobile version of the “Venenfinder”:

    We cite some steps from Adrian’s blog on how to install openCV on the Raspberry Pi from scratch:

    We decided to turn the Pi into a hotspot. Here we followed Phil Martin's blog on how to use the Raspberry Pi as a Wi-Fi access point:

    Since you need a way to change the settings of the image enhancement, we decided to use rotary encoders. These are basically just two switches, and the sequence in which they close and open tells you the direction the knob was turned.
    We soldered three rotary encoders to a little board and created our own Raspberry HAT. For the code we used:
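The open/close sequence mentioned above is a 2-bit Gray code, so the direction can be read from a transition table. This is an illustrative pure-Python decoder, not the exact interrupt handler from our repo:

```python
# Valid quadrature transitions: (previous AB state, new AB state) -> step
TRANSITIONS = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode(readings):
    """Accumulate knob position from a sequence of (A, B) pin readings."""
    position = 0
    prev = readings[0][0] << 1 | readings[0][1]
    for a, b in readings[1:]:
        cur = a << 1 | b
        position += TRANSITIONS.get((prev, cur), 0)  # unknown jump = bounce, ignore
        prev = cur
    return position

# One full clockwise detent: 00 -> 01 -> 11 -> 10 -> 00 gives +4 steps
print(decode([(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]))
```

On the Pi, the same table logic runs inside a GPIO edge-detect callback rather than over a prerecorded list.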

    We used some code from Igor Maculan – he programmed a simple Python Motion JPEG (MJPEG) server using a webcam, and we changed it to the PiCam and added the encoders and the display of parameters. Original code:

    To rebuild this you can find the 3d files and the python program on my blog:

    And there is a tutorial that is linked here, where you can turn the Raspberry into a hotspot step by step.

  • Webcam Version

    Myrijam08/13/2017 at 15:19 0 comments

    As an alternative to the ready-to-use Pi camera, we (re)used a webcam that had already been modified in a previous research project (eye-controlled wheelchair).

    We had removed the infrared filter and replaced the two white LEDs with infrared LEDs. The following figure shows the webcam without housing, the sensor chip without optics, and the completely rebuilt camera with the two IR LEDs (violet dots).

    Figure 12: modified webcam from the project "Eye controls wheelchair"

    A USB webcam offers several advantages: it can be connected to computers other than a Raspberry Pi, or even to smartphones (see “What's next”), and is more flexible concerning the connection cable. Furthermore, this camera has a built-in autofocus and, after swapping the two white SMD LEDs for IR ones, provides illumination sufficient for the range needed here.

    The disadvantage is that rebuilding this device is more complicated, since soldering SMD components is required along with removing the IR blocking filter (cut it off using a sharp blade). Additionally, using a USB camera with the Raspberry Pi, you may lose a bit of speed/performance, because the CPU has to handle the transfer over USB 2, while the Raspberry camera is handled by the graphics processor with no additional CPU load.

    We added a mounting option, consisting of a tripod base with 3d-printed fittings and 9mm plastic tubes:

    Figure 13: Construction of the camera mount for the modified webcam

    The code is a bit different, since this camera does auto-contrast and auto-brightness along with autofocus. This second prototype can be built from scratch, and you can of course modify it to use the Raspberry Pi camera as well. For the moment, the Raspberry is still attached to the back of the monitor without an enclosure for easy access – but that is not yet comfortable enough for the intended users.

    After Buildlog 3: “Testing the prototypes with Professionals”

    Our vein detection system can be used not only for intravenous medication but also for obtaining blood samples. Both the image acquisition and the image optimizations are computed on the fly, meaning patient and doctor directly see where the vein to be punctured can be found.

    We therefore kindly asked two hemophilia specialists to have a look at our vein detector and give us feedback. In particular, we discussed our two prototypes with Dr. Klamroth, chief physician of the Center for Vascular Medicine at the Vivantes Clinic in Berlin. He confirmed that optical devices only find veins near the surface; veins deeper than about 1 mm should be localized by ultrasound. Furthermore, finding veins by cooling the skin and using thermography is counterproductive, because veins contract when the skin is cold and are then even more difficult to puncture… Dr. Klamroth advised us to extend the results so far and, if necessary, look for ways to additionally mark the veins in the displayed video as an orientation aid for the user.

    A few weeks later, we were able to present and discuss our prototypes at the Competence Care Centre for Haemophilia in Duisburg. Dr. Halimeh and Dr. Kappert tested both prototypes in comparison with their professional medical vein illumination system. The professional device uses IR light as well, but then projects the image back onto the skin using a red laser. Of course the professional system is much easier to use – no boot-up time or adjustments needed – but we can compete in terms of image quality!


    The experiments we carried out, as well as our research in scientific papers, have shown that universal vein detection can be realized with infrared lighting, independent of skin pigmentation: below 800 nm, the skin pigment melanin absorbs large parts of the irradiated light – above 1100 nm, much of the irradiated light is absorbed by the water in the tissue (see Figures 2 and 4). The combination...


