
LiDAR as an input device

Experiments with using a 2D 360 LiDAR sensor as an input device.

For an upcoming project, I needed a locational input method around a floor projection. Nothing complicated or expensive, but reliable and cheap. After a few experiments, I concluded that a 2D 360 LiDAR system would be near perfect. Here are my thoughts and results of this project.

What I need is a system for interacting with a floor projection, both on and just outside of the projection area. There are ready-to-go systems that can do this to some degree, often fancy ceiling-mounted devices with an IR camera that can track multiple users. These are also very expensive and overkill for what I need.

I need simple but reliable single-user input; it's not going to be some flashy game, it's going to be an edutainment system.

My first attempt was to build an array of ultrasonic sensors connected to a Raspberry Pi Pico. I used trimpots to dial in the distance per interaction point, and the sensors were mounted on a gimbaling joint. While fun to mess around with, it became clear very quickly that this method would never work quickly and reliably, even under ideal conditions.

The range was limited, recovery from reading errors was slow, and the Pico having only 3 ADCs meant that, without adding more hardware, I could only attach 3 pots and therefore 3 sensors. For that there would always be a solution, like using a single pot and storing the values in an EEPROM module. No, the real problem was that at best the response time would be roughly a second. Mostly it would be 2 seconds, and that is much too long for user interaction.

So after a bit of searching around, I figured this could easily be done with a 2D 360 LiDAR sensor, of which there are plenty in every price range. Of course I chose the cheapest option, the venerable RPLiDAR A1. Just 100 euros without tax and I was off to the races.

multiSpot.py

Driver script that detects pre-defined targets and sends keyboard commands in response to events on those targets.

x-python - 5.02 kB - 10/19/2021 at 14:01

Download

getXY.py

Tool to get average Cartesian values from the LiDAR sensor

x-python - 3.29 kB - 10/19/2021 at 14:00

Download

  • A small gesture

    Timescale - 10/31/2021 at 16:59

    Although the project is currently on hold, I continue playing around with the LiDAR sensor to explore the possibilities of using a robot sensor as a static interface device. Last time I uploaded the basic multispot code that tracks several groups, averages them and checks these against a set of pre-defined points in space. Functionally this is about all a simple interactive activity in a space should ever need. The script could do with a little refinement, like considering accuracy based on distance and/or position, but it basically works well enough from desktop use to small rooms. I'm quite sure that implementing this in a big exposition room will be just fine and will require only small tweaks.

    So having that functionality basically down, I started to look at using the sensor as a pure HID again. The proof-of-concept LiDAR mouse pointer script was quite basic. It worked on the closest point, could not track multiple point groups and had no smoothing. It also did not nicely map the data to the screen coordinates. The obvious next step was to take the grouping method, use that for the HID, and try to implement all the necessities to make it a workable input device.

    The file isn't quite ready yet to upload for everybody to play around with, but I thought it useful to share my ideas on it, so perhaps somebody with a better idea could comment or ask a question. Most of the choices I made while tinkering with this were guided by the idea that, apart from driving the cursor, I could perhaps also implement gestures in some form.

    Initially I am using group 0 for the pointer, so no matter how many groups are created, only the first one will control the cursor. This is fine for testing purposes and trying out different smoothing techniques. The smoothing trick I found to be the nicest one, i.e. the one that gave the least jittery cursor and the best control, was a single-step smooth with a vector and velocity value influencing the amount of smoothing. The velocity we of course get for free, because in single-step smoothing you retain the old position and find the difference between that and the new one. That velocity value is also smoothed, albeit linearly, whereas the position data is smoothed based on how the smoothed velocity matches the vector.

    This way, movement in any direction always gets a bias, meaning that quick motions do not take too long to catch up and slow movements give fine control. All the small jittering from the sensor data is also smoothed quite aggressively, which means you get cleaner cursor movement in all ranges.
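
    To make that a bit more concrete, below is a minimal sketch of the idea. This is not the actual script; the class name and all constants are placeholders I picked for illustration and would need tuning for a real setup.

    class SmoothedPointer:
        """Single-step smoothing with a velocity bias (sketch only)."""

        def __init__(self):
            self.x = self.y = 0.0    # last smoothed position in mm
            self.speed = 0.0         # linearly smoothed speed estimate

        def update(self, raw_x, raw_y):
            # the velocity comes for free: difference between old and new position
            dx, dy = raw_x - self.x, raw_y - self.y
            step = (dx * dx + dy * dy) ** 0.5
            # smooth the speed itself linearly
            self.speed = 0.8 * self.speed + 0.2 * step
            # fast motion follows the raw point quickly, slow motion gets heavy smoothing
            alpha = min(0.9, max(0.05, self.speed / 100.0))
            self.x += alpha * dx
            self.y += alpha * dy
            return self.x, self.y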

    Now I'm sure that the vector method is quite overkill, but the reason I stuck with it is that I realized that IF this got me precise enough results, the same could be done with the second group, and based on the vectors and velocities, this could drive other events like zoom, pinch or any spatial 2D gesture the sensor could pick up.
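
    Purely as a thought experiment, a pinch/spread check on two group centroids could be as simple as watching the distance between them change; the threshold and the print calls below are just stand-ins for whatever event the front end would need.

    def spacing_gesture(p0, p1, prev_spacing, threshold=40):
        # p0 and p1 are the (x, y) centroids of group 0 and group 1 in mm
        spacing = ((p0[0] - p1[0]) ** 2 + (p0[1] - p1[1]) ** 2) ** 0.5
        if prev_spacing is not None:
            if spacing - prev_spacing > threshold:
                print("spread")   # groups moving apart
            elif prev_spacing - spacing > threshold:
                print("pinch")    # groups moving together
        return spacing            # feed back in as prev_spacing on the next scan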

    So this is my thinking right now. Get the cursor as smooth as possible and then try to implement gestures for two hands/people. Hopefully next time I will have some code to share.

  • Multi spot with output.

    Timescale - 10/19/2021 at 13:59

    Having had some time to test various methods, I believe I am pretty close to a fully functioning driver script that can be used to interact with a predetermined set of spots or targets. I will upload the scripts in their current form for anybody to play around with. Mind, these are crude and the result of a very iterative process of hit and miss.

    The most advanced version uses the dot-grouping method, converting the points to Cartesian coordinates for later processing. The next step is to average each group into a single point and see if that point matches the pre-coded targets.
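
    As a rough illustration of that averaging and matching step (not the uploaded script itself; the target list and the 100 mm radius are example values):

    targets = [(500, 0), (400, 400), (0, 600)]   # pre-defined spots in mm (x, y)

    def match_target(group_x, group_y, radius=100):
        # average the group of Cartesian points into a single centre point
        cx = sum(group_x) / len(group_x)
        cy = sum(group_y) / len(group_y)
        for n, (tx, ty) in enumerate(targets):
            if ((cx - tx) ** 2 + (cy - ty) ** 2) ** 0.5 < radius:
                return n                         # index of the target that was hit
        return None                              # no target hit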

    Being mainly a desktop experiment at this point, the values for including or excluding points from groups, or for determining whether a point hits a target, are rather crude. For instance, in the grouping phase there is no accounting for angle vs. distance (a 3° window spans roughly 5 cm at 1 m but about 16 cm at 3 m). This should be easy to implement at a later stage. As soon as the Cartesian coordinates are available, this does not matter anymore.

    Finally, for a target to be activated, it has to be actively scanned 3 times. After that it performs an action and sets a flag. Every time an object is detected on that target, the detection counter is increased by 2 (for ugly bad-code-practice reasons!). The loop always subtracts 1 from the counter. When the counter reaches zero, another action can be performed to signal that the target is now empty.

    In the script, the action for activating a target is generating an uppercase letter for a front-end application event. When the target is deactivated, the same letter will be sent in lowercase. This means you can theoretically pre-define 26 spots. If spots are not blocking each other's line of sight to the LiDAR, the script can track multiple active targets for multiplayer purposes.
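
    The gist of that counter and letter scheme, written out as a small sketch (the real logic is in multiSpot.py in the files section; here I use pyautogui to emit the key events, which may differ from the uploaded script):

    import string
    import pyautogui

    NUM_TARGETS = 3                       # example; up to 26 letters are available
    counters = [0] * NUM_TARGETS          # detection counter per target
    active = [False] * NUM_TARGETS        # "occupied" flag per target

    def update_target(n, detected):
        counters[n] = max(0, counters[n] - 1)             # the loop always subtracts 1
        if detected:
            counters[n] += 2                              # every detection adds 2
        if not active[n] and counters[n] >= 3:
            active[n] = True
            pyautogui.write(string.ascii_uppercase[n])    # e.g. 'A' when occupied
        elif active[n] and counters[n] == 0:
            active[n] = False
            pyautogui.write(string.ascii_lowercase[n])    # e.g. 'a' when empty again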

    You can find the scripts in the file section of the project.

  • Multi touch maybe?

    Timescale - 10/03/2021 at 13:41

    Having toyed around with my LiDAR sensor for a week, I have some workable scripts that could be used for an input device. The first and simplest method is to have an array of locations. When the sensor picks up a signal from that general area, an action can be performed. In the example code, the nearest reading is used and checked against the array, so multiple spots cannot be triggered simultaneously.

    from rplidar import RPLidar

    lidar = RPLidar('/dev/ttyUSB0')            # adjust the port to your setup
    # pre-defined spots as (angle in degrees, distance in mm) - example values
    spots = [(315, 400), (0, 400), (45, 400)]

    for i, scan in enumerate(lidar.iter_scans()):
        ang = None
        dist = 10000
        for x in range(len(scan)):
            if scan[x][1] > 270 or scan[x][1] < 90:   # field of view
                if scan[x][2] < dist:
                    # new nearest point
                    ang = scan[x][1]
                    dist = round(scan[x][2])

        if ang is None:
            continue                                  # nothing seen in the field of view

        # Check array for matching points
        # print('Angle : %.2f' % ang + ' distance : %.2f' % dist)
        for curSpot, chkSpots in enumerate(spots):
            # check difference in degrees for all spots
            a = chkSpots[0] - ang
            angDiff = abs((a + 180) % 360 - 180)
            if angDiff < 3:
                # difference smaller than threshold, check distance
                if abs(dist - chkSpots[1]) < 25:
                    # spot is occupied, trigger code goes here
                    print("Trigger : " + str(curSpot))

    This method works pretty well at small and large scale. The trigger action could be anything from a mouse event to a keyboard action, whatever the front end of the project requires. This is of course a very crude example and could be refined in various ways. For example, in all the experiments I have not done any smoothing over time, where results from multiple scans are checked against past results and "smoothed" as it were. The current readouts can be quite jittery, but for simple interactions you basically need only one good scan to trigger something. Anomalous readouts when there is nothing to be scanned really do not occur.

    But what about generating X,Y coordinates? Using the first script as a base, that is also not that difficult.

        # calcAng and screenCenter come from the setup-specific bits left out here
        x = math.cos(math.radians(calcAng)) * dist
        y = math.sin(math.radians(calcAng)) * dist

        if dist < 750:  # limit so the cursor doesn't go crazy
            pyautogui.moveTo(screenCenter + round(x), round(y))

    This basically gives very rough, non-calibrated control of your mouse cursor and works remarkably well. I left a couple of bits out which are unique to my setup, quirks and mistakes. This is still based on a single point, the closest during one scan cycle, and of course due to it not being calibrated, it does not cover the entire screen. That should not be too hard to do, as pyautogui has easy functions to help with it.
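
    One way the calibration could look, mapping the physical scan area onto the full screen with pyautogui.size(); the area bounds in millimetres are made-up values and would have to be measured for a real setup:

    import pyautogui

    AREA_X = (-750, 750)     # physical x range in front of the sensor, in mm
    AREA_Y = (100, 1600)     # physical y range (distance from the sensor), in mm
    screen_w, screen_h = pyautogui.size()

    def to_screen(x_mm, y_mm):
        sx = (x_mm - AREA_X[0]) / (AREA_X[1] - AREA_X[0]) * screen_w
        sy = (y_mm - AREA_Y[0]) / (AREA_Y[1] - AREA_Y[0]) * screen_h
        # clamp so noisy readings can't push the cursor off screen
        sx = min(max(round(sx), 0), screen_w - 1)
        sy = min(max(round(sy), 0), screen_h - 1)
        return sx, sy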

    But it is quite obvious that this method would be fairly workable, for example where an interactive area reacts to onmouseover events in a browser.

    Multi touch

    But how far can this be pushed? Could it do a rough kind of multi touch for a quiz game or something like that? It is obvious that this sensor has hard limits apart from distance and certain environments. The most obvious one is that it cannot detect multiple objects if these objects obstruct each other.

    But with some imagination, there is potential here. If only it could detect or track, say, 4 points simultaneously. You could do a two-player game where the players' positions indicate answers to a multiple choice question. It could be anything really!

    Obviously the "nearest point" approach is not very useful here. We need to gather clusters of points and subsequently determine if these are the same object. The ranges I use are rather arbitrary in the examples, but they do work quite well at close range.

    for n in range(len(scan)):
        if scan[n][1] > 270 or scan[n][1] < 90:
            if scan[n][2] < rangeLimit:
                # point is within scanning distance
                ang = scan[n][1]
                dist = scan[n][2]
                if len(spotAng) - 1 != group:
                    # first point of this group detected, add it to the arrays
                    spotAng.append(ang)
                    spotDist.append(dist)
                    spotNumber.append(1)
                    calcAng = round(ang - 270)
                    x = math.cos(math.radians(calcAng)) * dist
                    y = math.sin(math.radians(calcAng)) * dist
                    spotX.append(x)
                    spotY.append(y)
                else:
                    # still in the same group.. check the angle
                    a = spotAng[group] - ang
                    angDiff = abs((a + 180...
    Read more »

  • Defining function

    Timescale - 09/27/2021 at 00:05

    Thinking about it some more, it is obvious that there are 3 types of possible baseline operations here: the HID function, the defined point function and of course a combination of those two. I'd opt to go from simple to complex, so I'll try the defined point function first.

    For the inevitable presentation I made a simple plan which shows the physical properties of all scenarios. It is a simple picture that is meant to focus on what is needed.

    The scan area simply represents the area from which data is acquired. The projection area is of course the image on which the interaction with the system is displayed; this can be interacted with via screen coordinates, but it could also hold defined spots.

    The outer boundary could also use coordinates, but will most likely use pre-defined locations that trigger the running application. The point is, these are different applications.

  • LiDAR as HID or trigger

    Timescale - 09/26/2021 at 14:53

    The typical use case for these sensors is robots, which are not stationary. I intend to have the sensor stationary and scan a certain area for activity. This activity is basically people standing in the "scan zone". The resulting output depends on the complexity and interactivity needed for the final project.

    I'm using the RPLidar project in Python on an RPi4 to process the data and generate the output. I tried getting it to work in CPython, but apparently the current library does not play nice with the current firmware on the sensor.

    The first task was to understand the data coming out of the sensor and into Python. Basically, the sensor provides a dump of measurements every rotation; for each measurement you get an angle and a distance. I can easily do all the necessary processing between packets.
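
    In its simplest form, reading that data with the rplidar library looks roughly like this (the serial port is whatever the sensor shows up as on your machine):

    from rplidar import RPLidar

    lidar = RPLidar('/dev/ttyUSB0')
    try:
        # iter_scans() yields one list of (quality, angle, distance) tuples per rotation
        for scan in lidar.iter_scans():
            for quality, angle, distance in scan:
                print('%.1f deg  %.0f mm' % (angle, distance))
    finally:
        lidar.stop()
        lidar.stop_motor()
        lidar.disconnect()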

    For a single user interface, the process is very simple. Find the nearest measurement. The following step is either to compare that data to a lookup table and see if that position is a trigger, or to convert the data into coordinates which could then be mapped onto the projection area as mouse coordinates.

    The Python script can also ignore the half of the measurements which are behind the area. In the final build, a shield behind the sensor will ensure that these angles won't even generate a measurement point, because it blocks the laser at a distance too short for the sensor to resolve.

    The script should also ignore distances that are too far away, leaving the maximum time to process the proper data.

