
Multi touch maybe?

A project log for LiDAR as an input device

Experiments with using a 2D 360 LiDAR sensor as an input device.

Timescale · 10/03/2021 at 13:41 · 0 Comments

Having toyed around with my LiDAR sensor for a week, I have some workable scripts that could be used for an input device. The first and simplest method is to have an array of locations. When the sensor picks up a signal from that general area, an action can be performed. In the example code, only the nearest reading is used and checked against the array, so multiple spots cannot be triggered simultaneously.

from rplidar import RPLidar

# trigger spots as (angle in degrees, distance in mm) pairs; example values
spots = [(300, 500), (45, 800)]

# assuming the rplidar library, which yields (quality, angle, distance) tuples
lidar = RPLidar('/dev/ttyUSB0')  # adjust the port for your setup

for i, scan in enumerate(lidar.iter_scans()):
    ang = 0
    dist = 10000
    for x in range(len(scan)):
        if scan[x][1] > 270 or scan[x][1] < 90: # field of view
            if scan[x][2] < dist:
                # new nearest point
                ang = scan[x][1]
                dist = round(scan[x][2])

    # Check the spot array for a matching point
    # print('Angle : %.2f' % ang + ' distance : %.2f' % dist)
    curSpot = 0
    for chkSpot in spots:
        # difference in degrees, wrapped to the -180..180 range
        a = chkSpot[0] - ang
        angDiff = abs((a + 180) % 360 - 180)
        if angDiff < 3:
            # difference smaller than threshold, check the distance too
            if abs(dist - chkSpot[1]) < 25:
                # spot is occupied; trigger code goes here
                print("Trigger : " + str(curSpot))
        curSpot += 1

This method works pretty well at both small and large scale. The trigger action could be anything from a mouse event to a keyboard action, whatever the front end of the project requires. This is of course a very crude example and could be refined in various ways. For example, in all of the experiments I have not done any smoothing over time, where the results of multiple scans are checked against past results and "smoothed" as it were. The current readouts can be quite jittery, but for simple interactions you basically need only one good scan to trigger something, and anomalous readouts when there is nothing to be scanned really do not occur.
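To give an idea of what such smoothing could look like, here is a minimal sketch of a debounce layer on top of the spot check above. The DEBOUNCE constant, the triggerCount list and the update_spot helper are all hypothetical names, not part of the script:

# Hypothetical debounce layer: a spot only fires after it has been seen
# in DEBOUNCE consecutive scans, which filters out single jittery readings.
DEBOUNCE = 3
triggerCount = [0] * len(spots)

def update_spot(curSpot, occupied):
    # call once per scan for every spot; returns True on a debounced trigger
    if occupied:
        triggerCount[curSpot] += 1
        if triggerCount[curSpot] == DEBOUNCE:
            return True  # fire exactly once per occupation
    else:
        triggerCount[curSpot] = 0  # reset on any empty scan
    return False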

But what about generating X,Y coordinates? Using the first script as a base, that is also not that difficult.

    # requires: import math, import pyautogui
    # shift the angle so that straight ahead maps to 0 degrees
    calcAng = ang - 270
    x = math.cos(math.radians(calcAng)) * dist
    y = math.sin(math.radians(calcAng)) * dist

    if dist < 750: # limit not to make the cursor go crazy
        pyautogui.moveTo(screenCenter + round(x), round(y))

This basically gives a very rough and non-calibrated control of your mouse cursor, and it works remarkably well. I left a couple of bits out which are unique to my setup, quirks and mistakes. This is still based on a single point, the closest one during one scan cycle, and because it is not calibrated, it does not cover the entire screen. That should not be too hard to fix, as pyautogui has easy functions to help with that.
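For example, pyautogui.size() returns the screen resolution, so a minimal calibration sketch could look like this. The minX/maxX/minY/maxY values are hypothetical; you would measure them once by pointing at the corners of the interactive area:

# Hypothetical linear calibration from the measured area to the full screen.
# minX, maxX, minY, maxY are measured once by pointing at the area's corners.
screenW, screenH = pyautogui.size()
px = (x - minX) / (maxX - minX) * screenW
py = (y - minY) / (maxY - minY) * screenH
pyautogui.moveTo(round(px), round(py))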

But it is quite obvious that this method would already be fairly workable for something like an interactive area that reacts to onmouseover events in a browser.

Multi touch

But how far can this be pushed? Could it do a rough kind of multi touch for a quiz game or something like that? It is obvious that this sensor has hard limits apart from distance and certain environments. The most obvious one is that it cannot detect multiple objects if these objects obstruct each other.

But with some imagination, there is potential here. If only it could detect or track, say, 4 points simultaneously. You could do a two player game where each player's position indicates an answer to a multiple choice question. It could be anything really!

Obviously the "nearest point" approach is not very useful here. We need to gather clusters of points and subsequently determine whether they belong to the same object. The ranges I use in the examples are rather arbitrary, but they do work quite well at close range.

# per-group accumulators; each detected cluster gets one entry in every list
spotAng = []
spotDist = []
spotNumber = []
spotX = []
spotY = []
group = 0
rangeLimit = 1500 # mm; ignore anything further away (pick what suits the setup)

for x in range(len(scan)):
    if scan[x][1] > 270 or scan[x][1] < 90:
        if scan[x][2] < rangeLimit:
            # point is within scanning distance
            ang = scan[x][1]
            dist = scan[x][2]
            if len(spotAng) - 1 != group:
                # first point of a new group, add it to a fresh set of arrays
                spotAng.append(ang)
                spotDist.append(dist)
                spotNumber.append(1)
                calcAng = round(ang - 270)
                px = math.cos(math.radians(calcAng)) * dist
                py = math.sin(math.radians(calcAng)) * dist
                spotX.append(px)
                spotY.append(py)
            else:
                # possibly still in the same group, check the angle first
                a = spotAng[group] - ang
                angDiff = abs((a + 180) % 360 - 180)
                #print(angDiff)
                if angDiff < 3: # if the difference is less than 3 degrees
                    if abs(dist - spotDist[group]) < 40:
                        # distance is also within the same area, accumulate
                        spotAng[group] = ang
                        spotNumber[group] += 1
                        spotDist[group] = dist
                        calcAng = round(ang - 270)
                        px = math.cos(math.radians(calcAng)) * dist
                        py = math.sin(math.radians(calcAng)) * dist
                        spotX[group] += px
                        spotY[group] += py
                    else:
                        # same direction but different distance: start a new group
                        group += 1
                else:
                    # out of angular bounds: start a new group
                    group += 1

First, when a point is detected, it gets put into the first set of arrays. If subsequent points are within its distance and angle range, they get added to that set; if they are not, the next point gets a new set of arrays. This could be done a bit cleaner with a multidimensional array, but that is for another day.

Note that I also calculate the X and Y coordinates in this part and not the next. The reason for this is that I average the points per group, and eventually I want to do an extra check on the distance between these averaged results. That is easier to do with coordinates than with degrees, but for that to work without an extra loop, it has to be calculated here.
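That extra check is not in the script yet, but assuming the per-group sums have been averaged into centre coordinates (as in the next snippet), a sketch of it could look like this. The mergeDist threshold is a hypothetical value:

# Hypothetical post-check: merge averaged cluster centres that are closer
# together than mergeDist (in mm), since those are probably the same object.
mergeDist = 100
merged = []
for cx, cy in zip(spotX, spotY):
    for mx, my in merged:
        if math.hypot(cx - mx, cy - my) < mergeDist:
            break # close to an existing centre, treat it as the same object
    else:
        merged.append((cx, cy))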

The final part is just about where I'm at right now. There are various ways to use the collected information, but at the moment I just plot the results of the first group to the mouse cursor.

# average each group's accumulated coordinates; the x3 is a crude scale factor
for spot in range(len(spotNumber)):
    spotX[spot] = round(spotX[spot] / spotNumber[spot]) * 3
    spotY[spot] = round(spotY[spot] / spotNumber[spot]) * 3

if len(spotAng) > 0:
    print(spotX[0])
    if dist < 750:
        pyautogui.moveTo(560 + spotX[0], spotY[0] - 200) # offsets specific to my screen

pyautogui does not support multi input as far as I know, but with this simple test I have noticed that averaging and rounding the clusters of points makes for a pretty robust pointing device. If you print out len(spotNumber), you can see the number of points it has detected. It goes up to 8 without difficulty (as long as, of course, nothing is obstructing anything else and the points are not too close to each other).

You could also implement the spot detection method on top of this principle, and I think I will, because for a multi input coordinate system this will not work as is. For single input coordinate systems, however, this is far superior to the nearest point method.
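A sketch of what that combination could look like: check every averaged cluster centre against the trigger spots, so that several spots can fire in one scan. Here spotsXY is a hypothetical list of the trigger spots converted to the same X,Y coordinate system, and the 50 mm radius is an arbitrary threshold:

# Hypothetical spot detection on cluster centres instead of the nearest point.
# spotsXY holds the trigger spots as (x, y) pairs in the same coordinate system.
for curSpot, (sx, sy) in enumerate(spotsXY):
    for cx, cy in zip(spotX, spotY):
        if math.hypot(cx - sx, cy - sy) < 50: # occupied within a 50 mm radius
            print("Trigger : " + str(curSpot))
            break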

For single input, this method also has the option to detect interference and ignore it. Say somebody is occupying a spot and somebody else walks into the area. In this example, that person could now take over the first position depending on where they stepped in, but some simple checks could detect this and ignore the second input.
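One such check, as a minimal sketch: keep the last known position and only follow the cluster closest to it, rejecting clusters that appear too far away. lastX/lastY and maxJump are hypothetical names:

# Hypothetical interference check: instead of always taking group 0, follow
# the cluster nearest to the last known position and ignore everything else.
# lastX, lastY hold the previously accepted position.
maxJump = 150 # mm; the furthest the tracked person plausibly moves per scan
best = None
for cx, cy in zip(spotX, spotY):
    d = math.hypot(cx - lastX, cy - lastY)
    if d < maxJump and (best is None or d < best[0]):
        best = (d, cx, cy)
if best is not None:
    lastX, lastY = best[1], best[2] # the original target keeps control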

That concludes my experiments so far with the LiDAR data and how it could be implemented as a HID. For the next log, I'd like to have advanced features like smoothing, intrusion detection and calibration working, and at that point I'll probably upload some of the Python files in their entirety.
