
Pi Zero 2 Robot Navigation Head

(it's a gimbal) with OpenCV/range scanning and IMU for navigating wireless robot platforms by API

Previous name: Floating Navigation Sensor Assembly (FNSA)

This uses a combination of OpenCV and depth probing via short- and long-range ToF sensors/lidar. The entire sensor assembly is designed to pan/tilt around an IMU, and the IMU determines the sensor plane's attitude in 3D space.

This navigation unit then wirelessly operates a robot via a websocket API (an ESP8266-based robot).

I abandoned the previous version because the physical design was bad and it used a Pi Zero W 1, whereas this one uses a Zero 2, which means quad-core/faster OpenCV processing. The IMU is also "better" for positioning (hopefully) than relying on the servos alone and fighting stiff wires.

I call it floating because it does not use slip rings; it's all self-contained and separate from the robot's motion system.

List of sensors

  • MPU9250 (was MPU6050)
  • VL53L0X ToF sensor
  • TFmini-S lidar
  • 8MP Raspberry Pi V2 camera module

Current navigation plan

  • image segmentation/blob centroid distance finding
  • tracking blobs as basic cubes relative to the estimated robot position from the IMU

Battery life

  • 6hrs for the FNSA

Idea:

Align the pan axis via a marker: the camera visually aligns itself by looking down at its chin, then centering on the marker.

Weight is 5.9 oz before trimming the rat's nest of wires. That's just the internal electronics assembly, not including the outer chassis.

  • Talkin' bout my SLAM vision

    Jacob David C Cunningham, 01/14/2023 at 23:26

    Another ego grab but making progress.

    This time we're going back to algebra days: finding the intersection of two lines, yay.
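
    For reference, a minimal sketch of that intersection math in Python (a hypothetical helper, not the project's actual code), with lines given as y = m*x + b:

    ```python
    def line_intersection(m1, b1, m2, b2):
        """Intersection of y = m1*x + b1 and y = m2*x + b2, or None if parallel."""
        if m1 == m2:
            return None  # parallel (or identical) lines never cross once
        x = (b2 - b1) / (m1 - m2)
        return (x, m1 * x + b1)

    # e.g. y = 2x + 1 and y = -x + 4 meet at (1.0, 3.0)
    print(line_intersection(2, 1, -1, 4))
    ```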

  • Vision Progress

    Jacob David C Cunningham, 12/27/2022 at 00:12

    It's slow but coming along

  • Panoramic vibes

    Jacob David C Cunningham, 12/20/2022 at 05:21

    My progress has greatly slowed: full-time job, and I've been slow/down lately.

    Still on the panorama, but I discovered that stitching is built into cv2, so I went ahead with that.

    It's much better than anything I can come up with right now.

    So at this point I am able to produce a panorama from 15 images and then crop it.
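
    For the record, the built-in path is roughly this (a minimal sketch; the file names are assumptions, not my actual script):

    ```python
    import cv2
    import glob

    # Load the captured frames (hypothetical file names)
    images = [cv2.imread(p) for p in sorted(glob.glob("pan_*.jpg"))]

    # cv2's built-in stitcher handles feature matching and blending
    stitcher = cv2.Stitcher.create(cv2.Stitcher_PANORAMA)
    status, pano = stitcher.stitch(images)

    if status == cv2.Stitcher_OK:
        cv2.imwrite("panorama.jpg", pano)
    else:
        print("stitching failed, status:", status)
    ```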

    I put a red dot on the middle/level image for reference so that the beam pointing is correct.

    Next I'll do the blob stuff/pair with IMU for navigation. This will take time but I have some holiday time coming up.

    You can see some artifacts from the stitching (blurry areas).

    The purple dots are from the lidar.

    Original panorama looks like this, lots of empty areas

    Then when you align the red dot for center of image/pointing, looks like this

    I'm still working on rough FOV dimensions

  • Panorama fail

    Jacob David C Cunningham, 12/05/2022 at 05:49

    Trying to generate panoramas so I have a wider FOV

  • Chisel counter weight

    Jacob David C Cunningham, 11/20/2022 at 04:12

    Today I made this manual web control; it's building towards the web interface that will monitor the telemetry and decision making of the autonomous part.

    It was a good experience because it exposed some problems: bad code, and how tiny the camera's FOV potentially is... which means capturing multiple images and probing before being able to decide where to go.

    It's also nice being able to see where the Lidar beam is on the image since you see this purple pattern.

    I also realized how bad of a design this thing is (top heavy), and that the servos stop too abruptly, causing vibration.

    I did print mounts so the head can be attached to the robot securely-ish (some slop).

    Without the chisel (yellow bar) in the back it is very tipsy.

  • Threads

    Jacob David C Cunningham, 10/30/2022 at 03:44

    Ahh... gotta love Python and an OP computer

    I did some work on this project... I would have liked to continue working on it the next day but I have some personal tools I'd like to work on.

    Anyway I'm starting to form "the system". At least the next bit will be the websocket thread that is shared by different functions.

    I added a thread that samples the IMU; I'm still figuring out how to sample it for displacement so that it's accurate/matches reality.
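
    A minimal sketch of that sampler thread (read_imu here is a stand-in, not the real MPU9250 code):

    ```python
    import threading
    import time
    from collections import deque

    imu_samples = deque(maxlen=1000)  # shared buffer of recent readings
    stop = threading.Event()

    def read_imu():
        return (0.0, 0.0, 0.0)  # stand-in for the real accel/gyro read

    def imu_sampler(hz=100):
        """Poll the IMU at a fixed rate, append (timestamp, reading) pairs."""
        period = 1.0 / hz
        while not stop.is_set():
            imu_samples.append((time.time(), read_imu()))
            time.sleep(period)

    threading.Thread(target=imu_sampler, daemon=True).start()
    ```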

  • Vision, what is it good for?

    Jacob David C Cunningham, 08/21/2022 at 23:59

    This is a small update, I have to stop working on this for a couple of weeks to learn something else so wanted to post what I worked on.

    I started to work on the image processing aspect. I'm building on/extending what I started working on in the past.

    In this case I was just trying to isolate this monitor stand and figure out how far away it was with the depth probes.

    One trick I'm using with regard to FOV/perspective is bringing physical dimensions into CAD.

    I know that the sheet of paper is 8.5" x 11" in reality so you scale the imported image by those dimensions in SketchUp and it matches in scale with the modeled sensors.

    It is hard to figure out the angle something might be at due to perspective but I should be able to come up with some proportion.

    Anyway, what I'm doing is, I believe, "image segmentation": using blobs of color to find items of interest, then determining where each group is in 3D space.

    So here is a mask applied to try to find all the groups of black pixels. Note this photo was taken by the onboard camera, not like the one above, which was taken by a phone. A mask has also been applied below.

    Then I'm trying to group those black patches of pixels. My method is not good right now so I need to work on it some more.

    Here is a quadrant that was sampled well (green is the centroid)

    Then here is one quadrant that was not sampled well. This is using contours (largest closed grouping of blue).

    It missed the entire massive black area. So I'll improve this part.
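
    For context, the mask/contour/centroid pipeline is roughly this (a sketch; the threshold and file name are assumptions):

    ```python
    import cv2

    img = cv2.imread("frame.jpg")  # hypothetical onboard camera frame
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Mask of "black" pixels: anything darker than the threshold
    _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)

    # Group the black patches into contours, then take centroids
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 100:
            continue  # skip tiny noise blobs
        m = cv2.moments(c)
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        cv2.circle(img, (cx, cy), 4, (0, 255, 0), -1)  # green centroid dot
    ```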

    Once I can accurately find the depth of things so it matches reality (lots of testing needed), it's pretty easy to navigate around them. Then the onboard IMU will track the robot's location and store the positions of objects it found. It's all crude, but it's part of learning.

    Pointing servos

    At this time I have not worked out a function that takes an angle in degrees and points the sensor plane. It's possible that the microseconds supplied to the pulse_width function map one-to-one... no, they don't. I read somewhere that 1500 µs is the center of a servo, and I had to rotate 16 degrees, which turned out to be around 1640 in my case, though my center is closer to 1460. I think it's just coincidence that the numbers almost match, e.g. 16 -> 1640.
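
    A first-pass linear mapping from the numbers above (the constants would need proper calibration):

    ```python
    CENTER_US = 1460    # measured center pulse width (microseconds)
    US_PER_DEG = 11.25  # (1640 - 1460) / 16 degrees, from the test above

    def degrees_to_pulse_width(deg):
        """Map an angle (degrees from center) to a servo pulse width in us."""
        return int(CENTER_US + deg * US_PER_DEG)

    print(degrees_to_pulse_width(16))  # -> 1640, matching the measurement
    ```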

    But I have to figure that out. The other thing I realized: since the two range sensors (ToF/lidar) are parallel to each other but offset, when they point in a direction, the one closer to that direction will have a shorter path... so I'll have to offset those measurements; see the sketch below.
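
    One way to handle it, assuming each sensor's mounting offset from the pan axis is measured (a sketch, not tested on the hardware):

    ```python
    import math

    def beam_point(range_m, pan_deg, forward_m, lateral_m):
        """Turn a range reading into an (x, y) point relative to the pan axis.

        Rotate the sensor's mounting offset by the pan angle, then extend
        along the beam direction by the measured range.
        """
        th = math.radians(pan_deg)
        sx = forward_m * math.cos(th) - lateral_m * math.sin(th)
        sy = forward_m * math.sin(th) + lateral_m * math.cos(th)
        return (sx + range_m * math.cos(th), sy + range_m * math.sin(th))
    ```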

    TL;DR is there's still a lot of work to do.

  • Systems online

    Jacob David C Cunningham, 08/18/2022 at 01:03

    This is a "nothing obvious" progress update.

    The next one will address:

    • IMU feedback for leveling/motion/aiming
    • Actual image segmentation with Python
    • Actual motion tracking/navigation
    • Web interface to get telemetry

    I'll have to get all that done later because I have to switch gears and learn something else for work/hackathon.

    Anyway, at this point I have successfully gotten all the stuff talking. I'm using "class-based architecture", or OOP if you can call it that. Really I'm just making this thing up as I go along; I have pretty weak experience with OOP.

    Top down the robot code is like this:

    NavUnit

    • boot
    • motion (talk to WiFi buggy by websocket)
    • sensors (camera, ToF, lidar, IMU)
    • the state/navigation

    There's also a web interface that will get data from the nav unit.
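
    A rough skeleton of that layout (method names are placeholders based on the list above):

    ```python
    class NavUnit:
        """Top-level brain: boot, motion, sensors, navigation state."""

        def boot(self):
            ...  # start threads, check sensors, open the websocket

        def move(self, command):
            ...  # talk to the WiFi buggy over the websocket

        def read_sensors(self):
            ...  # camera, ToF, lidar, IMU

        def navigate(self):
            ...  # navigation/state loop: decide where to go next
    ```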

    I mostly made this update for the video since the individual video parts are very long/not something I can just sample 10 seconds of.

  • We three... we're all alone

    Jacob David C Cunningham, 08/13/2022 at 18:23

    Nothing like coffee and programming a severed robot head. It is pretty cool wirelessly programming it as opposed to Arduino.

    So I'm still cruising along writing small snippets for things I need before I put together the whole "system", which I still don't have a concrete plan for yet.

    So far I have:

    • figured out servo commands
    • bi-directional websockets (sketch after this list)
    • imu interface
    • camera photo
    • opencv running
    • wifi buggy with commands to move it
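
    On the websocket part, a minimal client sketch using the websockets library (the address and command strings are hypothetical):

    ```python
    import asyncio
    import websockets

    async def drive(command):
        # the ESP8266 buggy listens on a websocket (address is made up here)
        async with websockets.connect("ws://192.168.1.50:81") as ws:
            await ws.send(command)   # e.g. "forward", "left", "stop"
            print(await ws.recv())   # buggy acks the command

    asyncio.run(drive("forward"))
    ```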

    There's still a lot to do, namely the actual SLAM/navigation part.

    Then I'll just refine those parts over time/have a web interface that shows the progress of it navigating/what it mapped.

    So I think next update will be first crude navigation system with running processes (lies Aug 17, 2022).

  • Filthy quitter

    Jacob David C Cunningham, 08/10/2022 at 02:27

    Well... I didn't want to screw around with learning SPI and figuring out how to write a driver specifically for the Seeeduino... so I massacred my boi and put an MPU9250 in there. It didn't quite fit, so I had to cut some plastic out. I'll update the STL/designs for this change. This also means the center line of the IMU is not exact... but compounding sources of error... (shrug). It's still a plane, so the tilt should be valid. It's also off for panning, but I can fix all this with a software offset. What's not great is that the connection is not guaranteed right away, so idk... ugh. It sucks... need to build better hardware.

    Yeah, it's odd... it's not there on boot and takes a few tries, but once it gets going it keeps running, which is the part I care about most.

    Anyway, I can move forward with this. I'll write some startup try/except checks to make sure everything's running; something like the sketch below.
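
    Something like this retry wrapper is what I have in mind (init_imu is a stand-in for the actual MPU9250 setup):

    ```python
    import time

    def init_with_retry(init_fn, name, attempts=5, delay_s=1.0):
        """Try to bring a device up a few times before giving up."""
        for attempt in range(1, attempts + 1):
            try:
                return init_fn()
            except OSError as e:  # I2C devices often raise OSError when absent
                print(f"{name} init failed ({attempt}/{attempts}): {e}")
                time.sleep(delay_s)
        raise RuntimeError(f"{name} never came up")

    # imu = init_with_retry(init_imu, "MPU9250")  # init_imu is hypothetical
    ```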

