After 14 years in aerospace, robotics, and drone development, I've worked across everything from mechanical design to software simulation and CI. But I often found myself focused on subsystems - writing test libraries, integrating sensors - without a deep picture of how it all works together.
One of the most interesting projects I contributed to involved autonomous vehicles, where perception, planning, and control had to work in sync. Still, the scale made it tough to trace the full pipeline from sensor input to real-world action.
So now, with some time and curiosity on my side, I'm building a low-cost robot that navigates my house - mapping it, avoiding obstacles, and recognizing objects. It's a hands-on way to explore how real robotic systems work, end to end. As the Top Gear crew would say: How hard can it be?
This project is both a personal deep dive and a learning tool. I’ll be using:
- A 4WD mecanum-wheeled chassis (for maneuverability and control complexity)
- A Jetson Nano (for high-level tasks like planning, perception, and SLAM)
- An Arduino (for low-level motor control and I/O - see the sketch after this list)
- A suite of sensors, including a camera, LIDAR, and a 9DOF IMU
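To make the Jetson/Arduino split concrete, here's a minimal sketch of what the link between them might look like on the Jetson side. Everything in it is an assumption for illustration - the serial port, baud rate, and the comma-separated command format are placeholders, not details from the actual repo - but it captures the idea: the Jetson decides wheel speeds, the Arduino just executes them and reports encoder ticks back.

```python
import serial  # pyserial

class MotorLink:
    """Thin wrapper around the serial link to the Arduino motor controller.

    Assumed (hypothetical) protocol: one ASCII line per command,
    "<fl>,<fr>,<rl>,<rr>\n", each value a signed PWM duty in [-255, 255].
    """

    def __init__(self, port="/dev/ttyUSB0", baud=115200):
        self.conn = serial.Serial(port, baud, timeout=0.1)

    def send_wheel_speeds(self, fl, fr, rl, rr):
        """Send one set of wheel commands to the Arduino."""
        self.conn.write(f"{fl},{fr},{rl},{rr}\n".encode("ascii"))

    def read_encoders(self):
        """Read back one line of encoder ticks, if the firmware reports them."""
        line = self.conn.readline().decode("ascii").strip()
        return [int(v) for v in line.split(",")] if line else None


if __name__ == "__main__":
    link = MotorLink()
    link.send_wheel_speeds(100, 100, 100, 100)  # drive straight forward
```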
The goal is to gradually build a fully autonomous indoor robot. Starting with sensor calibration and localization, I’ll move on to dead-reckoned driving, then introduce mapping (first with ultrasound, then LIDAR), and eventually add advanced sensor fusion and object detection. The final stretch will include a simple GUI for placing waypoints and visualizing maps, and hopefully some camera-based perception for detecting people and objects along the way.
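As a preview of the dead-reckoning step, here's a rough sketch of mecanum-wheel odometry: the standard forward kinematics mapping the four wheel speeds to body-frame velocities, then integrating those into a pose. The wheel radius, chassis dimensions, and wheel ordering are placeholder values, not measurements from my robot.

```python
import math
from dataclasses import dataclass

# Placeholder geometry - not measured from the actual chassis.
WHEEL_RADIUS = 0.03  # m
HALF_LENGTH = 0.10   # m, half the front-to-rear wheel separation
HALF_WIDTH = 0.12    # m, half the left-to-right wheel separation


@dataclass
class Pose:
    x: float = 0.0
    y: float = 0.0
    theta: float = 0.0  # heading, rad


def mecanum_forward_kinematics(w_fl, w_fr, w_rl, w_rr):
    """Wheel angular velocities (rad/s) -> body-frame (vx, vy, wz).

    Standard mecanum kinematics for an X-configured roller layout.
    """
    r = WHEEL_RADIUS
    k = HALF_LENGTH + HALF_WIDTH
    vx = r / 4.0 * (w_fl + w_fr + w_rl + w_rr)
    vy = r / 4.0 * (-w_fl + w_fr + w_rl - w_rr)
    wz = r / (4.0 * k) * (-w_fl + w_fr - w_rl + w_rr)
    return vx, vy, wz


def integrate_pose(pose, vx, vy, wz, dt):
    """Dead-reckon one time step: rotate body velocity into the world frame."""
    pose.x += (vx * math.cos(pose.theta) - vy * math.sin(pose.theta)) * dt
    pose.y += (vx * math.sin(pose.theta) + vy * math.cos(pose.theta)) * dt
    pose.theta += wz * dt
    return pose


if __name__ == "__main__":
    pose = Pose()
    # 10 rad/s on all wheels for 1 s at 50 Hz -> straight-line motion in x.
    for _ in range(50):
        vx, vy, wz = mecanum_forward_kinematics(10.0, 10.0, 10.0, 10.0)
        pose = integrate_pose(pose, vx, vy, wz, 0.02)
    print(f"x={pose.x:.3f} m, y={pose.y:.3f} m, theta={pose.theta:.3f} rad")
```

Once real encoders and the IMU are in the loop, this is exactly the estimate that sensor fusion will correct.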
And here's what makes this attempt stand out from before: I'm able to lean on ChatGPT as my trusty co-pilot - helping with quick debugging, script scaffolding, and advice on modules and architecture.
This is a project about exploration, learning, and building something meaningful - one subsystem at a time.
The code for the project can be found in a public repo on my GitHub: https://github.com/arteml8/robot_project