REDACTED - The First Fully Open Bipedal Robot

TLDR: REDACTED is the first bipedal robot with an actual fully open-source software AND hardware stack.

Almost all papers I've read promise to "eventually" publish the code that actually controls the robot. To my knowledge, there is still no project that allows a "quick start" for bipedal robot controllers.

This is why I developed REDACTED, an all-in-one software stack that is fully open source and allows any (Linux :) ) user to get a bipedal robot up and running within minutes. You can even control it with an interactive Web UI or a Playstation Controller!

I tried making the code base as flexible as possible, allowing changes to the leg design in a matter of seconds without having to recalculate everything by hand.

This will allow virtually anyone to get started with further development of such robots, experiment with them, or even get one walking in real life: I also provide drawings, a BOM and assembly instructions for a full bipedal robot design that can be controlled by the exact same software (still a work in progress).

Project choices and the mission

As stated in the summary, my goal is to provide an open solution to the bipedal walking problem, in order to give others a starting point and source of inspiration. This includes hardware designs, drawings, assembly plans, a BOM and the full software stack to make the robot walk. The repositories will be updated as I continue the project, so you will also have access to more advanced features / the full humanoid when I get to that part.

The original reason for starting to learn about humanoid robots was that I wanted to develop better and safer rescue forces for natural disasters or other situations where humans risk their lives to save others. This is the long-term goal and I'm still in the early stages, but I at least wanted to share my vision and motivation for this project. The bipedal platform I am currently developing will be expanded with an upper body soon, but I figured the walking part is already a project in and of itself.

As you will find out below, the project is split into Hardware and Software, both elaborated on in a mostly chronological manner. First, some explanations:

I did extensive planning of the hardware building phase, including assembly plans, drawings for all parts of the servo and simulations / FEA for all servo parts using Ansys Mechanical, as well as a detailed BOM for every part in the servo. This is also where you will find cost estimates, if you are wondering about that.

While quadrupeds are quite common / abundant at this point, bipedal robots are pretty rare. This means most of the literature is focused on quadrupedal robots, and frameworks like champ are designed for quadrupedal robots as a consequence.

Over time, I noticed a number of incredibly tricky issues (at least tricky to me and the almighty WPBack, who helped me for countless hours over Discord) that were not mentioned once in any of the papers, maybe because they only arise when working with significantly less stable two-legged robots, or because there is a difference between actual implementation and theoretical explanation. Still, it surprised me that there was no real resource discussing those issues and possible solutions.

Even though the code base became increasingly messy due to most time being spent on debugging, I chose to do what no one else seemed to want to do and open-sourced all repositories, thus providing others with one potential solution to the problems they might face. Here they are:

  • Main C++ code base for the real-time controller
  • Simulation part, containing Gazebo config files and plugins
  • Jupyter Notebook containing the calculations and experiments I made to learn MPC
  • Jupyter Notebook containing the full derivation of a 5-DOF serial leg

What I like about this is that anyone can take this code base and do whatever they want with it, and I think I would have liked that a lot when studying this, especially for the following reasons (in no particular order):

  • The robot being simulated has full CAD, drawings and assembly plans shipped with it, meaning anyone can replicate the design.
  • I tried parametrizing everything I could. Inertia, mass and distance along all 3 coordinate axes of each leg link, gait frequency, step height and length, walking height: all of this is variable in real-time or within a few seconds. The same applies to the equations of motion of the leg itself; they are kept fully modular within the Jupyter Notebook and allow for very quick adjustments. I have already used this to check which leg setup performs better. The web UI has sliders for most parameters that can be adjusted at runtime, allowing real-time control over the robot.
  • Complete freedom about where to go from here. Be it merely changing the leg parameters / kinematic structure or even rewriting controller portions, many interesting variations can arise. In my opinion, this is also very valuable because it gives "users" a chance...

  • Controlling the robot in real-time via a Web UI

    Loukas K., 09/26/2021 at 22:27

    Shortly after starting the project, I was already envisioning being able to control the robot in real-time via various input methods, one being a Web UI.

    After figuring out one bug after another, be it when turning in place, stepping sideways or walking circles, I started working out the details of how to implement such a Web UI. Thanks again to Daniel Berndt for helping me out with the HTML + JS implementation!

    The basic idea is to have a VPS acting as a webserver for both user and robot. This VPS serves the website and syncs the slider values across clients, which was obviously important to prevent confusion. It also handles the WebRTC screen sharing, which I use to show a live view of the robot on the website.

    The web UI clients communicate via a websocket and the VPS sends a number of desired states to the robot controller at regular intervals via multiple TCP sockets. 

    The controller parses those JSON objects using nlohmann/json and interpolates between the desired values from the web UI and the internal reference states for the MPC, to prevent abrupt value changes. The contact duration in particular was incredibly sensitive to on-the-fly changes. This worked relatively well during the first experiments, even without the interpolation; the client side was the most time-demanding part.

    What happens when you give users a web UI to control a robot in a simulation? Correct: someone is going to break it. Since the simulation is also quite sensitive, I spent a lot of time on a functional remote reset mechanism, i.e. being able to press a button in the web UI and have everything return to nominal. Unfortunately, the simulation really didn't like any of the ways I tried: numerical errors everywhere, causing the robot to explode even when nothing else was wrong.

    So I ended up with a janky solution: killing the simulation and the controller, restarting both, and moving the mouse to the "Start Screen Share" button in the browser. I was under a lot of time pressure, so forgive me for this one :)

    It was a lot of fun showing it to family and friends, so definitely worth it!

    Here is the related Github issue:

  • Fixing the robot's marathon performance

    Loukas K., 09/24/2021 at 10:51

    It was always important for me to test if the robot can walk indefinitely - or at least for hours on end - in the simulation, so after I got the robot walking for a few seconds, I kept trying longer runs to see what failed and how it failed. At some point, I got it to walk for about an hour, but then it would suddenly fail in very weird ways:

    What's more, it was failing at random points in time, sometimes after a few minutes, sometimes after over an hour... That led me to believe it was either multithreading-related (deadlocks etc.) or something at the OS level. After reverting to a number of earlier commits and experimenting with wrapper functions to make everything fully thread-safe (which was long overdue anyway), I was quite certain the threads were not the problem. So I started adding high_resolution_clock timers to each code block on every thread and logged all of that into the CSV files:

    Maybe you already noticed: there's a clear spike in "previous_logging_time", meaning the write operation must have taken over 250 ms! This made a lot of sense, and even though I never tracked down why that operation randomly took so much longer, I suspected it was something at the hardware-cache or OS level and moved on to the proper solution, which was, of course, asynchronous IO. The implementation is not very clean because I wasn't perfectly sure this was the actual issue, but I used this in the end:

    // From
    #ifndef ASYNC_LOGGER_H
    #define ASYNC_LOGGER_H

    #include <condition_variable>
    #include <fstream>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <streambuf>
    #include <string>
    #include <thread>
    #include <vector>

    // Stream buffer that hands completed chunks to a background thread,
    // so the real-time loop never blocks on disk writes.
    struct async_buf : std::streambuf {
        std::ofstream                 out;
        std::mutex                    mutex;
        std::condition_variable       condition;
        std::queue<std::vector<char>> queue;
        std::vector<char>             buffer;
        bool                          done;
        std::thread                   thread;

        void worker() {
            bool local_done(false);
            std::vector<char> buf;
            while (!local_done) {
                {
                    std::unique_lock<std::mutex> guard(this->mutex);
                    this->condition.wait(guard,
                                         [this](){ return !this->queue.empty()
                                                       || this->done; });
                    if (!this->queue.empty()) {
                        buf.swap(this->queue.front());
                        this->queue.pop();
                    }
                    local_done = this->queue.empty() && this->done;
                }
                if (!buf.empty()) {
                    out.write(buf.data(), std::streamsize(buf.size()));
                    buf.clear();
                }
            }
            out.flush();
        }

        async_buf(std::string const& name)
            : out(name)
            , buffer(128)
            , done(false)
            , thread(&async_buf::worker, this) {
            this->setp(this->buffer.data(),
                       this->buffer.data() + this->buffer.size() - 1);
        }

        ~async_buf() {
            std::cout << "Async logger destructor ran" << std::endl;
            std::unique_lock<std::mutex>(this->mutex), (this->done = true);
            this->condition.notify_one();
            this->thread.join();
        }

        int overflow(int c) {
            if (c != std::char_traits<char>::eof()) {
                *this->pptr() = std::char_traits<char>::to_char_type(c);
                this->pbump(1);
            }
            return this->sync() != -1
                ? std::char_traits<char>::not_eof(c): std::char_traits<char>::eof();
        }

        int sync() {
            if (this->pbase() != this->pptr()) {
                this->buffer.resize(std::size_t(this->pptr() - this->pbase()));
                {
                    std::unique_lock<std::mutex> guard(this->mutex);
                    this->queue.push(std::move(this->buffer));
                }
                this->condition.notify_one();
                this->buffer = std::vector<char>(128);
                this->setp(this->buffer.data(),
                           this->buffer.data() + this->buffer.size() - 1);
            }
            return 0;
        }
    };
    #endif // ASYNC_LOGGER_H

    Many thanks to the Stack Overflow fellow :)

    You can also read about this issue on my GitHub repo, where you will see I also checked whether the solver time was spiking instead of the loop around it, maybe because the Cartesian position values became increasingly large, but that was not the case:

    After fixing this issue with async IO, the robot was able to walk for over 4 hours, after which the limits of the simulation started to show... More on that in another...


  • The first major issue: Delay Compensation

    Loukas K., 09/24/2021 at 10:34

    After I had fully implemented MPC in a Jupyter Notebook using CasADi and the point mass state space model, I wanted to test the same model in the physics-based simulation environment Gazebo, using the Bullet physics engine.

    To do that, I basically rewrote the Jupyter Notebook in C++, and generated a file describing the formulated optimization problem using the CasADi export feature.

    I also made sure that the structure of my software is fully modular: the controller runs completely separately from Gazebo and only receives state information at regular intervals, then processes this state and sends back a control action, more precisely torque commands for each joint of the robot. The two currently communicate via a local UDP connection, which I will change to UNIX sockets in the near future.

    This allows me to switch back and forth between simulation and the real robot without changing anything about the controller, which was very important to me.

    So after I finished the Gazebo Plugin that sends state messages via a UDP socket and implemented a basic version of the MPC in C++, I tested the controller using a simple floating torso in Gazebo and used the API to directly apply forces in the world frame for debugging purposes.

    However, I quickly realized something was wrong because the torso kept "flying away" in the simulation, indicating wrong forces were being applied by the controller. 

    This went on for a while, and I started logging every value I could think of to a CSV file. I initially used GNU Octave to plot the CSV values, and after looking at some very conclusive plots, I started logging on both the controller side and the simulation side, suspecting the controller was working with state information that was too old. And there it was!

    There was a clear difference of about 15 ms between the latest logged simulation state and the state the controller was using for the MPC calculations, which meant outdated forces were being sent to Gazebo and the controller was "lagging behind" by about one time step. No wonder it was constantly exploding in the simulation.

    After trying a few approaches to fixing the problem, I settled on simply stepping the discretized model by one time step and using that "compensated" state for further MPC calculations:

    // Step the model one timestep and use the resulting state as the initial state for the solver. This compensates for the roughly 1 sample delay due to the solver time
    P_param.block<n,1>(0, 0) = step_discrete_model(x_t, u_t, r_x_left, r_x_right, r_y_left, r_y_right, r_z_left, r_z_right); 

    This resulted in the floating torso being stabilized in Gazebo, which was a great first step! I do have to mention, though, that a few other bugs were fixed while trying to find the root cause, so some sign errors were probably also part of the problem :)



flare wrote 09/27/2021 at 06:01 point

You are a beautiful person. Please continue being your amazing self. :)


rraetz wrote 09/26/2021 at 16:40 point

Amazing!! Thanks a lot for sharing your code, it's highly appreciated!


Lightning Phil wrote 09/24/2021 at 19:35 point

Awesome project! 

