We've been spending a bit of time cleaning up the code in the git repository, refactoring common elements of different pieces into parent classes and generally trying to unify the interfaces between components. Going forward, we'd like to be able to mix and match the sensor sources and controllers, so hopefully this effort pays off.
We had success balancing the pole using the new vision system alone as the angle measurement! In general, the swing-up from the down position has gotten finicky with both the rotary encoder and the vision system. An interesting next step is to see if we can balance a double inverted pendulum, using the rotary encoder for the first pole and the camera for the second.
I've got a couple of posts up on my blog about a project to use PyTorch (an automatic differentiation and machine learning library for Python) for model predictive control. In model predictive control, you solve a trajectory optimization problem for every new command you give. In trajectory optimization, you set up the control problem as a big ole optimization problem.
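The receding-horizon loop described above can be sketched roughly like this. This is a toy illustration, not the project's actual code: `solve_trajectory` is a hypothetical stand-in for the real optimizer, and the dynamics are a one-dimensional integrator rather than the cart-pole.

```python
# Receding-horizon MPC skeleton: at every control step, re-solve a
# trajectory optimization from the current state and apply only the
# first command of the resulting plan.

def solve_trajectory(x, horizon=10):
    """Hypothetical placeholder optimizer: plan a constant control
    that pushes the state toward zero."""
    return [-0.5 * x] * horizon          # the "optimized" control sequence

def step_dynamics(x, u, dt=0.1):
    """Toy one-dimensional integrator standing in for the real plant."""
    return x + dt * u

def run_mpc(x0, n_steps=50):
    x = x0
    for _ in range(n_steps):
        plan = solve_trajectory(x)       # re-optimize at every step
        x = step_dynamics(x, plan[0])    # apply only the first command
    return x
```

The key structural point is that the plan is thrown away after one step and recomputed, which is what makes fast iteration on the optimizer so important.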
PyTorch is useful here because it is fast, and it gets us the gradients and Hessians we need with minimal fuss.
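As a minimal sketch of what "gradients with minimal fuss" looks like: write the rollout cost as ordinary PyTorch code, and autograd supplies the gradient for a descent step on the control sequence. The dynamics and cost here are toy stand-ins (a damped point mass), not the actual cart-pole model from the project.

```python
import torch

def rollout_cost(u, x0=1.0, dt=0.1):
    """Simulate the toy dynamics x' = -x + u forward under the control
    sequence u and accumulate a quadratic state + control cost."""
    x = torch.tensor(x0)
    cost = torch.tensor(0.0)
    for u_t in u:
        x = x + dt * (-x + u_t)            # Euler step of the toy dynamics
        cost = cost + x**2 + 0.1 * u_t**2  # penalize state error and effort
    return cost

u = torch.zeros(20, requires_grad=True)    # control sequence over the horizon
opt = torch.optim.Adam([u], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    cost = rollout_cost(u)
    cost.backward()                        # gradient of cost w.r.t. u, via autograd
    opt.step()
```

The same pattern extends to second-order methods, since `torch.autograd.functional.hessian` can differentiate the rollout twice.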
Although not entirely tested, this approach looks promising so far. The trajectories and forces look reasonable and the iteration speed seems acceptable.
Here are some of the latest blog posts about this: