# chip updates: imagining a neural net that could teach chip to walk [just ideas]

So the 2.12 lecture on ML got me thinking again: how would we design a neural network that would allow CHIP to learn to walk? Here are the beginnings:

First, the network would take in 22 parameters and output 12. Five inputs are the current IMU data: roll, pitch, yaw, forward velocity, and rotational velocity. Five more are the desired values of those same quantities. The remaining 12 inputs are the current leg positions in xyz space (four legs, three coordinates each).
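As a sketch, the input layout above could be packed into a single vector like this (the helper name and ordering are my own assumptions; only the 5 + 5 + 12 = 22 split comes from the design):

```python
import numpy as np

# Hypothetical layout of the 22-element input vector: current IMU state,
# desired IMU state, then the 12 foot coordinates (4 legs x xyz).
def build_input(current_imu, desired_imu, foot_positions):
    """current_imu / desired_imu: (5,) arrays of roll, pitch, yaw,
    forward velocity, rotational velocity; foot_positions: (4, 3)."""
    x = np.concatenate([current_imu, desired_imu, foot_positions.ravel()])
    assert x.shape == (22,)
    return x

x = build_input(np.zeros(5), np.zeros(5), np.zeros((4, 3)))
```

Keeping the groups contiguous like this makes the "local" wiring of the early layers easy to express later as index ranges.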

The output is, at each timestep, the 12 desired xyz foot positions, which get limited and then sent to the platform. During training, the IMU data would be read, compared against the desired values, and that error would somehow drive the weight updates of the network?
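The "get limited" step could just be a per-axis clamp before the commands go out. A minimal sketch, where the workspace limits are placeholder numbers I made up, not CHIP's real leg geometry:

```python
import numpy as np

# Hypothetical per-axis workspace limits in meters -- real values would come
# from CHIP's leg kinematics.
LO = np.array([-0.08, -0.05, -0.25])   # x, y, z lower bounds
HI = np.array([ 0.08,  0.05, -0.10])   # x, y, z upper bounds

def limit_outputs(raw):
    """Clamp the 12 raw network outputs (4 legs x xyz) to the workspace."""
    feet = raw.reshape(4, 3)
    return np.clip(feet, LO, HI).reshape(12)
```

The clamp keeps a half-trained network from commanding foot positions the legs can't physically reach.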

I don't know if this is even valid; it's a quick sketch.

Here's an example design for the first layer. It connects the current RPY, the desired RPY, the current FV/RV, the desired FV/RV, and ONE of the current positions (like front-left x, FLX). This is the most local kind of feedback controller: it asks "how do these values indicate what the front-left leg's x value should be?" or something similar.
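That local wiring can be expressed as a binary connectivity mask over an ordinary dense weight matrix: every first-layer unit sees all 10 IMU inputs but only one of the 12 position inputs. A sketch (one unit per position here; a real layer might use several units per group):

```python
import numpy as np

def layer1_mask(units_per_pos=1):
    """Mask for a 'local' first layer over the 22 inputs:
    columns 0-9 are the shared IMU inputs (current + desired),
    columns 10-21 are the 12 foot coordinates, one per unit group."""
    mask = np.zeros((12 * units_per_pos, 22))
    mask[:, :10] = 1.0                                  # shared IMU inputs
    for i in range(12):
        rows = slice(i * units_per_pos, (i + 1) * units_per_pos)
        mask[rows, 10 + i] = 1.0                        # the one local input
    return mask

# Multiplying weights by the mask zeroes every forbidden connection.
W1 = np.random.randn(12, 22) * layer1_mask()
```

Applying the mask after every gradient step (or to the gradients themselves) would keep the zeroed connections at zero during training.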

Now an attempted drawing of layer 2 of the network. The layer after the first is also local: it combines the FLX, FLY, and FLZ units into what is almost a local "leg" model, and likewise for the other three legs.
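This per-leg grouping amounts to a block-sparse layer: four independent 3-in blocks rather than one dense 12-wide layer. A sketch, where the per-leg hidden size H and the tanh activation are my own choices:

```python
import numpy as np

def leg_blocks(h1, W_leg, b_leg):
    """Per-leg second layer.
    h1: (12,) layer-1 activations, ordered FL, FR, BL, BR (xyz each);
    W_leg: (4, H, 3) one small weight block per leg; b_leg: (4, H)."""
    legs = h1.reshape(4, 3)                                # xyz triples
    # einsum 'lhj,lj->lh': each leg l mixes only its own 3 activations
    return np.tanh(np.einsum("lhj,lj->lh", W_leg, legs) + b_leg).ravel()
```

Each block can only ever see its own leg's signals, which is the "local leg model" idea made explicit in the weight shapes.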

The layers after that would be fully connected (maybe two fully connected layers), followed by the output layer of the network, which produces the 12 output commands: FLX, FLY, FLZ, and so on...
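The tail of the network might look like this: two fully connected hidden layers, then a linear output layer producing the 12 foot-position commands. The input width of 16 (four legs times a guessed per-leg hidden size of 4) and the hidden sizes of 32 are my own assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two fully connected layers plus a linear output head; all sizes guessed.
W3, b3 = rng.standard_normal((32, 16)), np.zeros(32)
W4, b4 = rng.standard_normal((32, 32)), np.zeros(32)
Wo, bo = rng.standard_normal((12, 32)), np.zeros(12)

def head(h2):
    """h2: (16,) concatenated per-leg features from layer 2."""
    h3 = np.tanh(W3 @ h2 + b3)
    h4 = np.tanh(W4 @ h3 + b4)
    return Wo @ h4 + bo          # raw FLX, FLY, FLZ, ... commands

y = head(rng.standard_normal(16))
```

The output layer is deliberately linear, since the foot commands are clamped to the workspace afterwards anyway.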

I don't know if this is how you design a network, but what I tried to do is group the variables that make sense together initially, and then resort to a fully connected net later, like a CNN would do.

I doubt I'll get to trying this anytime soon. The energy required to train something like this is immense. But a good design exercise. We'll be using traditional control for CHIP.