ooohhh it's been a while, a new plan for chipv2 [updates]
Yes, it's been a while since I've written one of these. This past semester I've just been at home in NJ taking classes remotely. I haven't kept the notebook up because there wasn't much happening on my own personal projects worth mentioning. But to organize myself a little, here's what's cooking for January :).
This January I'll be home, so I want to take the time to really work out the kinks in chip and get it to a state where it is walking and doing things. We've tried many things over the months of this project:
static creep-gait walking, programming in all the moves by hand
using "CoG Motion Analysis" and other techniques to predict where the CoG of the platform moved and trying to compensate for it
many other random things contained in this notebook
Over the semester we tried a few more things that we didn't write about because time ran out, so we'll document them here now.
trying to add wheels.
The next thing we tried was to give up on "walking" and add rubber wheels to the bottom of the platform. To make this happen, we added a box with two motor controllers and an Arduino that communicated with the Jetson over pyFirmata.
With the wheels on the bottom of the robot, it moved forward and backward fine, but when it came to turning, the platform was useless. The differential drive didn't work, and the wheel system started to damage things, so we decided to go another route.
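For reference, here's the ideal differential-drive behavior we were counting on: equal wheel speeds drive straight, and opposite wheel speeds turn the platform in place. This is a pure-math sketch; the wheel radius and track width below are made-up numbers, not chip's measurements:

```python
# Ideal differential-drive kinematics: from left/right wheel speeds,
# compute the platform's forward speed and turn rate.

def diff_drive_velocity(omega_left, omega_right, wheel_radius, track_width):
    """Return (linear velocity, angular velocity) of the platform.

    omega_left / omega_right: wheel angular speeds in rad/s
    wheel_radius, track_width: meters (placeholder values below)
    """
    v_left = omega_left * wheel_radius
    v_right = omega_right * wheel_radius
    v = (v_right + v_left) / 2.0               # forward speed
    omega = (v_right - v_left) / track_width   # turn rate (CCW positive)
    return v, omega

# Opposite wheel speeds should turn the robot in place (v == 0):
v, omega = diff_drive_velocity(-5.0, 5.0, wheel_radius=0.04, track_width=0.3)
```

That turn-in-place case is exactly where the real platform fell apart.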
finding out about spot micro ai.
Over the semester I came across a project called Spot Micro AI. The developer, Deok-yeon Kim, built the platform to make legged robots cheaper and more accessible. We won't build a Spot Micro, but we can take some lessons from the project on how to approach controlling chipv2 without having to create an extremely difficult platform to model.
The Spot Micro AI project introduced me to the PyBullet physics simulator and its capabilities, and that got me wondering: is there a way to get chip to really understand its own behavior without going through ridiculous amounts of modeling effort? Can we use simulation to help the platform predict its current state, or its state under certain control inputs, so it can make decisions?
In other words, can we create a computer model + simulation that runs on the robot that allows the robot to physically understand itself better?
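To make the idea concrete, here's a toy sketch of what "simulation as a self-model" means: given a control input, step a physics model forward in time to predict the resulting state. The point-mass-with-drag dynamics below are a made-up placeholder; in the actual plan, PyBullet and the URDF would supply chip's real dynamics.

```python
# Conceptual sketch: predict the platform's future state from a control
# input by stepping a model forward (simple Euler integration). The 1D
# point-mass dynamics here are a placeholder, NOT chip's real dynamics.

def predict_state(position, velocity, force, mass=2.0, drag=0.5,
                  dt=0.01, steps=100):
    """Integrate a toy point-mass model and return (position, velocity)."""
    for _ in range(steps):
        accel = (force - drag * velocity) / mass
        velocity += accel * dt
        position += velocity * dt
    return position, velocity

# "If I apply 1 N for 1 second, where will I be and how fast will I go?"
pos, vel = predict_state(0.0, 0.0, force=1.0)
```

The robot could run predictions like this onboard, compare candidate control inputs, and pick the one whose predicted outcome it likes best.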
That's the plan for now. And here's a few more specifics:
Use a URDF to generate a compatible and accurate model of the Chip platform
Import this model into PyBullet, and use this to generate a simulation that is physically accurate
Apply inverse kinematics and dynamics, as well as the simulation and possibly machine learning to control the platform
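For the inverse-kinematics step in that plan, each leg can be treated as a 2-link planar chain and solved analytically. Here's a sketch of that idea; the link lengths and frame conventions are hypothetical placeholders, not chip's actual dimensions:

```python
import math

# Analytic inverse kinematics for a 2-link planar leg (hip + knee) -- the
# kind of per-leg IK we plan to apply. Link lengths are placeholders.

def leg_ik(x, y, l1=0.10, l2=0.10):
    """Return (hip_angle, knee_angle) in radians that place the foot at
    (x, y) in the hip frame (one of the two possible solutions)."""
    d2 = x * x + y * y
    d = math.sqrt(d2)
    if d > l1 + l2 + 1e-9 or d < abs(l1 - l2) - 1e-9:
        raise ValueError("foot target out of reach")
    # Law of cosines gives the knee angle directly
    cos_knee = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    knee = math.acos(max(-1.0, min(1.0, cos_knee)))
    # Hip angle: direction to the target, corrected for the bent knee
    hip = math.atan2(y, x) - math.atan2(l2 * math.sin(knee),
                                        l1 + l2 * math.cos(knee))
    return hip, knee

# Place the foot 10 cm forward and 10 cm below the hip:
hip, knee = leg_ik(0.10, -0.10)
```

PyBullet also has a built-in IK solver that works directly on the imported URDF model, which is part of why the URDF-first approach is appealing.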
Ew, we just said machine learning. I promise I wouldn't say it if we weren't actually going to use machine learning...
We've made some progress already! We found a website that lets us view URDF files online and play with the joints, which helps us check whether a URDF is working correctly. We also found many resources on creating URDFs and using them with PyBullet. The progress so far is here:
That's a preview of the white paper planned for this January. I've become a fan of these white papers recently because they're a nice way to concisely document my progress on a project and leave others something useful at the end. Documenting everything step-by-step also makes the work easy to repeat.
If you read the above preview you'll find the sources as well as some initial images from the URDF file testing...
As well as from the initial simulation itself...
Now, writing the paper doesn't mean we won't document in the notebook. But it does mean that documentation may be more complete in the paper than in the notebook, because it's simply easier to put everything in one place. That's the plan for January...
I'll make another post with one or two other small things I'll be doing for fun in January :)