CHIP updates: analyzing the walking of OpenDog to apply it to CHIP
This post is simply notes from the following video:
So James Bruton built an open-source robot dog he's calling the OpenDog project (mostly based on Arduino and such). It's a really cool project, but there are some issues I have with it as a development platform.
(1) The use of linear ball-screw actuators is, for me, too limiting, even though it makes the robot a little easier to control. I understand the massive reduction in torque needed for something like that compared to the design of CHIP - but I think it's too limiting for a robust development platform.
(2) Arduino is amazing - I love it as much as anyone - but the platform being solely based on Arduino will limit its autonomy capabilities (when it comes to navigating a house on its own, etc.).
Those are not complaints or criticisms of OpenDog - I'm merely making the point that the goals of CHIP and OpenDog are completely different. CHIP started out as just a project to make a robotic dog for fun (which it still is), but in the end we will release all the documentation on how to build it, so it may also be used as a robust development platform for different applications. Actual real-world applications we have yet to come up with.
That being said, I have no idea how the cost of OpenDog compares to CHIP's. I'm also not a fan of how many parts of OpenDog were 3D printed - but to each their own, and frankly, I LOVE what James Bruton has done with OpenDog even if it isn't the way I'd have designed it :) - again, not really criticizing, just highlighting the difference.
Anyways: OpenDog is about to be a huge help to us - we're going to use the project to learn about weight shifts and steps.
To be clear, this is walking without an IMU. Here's the plan:
(1) plan the feet positions
(2) build a step sequence
(3) move one foot at a time, while it's off the ground
(4) lean to take the weight off the stepping foot
So here's the trajectory being proposed...
The walking speed is set by the time difference between the vertical lines - that scales with how far the stick has been pushed forward.
So his recommendation is to generate a trapezoid trajectory where the divisions are multiples of the stick input. When the foot is placed back down, it's at the 0 POSITION - the foot doesn't follow the whole trapezoid; the feet just go up and down.
ALSO, WE MUST MAKE SURE THREE FEET ARE ON THE GROUND AT ANY POINT, WHICH MEANS THE SPEED OF A STEP IS 3X THE SPEED OF THE SHIFT WHILE ON THE GROUND.
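The timing constraint above can be sketched in code. This is a minimal illustration, not the video's actual implementation: the step length, lift height, and the convention of one gait cycle per unit of phase are all made-up values. Swing takes 1/4 of the cycle and stance 3/4, which is exactly what makes the step 3x the speed of the ground shift, and the foot lands back at the 0 position.

```python
STEP_LENGTH = 0.04     # metres the foot travels per cycle (assumed value)
STEP_HEIGHT = 0.02     # peak foot lift during swing (assumed value)
SWING_FRACTION = 0.25  # one leg in the air at a time -> swing = 1/4 of cycle

def foot_position(phase):
    """Return the (x, z) foot offset for a gait phase in [0, 1).

    Stance (3/4 of the cycle): the foot stays on the ground (z = 0) and
    slides backward at 1x speed, pushing the body forward.
    Swing (1/4 of the cycle): the foot lifts and returns forward at 3x
    speed, landing back at the 0 position so the body never jumps.
    """
    if phase < 1.0 - SWING_FRACTION:
        t = phase / (1.0 - SWING_FRACTION)  # 0..1 through stance
        return -STEP_LENGTH * t, 0.0
    t = (phase - (1.0 - SWING_FRACTION)) / SWING_FRACTION  # 0..1 through swing
    x = -STEP_LENGTH * (1.0 - t)
    z = STEP_HEIGHT * (1.0 - abs(2.0 * t - 1.0))  # simple triangular lift
    return x, z
```

Scaling the rate at which `phase` advances by the stick deflection gives the speed control described above.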
Here's the feet position sequence: 2 - 3 - 4 - 4 - 5 - 5 - 6 - 6 - 6 - 1, then it wraps. His technique makes the legs go in a right-right *lean* left-left pattern. We're going to do something different: right *lean* left *lean* and so on, moving like this:
FL - BR - FR - BL. Which might come out to something similar. Or maybe we should just program in his version first, then edit - yeah, let's do that.
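The FL - BR - FR - BL cycle with a lean before each step can be sketched as a simple loop. This is a hedged sketch: the leg names, the `lean`/`step` callbacks, and the lean directions are placeholder assumptions, not CHIP's actual API.

```python
STEP_ORDER = ["FL", "BR", "FR", "BL"]

# To take weight off a foot, lean toward the opposite side
# (assumed sign convention):
LEAN_AWAY = {"FL": "right", "BR": "left", "FR": "left", "BL": "right"}

def gait_cycle(lean, step):
    """Run one full cycle: lean away from each leg, then step it.

    `lean(side)` and `step(leg)` are caller-supplied callbacks; while one
    leg is in the air, the other three stay stationary (per the plan).
    """
    for leg in STEP_ORDER:
        lean(LEAN_AWAY[leg])  # the right *lean* left *lean* pattern
        step(leg)             # lift, move forward, set back down
    lean("center")
```

Passing print functions as the callbacks shows the lean/step interleaving, which makes it easy to compare this ordering against his right-right/left-left version later.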
So the strategy is: if one leg is off the ground, the other three stay stationary for now.
Okay, so we're definitely going to put the robot on the stand for this. Last time, to make the robot lean, we moved the y-axis of one side down and the y-axis of the other side up. We can't do that here - we need to actually make it lean. We need to change the Z POSITIONS OF THE LEGS by a little bit!
We're first going to generate leans, and then steps. It will walk a bit strangely, but for now I think that's okay. Baby steps (get it?).
So to start, we're going to go into Trajectory Generator and write a few functions, the first being a lean function that takes in a body angle and translates it to feet positions. Here's the math for that: it takes a roll angle from the joystick (or something) and makes the robot lean to match. We'll have to limit the max roll angle to something like 15 degrees because of the mechanical limitations of the system.
Here is the math, and here it is in code. All we need to do is map a joystick to the roll angle. NOTE - THIS IS A FEED-FORWARD SYSTEM THAT LETS US CONTROL THE LEG POSITIONS ASSUMING THE ROBOT IS STANDING. WE WILL MAKE THIS MUCH BETTER LATER WITH IMU FEEDBACK FOR ROLL, PITCH, AND YAW CONTROL. ALL THREE WILL NEED TO BE IMPLEMENTED FOR WALKING; FOR NOW WE ONLY NEED ROLL.
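A lean function along these lines can be sketched as follows. This is an assumption-laden illustration, not CHIP's actual code: the body width, the sign convention (positive roll raises one side and lowers the other), and the function name are all made up. The geometry is just dz = (width/2) * tan(roll), clamped to the 15-degree mechanical limit.

```python
import math

BODY_WIDTH = 0.20    # lateral distance between left and right feet (assumed)
MAX_ROLL_DEG = 15.0  # mechanical limit mentioned above

def lean_offsets(roll_deg):
    """Feed-forward lean: map a commanded roll angle to (left_dz, right_dz)
    foot Z offsets, assuming the robot is standing on flat ground.

    One side's feet rise and the other's drop by equal amounts so the body
    rolls about its centerline. Sign convention is an assumption.
    """
    roll_deg = max(-MAX_ROLL_DEG, min(MAX_ROLL_DEG, roll_deg))
    dz = (BODY_WIDTH / 2.0) * math.tan(math.radians(roll_deg))
    return -dz, dz
```

Mapping the joystick axis to `roll_deg` is then a one-line scale; the clamp keeps even a full stick deflection within the mechanical limit.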
Let's get to testing, stand up and make it roll in both directions and then go back to center.
Now, with the left and right pad buttons? No - we're using the POV stick!
So he leans! Now we can try to get to a walk, right?
next: forwards/backwards weight transfer.
Not so fast! We need to deal with forwards/backwards weight transfer first. Luckily, this is really easy: we just need to shift the legs in the x-direction a tiny bit. Here we go.
That's it! Now let's upload this, add the forward and backward pad buttons, and see what happens! **We changed the magnitude of the shift to 1/50 because 0.05 was too big, and the direction was also wrong - we need to move the feet back to shift the weight forward.**
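The tuning note above boils down to a tiny helper like this. The 1/50 magnitude and the flipped sign come straight from the text; the dict-of-offsets data layout and the function name are illustrative assumptions.

```python
SHIFT = 1.0 / 50.0  # 0.05 was too big

def shift_weight(feet_x, direction):
    """Return new foot x offsets for a weight transfer.

    direction=+1 shifts the weight FORWARD, which means the feet
    themselves move BACK (negative x) relative to the body.
    """
    return {leg: x - direction * SHIFT for leg, x in feet_x.items()}
```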
So two things come next. First, we need to set up button actions for what we have done so far.
The next thing is that we need to find the center of mass of the robot, or at least have the robot know where its center of mass is in the XZ plane at any time, so it can use the rule that if the COM is within the triangle formed by three feet, it can stand without the fourth foot. I also think the COM is way too far back for us to be able to walk yet. We need to calculate this first, or at the very least shift the COM forward.
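The three-feet stability rule is a standard point-in-triangle test once the COM is projected onto the ground plane. A minimal sketch (coordinates and function names are illustrative; this is the classic 2D cross-product sign method, not necessarily how we'll implement it on the robot):

```python
def _cross(o, a, b):
    """2D cross product of vectors OA and OB; the sign gives the turn direction."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def com_inside_triangle(com, f1, f2, f3):
    """True if the projected COM lies inside (or on the edge of) the
    support triangle formed by three grounded feet f1, f2, f3."""
    d1 = _cross(f1, f2, com)
    d2 = _cross(f2, f3, com)
    d3 = _cross(f3, f1, com)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)  # same sign everywhere = inside
```

Before lifting a foot, the walking code would check this against the other three feet - which is exactly where a COM that sits too far back would fail the test for the rear steps.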
Then, once it knows that, we need to write a walking program that positions the feet correctly for walking during standup itself. I think the standup angle might also be too low - even if the robot does get up and sit back down.
Ideally, we need an IMU that will help us control the yaw, pitch, and roll angles.
Maybe the next step is to set up the Jetson and NetworkTables so the only thing the RIO has to do is read trajectories and execute them - and send foot positions back?