chip updates: writing a few new libraries for the platform [updates]
So, as we saw yesterday, there's still a little that needs to be done on Chip before we can think about walking. Stand/sit is still an issue, and the IK works, but we can't just use the IK on its own; we need to define a few things first, and here they are:
Add wait/delay to the trajRunner class
It would be nice if the trajectory runner class had a "wait" or "delay" of some sort that paused for a number of seconds in between waypoints before continuing, but without disrupting the rest of the system.
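One non-blocking way to do that (a sketch with assumed names, not the actual trajRunner code) is to timestamp the moment a waypoint is reached and only advance once the delay has elapsed, so the control loop never has to sleep:

```python
import time

class TrajRunner:
    """Sketch: plays waypoints with an optional per-waypoint delay,
    implemented non-blockingly so the rest of the system keeps running."""

    def __init__(self, waypoints, delay=0.0, clock=time.monotonic):
        self.waypoints = list(waypoints)
        self.delay = delay          # seconds to hold at each reached waypoint
        self.clock = clock          # injectable for testing
        self._i = 0
        self._reached_at = None     # timestamp when current waypoint was hit

    def step(self, reached_current):
        """Call every control tick; returns the waypoint to command,
        or None when the trajectory is finished."""
        if self._i >= len(self.waypoints):
            return None
        if reached_current:
            if self._reached_at is None:
                self._reached_at = self.clock()   # start the hold timer
            if self.clock() - self._reached_at >= self.delay:
                self._i += 1                      # delay elapsed: advance
                self._reached_at = None
        if self._i >= len(self.waypoints):
            return None
        return self.waypoints[self._i]
```

The delay lives inside the per-tick `step()` call instead of a `time.sleep()`, which is what keeps it from disrupting anything else running in the loop.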
Create a new class called "MOTIONS" and put trajectories in there
It's hard to keep typing out all of these random trajectories in the different places we need them. Instead, we can make a new class called MOTIONS that generates the trajectories for us; it can also import libIK.py.
The idea is that Chip has "motions" it is capable of, which we can "install" by upgrading the library and such.
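A minimal sketch of what that class could look like. The waypoint numbers and the IK stub here are placeholders; in the real thing the solver would come from libIK.py:

```python
def ik_stub(x, y, z):
    """Placeholder for libIK's solver; the real class would import libIK
    and call its IK function here."""
    return (x, y, z)

class Motions:
    """Catalogue of named body motions, each a list of (x, y, z) body
    targets that get run through IK to produce joint-space trajectories."""

    def __init__(self, ik=ik_stub):
        self.ik = ik
        self._library = {
            # illustrative heights only, not the robot's real numbers
            "stand": [(0.0, 0.0, 0.4), (0.0, 0.0, 0.7)],
            "sit":   [(0.0, 0.0, 0.7), (0.0, 0.0, 0.3)],
        }

    def trajectory(self, name):
        """Return the joint-space trajectory for a named motion."""
        return [self.ik(*wp) for wp in self._library[name]]

    def install(self, name, waypoints):
        """'Install' a new motion, matching the upgrade-the-library idea."""
        self._library[name] = list(waypoints)
```

With this, every place that needs a stand trajectory just asks `Motions().trajectory("stand")` instead of retyping waypoints.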
Test the IK on the simulator vs in real life
We're noticing that the IK on the actual robot seems to have issues with the magnitude of the commands. For example, a 0.5 m command puts the body much lower than we expect, so we want to check whether the same thing happens in the simulator.
We'll do the bullets in reverse order as that seems to be the "most" to "least" important order.
So here is the Z=0.7 command for the simulator. We see it's in a vaguely appropriate standing position. And now on the actual platform itself:
This stands lower than the simulator and I'm not sure why, which means we should take a look at the code and the drivers that convert one to the other. Other things it could be:
Slop in the system eating a few degrees, which would account for the missing height
The zero offset of the system: we might be 8 rotations off in either direction since we're using the trajectory runner, so the zero for the platform and the zero for the IK might not be the same.
It could also just be the "zeroing" position of the robot. Maybe we're off when we're physically zeroing the platform.
At the very least, we want the simulation to match real life, even if neither quite matches the actual physical lengths/coordinate system.
Before we get there, though, we forgot to do something: secure the feet of the robot properly so they don't keep falling off.
We're going to make this attachment very simple - just using Duct Tape (the thick kind)
The other thing we're thinking now is that Trajectory Runner is a real pain and we should avoid using it if we can.
The problem with the trajectory runner is that the platform really has to "wait" at each point while the checker verifies that the point has been reached before moving on to the next one.
This causes very "jittery" motions of the platform, and we want them to be smoother. For now, let's get rid of the trajectory runner while we're at it.
Look how much better the "response time" is here!
Now, back to the point of matching the simulation with real life. We've removed the extra actuation controller, we're leaning toward dropping the trajectory system, and we've secured the feet. The next step is to fix this weird height offset issue we're seeing: the fact that the platform "stands" lower in real life than in the simulator.
Now, the first thing we're going to change in the simulation is the robot's joints, because right now they aren't super accurate. Instead of torque control, we're going to use POSITION_CONTROL with the maxVelocity parameter. We know the maximum speed of our actuator at full output is 5 m/s, but we've limited that output to 0.145x, so we can set that as the maximum speed of the system. This should automatically deal with things like maximum force and torque. And all of the joints are working now in pybullet except for that one issue!!! Here's a document of it:
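For reference, the POSITION_CONTROL-with-maxVelocity setup described above looks roughly like this in pybullet (a sketch: the robot and joint handles are placeholders, and `p` is passed in rather than imported so the snippet stays self-contained):

```python
# Our actuator's full-speed output is 5 m/s, software-limited to 0.145x.
MAX_ACTUATOR_SPEED = 5.0       # m/s at full output
SPEED_LIMIT_FRACTION = 0.145   # our software output limit
MAX_JOINT_VEL = MAX_ACTUATOR_SPEED * SPEED_LIMIT_FRACTION  # = 0.725 m/s

def set_joint_targets(p, robot_id, joint_indices, targets):
    """Command each joint with POSITION_CONTROL and a velocity cap.
    `p` is the pybullet module (or anything with the same interface)."""
    for j, q in zip(joint_indices, targets):
        p.setJointMotorControl2(
            bodyUniqueId=robot_id,
            jointIndex=j,
            controlMode=p.POSITION_CONTROL,
            targetPosition=q,
            maxVelocity=MAX_JOINT_VEL,  # caps sim speed at our real limit
        )
```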
We're waiting for a response now because until we fix this we can't move on with the platform. But when we do here is the plan:
First, implement the simulation such that it's simulating the robot while the robot is running (at the same time in realtime).
Use the simulation for position estimation of the robot body at any given time.
Use the output from the simulation to control the robot with the joystick. Use the sticks to raise and lower the height of the platform, yaw, xy control, etc.
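The loop in that plan might look something like this (a sketch under assumed names: the command source, the robot link, and the handles are all placeholders):

```python
import time

def estimator_loop(p, robot_id, get_command, send_to_robot, dt=1.0 / 240.0):
    """Mirror each real command into the simulator, step it in realtime,
    and yield the simulated body pose as the position estimate."""
    while True:
        targets = get_command()        # e.g. joystick -> joint targets
        send_to_robot(targets)         # command the real platform
        p.stepSimulation()             # mirror the same step in the sim
        pos, orn = p.getBasePositionAndOrientation(robot_id)
        yield pos, orn                 # body-pose estimate for control
        time.sleep(dt)                 # crude realtime pacing
```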
So here's the thing! We also have an IMU, two of them actually: one from the camera and one from the ArduPilot. We're going to see if we can get "local" position data from the ArduPilot to substitute for the simulation while still staying accurate. (It still has a GPS connected too!)
But even though it does provide a local position, there still might be a small problem, and that's the latency of the position data coming from the ArduPilot itself. It's actually kinda slow... 10msg/3:14 seconds
So now we want to look up the RealSense package manager.
And it seems like using the T-265 for local visual odometry might not be such a bad idea. There's only one problem: since it's mounted at the front of the robot, "pitch" commands will throw off the "height" reading, so we may need to implement pitch compensation or something. But let's use the tracking camera for now because that seems fun.
Subscribe to: /camera/odom/sample --> for xyz local pos
It's also important to note the camera starts at a certain position off the ground so we'll need to account for that when dealing with the x-y-z and things.
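Both corrections, the camera's starting height and the pitch compensation from earlier, can be sketched like this (the mount height and lever arm are made-up numbers; measure the real ones, and verify the pitch sign convention on the robot):

```python
import math

CAMERA_HEIGHT = 0.10      # m above the ground at start-up (assumed value)
CAMERA_LEVER_ARM = 0.15   # m from body centre to camera mount (assumed value)

def camera_to_body_z(cam_z, pitch_rad=0.0,
                     height=CAMERA_HEIGHT, lever=CAMERA_LEVER_ARM):
    """Turn the z from /camera/odom/sample into a ground-referenced body
    height. Pitching by theta swings the front-mounted camera vertically
    by lever * sin(theta); the sign depends on the pitch convention, so
    check it on the robot before trusting it."""
    return cam_z + height + lever * math.sin(pitch_rad)
```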
So before we even get there, let's remove the complication and just figure out how to send XYZ commands. We'll do this with a global last_body_pos variable or something, so let's try that.
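The global-variable approach can be sketched like this (names and rates assumed): each joystick tick nudges a shared body-position target that the IK is then solved against.

```python
# Global body position the IK is solved against each tick.
body_pos = [0.0, 0.0, 0.5]   # x, y, z; 0.5 m is an assumed starting height

def on_joystick(dz, dt, z_rate=0.2):
    """Nudge body height by stick deflection dz in [-1, 1] over dt seconds.
    z_rate is an assumed max climb rate in m/s."""
    body_pos[2] += dz * z_rate * dt
    return tuple(body_pos)
```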
So we made a global "body-pos" variable and just set everything with respect to that each time. Now the joystick should make the platform go up and down. Here's a nice video:
Now that we've uploaded code for just vertical height, let's try the roll and the x direction as well!!
Above is the code for that, it doesn't take too much adding on. Now we want to test the system right here.
It was going so well until the system over-currented right at the end, but it seems like it'll somewhat work. Now the last thing to add is "y" coordinate control. The yaw and y axes will be "return-to-center"; the x and z axes will be "leave-as-is," i.e. delta control. We'll also have to check the x/y +/- directions: x was reversed on the joystick, so we need to check the yaw and y directions too. The directions of x and y seem good. Remember, we want the body moving in those directions, so the legs go the opposite way.
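The two stick behaviours described above could be mixed like this (a sketch with assumed names and rates): x/z accumulate the stick input as deltas, while y/yaw track the stick directly so they return to center when it's released.

```python
def mix_axes(state, stick, dt, rate=0.2, span=0.1):
    """state: dict with x, y, z, yaw body targets.
    stick: dict of deflections in [-1, 1] per axis.
    rate (m/s) and span (m or rad) are assumed tuning values."""
    state["x"] += stick["x"] * rate * dt   # leave-as-is / delta control
    state["z"] += stick["z"] * rate * dt
    state["y"] = stick["y"] * span         # return-to-center: tracks stick
    state["yaw"] = stick["yaw"] * span
    return state
```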
And then the last thing we probably want to add after this is to fix the "initial area": when the robot is low to the ground, have it automatically choose the x and y positions for its feet so that it doesn't do weird sideways things and such.
Now it's time to get to sit/stand. We want to finish this off today and then we'll move on because if we can finish up sit/stand then we can start moving onto other things like walking.
SO ACTUALLY IT TURNS OUT WE CAN DO THIS WITH WHAT WE'VE DONE SO FAR TAKE A LOOK!!!!!!! AND THIS MEANS WE CAN SIT UP ON THE HIND LEGS TOO BECAUSE WE WERE TRYNA DO THAT BEFORE!!!
This is actually working! It's so fun to do whole-body kinematics and control it straight from the controller. It makes me so happy!
We're going to leave it here for now and come back to more stuff later. But here's what we achieved:
Tested IK vs. real life: there are some discrepancies, but the system IS working for now, so we're going to leave it.
We got rid of the trajectory runner! Now it's just a simple state machine/state tracker. Later we can start using things like the IMU to get local z-positions, orientations, etc.
We still have to create the MOTIONS class, but for now that might not be the biggest deal considering we can control everything from the remote, including stand/sit!!!
That's all for now :)