
chip updates: platform-level state-estimation [updates]

So all of these kinematic experiments have helped us notice some things we hadn't noticed before. And here's an interesting one:



When you shift the weight a little onto the back legs and then "yaw" the platform all the way to one side, the weight is completely taken off the front leg opposite the yawing direction, and the platform stays very stable. So instead of using the y-directional movement to get the platform to balance on three legs, we can use the yaw movement like this:

The hardest part will be transferring the weight to the back legs, and then stepping. But if we follow a stepping pattern that steps the front legs before the back legs, we might be able to make this work...
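Roughly, something like the ordering below. The leg names and the order itself are just an assumption for illustration at this point, not a final gait design.

```python
# Hypothetical stepping order: front legs step before the back legs.
# Leg names and their order are assumptions for illustration only.
STEP_ORDER = ["front_left", "front_right", "back_left", "back_right"]

def next_leg_to_step(already_stepped):
    """Return the next leg in the cycle that hasn't stepped yet, or None."""
    for leg in STEP_ORDER:
        if leg not in already_stepped:
            return leg
    return None
```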

The thing is, we need to figure out how to program the movement of a single leg while BodyIK keeps maintaining the rest of the platform's motion. We're going to use a block diagram to figure out what's currently happening, and then use that to propose an alternate solution.

So to allow the platform to move a single leg we'll likely need to modify LibIK to separately perform the IK of one leg as opposed to the entire platform. Here's what that system might look like...

This is starting to look a little complicated: we need to treat the robot differently whenever a leg is off the ground and stepping. Because we want to keep the software modular, we don't want to go in and butcher LibIK, so we're probably going to create a "LibStep" or some other library to see if we can get the system to do what we want it to do.

So essentially what we're doing is...

  • Adding another state machine that keeps track of what leg is currently stepping as well as what legs have already stepped and by what gait-size.

  • And a library that takes that information plus the raw output of the desired kinematics and adjusts it for the stepped legs (see the sketch after this list).
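Here's a minimal sketch of what that might look like, assuming LibIK already hands back a per-leg set of foot targets. "LibStep" is just the proposed name from above; every class, method, and leg name here is a placeholder, not a final interface.

```python
# A minimal sketch of the proposed "LibStep" layer. All names and the gait
# parameters here are placeholders, not the final design.

class StepStateMachine:
    LEG_ORDER = ["front_left", "front_right", "back_left", "back_right"]

    def __init__(self, gait_size):
        self.gait_size = gait_size   # forward step length per leg (m)
        self.stepped = {}            # leg name -> how far it has stepped
        self.current_leg = None      # leg currently in the air, if any

    def begin_step(self, leg):
        self.current_leg = leg

    def finish_step(self, leg):
        self.stepped[leg] = self.gait_size
        self.current_leg = None


def adjust_targets(body_ik_targets, state):
    """Take the raw LibIK foot targets and shift each already-stepped leg
    forward by its gait size; the stepping leg is handled separately."""
    adjusted = dict(body_ik_targets)
    for leg, dist in state.stepped.items():
        x, y, z = adjusted[leg]
        adjusted[leg] = (x + dist, y, z)
    return adjusted
```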

It is entirely possible we will need to bring back "trajectoryRunner" for this, because certain components require us to "step through" some motions. But we'd really rather not, because then we wouldn't be able to "end" the trajectory or keep track of the state of the robot at any given time, which means the first option of just modifying LibIK seems like it might be the way to go! Maybe we add a separate method that doesn't touch BodyIK and just accounts for the current position of the platform at any given time. So now we get onto dynamics and state-estimation!

Now we need to match the simulator to the real world because otherwise we won't be able to perform accurate state estimation.



Here's a definitely interesting behavior that we see both in the simulation and on the platform. When we ask the actuators to take the platform to a height of 0.8m, it only rises to around 0.5m. We don't know why that is, but it does tell us that the simulation's state estimation is actually very, very good, which means we will most likely be able to use its forward kinematics for state estimation. Here's a comparison on the actual platform.



Now there's only one problem with the current simulation as compared to the actual platform: currently the simulation does not work unless we turn the torque all the way up to max, and if we do that the system will likely fail because the robot will flip itself over or do something equally unrealistic. We attempted to limit the torque on the platform, but the simulation started breaking for unknown reasons, see: https://github.com/bulletphysics/bullet3/issues/3217
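For reference, capping joint torque in PyBullet is normally done through the force argument of setJointMotorControl2; the sketch below shows that kind of limit. The URDF path, target position, and torque value are placeholders, and this is not the exact code that triggered the linked issue.

```python
import pybullet as p

# Sketch of limiting per-joint torque in PyBullet via the motor's `force`
# argument. "chip.urdf" and MAX_TORQUE are placeholders for illustration.
p.connect(p.DIRECT)
robot = p.loadURDF("chip.urdf")

MAX_TORQUE = 40.0  # N*m, placeholder value
for joint in range(p.getNumJoints(robot)):
    p.setJointMotorControl2(robot, joint,
                            controlMode=p.POSITION_CONTROL,
                            targetPosition=0.0,
                            force=MAX_TORQUE)
```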


So now here's the new plan for state estimation...

  • At each time-step, we will use the encoders and the IMU to read the position of the actuators and the orientation of the body.

  • We will use LibIK as well as the simulator to perform forward kinematics on all of the legs to keep track of the positions in their LOCAL coordinate frames.

  • If we can fix the simulator, we can compare all of this against the simulator's output (a rough sketch of the per-timestep loop follows this list).
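A minimal sketch of that per-timestep loop, with the encoder readings, IMU orientation, and LibIK forward-kinematics call passed in as stand-ins for the real platform interfaces (which aren't shown here):

```python
# Sketch of one state-estimation step. The arguments stand in for the real
# encoder/IMU drivers and LibIK's per-leg FK; none of these names are final.

def estimate_state(joint_angles, body_quat, leg_fk):
    """joint_angles: dict leg -> tuple of joint angles (from the encoders)
       body_quat:    (x, y, z, w) body orientation (from the IMU)
       leg_fk:       callable doing per-leg forward kinematics (LibIK or sim)"""
    # FK per leg, expressed in each leg's LOCAL coordinate frame.
    foot_positions_local = {leg: leg_fk(angles)
                            for leg, angles in joint_angles.items()}
    return {"orientation": body_quat,
            "joint_angles": joint_angles,
            "foot_positions_local": foot_positions_local}
```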

AN IMPORTANT NOTE: SLOP IN THE SYSTEM MAY REDUCE THE ABILITIES OF ANY FK ALGORITHM. WE MIGHT NEED TO JUST STICK WITH JOINT POSITIONS FOR STATE ESTIMATION.



Here are all the actuator positions coming out of the platform. They're reported as rotations but can be converted to angles and used for position estimation of the leg in the leg coordinate frame.
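As a rough illustration of that conversion, assuming a simple rotary reduction (the real mapping depends on the actuator and linkage geometry, and the gear ratio below is made up):

```python
import math

# Converting raw actuator rotations to a joint angle in radians, assuming a
# plain rotary reduction. GEAR_RATIO is a placeholder, not the real value.
GEAR_RATIO = 9.0

def rotations_to_joint_angle(motor_rotations):
    return (motor_rotations / GEAR_RATIO) * 2.0 * math.pi
```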



But if we create a fixed base and use "resetJointState" instead of any motion, we can do state estimation for each of the legs without having to do large amounts of FK by hand! And to get the end-effector coordinates in the local coordinate frame, all we'd have to do is subtract their x-y location on the platform. The only thing we haven't set is the body orientation, which we will gather from the IMU. For testing, we will test with the sliders.
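A minimal PyBullet sketch of that idea, with a made-up URDF path, foot link indices, and shoulder offsets standing in for the real model:

```python
import pybullet as p

# Use a fixed-base copy of the robot purely for FK: push the measured joint
# angles in with resetJointState and read the foot link poses back out.
# No dynamics stepping needed. All indices/offsets below are placeholders.
p.connect(p.DIRECT)
robot = p.loadURDF("chip.urdf", useFixedBase=True)

FOOT_LINKS = {"front_left": 3, "front_right": 7,
              "back_left": 11, "back_right": 15}        # placeholder indices
SHOULDER_XY = {"front_left": (0.25, 0.15), "front_right": (0.25, -0.15),
               "back_left": (-0.25, 0.15), "back_right": (-0.25, -0.15)}

def foot_positions_local(measured_joint_angles):
    # measured_joint_angles: dict joint index -> angle from the encoders
    for joint, angle in measured_joint_angles.items():
        p.resetJointState(robot, joint, angle)

    positions = {}
    for leg, link in FOOT_LINKS.items():
        world = p.getLinkState(robot, link)[4]   # link frame, world coords
        sx, sy = SHOULDER_XY[leg]
        positions[leg] = (world[0] - sx, world[1] - sy, world[2])
    return positions
```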



Now this is a little strange of course, but we can use the IMU to get the actual body orientation quaternion. To get the actual foot positions, however, we'll likely need to get the position of the shoulder joint and subtract it from the foot position in the global frame. We want the position in the leg coordinate frame, of course.

  • Like we can use the body's forward kinematics, plus the simulation's absolute leg position, to determine the position of the foot in the local coordinate frame (see the sketch after this list).

  • Or we can simply not update the orientation and just use this and standard tracking of the foot positions in both the local and global frames.
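Here's a small sketch of the first option, using PyBullet's transform helpers to move a world-frame foot position into the shoulder/leg frame; the inputs are assumed to come from the simulator (positions) and the IMU (orientation).

```python
import pybullet as p

# Express a world-frame foot position in the leg's local frame, given the
# shoulder's world position and the body orientation quaternion from the IMU.
# Inputs are placeholders; pybullet's transform helpers do the quaternion math.

def foot_in_leg_frame(foot_world, shoulder_world, body_quat):
    # Invert the shoulder pose (position + IMU orientation), then apply it to
    # the foot's world position to land in the shoulder/leg frame.
    inv_pos, inv_quat = p.invertTransform(shoulder_world, body_quat)
    local, _ = p.multiplyTransforms(inv_pos, inv_quat,
                                    foot_world, [0, 0, 0, 1])
    return local
```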

Let's just do both foot and orientation tracking on the platform for right now, and then we'll figure out the maths later. Remember, this is ALL BEING UPDATED FROM FEEDBACK, not from the commands. It's not a state machine, it's an actual modeler!

So now we need to actually implement this... on the platform! And we shall do so.



Here are some important pieces of code that will allow us to update the simulation with the new state of the robot each time an orientation or position command comes in. And we actually don't even have to "step" the simulation, because we aren't really simulating anything, just state-estimating with the IMU and the encoders! This is also computationally very inexpensive. We're going to have to test this system out on the live platform next.
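The actual code isn't reproduced here, but the idea is roughly the sketch below: push the latest IMU orientation and encoder readings into the PyBullet instance as state, and never call stepSimulation(). Function and variable names are placeholders.

```python
import pybullet as p

# Sketch of updating the "simulation" with measured state. The sim is only
# being used as a state estimator / FK engine, so nothing is ever stepped.

def update_state_estimate(robot, body_quat, joint_angles, body_pos=(0, 0, 0)):
    # Body orientation straight from the IMU (position is not estimated here).
    p.resetBasePositionAndOrientation(robot, body_pos, body_quat)

    # Joint positions straight from the encoders.
    for joint, angle in joint_angles.items():
        p.resetJointState(robot, joint, angle)
    # Note: no p.stepSimulation() -- nothing is actually being simulated.
```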

Testing the state-estimation on the platform live.


It would be nice if we could visualize the data, but to do that we need to plug the Jetson TX1 into a monitor, so maybe we should do that and switch the simulator from headless to GUI mode for now. That's the easiest way to verify that everything is working, rather than poring through data. We'll have to get an HDMI cable in there somehow. Or find some remote desktop client (https://www.howtogeek.com/429190/how-to-set-up-remote-desktop-on-ubuntu/), but that seems annoyingly complicated, so we'll just plug in a cable... we should just leave an HDMI cable in there from now on, just in case. We'll use one of those Roku HDMI male-to-female adapters or something.



We added in an HDMI injector and a pretty cool little touchscreen too!



So now we're actually ready to go ahead and test the simulation, because all we need to do is run the GUI and it should pop up on the screen! We'll just need to connect a mouse/keyboard to the thing so we can interact with it.



And LOOK at how well that is working, that's really cool. The only thing to consider now is that the "yaw" coming out of the IMU we have seems quite poor, most likely because it's using a GPS/compass to figure out the heading. For yaw we might fuse in the estimate from the RealSense camera. Let's see if we can't get the yaw estimate to be more "local" than "global."


One other thing to note is that when we start trying to find foot positions we'll have to account for orientation anyway, so this may not matter?

FUSING w/ REALSENSE IMU:


Turns out the RealSense camera on-board also has an IMU that we can take advantage of for the "yaw" estimates! We can try using the topic /camera/odom/sample, which carries an orientation estimate we can convert to Euler angles.
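A minimal ROS sketch of pulling a yaw estimate off that topic (the node name and variable names are placeholders):

```python
import rospy
from nav_msgs.msg import Odometry
from tf.transformations import euler_from_quaternion

# Sketch: read the RealSense odometry topic and keep the latest yaw estimate.
latest_yaw = 0.0

def odom_callback(msg):
    global latest_yaw
    q = msg.pose.pose.orientation
    _, _, latest_yaw = euler_from_quaternion([q.x, q.y, q.z, q.w])

rospy.init_node("realsense_yaw_listener")
rospy.Subscriber("/camera/odom/sample", Odometry, odom_callback)
rospy.spin()
```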



And now it works BEAUTIFULLYYYYYYYY, I mean the orientation is perfect using the yaw from the camera and the roll and pitch from the ArduPilot! Here's a little block diagram of how this all works.
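In lieu of the diagram, here's a tiny sketch of the fusion step, assuming the two orientation quaternions have already been pulled off their respective topics (the ArduPilot side would come through something like MAVROS, which isn't shown here):

```python
from tf.transformations import euler_from_quaternion, quaternion_from_euler

# Fusion as described above: roll and pitch from the ArduPilot IMU,
# yaw from the RealSense odometry. Inputs are (x, y, z, w) tuples.

def fuse_orientation(ardupilot_quat, realsense_quat):
    roll, pitch, _ = euler_from_quaternion(ardupilot_quat)
    _, _, yaw = euler_from_quaternion(realsense_quat)
    return quaternion_from_euler(roll, pitch, yaw)   # fused (x, y, z, w)
```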

And now we've created our final state estimator. We still need to work on actually finding the foot positions in their local coordinate frames and so on, but we will get to that.


#updates #omgrobots #yeeettttt
