
chip updates: the day plan, IMUs, stably standing and sitting [updates]

So today I have a few things to do: first I need to go to three classes, and then I think I have a lab for 2.12? But other than that, I think I'm free in the evening for the most part. What that means is I want to get the IMU and sitting/standing working today, because I have the time to. In the meantime, my brother will be working on the weird enabling/disabling implementation; the C/C++ LibDS version did not work yesterday, and I don't know enough C/C++ to tell you why. But we're working on it. Here's the ROS network so far.



As far as manual controls go, the "A" button makes the robot stand, the "B" button makes it sit, the "LT" is supposed to send an ENABLE signal, and the "RT" is supposed to disable the robot. We still haven't figured out how to write the "ENABLER NODE." This node is needed only because we're using a RoboRIO with the FIRST Robotics Competition software stack, which requires (for safety and competition reasons) that robots be connected to a "Driver Station" at all times and that the DS send an enable signal. That's what we're going to need to try to emulate.
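Just to sketch what that button mapping could look like as a little rospy node (the topic names /chip/body_command and /chip/enable_request and the Xbox-style button/axis indices are guesses here, not what's actually running on chip):

#!/usr/bin/env python
# Rough sketch of the manual-control mapping, not the real chip code.
# Assumes an Xbox-style pad via the ROS joy node: A=buttons[0], B=buttons[1],
# LT=axes[2], RT=axes[5] (triggers rest at +1.0 and go to -1.0 when pulled).
import rospy
from sensor_msgs.msg import Joy
from std_msgs.msg import Bool, String

def joy_cb(msg):
    if msg.buttons[0]:                    # "A" -> stand
        cmd_pub.publish(String("stand"))
    elif msg.buttons[1]:                  # "B" -> sit
        cmd_pub.publish(String("sit"))
    if msg.axes[2] < -0.5:                # "LT" pulled -> request ENABLE
        enable_pub.publish(Bool(True))
    if msg.axes[5] < -0.5:                # "RT" pulled -> request DISABLE
        enable_pub.publish(Bool(False))

rospy.init_node("manual_control_mapper")
cmd_pub = rospy.Publisher("/chip/body_command", String, queue_size=1)
enable_pub = rospy.Publisher("/chip/enable_request", Bool, queue_size=1)
rospy.Subscriber("/joy", Joy, joy_cb)
rospy.spin()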


The other things we want this robot to do are stream live video to the ground station (that's maybe also handled by ArduPilot) and get and process data from the Intel RealSense camera (T265). We found https://github.com/IntelRealSense/realsense-ros, the ROS package for the T265; we'll hopefully try it out later today after mounting the camera. We also need to look into https://dev.px4.io/v1.9.0/en/qgc/video_streaming_wifi_broadcast.html, which covers how to stream video to the ground station via WiFi telemetry or something. Some more stuff about CAMERA in MAVLINK: https://mavlink.io/en/services/camera.html and https://camera-manager.dronecode.org/en/!
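Once realsense-ros is up, pulling data off the T265 should just be a normal ROS subscription. A minimal sketch, assuming the package's default /camera/odom/sample odometry topic (it's configurable in the launch files, so we'll double-check):

#!/usr/bin/env python
# Minimal listener for T265 odometry from the realsense-ros package.
import rospy
from nav_msgs.msg import Odometry

def odom_cb(msg):
    p = msg.pose.pose.position
    rospy.loginfo("T265 pose: x=%.2f y=%.2f z=%.2f", p.x, p.y, p.z)

rospy.init_node("t265_listener")
rospy.Subscriber("/camera/odom/sample", Odometry, odom_cb)
rospy.spin()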


So here's a good link! This is literally what we want: https://forums.intel.com/s/question/0D50P0000490S7QSAU/getting-the-realsense-camera-data-over-mavros?language=en_US. Later today we'll go through it and see if anything comes of it! Or we can do something using ROS only: http://wiki.ros.org/web_video_server. We can create a WiFi access point from the Jetson TX1:



So here's the plan at the moment. The Jetson TX1 will create a WiFi hotspot (when we get there) and we will stream data over a web_video_server that a "Ground Station Computer" can connect to. At the same time, we will have a MAVLINK QGCS running that connects to the ArduPilot for mission planning and actuation commands. We'll figure out later whether we still want to Bluetooth the joystick or use a joystick connected to the ground station; for now, this is fine.
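For the web_video_server route, the idea would be roughly this: one node publishes camera frames as a sensor_msgs/Image topic, and web_video_server serves any image topic over HTTP. A sketch, with the camera device index and topic name as placeholders:

#!/usr/bin/env python
# Publish frames from whatever camera we point forward as a ROS image
# topic so web_video_server can serve them over HTTP.
import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

rospy.init_node("camera_publisher")
pub = rospy.Publisher("/chip/camera/image_raw", Image, queue_size=1)
bridge = CvBridge()
cap = cv2.VideoCapture(0)     # placeholder camera device

rate = rospy.Rate(15)         # ~15 fps should be plenty over WiFi
while not rospy.is_shutdown():
    ok, frame = cap.read()
    if ok:
        pub.publish(bridge.cv2_to_imgmsg(frame, encoding="bgr8"))
    rate.sleep()

Then with web_video_server running on the Jetson, the ground station should be able to open http://<jetson-ip>:8080/stream?topic=/chip/camera/image_raw in a browser.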


Also we really need to clean those copper stains! Here's the plan for the day:

(1) Clean the copper stains/figure out a way to keep the copper from being stained further.

(2) Mount the RealSense and the Telemetry.

(3) Work on Standup/Sitdown/IMU code.


So we also bought some copper cleaner: https://www.amazon.com/gp/product/B002V4VLOC/ref=ppx_yo_dt_b_asin_title_o00_s00?ie=UTF8&psc=1. We'll use it and then find a way to protect the copper. In the meantime, my diagnosis is: don't TOUCH the copper, because that looks like what caused the stains! We'll wear gloves when handling the copper next time!


The last thing I want to bring up before we go on to other things is how this control flow will ideally work. A user will input waypoints from QGroundControl and then select "write" to send them to ArduPilot. Then there will be a button on the controller that starts the mission. When we start the mission, we'll get the current position and heading from the GPS/ArduPilot system, find the next waypoint, and compute what the velocity and heading commands should be; then we'll basically do some feedback PID control that turns the robot to the right heading and makes it walk. Along the way, obstacle avoidance will take over and allow the platform to do things like walk around rocks and such.
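A rough sketch of that loop (using simple-pid, more on that below). The compass-style heading convention (radians, clockwise-positive), the gains, and the gait interface are all assumptions for illustration, not the real tuned setup:

import math
from simple_pid import PID

heading_pid = PID(1.0, 0.0, 0.1, setpoint=0.0)  # drive heading error to zero
heading_pid.output_limits = (-0.5, 0.5)         # bound the turn rate (rad/s)

def bearing_to(lat, lon, wp_lat, wp_lon):
    # Initial great-circle bearing from our position to the waypoint.
    dlon = math.radians(wp_lon - lon)
    lat1, lat2 = math.radians(lat), math.radians(wp_lat)
    y = math.sin(dlon) * math.cos(lat2)
    x = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
    return math.atan2(y, x)

def control_step(lat, lon, heading, wp_lat, wp_lon):
    # error > 0 means we're pointed clockwise of the waypoint bearing.
    error = heading - bearing_to(lat, lon, wp_lat, wp_lon)
    error = math.atan2(math.sin(error), math.cos(error))  # wrap to [-pi, pi]
    turn_rate = heading_pid(error)           # PID pulls the error back to 0
    forward = 0.2 if abs(error) < math.radians(20) else 0.0  # walk once aimed
    return forward, turn_rate                # hand these to the gait code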


SO HERE'S A LIST OF THE CONTROLLERS ON CHIP:

(1) When we send an XYZ position to a leg, we rely on a PID controller for each joint that's built into the motor controllers.

(2) There's a PID controller that works with the IMU to keep the platform stable. We read the desired platform angular acceleration and roll/pitch, and the PID controller sets the control signals to the legs to change their positions and keep the platform stable.

(3) We'll use PID control for the robot's position and heading as well!


ALL PID CONTROLLERS WE WRITE WILL USE https://pypi.org/project/simple-pid/. The nice part is that we can bound the outputs. The stabilizer node is all written; we just need to test it and tune some gains. You knowww.... it's probably a better idea to bound the actual command to the leg instead, so we'll do that and not limit the stability we can get.
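For instance, a minimal sketch of how the stabilizer setup could look with simple-pid, bounding the final leg command instead of the PID output (the gains, leg names, and Z limits are made-up placeholders, not the tuned values):

from simple_pid import PID

roll_pid = PID(0.02, 0.0, 0.005, setpoint=0.0)   # hold platform roll at 0 rad
pitch_pid = PID(0.02, 0.0, 0.005, setpoint=0.0)  # hold platform pitch at 0 rad

def stabilize(roll, pitch, base_leg_z):
    # PID outputs become per-leg height corrections (m); signs depend on
    # which way each leg has to move to level the body.
    droll, dpitch = roll_pid(roll), pitch_pid(pitch)
    legs = {
        "front_left":  base_leg_z + droll + dpitch,
        "front_right": base_leg_z - droll + dpitch,
        "back_left":   base_leg_z + droll - dpitch,
        "back_right":  base_leg_z - droll - dpitch,
    }
    # Bound the actual command to a mechanically safe Z range
    # (placeholder numbers), rather than clamping the PID output.
    return {leg: max(0.15, min(0.35, z)) for leg, z in legs.items()}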


#updates #chip #omgrobots #wireless_control
