scout updates: more potential changes to both the software and hardware architecture [updates]
So now let's take a look at how the last post changes our software architecture and hardware process handling. I'm going to present a few possible software architectures and we're going to evaluate them.
option one: Jetson overrides FRC Driver Station
Here is the first of the two possible software architectures. This depends on the last post, where we said we need to find a way to get the Jetson TX1 to enable the robot/RoboRIO without using the FRC Driver Station. Let's walk through how we would interact with the robot in this case, starting from power-on, before we get to the rest of the system on board the Jetson TX1: the ROS autonomy stack.
In this case, when we turn on the robot, the Jetson TX1 would automatically send the "enable" signal to the RoboRIO controller through some means. This is what we were asking about on the forum, but people there think this is a bad idea :(
So now we go to the next option... an alternative to the above architecture that allows us to still use and enable the RoboRIO but without using 6 different computers.
option two: onboard robot FRC QDriverStation for Linux
So after some quick googling, we found QDriverStation, which is a full-fledged FRC Driver Station for the Linux operating system. What if we install this on the Jetson TX1 and add a small touchscreen to the robot? Then on startup, QDriverStation automatically launches along with the roslaunch file for the entire robot. The operator can still fully use QGroundControl or a similar Ardupilot ground station to control the robot, BUT they must walk over to the robot and use the touchscreen to enable it in QDriverStation - which actually makes a lot of sense, because the robot can't move accidentally. This is probably the approach we will take.
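As a rough sketch of that startup step, here's what a boot script on the Jetson could look like. Everything here is an assumption, not the final implementation: the `qdriverstation` binary name, and the `scout`/`scout.launch` ROS package and launch-file names are placeholders.

```python
import subprocess


def startup_commands(ros_package="scout", launch_file="scout.launch"):
    """Build the two commands the Jetson would run on boot.

    The package and launch-file names are placeholders for whatever
    our real ROS bring-up ends up being called.
    """
    return [
        ["qdriverstation"],                       # driver station UI on the touchscreen
        ["roslaunch", ros_package, launch_file],  # the ROS autonomy stack
    ]


def launch_all():
    """Launch everything; the operator still has to press enable on-screen."""
    return [subprocess.Popen(cmd) for cmd in startup_commands()]
```

Something like this could be wired into a systemd unit or the desktop session's autostart so it runs as soon as the Jetson boots. The key point is that nothing actuates until someone physically touches the screen and enables the robot.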
now looking more at the software architecture:
Now let's take a look at the other pieces of code that will make up the SCOUT system. Refer to the diagram in Option Two - this is the most current and correct version of the SCOUT software architecture.
Essentially, the system will work as follows. First, the system turns on, and the robot must be enabled using the LCD screen on the robot or NO actuation will occur. The camera then scans the environment and determines whether it is outdoor or indoor. If outdoor, the robot forfeits command to Ardupilot GPS operation; if indoor, it starts using stereo-vision SLAM to build a map of the environment. It then takes commands from the ground station and also uses the camera for obstacle avoidance. Together these components plan a path and control the leg actuation. There are a lot more complications in the diagram above, but these are the basics; more details will be posted with the implementations. Communication between the RoboRIO and the motors will be done via CAN, and communication between the Jetson and the RoboRIO will be done using NetworkTables.
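To make that flow concrete, here's a small sketch of two of the decision points: which subsystem gets navigation authority after the indoor/outdoor check, and how a ground-station command could be packed into NetworkTables entries for the RoboRIO. All names here (the mode strings, table keys, and server address) are made up for illustration; the real layout is still TBD.

```python
def choose_navigation_mode(environment):
    """Pick which subsystem owns navigation based on the camera's environment check."""
    if environment == "outdoor":
        return "ardupilot_gps"   # forfeit command to Ardupilot GPS operation
    return "stereo_slam"         # indoor: build a map with stereo-vision SLAM


def pack_drive_command(vx, vtheta):
    """Clamp and pack a velocity command into NetworkTables key/value pairs.

    Key names are placeholders; the real table layout is not decided yet.
    """
    clamp = lambda v: max(-1.0, min(1.0, v))
    return {"cmd_vel_x": clamp(vx), "cmd_vel_theta": clamp(vtheta)}


# On the robot, the Jetson side would push these values over NetworkTables,
# roughly like this (pynetworktables, untested sketch):
#
#   from networktables import NetworkTables
#   NetworkTables.initialize(server="10.0.0.2")   # RoboRIO address, placeholder
#   table = NetworkTables.getTable("scout")
#   for key, value in pack_drive_command(0.5, 0.0).items():
#       table.putNumber(key, value)
```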
If we remove the LiDAR and the SLAM portion of the system - which we really should, to simplify the hardware and use a single forward-facing sensor - we can use the following Visual-SLAM instructions: https://ardupilot.org/dev/docs/ros-vio-tracking-camera.html
hardware to mount with this architecture change:
Here are the pieces of hardware we now need to mount (or find some way to mount) on the SCOUT robot.
(1) Xbox Kinect or some stereo-vision camera (e.g. ZED camera)
(2) Ardupilot System with GPS and Telemetry
(3) Nvidia Jetson TX1 Board
(4) Small USB Touchscreen
(5) USB Expansion Hub
We will not mount all this hardware right now - we're going to go step-by-step and only mount hardware when we need it.