Core files for running automated ARCC League
- This repo has been developed and tested on Ubuntu 18.04 with ROS Melodic
- Start by installing and setting up conda
- Create a conda env with `conda env create -f environment.yaml`. That should install all of the dependencies needed for running the league core.
- Activate the conda env with `conda activate league_env`
- Install ROS Melodic
- Init and update the git submodules (e.g. `git submodule update --init --recursive`)
- Install the remaining system dependencies and build the catkin workspace:

```bash
sudo apt install libunwind8-dev
sudo apt-get install python-requests python-requests-toolbelt
rosdep install --from-paths src --ignore-src -r -y
catkin_make
```
- You may need to download the SDK for the FLIR-style cameras manually; follow this
- Set up the FLIR camera udev rules by downloading the Spinnaker viewer/driver from the software install link in the "Setting up the vision system" section and running its setup, making sure to enter the username correctly when prompted. If your camera cannot be detected, the most likely cause is that the udev rules are not set up properly.
- In `/etc/default/grub`, add the line `GRUB_CMDLINE_LINUX_DEFAULT="quiet splash usbcore.usbfs_memory_mb=1000"`, then run `sudo update-grub` and reboot. This raises the USB-FS buffer size so the USB3 camera can stream full-resolution images; you can confirm the change took effect with `cat /sys/module/usbcore/parameters/usbfs_memory_mb`.
- Calibrate the camera with the tutorial here
Once you have followed the setup instructions, you can begin running the ARCC League software!
ARCC League uses a quality machine vision system to provide reliable DeepRacer (DR) tracking. The following links are the setup instructions for the FLIR BFS-U3-16S2C-CS camera.
With focus at infinity and zoom minimized, the camera specs are as follows:
- HFOV: 120
- VFOV: 90
- Frame size: 1440 (h) x 1080 (w)
- FPS: ~30
- Topic: `/camera_array/cam0/image_raw`
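As a quick sanity check that the camera is publishing as expected, a minimal subscriber along these lines can print the incoming frame dimensions (a sketch assuming a sourced ROS Melodic environment and the topic name above):

```python
#!/usr/bin/env python
# Sanity-check sketch: print the size and encoding of frames arriving on
# the camera topic listed above.
import rospy
from sensor_msgs.msg import Image

def on_frame(msg):
    rospy.loginfo("frame: %dx%d (%s)", msg.width, msg.height, msg.encoding)

rospy.init_node("camera_check")
rospy.Subscriber("/camera_array/cam0/image_raw", Image, on_frame)
rospy.spin()
```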
For camera calibration the following command can be used:

```bash
rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.038 image:=/camera_array/cam0/image_raw --no-service-check
```

A sample calibration result:

```
D = [-0.21338195826140682, 0.03258095763463949, -0.00018955724409316463, -0.00040928582228521367, 0.0]
K = [648.0113147742057,  0.0,                703.9625700394782,
     0.0,                648.1713061967582,  533.5561962879885,
     0.0,                0.0,                1.0]
R = [1.0, 0.0, 0.0,
     0.0, 1.0, 0.0,
     0.0, 0.0, 1.0]
P = [445.3047180175781,  0.0,                707.8064656754395,  0.0,
     0.0,                526.1232299804688,  529.175376518222,   0.0,
     0.0,                0.0,                1.0,                0.0]
```
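As an illustration (not part of this repo), the `K` and `D` values reported above could be used with OpenCV to undistort raw frames:

```python
# Illustrative only: undistort a saved frame using the intrinsics (K) and
# distortion coefficients (D) printed by cameracalibrator.py above.
import cv2
import numpy as np

D = np.array([-0.21338195826140682, 0.03258095763463949,
              -0.00018955724409316463, -0.00040928582228521367, 0.0])
K = np.array([[648.0113147742057, 0.0, 703.9625700394782],
              [0.0, 648.1713061967582, 533.5561962879885],
              [0.0, 0.0, 1.0]])

frame = cv2.imread("raw_frame.png")       # any frame saved from the camera
undistorted = cv2.undistort(frame, K, D)  # remove lens distortion
cv2.imwrite("undistorted_frame.png", undistorted)
```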
Nodes and functions can be launched individually or together. The launch files package contains the large single launch files that will start every component of the system. Parameters to the launch files include which tracking system to use and the config files to be loaded for each process. The rqt_graph for the single-car system is shown below.
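For reference, a launch file can also be started programmatically through the standard roslaunch Python API; the sketch below is illustrative only, and the launch file path is a hypothetical placeholder:

```python
# Illustrative use of the roslaunch Python API to start a system launch
# file programmatically. The path below is a hypothetical placeholder.
import roslaunch

uuid = roslaunch.rlutil.get_or_generate_uuid(None, False)
roslaunch.configure_logging(uuid)
launch = roslaunch.parent.ROSLaunchParent(
    uuid, ["/path/to/league_core.launch"])  # placeholder path
launch.start()
try:
    launch.spin()  # keep the launched nodes running until Ctrl-C
finally:
    launch.shutdown()
```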
Check that all configuration files are correct:
- `dr_tracker_config.yaml`
- `track_util_config.yaml`
- `ekf_config.yaml`
- `base_local_planner_params.yaml`
- `costmap_common_params.yaml`
Make sure you have the following:
- Calibrated wide FOV machine vision camera
- Map corresponding to camera placement and track layout
- Updated parameters for the EKF based on camera, lighting, and map
Two types of tracking systems exist. One is a traditional motion-tracking system that uses computer vision techniques to detect and track a triangle. The other uses object-detection neural networks and computer vision to detect, orient, and track the object.
Setting up triangle tracking
- Design an isosceles triangle with a 9/4 height-to-base ratio. The color should be one that is not encountered in large amounts on the track, such as red or blue. The mask can then be tuned in `dr_tracker/config/dr_tracker_config.yaml`, along with various other data points needed for accurate calibration of the tracking system (see the sketch after this list).
- Existing triangle files can be found here
- Best performance has been found with 9/4 ratio dark red/blue triangles printed on thick matte paper.
- Example dimensions for a triangle that fits the DeepRacer frame would be h: 275 mm and b: 122 mm, with RGB values (158, 11, 15)
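To illustrate the idea, a simplified mask-and-contour pipeline like the sketch below can locate the triangle and infer heading from its apex. This is not the actual dr_tracker implementation, and the HSV bounds are placeholders for the values that would be tuned in `dr_tracker_config.yaml`:

```python
# Simplified sketch of triangle tracking via color masking and contours.
# Not the actual dr_tracker code; the HSV bounds are placeholders for
# values normally tuned in dr_tracker_config.yaml.
import cv2
import numpy as np

def track_triangle(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Placeholder mask for a dark red marker; tune for lighting and track.
    mask = cv2.inRange(hsv, np.array([0, 120, 50]), np.array([10, 255, 200]))
    # [-2] keeps this compatible with both OpenCV 3 and 4 return orders.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    tri = max(contours, key=cv2.contourArea)  # largest blob = triangle
    m = cv2.moments(tri)
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # centroid = position
    # For a tall (9/4) isosceles triangle, the apex is the contour point
    # farthest from the centroid, so it indicates the heading direction.
    pts = tri.reshape(-1, 2)
    apex = max(pts, key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
    heading = np.arctan2(apex[1] - cy, apex[0] - cx)
    return (cx, cy), heading
```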
Setting up Deep Neural tracking
- Print out, on thick matte paper, colored buttons (circles) with clear delineations between them, the rest of the car's shell, and the other colored dots. For example, the colors could be purple, blue, green, red, and yellow. For the AWS DeepRacer the dots should have a diameter of 50 mm.
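As a rough illustration of the classical-CV half of this pipeline (not the repo's neural detector; the HSV ranges are placeholders), each dot color can be segmented and its centroid used to localize and orient the car:

```python
# Illustrative sketch: segment each colored dot by HSV range and return
# the per-color centroids. Not the repo's neural-network detector; the
# ranges below are placeholders to be tuned for the printed colors.
import cv2
import numpy as np

DOT_COLORS = {  # placeholder HSV ranges for three of the example colors
    "red":   ((0, 120, 80), (10, 255, 255)),
    "green": ((45, 80, 80), (75, 255, 255)),
    "blue":  ((100, 120, 80), (130, 255, 255)),
}

def find_dots(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    centers = {}
    for name, (lo, hi) in DOT_COLORS.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        cnts = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
        if not cnts:
            continue
        c = max(cnts, key=cv2.contourArea)  # largest blob of this color
        m = cv2.moments(c)
        if m["m00"] > 0:
            centers[name] = (m["m10"] / m["m00"], m["m01"] / m["m00"])
    # The vector between two known dot positions gives the car's heading.
    return centers
```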