# senior-design-cv

Computer vision project for the MSOE Senior Design Team

## Localization Pipeline

1. Set up a stream to simultaneously take in RGB images and depth maps from an Intel RealSense D435 depth camera (see the intake sketch after this list)
2. Identify objects in the RGB image using a custom-trained YOLOv5 network with Batch Mode Cluster-NMS, then use OpenCV to determine the state of all platform and goal objects (detection sketch below)
3. Obtain the distance from the camera to each identified object using OpenCV, trigonometry, and the D435's depth map
4. Use the D435's intrinsic parameters and trigonometry to determine the horizontal and vertical angles of each identified object relative to the camera (steps 3 and 4 are sketched together below)
5. Use the tf2_ros library to create a static broadcaster ROS node that transforms the object vectors (built from the data in steps 3 and 4) from the camera's coordinate frame to the coordinate frame of the robot's center at ground level (broadcaster sketch below)
6. Create a second ROS node that listens to that static transform, uses the robot's pose (x, y, θ) together with the object vectors, now in the robot's coordinate frame, to determine the coordinates of each object, and then publishes them along with the state of the platforms and goals (localizer sketch below)
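
A minimal sketch of step 1, assuming the `pyrealsense2` package; resolutions, frame rates, and the processing hook are placeholders, not the team's exact configuration:

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

# Align depth frames to the color frames so a pixel (u, v) indexes both images.
align = rs.align(rs.stream.color)

try:
    while True:
        frames = align.process(pipeline.wait_for_frames())
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()
        if not depth_frame or not color_frame:
            continue
        color_image = np.asanyarray(color_frame.get_data())  # HxWx3 BGR for the detector
        # depth_frame is kept as-is so distances can be queried per pixel later
finally:
    pipeline.stop()
```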
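
A sketch of the detection call in step 2, assuming the model is loaded through `torch.hub`; the weights path `best.pt` and the confidence threshold are placeholders, and the Batch Mode Cluster-NMS post-processing is assumed to be wired into the model's NMS step rather than shown here:

```python
import torch

# Load the custom-trained YOLOv5 weights (path is a placeholder).
model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')
model.conf = 0.5  # confidence threshold

def detect_objects(color_image):
    """Return a list of (class_name, confidence, (x1, y1, x2, y2)) detections."""
    results = model(color_image)
    detections = []
    for *xyxy, conf, cls in results.xyxy[0].tolist():
        detections.append((model.names[int(cls)], conf, tuple(xyxy)))
    return detections
```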
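
Steps 3 and 4 sketched together: range from the aligned depth frame and horizontal/vertical angles from the pinhole model (focal lengths and principal point in the stream intrinsics). Reading the range at the bounding-box center is an illustrative simplification:

```python
import math

def object_range_and_angles(depth_frame, bbox):
    """Distance (m) and horizontal/vertical angles (rad) to a bounding-box center."""
    x1, y1, x2, y2 = bbox
    u = int((x1 + x2) / 2)
    v = int((y1 + y2) / 2)

    distance = depth_frame.get_distance(u, v)  # meters at the box center

    intr = depth_frame.profile.as_video_stream_profile().get_intrinsics()
    # Pixel offset from the principal point, converted to an angle with arctan.
    h_angle = math.atan2(u - intr.ppx, intr.fx)
    v_angle = math.atan2(v - intr.ppy, intr.fy)
    return distance, h_angle, v_angle
```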
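
A sketch of the step 5 static broadcaster, assuming ROS 1 with `rospy` and `tf2_ros`; the frame names `base_link`/`camera_link` and the camera mounting offsets are placeholders:

```python
import rospy
import tf2_ros
import tf_conversions
from geometry_msgs.msg import TransformStamped

rospy.init_node('camera_to_base_static_broadcaster')
broadcaster = tf2_ros.StaticTransformBroadcaster()

t = TransformStamped()
t.header.stamp = rospy.Time.now()
t.header.frame_id = 'base_link'    # robot center at ground level
t.child_frame_id = 'camera_link'   # D435

# Example mounting: camera 0.20 m forward and 0.30 m above the robot center (placeholder values).
t.transform.translation.x = 0.20
t.transform.translation.y = 0.0
t.transform.translation.z = 0.30
q = tf_conversions.transformations.quaternion_from_euler(0.0, 0.0, 0.0)
t.transform.rotation.x, t.transform.rotation.y, t.transform.rotation.z, t.transform.rotation.w = q

broadcaster.sendTransform(t)
rospy.spin()
```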
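
A sketch of the step 6 localizer node, assuming ROS 1: a tf2 listener pulls in the static camera-to-robot transform, and the robot's pose (x, y, θ) is applied to get world coordinates. The camera-frame axis convention (x forward, y left, z up), the frame names, and the pose source are assumptions, and publishing of the platform/goal states is omitted:

```python
import math
import rospy
import tf2_ros
import tf2_geometry_msgs  # registers PointStamped support for Buffer.transform
from geometry_msgs.msg import PointStamped

rospy.init_node('object_localizer')
tf_buffer = tf2_ros.Buffer()
tf2_ros.TransformListener(tf_buffer)

def object_world_coords(distance, h_angle, v_angle, robot_x, robot_y, robot_theta):
    """World (x, y) of one detection from its range/bearing and the robot pose."""
    # Object vector in the camera frame (axis convention assumed, see above).
    p = PointStamped()
    p.header.frame_id = 'camera_link'
    p.header.stamp = rospy.Time(0)
    p.point.x = distance * math.cos(v_angle) * math.cos(h_angle)
    p.point.y = -distance * math.cos(v_angle) * math.sin(h_angle)
    p.point.z = -distance * math.sin(v_angle)

    # Camera frame -> robot frame, via the static transform from step 5.
    p_base = tf_buffer.transform(p, 'base_link', rospy.Duration(1.0))

    # Robot frame -> world frame via the robot's (x, y, theta) pose.
    wx = robot_x + p_base.point.x * math.cos(robot_theta) - p_base.point.y * math.sin(robot_theta)
    wy = robot_y + p_base.point.x * math.sin(robot_theta) + p_base.point.y * math.cos(robot_theta)
    return wx, wy
```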