Computer vision for dock bays #1261

Open
cbrxyz opened this issue Sep 2, 2024 · 13 comments

@cbrxyz
Member

cbrxyz commented Sep 2, 2024

What needs to change?

We will need to detect the color of each bay for the docking mission, to determine which one to dock in. This could be done with classical CV or ML, although ML might be easier to complete in the short-term.

How would this task be tested?

  1. Test the results of the detector in a variety of environmental conditions, and ensure that it is able to detect each of the red, green, and blue squares.
@Josht8601 Josht8601 self-assigned this Sep 2, 2024
@danushsingla danushsingla self-assigned this Sep 3, 2024
@AlexanderWangY AlexanderWangY self-assigned this Sep 4, 2024
@willzoo willzoo self-assigned this Sep 6, 2024
@Josht8601
Contributor

[Image: MilTaskFlowChart — flowchart of the proposed ML detection workflow]

The process for using machine learning to detect docking bay colors starts with preprocessing the images by resizing them to a consistent shape and normalizing the pixel values. After ensuring that there is enough labeled data, the data is split into training and testing sets; if more data is needed, more labeled images are collected. The next step is model selection, where a machine learning model such as a Random Forest or CNN is chosen based on the task requirements. The model is then trained on the training set and evaluated on the testing set to measure its accuracy. If the accuracy is not satisfactory, the model is adjusted and retrained until acceptable performance is achieved. Once trained, the model undergoes environmental testing to ensure it performs well under various lighting and environmental conditions. After passing this test, the model is used for real-time prediction, detecting the color of the docking bays during the mission, and is finally deployed into the docking system. Note: this approach will also be discussed with the other students assigned to this task to ensure everyone is aligned and collaborating.
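
A minimal sketch of what this pipeline could look like with OpenCV and scikit-learn; the directory layout, the 64x64 input size, and the Random Forest settings are illustrative assumptions rather than our actual code:

```python
import glob

import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

LABELS = ["red", "green", "blue"]  # bay colors to detect


def load_dataset(root="bay_images"):
    """Load labeled images from <root>/<label>/*.png (hypothetical layout)."""
    features, targets = [], []
    for idx, label in enumerate(LABELS):
        for path in glob.glob(f"{root}/{label}/*.png"):
            img = cv2.imread(path)
            img = cv2.resize(img, (64, 64))        # resize to a consistent shape
            img = img.astype(np.float32) / 255.0   # normalize pixel values
            features.append(img.flatten())
            targets.append(idx)
    return np.array(features), np.array(targets)


X, y = load_dataset()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```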

@cbrxyz
Member Author

cbrxyz commented Sep 8, 2024

Great job on the diagram, Josh! That's a great plan! The next task will definitely be collecting enough data to train a good model.

@danushsingla

[Image: workflow diagram for the OpenCV color detection approach]

The goal is to use OpenCV to manage color detection. First, get the point cluster from the LIDAR and the pixels from the camera. Divide the cluster into three groups, one per docking bay, so that each group corresponds to one bay color. Then map these points to camera pixels. Take the center pixel of each cluster and gather the nearby pixels, then compare the R, G, and B values for those pixels; whichever channel is largest is the color of that cluster. Finally, check which cluster has the color we want, mark that cluster, and return it.
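
A rough sketch of that flow in OpenCV/NumPy terms, assuming the cluster centers have already been projected into the camera frame; the function names, the averaging over a small window, and the 15-pixel window size are assumptions, not the real MIL interfaces:

```python
import numpy as np


def dominant_color(image_bgr, center_uv, window=15):
    """Average the pixels around a projected cluster center and return the
    channel (red, green, or blue) with the largest mean value."""
    u, v = center_uv
    patch = image_bgr[max(v - window, 0):v + window, max(u - window, 0):u + window]
    b, g, r = patch.reshape(-1, 3).mean(axis=0)
    return ["blue", "green", "red"][int(np.argmax([b, g, r]))]


def find_target_cluster(cluster_centers_uv, image_bgr, target_color):
    """cluster_centers_uv: one projected (u, v) center per docking bay."""
    for cluster_id, center in enumerate(cluster_centers_uv):
        if dominant_color(image_bgr, center) == target_color:
            return cluster_id  # mark and return the matching bay
    return None
```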

@DaniParr
Contributor

DaniParr commented Sep 9, 2024

@Josht8601 @danushsingla Great job on your workflow diagrams. I believe the more practical approach would be following Danush's workflow, since using the point cloud clusters for each dock and mapping them to the camera frame would provide the information needed to extract the color of each dock. This will allow us to label each point cloud cluster with its respective color and continue with the docking mission.

@AlexanderWangY @willzoo @Josht8601 please get in contact with Danush over discord to discuss his approach!

@AlexanderWangY

Status update:

From this week's meeting we have concluded that we will be finding the average RGB values of a center cluster from the LIDAR cutout of the boards. This should work sufficiently well as long as the LIDAR is accurate. We will then work on taking the max of the RGB channels and returning a string or enum of the detected color.
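
A small sketch of the string/enum return mentioned here; the Color enum name and values are assumptions about the eventual interface:

```python
from enum import Enum


class Color(Enum):
    RED = "red"
    GREEN = "green"
    BLUE = "blue"


def color_from_channel_means(mean_b: float, mean_g: float, mean_r: float) -> Color:
    """Pick whichever averaged channel is largest."""
    channels = [(mean_b, Color.BLUE), (mean_g, Color.GREEN), (mean_r, Color.RED)]
    return max(channels, key=lambda pair: pair[0])[1]
```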

@danushsingla

To add to Alex's update, find_color detects the color and then returns the cluster that has the color we are supposed to go to. This code is already implemented; what we need to do in the upcoming week is fix the image cropping. Currently, we crop the images based on the cluster values, but we realized that this is not always accurate and can lead to incorrect cropping. We will test a variety of methods to fix this and ensure we capture the image correctly.
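
For reference, a sketch of what cropping from cluster values could look like, assuming the cluster points have already been projected into image coordinates (the padding amount is arbitrary):

```python
import numpy as np


def crop_from_cluster(image_bgr, cluster_uv, pad=10):
    """cluster_uv: Nx2 array of a cluster's points projected into the image.
    Returns a padded bounding-box crop around those points."""
    u_min, v_min = cluster_uv.min(axis=0) - pad
    u_max, v_max = cluster_uv.max(axis=0) + pad
    height, width = image_bgr.shape[:2]
    u_min, v_min = max(int(u_min), 0), max(int(v_min), 0)
    u_max, v_max = min(int(u_max), width), min(int(v_max), height)
    return image_bgr[v_min:v_max, u_min:u_max]
```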

@Josht8601
Contributor

For this week, in addition to the image cropping work, we added comments to the modified find_color function explaining our current idea of using the dock center points to process the cropped images of the dock sections. These center points represent key areas of the dock that the robot will need to focus on.

@willzoo

willzoo commented Sep 27, 2024

Update: We have finally written a functioning algorithm to find the color of the dock and move to it. However, the algorithm takes some time (~30 seconds) to run because the crop_images function doesn't crop correctly most of the time, so we re-fetch the LiDAR data and run crop_images over and over until the images are correctly cropped to the docks. Currently, the sim boat is told to go to the red dock and correctly moves there.
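
Roughly, the retry loop described above could be structured like this; get_clusters, crop_images, and is_valid stand in for our existing routines and are passed as callables so the sketch stays self-contained:

```python
def crop_docks_with_retry(get_clusters, crop_images, is_valid, image, max_attempts=20):
    """Re-query the LiDAR and re-crop until every dock crop looks valid."""
    for _ in range(max_attempts):
        clusters = get_clusters()                 # re-get the LiDAR data
        crops = crop_images(image, clusters)      # attempt to crop each bay
        if crops and all(is_valid(crop) for crop in crops):
            return crops                          # cropping succeeded
    return None                                   # give up after max_attempts
```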

@Josht8601
Contributor

This week we met on Tuesday to debug and resolve some of the issues with the image cropping. However, due to the hurricane, we couldn't meet in person in the lab. Some of us were able to meet online on Thursday evening to continue fixing the image cropping code, as well as to improve its speed, since the code was running quite slowly before.

@AlexanderWangY

Due to the hurricane and other things, I was not able to get much done this week. As for progress, I'm still working on merging the color detection into a library/class model for modularity (and to decrease some tech debt for future use cases).

@willzoo

willzoo commented Oct 14, 2024

Update: I cherry-picked Max's commits from his task6-2024-testworld branch into docking_2024, so we can now access the world accurate to 2024 using the command roslaunch navigator_launch simulation.launch use_mil_world:=true world:=scan_dock_fling_2024/default --screen. I also tested the square detection algorithm that Danush and Alex made, and verified it was working in the sim after I fixed a few minor errors. However, it seems the boat is not capable of finding the dock in Max's world, probably because the light buoy is in the world for some reason and distracting it. The boat also has difficulty finding the dock anyway when it is moved from its original position.

@willzoo

willzoo commented Oct 20, 2024

Update: I came to testing this week on Wednesday and stayed the whole time, but I wasn't able to get much done on this issue in particular. The CV algorithm is working, but we need to make the boat actually reach the updated dock to test it. I examined the code a little this week and it seems the boat is hard-coded to go to a certain position which used to be where the dock was in the original testing world, but not where it is in Max's world. So next week, I should be able to find a way to make the boat go to the dock regardless of its position. Note that there is no need to make a PR for this issue, because it shares the docking_2024 branch with the Task 6: Dock and Deliver issue.

@willzoo

willzoo commented Oct 28, 2024

Update: I changed the position of the docking POI and added a line of code for the boat to turn to face the POI, so now the boat almost always makes it to the dock. Adding the rocking fix somehow made the LiDAR cropping issue worse. I attempted to fix it by removing points in the point cloud below the mean in the x direction, as well as in the z direction, but this only seemed to slightly improve the accuracy of cropping the dock images. Hopefully, I can fix the cropping issue this week. Also, as Danush mentioned, I helped him align the boat's position when docking.
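
A sketch of the point-cloud trim described here, assuming the cloud is an Nx3 NumPy array of (x, y, z) points:

```python
import numpy as np


def trim_below_mean(points):
    """Keep only points at or above the cloud's mean x and mean z values."""
    keep = (points[:, 0] >= points[:, 0].mean()) & (points[:, 2] >= points[:, 2].mean())
    return points[keep]
```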
