Computer vision for drone tins #1269
Comments
Update: Collected data (pictures of the tins) has been added to Label Studio, and I will continue to add more by the end of today.
Great job on uploading the data! It looks great! The only issue is that the frames might be a little too similar, which could make the model overfit to the specific environments your videos were shot in. Could you take the data that you have, keep every 7th or 8th image, and re-upload those images as a new project in Label Studio? Also, the dataset looks a bit unbalanced: there are quite a few red tin images, but not as many green or blue ones. You could consider getting more images of the green and blue tins. I would also try to get some outside videos with the sun shining onto the tins and some with shade covering the tins (since they will usually be outside).
Update: As suggested, I got more data with the drone tins outside in the sun. I took every 7th image to thin out the near-duplicate frames and the over-represented red tins, and re-uploaded them under RobotX 2024-New Tin Data for Drone in Label Studio.
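For reference, keeping every 7th frame can be done with a short script like the one below. This is a minimal sketch, not the exact script used here; the directory names and the sampling interval are assumptions.

```python
from pathlib import Path
import shutil

# Minimal sketch of the frame subsampling described above.
# Paths and the sampling interval are assumptions, not the exact script used.
SRC_DIR = Path("tin_frames")             # all extracted frames, e.g. frame_0000.jpg ...
DST_DIR = Path("tin_frames_subsampled")  # frames to re-upload to Label Studio
KEEP_EVERY = 7                           # keep every 7th frame to reduce near-duplicates

DST_DIR.mkdir(parents=True, exist_ok=True)

frames = sorted(SRC_DIR.glob("*.jpg"))
for i, frame in enumerate(frames):
    if i % KEEP_EVERY == 0:
        shutil.copy(frame, DST_DIR / frame.name)

print(f"Kept {len(frames[::KEEP_EVERY])} of {len(frames)} frames")
```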
Update: This week, I finished labelling all of the data in the dataset on Label Studio so it is ready for training next week.
Update: Fixed all of the annotated dataset to make the bounding boxes tighter. This week, I also started working on splitting the data into training, testing, and validation sets so that we can start training the model on the tins.
Update: Split the images and labels into train, test, and validation sets using the script. The images are now on workstation2.mil.ufl.edu (10.245.80.200) under RobotX_2024_tin_data.
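The split script itself is not shown in this thread; a sketch of a typical train/test/validation split for YOLO-format data, keeping each image paired with its label file, could look like the following. The paths and split ratios here are assumptions for illustration.

```python
import random
import shutil
from pathlib import Path

# Sketch of a train/test/validation split for YOLO-format data. The actual split
# script used above is not shown in this thread; paths and ratios are assumptions.
IMAGES = Path("RobotX_2024_tin_data/images")
LABELS = Path("RobotX_2024_tin_data/labels")   # one YOLO .txt label file per image
OUT = Path("RobotX_2024_tin_data/split")
SPLITS = [("train", 0.8), ("val", 0.1), ("test", 0.1)]

random.seed(42)
stems = sorted(p.stem for p in IMAGES.glob("*.jpg"))
random.shuffle(stems)

start = 0
for i, (name, frac) in enumerate(SPLITS):
    # Give the last split whatever remains so every image is used exactly once.
    count = len(stems) - start if i == len(SPLITS) - 1 else round(len(stems) * frac)
    for stem in stems[start:start + count]:
        for sub, src_dir, ext in (("images", IMAGES, ".jpg"), ("labels", LABELS, ".txt")):
            src = src_dir / f"{stem}{ext}"
            dst_dir = OUT / name / sub
            dst_dir.mkdir(parents=True, exist_ok=True)
            if src.exists():  # background images may have no label file
                shutil.copy(src, dst_dir / src.name)
    start += count
```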
Update: Finished training the model on the drone tins. Tested it on the test images, and it looks like it is detecting pretty well.
@mohana-pamidi Where is the model?
It should be under ~/catkin_ws/src/mil/mil_common/perception/vision_stack/src/vision_stack/ml/yolov7 on workstation2.
This week, I tested the model on the Raspberry Pi on the drone. However, the model was having difficulty detecting tins when the drone was closer to them and when there was glare on the tins. I took some more images and retrained the model. This new model still needs to be tested.
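As an aside, one way to complement the extra glare footage is to synthesize specular highlights during augmentation. Below is an illustrative sketch using OpenCV; it is not part of the retraining described above, and every parameter in it is an arbitrary assumption.

```python
import cv2
import numpy as np

def add_synthetic_glare(image: np.ndarray, center=None, radius=60, strength=0.8) -> np.ndarray:
    """Overlay a soft white highlight to mimic sun glare on a tin.

    Illustrative augmentation sketch only, not part of the actual retraining
    described above; the radius and strength values are arbitrary assumptions.
    """
    h, w = image.shape[:2]
    if center is None:
        center = (np.random.randint(0, w), np.random.randint(0, h))

    # Build a blurred circular mask and blend the image toward white inside it.
    mask = np.zeros((h, w), dtype=np.float32)
    cv2.circle(mask, center, radius, 1.0, thickness=-1)
    mask = cv2.GaussianBlur(mask, (0, 0), sigmaX=radius / 2)

    glare = np.full_like(image, 255)
    alpha = (strength * mask)[..., None]
    out = image.astype(np.float32) * (1 - alpha) + glare.astype(np.float32) * alpha
    return out.astype(np.uint8)


# Example: create a glare-augmented copy of one frame before adding it to training.
frame = cv2.imread("tin_frame.jpg")
if frame is not None:
    cv2.imwrite("tin_frame_glare.jpg", add_synthetic_glare(frame))
```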
Completely forgot to update last night; however, I retrained the model on images from the drone to see if it can pick up the tins better when flying.
What needs to change?
Similar to #1268, we will need to develop a computer vision model to detect the red, green, and blue tins that lie on the drone landing pads. To the down camera, these will look like red, green, and blue circles. We will start with the drone camera's built-in vision model, and if it does not perform well, we will develop our own in-house model.
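For reference, the simplest in-house fallback could be classical HSV thresholding plus circle fitting with OpenCV, sketched below. The colour ranges and size thresholds are rough assumptions, not tuned values from this project.

```python
import cv2
import numpy as np

# Rough sketch of a classical fallback for detecting coloured tins from the down
# camera: HSV thresholding followed by contour and circle fitting. The HSV ranges
# and minimum radius below are assumptions and would need tuning on real footage.
HSV_RANGES = {
    "red":   [((0, 100, 100), (10, 255, 255)), ((170, 100, 100), (180, 255, 255))],
    "green": [((40, 80, 80), (80, 255, 255))],
    "blue":  [((100, 80, 80), (130, 255, 255))],
}

def detect_tins(bgr: np.ndarray, min_radius: int = 10):
    """Return a list of (colour, (x, y), radius) detections in the frame."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    detections = []
    for colour, ranges in HSV_RANGES.items():
        mask = np.zeros(hsv.shape[:2], dtype=np.uint8)
        for lo, hi in ranges:
            mask |= cv2.inRange(hsv, np.array(lo), np.array(hi))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for contour in contours:
            (x, y), radius = cv2.minEnclosingCircle(contour)
            if radius >= min_radius:
                detections.append((colour, (int(x), int(y)), int(radius)))
    return detections
```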
How would this task be tested?