
Memory consumption #5

Closed
hanbangzou opened this issue May 13, 2024 · 6 comments
Labels: bug (Something isn't working), enhancement (New feature or request), fixed

Comments

@hanbangzou

Dear Author

Thank you for developing this package. I've successfully used it to process an image of size 2424x2424, but encountered a significant issue where all 32 GB of my system's memory was utilized, preventing me from processing additional images.

Could you please provide any recommendations on how to manage or optimize memory usage for batch processing? Any guidance or solutions would be greatly appreciated.

element_crops = MakeCropsDetectThem(
    image=image,
    model=model,
    segment=True,
    shape_x=640,
    shape_y=640,
    overlap_x=50,
    overlap_y=50,
    conf=0.5,
    iou=0.7,
    resize_initial_size=False,
)
result = CombineDetections(element_crops, nms_threshold=0.25, match_metric='IOS')

Final Results:

img = result.image
confidences = result.filtered_confidences
boxes = result.filtered_boxes
masks = result.filtered_masks
classes_ids = result.filtered_classes_id
classes_names = result.filtered_classes_names

visualize_results(
    img=result.image,
    confidences=result.filtered_confidences,
    boxes=result.filtered_boxes,
    masks=result.filtered_masks,
    show_class=False,
    fill_mask=True,
    segment=True,
    classes_ids=result.filtered_classes_id
)

[Screenshot: segmentation results]

@Koldim2001
Owner

Koldim2001 commented May 13, 2024

To reduce RAM usage, you can either decrease the number of patches at inference time (by increasing the stride or the crop size), or resize the photograph to a smaller size before running it through patch-based inference. The larger the original image, the heavier the binary masks processed under the hood, since those masks have the same dimensions as the input image. Try shrinking the image several times with cv2.resize and then running it through with smaller patches. This should reduce the load, and segmentation quality will not degrade significantly.

This memory issue is only observed during segmentation, because of the heavy binary masks involved under the hood. Detection uses much less memory.

I wish you luck in your research and work with the library!

🚀@hanbangzou IMPORTANT INFO for you: 🚀

Soon, a major update will be released that will address this issue. I am implementing an approach that involves storing and processing an array of polygon masks instead of the classic method of storing heavy binary masks. This should reduce memory consumption for instance segmentation by more than 1000 times. So, I will definitely let you know when I release the new version of the library! 🎉
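A back-of-envelope calculation shows why storing polygons instead of full-size binary masks saves so much memory. The vertex count below is an assumption for illustration, not a number from the library:

```python
# Rough per-object memory comparison on a 2424x2424 image.
# A binary mask stores one byte (uint8) per pixel of the full image;
# a polygon stores only its contour vertices as (x, y) pairs.

h, w = 2424, 2424
binary_mask_bytes = h * w * 1        # uint8 mask, same size as the image

n_vertices = 200                     # generous contour length (assumption)
polygon_bytes = n_vertices * 2 * 4   # (x, y) pairs as 4-byte floats

ratio = binary_mask_bytes / polygon_bytes
print(f"binary mask: {binary_mask_bytes} bytes")
print(f"polygon:     {polygon_bytes} bytes")
print(f"ratio:       ~{ratio:.0f}x")
```

Even with a generous 200-vertex contour, the polygon is thousands of times smaller than the full-resolution mask, which is consistent with the "more than 1000 times" reduction mentioned above.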

@Koldim2001
Owner

@hanbangzou

To update the library to the latest version, run the following command in your terminal:

pip install --upgrade patched_yolo_infer

After successfully updating the library, you can use it like this (see the README for more info):

element_crops = MakeCropsDetectThem(
    image=img,
    model_path="yolov8m-seg.pt",
    segment=True,
    show_crops=False,
    shape_x=600,
    shape_y=500,
    overlap_x=50,
    overlap_y=50,
    conf=0.5,
    iou=0.7,
    classes_list=[0, 1, 2, 3, 5, 7],
    resize_initial_size=True,
)
result = CombineDetections(element_crops, nms_threshold=0.5, match_metric='IOS')

The main issue was with large binary masks from each individual detection, so I optimized this by rewriting everything to work with polygons. Now, the code outputs a list of polygons for each detected mask. If you want to visualize the results, here's how you can do it:

visualize_results(
    img=result.image,
    confidences=result.filtered_confidences,
    boxes=result.filtered_boxes,
    polygons=result.filtered_polygons,
    classes_ids=result.filtered_classes_id,
    classes_names=result.filtered_classes_names,
    segment=True,
    thickness=8,
    font_scale=1.1,
    fill_mask=True,
    show_boxes=False,
    delta_colors=3,
    show_class=False,
    axis_off=False
)

Please let me know if this solved your problem. I would greatly appreciate it if you could give a star to the repository. I'm very happy to have helped you, good luck!

Koldim2001 added the bug and enhancement labels on May 23, 2024
@Koldim2001
Owner

@hanbangzou How are your results? Did the new repository update help you solve the memory problem?

PS: All usage examples are provided in Google Colab

@hanbangzou
Author

@Koldim2001

Thanks for your suggestion!!

I tried resizing all my images as a temporary solution, and it worked well.

I will soon test the new update and give you the feedback. Thanks for your help. :)

@hanbangzou
Author

@Koldim2001

I have tested the new update with the same 2424×2424 images and noticed a significant improvement. The jump in memory consumption is now barely noticeable.

Thank you for the hard work and effort put into this update.

@Koldim2001
Owner

@hanbangzou
I'm glad to hear that the update has resolved the issue! Good luck!

@Koldim2001 Koldim2001 pinned this issue Jun 17, 2024