Hi,
Thank you for your nice package. I have a large training set (900 volumes) and I was trying to use sample-based Nyul normalization to standardize the intensity variations. However, I am getting a memory error. Do you have any insight on how to use this normalization method batch-wise, or any other way to tackle the memory error?
Best,
Thanks for the feedback. I'm going to see if I can change the setup to only load the images when needed.
In the meantime, you can pick a large subset of the data (small enough to avoid the memory error), perform Nyul, save the standard scale, and reuse that scale on the remaining images. I'm fairly confident it'll be fine, but let me know if that doesn't work.
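The subset-then-reuse workaround can be sketched with plain NumPy. This is a simplified stand-in for Nyul normalization, not the package's actual API: the function names (`fit_standard_scale`, `apply_standard_scale`) and the percentile landmarks are illustrative assumptions, but they show the idea of fitting a standard scale on a memory-friendly subset, saving it, and reusing it on the rest of the data.

```python
import numpy as np

def fit_standard_scale(images, percentiles=(1, 10, 25, 50, 75, 90, 99)):
    """Average each image's intensity landmarks to form a standard scale.

    Illustrative sketch of the Nyul idea: the real package uses a more
    careful landmark/rescaling scheme.
    """
    landmarks = np.stack([np.percentile(img, percentiles) for img in images])
    return np.asarray(percentiles, dtype=float), landmarks.mean(axis=0)

def apply_standard_scale(image, percentiles, standard_scale):
    """Piecewise-linearly map an image's own landmarks onto the standard scale."""
    own_landmarks = np.percentile(image, percentiles)
    return np.interp(image, own_landmarks, standard_scale)

# Fit on a subset small enough to hold in memory ...
rng = np.random.default_rng(0)
subset = [rng.normal(100.0, 15.0, size=(8, 8, 8)) for _ in range(5)]
percentiles, scale = fit_standard_scale(subset)

# ... save the standard scale ...
np.savez("standard_scale.npz", percentiles=percentiles, standard_scale=scale)

# ... then reload it later and apply it to the remaining images one at a time,
# so no more than one volume is in memory during normalization.
saved = np.load("standard_scale.npz")
normalized = apply_standard_scale(
    subset[0], saved["percentiles"], saved["standard_scale"]
)
```

Because the saved scale is just a pair of small arrays, applying it to the remaining ~900 volumes can be done in a loop that loads, normalizes, and writes one volume at a time.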
jcreinhold changed the title from "MemoryError for Nyul" to "Out-of-memory error in Nyul for large amounts of data" on Mar 17, 2022
Hello, I have the same problem. Are there any new ways to avoid it?
For example, I see that one can save the Nyul weights in an .npz file. Could one create multiple .npz files from sections of the dataset and then average them somehow? Is that a good idea? Can it be done?
@jakubMitura14 Yes that can be done and is a reasonable idea. I don't have the time to fix this issue anytime soon. Just re-save the averaged histogram in the same format. Good luck!
Ok, thanks! Just to be sure: these are .npz files, so as far as I understand I can load them into NumPy, and since they have the same dimensions I can take the element-wise average?
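The averaging step the thread describes can be sketched as follows. It assumes each .npz file holds a 1-D array under the key `standard_scale`; that key name is an assumption for illustration, so check what your files actually contain and keep the same key when re-saving.

```python
import numpy as np

# Create a few example .npz files standing in for per-subset Nyul results.
for i in range(3):
    np.savez(f"scale_{i}.npz", standard_scale=np.arange(5, dtype=float) * (i + 1))

# Load each file's array; they must share a shape for element-wise averaging.
scales = [np.load(f"scale_{i}.npz")["standard_scale"] for i in range(3)]
averaged = np.mean(np.stack(scales), axis=0)

# Re-save the averaged scale in the same format so it can be reused
# exactly like a scale fit on the full dataset.
np.savez("scale_averaged.npz", standard_scale=averaged)
```

Note that an element-wise average of per-subset scales is an approximation: it matches the full-dataset result only when each subset contributes equally, so equal-sized subsets are the safest split.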