Zernike memory issue #52
Comments
What kind of behavior would we expect or prefer here? I'll take a look while I'm on cleanup duty.
the basic question is: what are the memory requirements of that function?
Sorry to get pedantic here, but @satra, can you just give a few bullets or a shell script describing the workflow that produces the issue? If I understand correctly, the "ask" here is for a new feature in the Zernike code that returns a "required memory" estimate for a particular mesh. Is that accurate?
@brianthelion - actually we just want to know what the memory requirements are. for example, for a standard brain mesh of about 2 hemi x (100k vertices + 100k faces), i'm seeing memory spikes up to 22 GB when running mindboggle. so we need to know/describe to users what the memory requirements are for the different steps. if i don't run the Zernike step of mindboggle, the rest of mindboggle appears to be satisfied with about 2 GB of RAM.
My first-pass reading of the code is that the memory usage should only be a function of the moment order. More specifically, the thread pool iterates over the faces in the mesh, calculating the contribution of each face to the overall moment. Each worker in the pool initializes four arrays whose size is set by the moment order. So, it seems like there are a couple of possibilities:
If there is a very serious memory leak, we probably need to find it. @satra, can you eliminate (4) and (5) as possibilities? Can you also tell us which function in the code is hitting the memory limit?
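For reference, a back-of-the-envelope estimate of the per-worker footprint implied by the reading above might look like the sketch below. The (order + 1)**3 shape, the factor of four, and the function name are assumptions for illustration, not details taken from mindboggle's Zernike code.

```python
import numpy as np

def estimate_pool_memory(order, n_workers, n_arrays=4, dtype=np.float64):
    """Hypothetical estimate: each worker holds `n_arrays` dense arrays
    with (order + 1)**3 entries, so total pool memory scales with the
    moment order and the number of workers (an illustrative assumption)."""
    per_array = (order + 1) ** 3 * np.dtype(dtype).itemsize
    return n_workers * n_arrays * per_array

# e.g. order-20 moments on an 8-core machine
print("%.1f MB" % (estimate_pool_memory(20, 8) / 1e6))
```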
@binarybottle Are you able to reproduce this issue? My time to deliver a potential fix is pretty limited, so the sooner we can get it through triage the better.
Satra ran thousands of images on Amazon and on the MIT cluster, and found this spike in many cases, so I would defer to Satra's having adequately reproduced this problem.
I have absolute faith that @satra is seeing a real issue. I need a greater level of specificity, though, to determine whether or not there's an actual bug. @binarybottle if you can provide some basic debug output under the failure mode, that would get me a long way down the road.
@brianthelion - i'll try to get more specifics this weekend.
Thanks, @satra!
Thank you, Satra!
Any updates here? Thanks!
yes and no. i can replicate the error when i run it on its own, but not when i'm memory profiling it. i have narrowed it down to the Zernike function. i had to step away from this - but i'll look at it again soon.
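For anyone else trying to reproduce this, one way to capture the peak memory of just the Zernike step is sketched below, assuming the memory_profiler package is installed; run_zernike_step is a placeholder for whatever call actually triggers the spike.

```python
from memory_profiler import memory_usage

def run_zernike_step():
    # placeholder: call the Zernike moments computation here,
    # e.g. on the same mesh that produces the spike in mindboggle
    pass

# sample resident memory every 0.1 s while the step runs, then report the peak
samples = memory_usage((run_zernike_step, (), {}), interval=0.1)
print("peak memory: %.1f MiB" % max(samples))
```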
I will disable Zernike moments from the --all option until this is resolved...
@satra Any progress on this issue? Thanks!
Ping!
pong! sorry i have had no time to test this issue. unfortunately this will require me to carve out an hour or two and those have been a little scarce! i do need to run a bunch of mindboggle output next month, so i will try to test it then.
ok - so here is the issue: https://github.com/binarybottle/mindboggle/blob/master/mindboggle/shapes/zernike/pipelines.py#L232 multiprocessing doesn't use shared memory and makes copies of every bit of data that's passed through the pool. to allow for shared memory one would need to use sharedctypes: https://docs.python.org/2/library/multiprocessing.html#module-multiprocessing.sharedctypes
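For illustration, sharing the vertex array across pool workers without per-process copies could look roughly like the sketch below. This is not the mindboggle code; the initializer-plus-global pattern and the toy per-face function are just one common way to expose a sharedctypes buffer to Pool workers, and it relies on the default fork start method on Linux/macOS.

```python
import numpy as np
from multiprocessing import Pool, sharedctypes

def _init_worker(shared_buf, shape):
    # re-wrap the shared buffer as an ndarray inside each worker (no copy)
    global POINTS
    POINTS = np.frombuffer(shared_buf, dtype=np.float64).reshape(shape)

def _face_term(face):
    # toy stand-in for the per-face Zernike contribution
    return POINTS[face].sum()

if __name__ == "__main__":
    points = np.random.rand(100000, 3)              # mesh vertices
    buf = sharedctypes.RawArray('d', points.size)   # one shared allocation
    np.frombuffer(buf, dtype=np.float64)[:] = points.ravel()
    faces = np.random.randint(0, len(points), (100000, 3))
    with Pool(initializer=_init_worker, initargs=(buf, points.shape)) as pool:
        total = sum(pool.map(_face_term, faces.tolist()))
    print(total)
```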
@brianthelion - it's actually the same pattern that's used throughout that file. i just used one example to highlight it.
@satra and @brianthelion -- Do Zernike moments still require 2 GB RAM, or were you able to find a resolution to this issue?
@binarybottle What's needed here is support for allocating ndarrays in shared memory. On other projects I'd used this, but it seems that it's currently unmaintained. I don't see other good implementations out there that support the standard numpy array interface, but maybe @satra can point at something. Without shared memory support, the options are: (1) continue to use KoehlMultiproc as the DefaultPipeline and assume that the memory copy won't blow out the system RAM, or exit with a graceful warning if we can detect that a memory error is coming; (2) switch the DefaultPipeline to a SerialPipeline as in this line.
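A rough sketch of the "graceful warning / fallback" idea in options (1) and (2) might look like the following; psutil, the 0.8 threshold, and the way the pipeline classes are passed in are all assumptions for illustration rather than the actual pipelines.py API.

```python
import warnings
import psutil  # assumed available for querying free memory

def choose_pipeline(estimated_bytes, multiproc_pipeline, serial_pipeline):
    """Pick the multiprocessing pipeline only when the expected memory
    copies fit comfortably in free RAM; otherwise warn and fall back
    to the serial pipeline (illustrative logic, not mindboggle's)."""
    available = psutil.virtual_memory().available
    if estimated_bytes > 0.8 * available:
        warnings.warn(
            "Zernike step may need ~%.1f GB but only %.1f GB is free; "
            "falling back to the serial pipeline."
            % (estimated_bytes / 1e9, available / 1e9))
        return serial_pipeline
    return multiproc_pipeline
```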
Mindboggle is crashing on many subjects with a memory limit set to 2 GB:
Satrajit Ghosh (2/15/2015):
150215-15:17:45,874 workflow INFO:
Executing node Zernike_sulci.a1 in dir: /om/scratch/Tue/ps/MB_work/734db8e05f6be469df79c1419f253ad7/Mindboggle/Surface_feature_shapes/_hemi_rh/Zernike_sulci
Load "sulci" scalars from sulci.vtk
8329 vertices for label 1
Reduced 160076 to 15921 triangular faces
srun: Exceeded job memory limit