[Feature request]: Have gempyor measure its pickled memory to set up batch runs #360
Labels
batch
Relating to batch processing.
gempyor
Concerns the Python core.
inference
Concerns the parameter inference framework.
medium priority
Medium priority.
Is your feature request related to a problem? Please describe.
When submitting a cluster job, we need to specify a memory request. During the beta testing of emcee, a lot of folks (@saraloo, @jcblemai, @anjalika-nande) ran into trouble because not enough memory was allocated, and guessing this number is hard.
Is your feature request related to a new application, scenario round, pathogen? Please describe.
This is for all rounds, a big time saver.
Describe the solution you'd like
In the submission script we want to build for emcee, we could construct the picklable object that is distributed across cores, measure its memory footprint, and use that to inform the SLURM memory request.
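A minimal sketch of what this could look like: pickle the object, take its serialized size as a lower bound on per-worker memory, and scale it up by a safety factor before writing the SLURM request. The function name, safety factor, and minimum floor here are all hypothetical, not existing gempyor API.

```python
import pickle


def estimate_memory_request_mb(obj, n_workers=1, safety_factor=2.0, min_mb=512):
    """Estimate a SLURM memory request (in MB) from an object's pickled size.

    The pickled size is only a lower bound on what each worker needs:
    deserialization overhead and per-worker working memory are covered
    (roughly) by ``safety_factor``, and ``min_mb`` is a floor so tiny
    objects still get a sane allocation.  All defaults are illustrative.
    """
    pickled_bytes = len(pickle.dumps(obj, protocol=pickle.HIGHEST_PROTOCOL))
    per_worker_mb = pickled_bytes / 1e6 * safety_factor
    return max(min_mb, int(per_worker_mb * n_workers) + 1)


# Hypothetical use in a submission script: ``model_state`` stands in for
# whatever object emcee distributes across cores.
model_state = {"params": list(range(10_000))}
mem_mb = estimate_memory_request_mb(model_state, n_workers=32)
sbatch_line = f"#SBATCH --mem={mem_mb}M"
```

The estimate could then be written into the generated sbatch file instead of asking users to guess a number up front.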