Reading in a large number of reader files: memory limit #1245
The dataset is in this case opened with xarray's open_mfdataset. In the generic reader, some additional options are passed to open_mfdataset (see the sketch below).
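A minimal sketch of that kind of lazy open_mfdataset call; the glob pattern, chunk sizes and keyword choices below are assumptions for illustration, not necessarily the exact options the generic reader uses.

```python
# Sketch only: open many per-timestep NetCDF files lazily with xarray, so data
# is read from disk only when a given timestep is actually accessed.
import xarray as xr

ds = xr.open_mfdataset(
    "schism_output/*.nc",    # hypothetical path to the per-timestep files
    combine="by_coords",     # merge files along their coordinate values
    chunks={"time": 1},      # keep each timestep as its own lazy dask chunk
    parallel=True,           # open files in parallel (requires dask)
    data_vars="minimal",     # only concatenate variables that have a time dimension
    coords="minimal",
    compat="override",       # skip equality checks on variables shared between files
)
print(ds)                    # nothing is loaded yet; values are read on access
```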
I've tried adding those arguments and I'm still getting the same issue. To confirm, is the intended behavior to read the files in as needed, or does the simulation need to be able to hold all the reader files in memory at once?
Update: reading in 2000 hourly timesteps using
See this parallel issue: #1241 (comment). So you could also try to install
I am working with SCHISM model files that contain a single time step each. At the moment I am reading in two months' worth of files using:
However, that kills the run due to exceeding the memory limit. Each timestep/model file is 270 MB, so is creating the reader attempting to allocate 388 GB of memory (roughly 1440 hourly files × 270 MB)? Is there a better way to create the readers so they only access the timesteps one at a time?
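For reference, a hedged sketch of what such a setup might look like; the file path, model choice and reader class are assumptions (SCHISM's unstructured output may require a dedicated SCHISM reader rather than the generic CF reader shown here).

```python
# Sketch only: pass a glob of the ~1440 single-timestep files to one reader.
# The generic reader opens them with xarray.open_mfdataset, which builds a
# lazy (dask-backed) dataset instead of reading ~388 GB into memory up front.
from opendrift.models.oceandrift import OceanDrift
from opendrift.readers import reader_netCDF_CF_generic

o = OceanDrift()
reader = reader_netCDF_CF_generic.Reader("schism_hourly/*.nc")  # hypothetical path
o.add_reader(reader)
```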