What happened?

I am using xr.open_dataarray() with chunks and do some simple computation. Afterwards, about 800 MB of RAM remain in use, no matter whether I close the file explicitly, delete the xarray objects, or invoke the Python garbage collector.

What does seem to work: not using the threaded Dask scheduler. The issue does not occur with the single-threaded or processes scheduler. Setting MALLOC_MMAP_MAX_=40960 also seems to solve the issue, as suggested above (disclaimer: I don't fully understand the details here).

If I understand things correctly, this indicates that the issue is a consequence of dask/dask#3530. I am not sure whether there is anything to fix on the xarray side, or what the best workaround would be; I will try the processes scheduler. See also #2186, which was closed without a fix, and my comment there.
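For reference, a minimal sketch of the two workarounds as I understand them (the file name and chunk size are placeholders taken from the example below):

```python
import dask
import xarray as xr

# Workaround 1: avoid the threaded scheduler for this computation.
# In my tests the 'processes' (or 'single-threaded') scheduler did not
# leave the extra memory behind.
with dask.config.set(scheduler='processes'):
    data = xr.open_dataarray('tempdata.nc', chunks=1_000_000, cache=False)
    print(float(data.sum()))
    data.close()

# Workaround 2: tune glibc's allocator before Python starts, e.g.
#   MALLOC_MMAP_MAX_=40960 python script.py
# The variable must be set in the environment before the interpreter
# launches; mmap-backed allocations are returned to the OS when freed,
# unlike memory held in glibc's arenas.
```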
What did you expect to happen?

Not consuming significantly more memory than before opening the NetCDF file.
Minimal Complete Verifiable Example
```python
import gc
import os.path

import dask
import numpy as np
import psutil
import xarray as xr

# a value of 1_000_000 would make much more sense here, but there seems to be
# a larger memory leak with small chunk size for some reason
CHUNK_SIZE = 1_000_000


def print_used_mem():
    process = psutil.Process()
    print("Used RAM in GB:", process.memory_info().rss / 1024**3)


def read_test_data():
    print("Opening DataArray...")
    print_used_mem()
    data = xr.open_dataarray('tempdata.nc', chunks=CHUNK_SIZE, cache=False)
    print_used_mem()
    print("Compute sum...")
    result = data.sum()
    print_used_mem()
    print("Print result...")
    print("Result", float(result))
    print_used_mem()
    data.close()
    del result
    del data
    print_used_mem()


def main():
    # preparation: create about 7.5GB of data (8 * 10**9 / 1024**3)
    if not os.path.exists('tempdata.nc'):
        print("Creating 7.5GB file tempdata.nc...")
        data = xr.DataArray(np.zeros(10**9))
        data.to_netcdf('tempdata.nc')
        print("Test file created!")
    with dask.config.set(scheduler='threading'):
        print("Starting read test...")
        print_used_mem()
        read_test_data()
    print("not inside any function any longer")
    print_used_mem()
    print("Garbage collect:", gc.collect())
    print_used_mem()


if __name__ == '__main__':
    print("Used memory before test:")
    print_used_mem()
    print("")
    main()
    print("\nUsed memory after test:")
    print_used_mem()
```
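As a further check that the retained memory is free glibc heap (per dask/dask#3530) rather than live Python objects, one can ask glibc to return its free memory to the OS. A minimal, Linux/glibc-only sketch that could be appended at the end of the script above:

```python
import ctypes

# Linux/glibc only: malloc_trim(0) releases free memory at the top of the
# heap back to the OS. If RSS drops noticeably after this call, the retained
# ~800 MB is allocator-held free memory, not a true leak of Python objects.
libc = ctypes.CDLL("libc.so.6")
libc.malloc_trim(0)
print_used_mem()
```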
MVCE confirmation
Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
Complete example — the example is self-contained, including all data and the text of any traceback.
Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
New issue — a search of GitHub Issues suggests this is not a duplicate.
Recent environment — the issue occurs with the latest version of xarray and its dependencies.
Relevant log output
Anything else we need to know?
No response
Environment
I ran the tests in a fresh conda environment with only the relevant packages installed:
micromamba install -c conda-forge xarray dask netcdf4
xr.show_versions():
xarray: 2024.10.0
pandas: 2.2.3
numpy: 2.1.3
scipy: None
netCDF4: 1.7.2
pydap: None
h5netcdf: None
h5py: None
zarr: None
cftime: 1.6.4
nc_time_axis: None
iris: None
bottleneck: None
dask: 2024.11.2
distributed: 2024.11.2
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: 2024.10.0
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: None
pip: 24.3.1
conda: None
pytest: None
mypy: None
IPython: None
sphinx: None
conda list: