Merge pull request #457 from JuliaDataCubes/la/loadinmemory
readcubedata, read data into memory
lazarusA authored Oct 23, 2024
2 parents 9c9e2a3 + 68dd226 commit 009bf11
Showing 1 changed file with 34 additions and 1 deletion.
35 changes: 34 additions & 1 deletion docs/src/UserGuide/read.md
@@ -66,4 +66,37 @@ using Downloads: download
path = download("https://github.com/yeesian/ArchGDALDatasets/raw/307f8f0e584a39a050c042849004e6a2bd674f99/gdalworkshop/world.tif", "world.tif")
ds = open_dataset(path)
````

## Load data into memory

For datasets or variables that fit in RAM, you may want to load them completely into memory. This is done with the `readcubedata` function. As an example, let's use the NetCDF workflow; the same applies to the other supported formats.

### readcubedata

:::tabs

== single variable

```@example read_netcdf
readcubedata(ds.tos)
```

== with the `:` operator

```@example read_netcdf
ds.tos[:, :, :]
```

In this case, you need to know in advance how many dimensions the variable has and how long each one is. This is easy to determine, since the information is displayed whenever you query the variable.

== Complete Dataset

```@example read_netcdf
ds_loaded = readcubedata(ds)
ds_loaded["tos"] # Load the variable of interest; the loading status is shown for each variable.
```

:::
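The `:` tab above relies on standard Julia indexing semantics: indexing with `:` along every dimension materializes a new dense in-memory `Array`. A minimal plain-Julia sketch (no YAXArrays calls, values chosen purely for illustration):

```julia
# Indexing with `:` along every axis copies the data into a dense Array,
# which is why `ds.tos[:, :, :]` loads the variable into memory.
A = reshape(1:24, 2, 3, 4)   # a lazy reshaped range, not a dense Array
B = A[:, :, :]               # materializes a dense Array{Int64, 3}
```

The same mechanism is what turns a lazily backed variable into an eager one when you index it in full.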

Note how the loading status changes from `loaded lazily` to `loaded in memory`.
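Before loading eagerly, it can help to check that the data actually fits in RAM. A minimal sketch in plain Julia, assuming a hypothetical shape and element type for illustration (not taken from the example dataset):

```julia
# Estimate whether an array of a given shape fits in memory before
# loading it eagerly. Shape and eltype here are illustrative assumptions.
dims = (180, 170, 24)                     # e.g. lon × lat × time
nbytes = prod(dims) * sizeof(Float32)     # 2_937_600 bytes for this shape
fits = nbytes < 0.5 * Sys.total_memory()  # leave generous headroom
```

If `fits` is `false`, keep the variable lazy and work on slices or chunks instead.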
