Hello Team,
I have close to 750 GB of DRAM in my system and would like to load the features in memory to exploit faster DRAM access. However, the features are stored as .npy, which makes the loading process extremely slow.
For ogbn_papers100M and the other datasets in that family, we use the compressed .npz format and also store the data in a preprocessed directory in binary form, which makes loading from disk extremely fast.
Is it possible to reuse the same libraries for the MAG240M dataset, or is there another workaround?
TIA.
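For reference, one possible workaround (a sketch only, not the official OGB loading path): memory-map the existing .npy with `np.load(..., mmap_mode="r")` so the header is parsed without a full read, then do a one-time dump to a raw binary file that later runs can slurp back with a single sequential `np.fromfile` call. The filenames, shape, and dtype below are illustrative stand-ins, not the real MAG240M layout.

```python
import numpy as np

# Small stand-in for MAG240M's node feature file (the real one is far larger).
np.save("node_feat.npy", np.arange(12, dtype=np.float16).reshape(3, 4))

# One-time conversion: memory-map the .npy (no full read up front),
# then write the raw C-order bytes without the .npy header.
feats = np.load("node_feat.npy", mmap_mode="r")
feats.tofile("node_feat.bin")

# Fast path on later runs: one sequential read straight into DRAM.
# Shape and dtype must be recorded separately, since raw binary has no header.
loaded = np.fromfile("node_feat.bin", dtype=feats.dtype).reshape(feats.shape)
assert np.array_equal(loaded, feats)
```

If a full in-memory copy is not strictly required, `np.load` with `mmap_mode="r"` alone may already help, since the OS page cache keeps hot pages in DRAM after first access.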