Dear ADCC-Team,
I recently started using ADCC, and a calculation with only 379 basis functions at the adc2 level already goes over 3 TB of RAM, which is actually my limit (open-shell calculation, 10 states requested, 31 atoms; I can send the structure per email if needed).
I saw that the AdcMemory class has a lot of interesting keys, and also attributes like cached_eri_blocks and cached_fock_blocks that should perhaps help as well, but I did not find any examples of their proper usage.
Naively setting them in the same way as the conv_tol attribute ended with:
```
scfres.cached_eri_blocks
AttributeError: 'UHF' object has no attribute 'cached_eri_blocks'
```
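For context, here is roughly the kind of script I am running, with a small placeholder molecule instead of my actual system; my guess (and it is only a guess) is that cached_eri_blocks and cached_fock_blocks might live on adcc's ReferenceState rather than on the SCF object, since the latter clearly does not have them:

```python
import adcc
from pyscf import gto, scf

# Placeholder molecule/basis, only to have a converged SCF object at hand.
mol = gto.M(atom="O 0 0 0; H 0 0 0.96; H 0.92 0 -0.24", basis="6-31g")
scfres = scf.UHF(mol).run()

# Guess: the caching attributes sit on the ReferenceState built from the
# SCF result, not on the UHF object itself (hence the AttributeError above).
refstate = adcc.ReferenceState(scfres)
print(refstate.cached_eri_blocks)    # which ERI blocks are currently cached?
print(refstate.cached_fock_blocks)   # which Fock blocks are currently cached?

# The ADC calculation would then run on this reference state, e.g.
# state = adcc.adc2(refstate, n_states=10)
```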
Could you please show an example of how to handle the RAM demands of the code?
Best regards
OK, based on #118 and https://adc-connect.org/v0.15.7/api/libadcc.AdcMemory.html?highlight I seem to have identified the syntax as adcc.memory_pool.initialise('/path_to_scratch/', 2000000000, allocator="libxm") for limiting the RAM to 2 GB (test calculation, benzyl radical in 6-31G, 77 orbitals).
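In full, the relevant part of the test script looks roughly like this; the scratch path is a placeholder and the second argument is simply the byte count quoted above:

```python
import adcc

# Initialise the global adcc memory pool with the disk-backed libxm
# allocator *before* building a ReferenceState or running any ADC
# calculation; arguments follow the call quoted above (pagefile
# directory, size in bytes, allocator).
adcc.memory_pool.initialise("/path_to_scratch/", 2000000000,
                            allocator="libxm")

# ... afterwards run the SCF and the ADC calculation as usual, e.g.
# state = adcc.adc2(scfres, n_states=10)
```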
The increase in runtime is pretty impressive:
less than 1 minute without libxm, with a bit more than 6 GB of RAM used, versus
over 1 hour with the settings specified above. However, although I tried to allocate 2 GB and 4 GB in different runs, Slurm reports "Memory Utilized: 1.06 GB" in both cases.
This I do not understand...
UPD: It seems the amount of memory specified as the second argument affects neither the runtime nor the memory utilisation (checked with 2 GB, 4 GB and 50 GB).
I will report back if there are more interesting results with the big system and an allocation of 3 TB of RAM.
The short answer is that ADC calculations need a lot of memory. This can be reduced by tricks (caching data on disk, density fitting, etc.), but we don't have any of these properly implemented in adcc. libxm in principle allows you to cache tensors on disk, as you have reported, but it also increases runtimes a lot. This is known behaviour and basically the reason why libxm is not advertised more.
We currently have no way of hard-limiting the memory usage. The max_block_size parameter does something else (it is related to the blocks used for tensor contractions), and we don't recommend altering its default.
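To give a rough feeling for the scaling (a back-of-the-envelope estimate, not adcc code): the dominant storage in an ADC(2) run is the doubles part of each excitation vector, which grows roughly as n_occ² * n_vir², times the number of trial vectors kept by the iterative eigensolver, on top of cached ERI blocks such as oovv and ovvv. The counts below are purely illustrative placeholders, not the numbers of your calculation:

```python
# Rough ADC(2) memory estimate with purely illustrative placeholder numbers.
n_occ = 60      # assumed number of occupied spin orbitals
n_vir = 320     # assumed number of virtual spin orbitals
n_vectors = 40  # assumed number of subspace vectors in the eigensolver

bytes_per_double = 8
doubles_gib = n_occ**2 * n_vir**2 * bytes_per_double / 1024**3
print(f"~{doubles_gib:.1f} GiB per doubles vector")
print(f"~{doubles_gib * n_vectors:.0f} GiB for {n_vectors} subspace vectors")
```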