This repository was archived by the owner on Jan 22, 2025. It is now read-only.

Commit b685182

mergify[bot] and alessandrod authored and committed
v1.18: accounts-db: fix 8G+ memory spike during hash calculation (backport of #1308) (#1318)
accounts-db: fix 8G+ memory spike during hash calculation (#1308)

We were accidentally doing several thousand 4 MB allocations - even during incremental hash - which added up to an 8 GB+ memory spike over ~2s every ~30s. Fix by using Vec::new() in the identity function.

Empirically, 98%+ of reduces join arrays with fewer than 128 elements, and only the last few reduces join large vecs. Because realloc grows exponentially we don't see pathological reallocation: each reduce does at most one realloc (and often zero, thanks to the exponential growth).

(cherry picked from commit 2c71685)

Co-authored-by: Alessandro Decina <[email protected]>
1 parent f2b31bd commit b685182

File tree

1 file changed: +7 −3

accounts-db/src/accounts_hash.rs (+7, −3)
```diff
@@ -838,9 +838,13 @@ impl<'a> AccountsHasher<'a> {
                 accum
             })
             .reduce(
-                || DedupResult {
-                    hashes_files: Vec::with_capacity(max_bin),
-                    ..Default::default()
+                || {
+                    DedupResult {
+                        // Allocate with Vec::new() so that no allocation actually happens. See
+                        // https://github.com/anza-xyz/agave/pull/1308.
+                        hashes_files: Vec::new(),
+                        ..Default::default()
+                    }
                 },
                 |mut a, mut b| {
                     a.lamports_sum = a
```

Comments (0)