Fix issue where prune blobs can OOM #6571

Open

wants to merge 107 commits into base: unstable

Commits (107)
0487b33
add interface
eserilev Aug 31, 2023
4b48ef8
Merge branch 'unstable' of https://github.com/sigp/lighthouse into mo…
eserilev Aug 31, 2023
ec84723
level db interface updates
eserilev Sep 6, 2023
86b820e
level db interface updates
eserilev Sep 6, 2023
846b213
starting the split
eserilev Sep 7, 2023
a885c62
remove leveldb references
eserilev Sep 9, 2023
b8636d1
get test cases to pass
eserilev Sep 9, 2023
7177f22
refactor and get test cases to pass
eserilev Sep 9, 2023
d86eaf3
generalize key iter
eserilev Sep 10, 2023
d6c4971
resolve merge conflicts
eserilev Jan 30, 2024
2ee7279
rename impl to LevelDB
eserilev Jan 30, 2024
cc1dcf4
initial work
eserilev Jan 30, 2024
b6a2823
write option
eserilev Jan 30, 2024
3881da1
cfg
eserilev Jan 30, 2024
1d39785
merge
eserilev Jan 30, 2024
04011f3
redb db impl
eserilev Jan 30, 2024
e2ecb41
redb
eserilev Jan 30, 2024
323e9a9
durability and atomicity
eserilev Jan 30, 2024
e2a9f7c
Merge branch 'beacon-node-backend-redb' into modularize-beacon-node-b…
eserilev Jan 30, 2024
ad24ec9
remove savepoint
eserilev Jan 31, 2024
3e557d6
Merge branch 'beacon-node-backend-redb' into modularize-beacon-node-b…
eserilev Jan 31, 2024
39c6b83
working on getting full_participation_no_skips test to pass
eserilev Jan 31, 2024
f718f9f
test case passes
eserilev Feb 1, 2024
41700d6
update
eserilev Feb 1, 2024
04eada1
use rw lock
eserilev Feb 1, 2024
e4d47ea
table iter experiment
eserilev Feb 2, 2024
4cf145d
iterator tests
eserilev Feb 3, 2024
01f8d3d
redb 2.0
eserilev Feb 3, 2024
22cb4c0
iter_column_keys
eserilev Feb 3, 2024
27e9128
remove generic type param
eserilev Feb 3, 2024
82ed22d
test cases passing
eserilev Feb 3, 2024
f96ea2a
iter temp and iter raw impl
eserilev Feb 3, 2024
1583a1f
remove unneeded get_key_for_col
eserilev Feb 3, 2024
715e24d
fix
eserilev Feb 3, 2024
5dab93b
lint
eserilev Feb 3, 2024
114c7dd
remove unwraps
eserilev Feb 4, 2024
b84ff3c
fmt
eserilev Feb 4, 2024
6badfe6
redb dependency
eserilev Feb 4, 2024
a76f64f
Merge branch 'unstable' of https://github.com/sigp/lighthouse into mo…
eserilev Feb 8, 2024
ae6bf24
add db name
eserilev Feb 9, 2024
795859e
resolve merge conflicts
eserilev Feb 10, 2024
ab5e6a8
merge conflicts resolved
eserilev Feb 10, 2024
9f4ccb5
dir check
eserilev Feb 19, 2024
5b49e4f
Merge branch 'unstable' of https://github.com/sigp/lighthouse into mo…
eserilev Feb 19, 2024
487c2f1
logging errors
eserilev Feb 19, 2024
fd60c5b
Merge branch 'unstable' of https://github.com/sigp/lighthouse into mo…
eserilev Feb 22, 2024
e1ab17d
consolidate iter_raw to generic iter
eserilev Feb 24, 2024
ceeded2
Merge branch 'unstable' of https://github.com/sigp/lighthouse into mo…
eserilev Feb 24, 2024
2bb1be7
linting
eserilev Feb 24, 2024
b1c437d
off by one column iter
eserilev Feb 24, 2024
0c6352b
fix test
eserilev Feb 25, 2024
69f315f
remove iter_temp_state_roots, add predicate, add backend flag
eserilev Feb 27, 2024
969b679
added redb and leveldb build feature
eserilev Feb 27, 2024
ddb68e2
update docket
eserilev Feb 27, 2024
304aed9
resolve merge conflict
eserilev Mar 28, 2024
de444ee
resolve merge conflicts, move redb to v2.0
eserilev Mar 29, 2024
48c5ca4
merge unstable
eserilev Jun 6, 2024
577836c
fmt fmt fmt
eserilev Jun 6, 2024
d067bc6
resolve merge conflicts
eserilev Aug 12, 2024
bdcdbda
remote iter raw keys
eserilev Aug 12, 2024
ad7f889
fix failed test
eserilev Aug 13, 2024
acfcd55
fmt
eserilev Aug 13, 2024
1164ddd
cargo changes
eserilev Aug 13, 2024
def8b6c
Merge branch 'modularize-beacon-node-backend' of https://github.com/e…
eserilev Aug 13, 2024
f7fc0a0
defeault to redb
eserilev Aug 14, 2024
e1806d8
defeault to redb
eserilev Aug 14, 2024
2d2d4e9
Merge branch 'unstable' of https://github.com/sigp/lighthouse into mo…
eserilev Aug 14, 2024
fdbe248
Merge branch 'unstable' of https://github.com/sigp/lighthouse into mo…
eserilev Aug 20, 2024
0b3eee2
revert forced redb
eserilev Aug 20, 2024
15569b7
rename dbfile to .redb
eserilev Aug 20, 2024
72d56ec
fix test, update docs
eserilev Aug 20, 2024
e9ee3ba
fix leveldb error
eserilev Aug 21, 2024
2494a08
fmt
eserilev Aug 21, 2024
58d0baf
leveldb fix
eserilev Aug 21, 2024
ced1189
remove println
eserilev Aug 21, 2024
ec7059b
remove extraneous migration schemas and comment tuples
eserilev Aug 26, 2024
ad7db41
add compaction metrics
eserilev Aug 26, 2024
8ed2ff4
log from_ssz_bytes error in lc server
eserilev Aug 26, 2024
63b15b1
add additional metrics
eserilev Aug 26, 2024
fc2e412
linting
eserilev Aug 26, 2024
74e1c07
fixbroken test
eserilev Aug 26, 2024
72b6381
metrics
eserilev Aug 27, 2024
627f013
fix build error
eserilev Aug 27, 2024
6237531
small revert
eserilev Aug 27, 2024
df59d23
resolve merge conflicts:
eserilev Aug 29, 2024
d97aeaf
Merge branch 'modularize-beacon-node-backend' of https://github.com/e…
eserilev Aug 29, 2024
0bab63c
update metrics
eserilev Aug 29, 2024
06490d4
fix redb
eserilev Aug 29, 2024
89866f2
fix conflicts, add TODOS
eserilev Sep 9, 2024
f2c514a
Merge branch 'modularize-beacon-node-backend' of https://github.com/e…
eserilev Sep 9, 2024
7428536
remove todos
eserilev Sep 9, 2024
910f45d
fix audits
eserilev Sep 9, 2024
cf3b056
conflicts
eserilev Sep 10, 2024
b589e2d
resolve conflicts, add some TODOs
eserilev Sep 10, 2024
e3cdca5
Merge branch 'unstable' of https://github.com/sigp/lighthouse into mo…
eserilev Sep 16, 2024
282aa16
update redb version to 2.1.3
eserilev Sep 16, 2024
807137a
get tests to pass
eserilev Sep 16, 2024
4b7a4ff
Merge branch 'unstable' of https://github.com/sigp/lighthouse into mo…
eserilev Sep 19, 2024
7bcc6f4
Merge branch 'unstable' of https://github.com/sigp/lighthouse into mo…
eserilev Sep 20, 2024
921dbae
Merge remote-tracking branch 'origin/unstable' into modularize-beacon…
michaelsproul Oct 29, 2024
23b0a7c
Merge branch 'unstable' of https://github.com/sigp/lighthouse into mo…
eserilev Nov 1, 2024
be5ceec
optimize redb temp state cleanup
eserilev Nov 2, 2024
5e7ff6e
fix tests
eserilev Nov 2, 2024
c83812b
fix test
eserilev Nov 4, 2024
eda7aef
delete while fn to iterate through blobs and prune
eserilev Nov 5, 2024
7966dfc
Merge branch 'unstable' of https://github.com/sigp/lighthouse into fi…
eserilev Nov 7, 2024
ea863a5
remove memory store delete while impl
eserilev Nov 7, 2024
2 changes: 2 additions & 0 deletions Cargo.lock

(Generated file; diff not rendered.)

2 changes: 1 addition & 1 deletion Makefile
@@ -14,7 +14,7 @@ BUILD_PATH_AARCH64 = "target/$(AARCH64_TAG)/release"
PINNED_NIGHTLY ?= nightly

# List of features to use when cross-compiling. Can be overridden via the environment.
CROSS_FEATURES ?= gnosis,slasher-lmdb,slasher-mdbx,slasher-redb,jemalloc
CROSS_FEATURES ?= gnosis,slasher-lmdb,slasher-mdbx,slasher-redb,jemalloc,beacon-node-leveldb,beacon-node-redb

# Cargo profile for Cross builds. Default is for local builds, CI uses an override.
CROSS_PROFILE ?= release

1 change: 1 addition & 0 deletions beacon_node/beacon_chain/src/beacon_chain.rs
@@ -1399,6 +1399,7 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
sync_committee_period,
count,
&self.spec,
self.log.clone(),
)
}


@@ -325,7 +325,6 @@ impl<E: EthSpec> PendingComponents<E> {
(None, Some(verified_data_columns))
}
};

let executed_block = recover(diet_executed_block)?;

let AvailabilityPendingExecutedBlock {
@@ -716,7 +715,7 @@ mod test {
use slog::{info, Logger};
use state_processing::ConsensusContext;
use std::collections::VecDeque;
use store::{HotColdDB, ItemStore, LevelDB, StoreConfig};
use store::{database::interface::BeaconNodeBackend, HotColdDB, ItemStore, StoreConfig};
use tempfile::{tempdir, TempDir};
use types::non_zero_usize::new_non_zero_usize;
use types::{ExecPayload, MinimalEthSpec};
@@ -728,7 +727,7 @@ mod test {
db_path: &TempDir,
spec: Arc<ChainSpec>,
log: Logger,
) -> Arc<HotColdDB<E, LevelDB<E>, LevelDB<E>>> {
) -> Arc<HotColdDB<E, BeaconNodeBackend<E>, BeaconNodeBackend<E>>> {
let hot_path = db_path.path().join("hot_db");
let cold_path = db_path.path().join("cold_db");
let blobs_path = db_path.path().join("blobs_db");
@@ -902,7 +901,11 @@
)
where
E: EthSpec,
T: BeaconChainTypes<HotStore = LevelDB<E>, ColdStore = LevelDB<E>, EthSpec = E>,
T: BeaconChainTypes<
HotStore = BeaconNodeBackend<E>,
ColdStore = BeaconNodeBackend<E>,
EthSpec = E,
>,
{
let log = test_logger();
let chain_db_path = tempdir().expect("should get temp dir");
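
The hunks above show the PR's central type substitution: everywhere `HotColdDB` was parameterized by the concrete `LevelDB<E>`, it now takes a `BeaconNodeBackend<E>`. The diff doesn't show that type's definition, but the pattern suggests a wrapper that dispatches to whichever engine (LevelDB or redb) was selected. Below is a minimal, self-contained sketch of that dispatch style; the trait, names, and variants are assumptions for illustration, not Lighthouse's actual definitions.

```rust
// Sketch of the dispatch pattern implied by the `LevelDB<E>` ->
// `BeaconNodeBackend<E>` substitution. All names are illustrative; the real
// Lighthouse trait and backends are richer than this.
trait ItemStore {
    fn get(&self, column: &str, key: &[u8]) -> Option<Vec<u8>>;
}

struct LevelDb; // stand-in for the LevelDB backend
struct Redb;    // stand-in for the redb backend

impl ItemStore for LevelDb {
    fn get(&self, _column: &str, _key: &[u8]) -> Option<Vec<u8>> {
        None // real impl: prefix the key with the column, read the flat keyspace
    }
}

impl ItemStore for Redb {
    fn get(&self, _column: &str, _key: &[u8]) -> Option<Vec<u8>> {
        None // real impl: open the named table for `column`, read from redb
    }
}

// One concrete type that generic code such as `HotColdDB<E, Hot, Cold>` can
// name regardless of which engine was selected.
enum BeaconNodeBackend {
    LevelDb(LevelDb),
    Redb(Redb),
}

impl ItemStore for BeaconNodeBackend {
    fn get(&self, column: &str, key: &[u8]) -> Option<Vec<u8>> {
        match self {
            BeaconNodeBackend::LevelDb(db) => db.get(column, key),
            BeaconNodeBackend::Redb(db) => db.get(column, key),
        }
    }
}

fn main() {
    let store = BeaconNodeBackend::Redb(Redb);
    assert!(store.get("beacon_block", b"root").is_none());
}
```

An enum, rather than a `dyn` trait object, keeps the backend type `Sized` and statically dispatchable inside generic code like `HotColdDB<E, Hot, Cold>`; whether that is the actual motivation here is not stated in the diff.
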
35 changes: 26 additions & 9 deletions beacon_node/beacon_chain/src/light_client_server_cache.rs
@@ -2,7 +2,7 @@ use crate::errors::BeaconChainError;
use crate::{metrics, BeaconChainTypes, BeaconStore};
use parking_lot::{Mutex, RwLock};
use safe_arith::SafeArith;
use slog::{debug, Logger};
use slog::{debug, error, Logger};
use ssz::Decode;
use std::num::NonZeroUsize;
use std::sync::Arc;
@@ -270,13 +270,33 @@
start_period: u64,
count: u64,
chain_spec: &ChainSpec,
log: Logger,
) -> Result<Vec<LightClientUpdate<T::EthSpec>>, BeaconChainError> {
let column = DBColumn::LightClientUpdate;
let mut light_client_updates = vec![];
for res in store
.hot_db
.iter_column_from::<Vec<u8>>(column, &start_period.to_le_bytes())
{

let results = store.hot_db.iter_column_from::<Vec<u8>>(
column,
&start_period.to_le_bytes(),
move |sync_committee_bytes, _| match u64::from_ssz_bytes(sync_committee_bytes) {
Ok(sync_committee_period) => {
if sync_committee_period >= start_period + count {
return false;
}
true
}
Err(e) => {
error!(
log,
"Error decoding sync committee bytes from the db";
"error" => ?e
);
false
}
},
);

for res in results? {
let (sync_committee_bytes, light_client_update_bytes) = res?;
let sync_committee_period = u64::from_ssz_bytes(&sync_committee_bytes)
.map_err(store::errors::Error::SszDecodeError)?;
@@ -290,11 +310,8 @@ impl<T: BeaconChainTypes> LightClientServerCache<T> {
.map_err(store::errors::Error::SszDecodeError)?;

light_client_updates.push(light_client_update);

if sync_committee_period >= start_period + count {
break;
}
}

Ok(light_client_updates)
}

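This hunk is the heart of the OOM fix: rather than iterating every `LightClientUpdate` key from `start_period` onward and only `break`ing inside the caller's loop, the caller now passes a predicate into `iter_column_from`, so iteration ends at the first key outside the requested range and out-of-range entries are never visited or buffered. A self-contained sketch of the pattern follows; it uses a `BTreeMap` as a stand-in for the on-disk column and big-endian keys so byte order matches numeric order (the real code encodes periods with `to_le_bytes`).

```rust
use std::collections::BTreeMap;

// Illustrative stand-in for `iter_column_from` with a termination predicate.
// Not the Lighthouse API: a BTreeMap plays the on-disk column, and big-endian
// keys are used so lexicographic key order matches numeric period order.
fn updates_in_range(
    column: &BTreeMap<Vec<u8>, Vec<u8>>,
    start_period: u64,
    count: u64,
) -> Vec<Vec<u8>> {
    column
        // Seek to the first key at or after `start_period`.
        .range(start_period.to_be_bytes().to_vec()..)
        // The predicate: the first rejected key ends iteration, so entries
        // past the range are never visited, decoded, or buffered.
        .take_while(|(key, _)| match <[u8; 8]>::try_from(key.as_slice()) {
            Ok(bytes) => u64::from_be_bytes(bytes) < start_period + count,
            Err(_) => false, // undecodable key: stop, as the real code logs and bails
        })
        .map(|(_, update_bytes)| update_bytes.clone())
        .collect()
}

fn main() {
    let mut column = BTreeMap::new();
    for period in 0u64..100 {
        column.insert(period.to_be_bytes().to_vec(), vec![0u8; 8]);
    }
    // Only periods 10..15 are touched; the other 95 entries are never read.
    assert_eq!(updates_in_range(&column, 10, 5).len(), 5);
}
```
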
2 changes: 1 addition & 1 deletion beacon_node/beacon_chain/src/otb_verification_service.rs
@@ -120,7 +120,7 @@ pub fn load_optimistic_transition_blocks<T: BeaconChainTypes>(
chain: &BeaconChain<T>,
) -> Result<Vec<OptimisticTransitionBlock>, StoreError> {
process_results(
chain.store.hot_db.iter_column::<Hash256>(OTBColumn),
chain.store.hot_db.iter_column::<Hash256>(OTBColumn)?,
|iter| {
iter.map(|(_, bytes)| OptimisticTransitionBlock::from_store_bytes(&bytes))
.collect()
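
The only change in this hunk is the trailing `?`: `iter_column` now returns a `Result` wrapping the iterator instead of the iterator itself. A plausible reason — an assumption, since the diff doesn't say — is that constructing a redb iterator means opening a read transaction and a table, either of which can fail, whereas LevelDB iterator construction was infallible. A sketch of the resulting two-level error handling:

```rust
// Sketch only: the constructor can fail (e.g. a table fails to open), and each
// yielded item can fail too (e.g. a corrupt entry), hence `?` at both levels.
type StoreError = String;

fn iter_column(
    column: &str,
) -> Result<impl Iterator<Item = Result<(Vec<u8>, Vec<u8>), StoreError>>, StoreError> {
    if column.is_empty() {
        return Err("failed to open table".to_owned());
    }
    Ok(std::iter::empty())
}

fn count_entries(column: &str) -> Result<usize, StoreError> {
    let mut n = 0;
    for res in iter_column(column)? { // `?`: iterator construction may fail
        let (_key, _value) = res?;    // `?`: each entry may fail to load
        n += 1;
    }
    Ok(n)
}

fn main() {
    assert_eq!(count_entries("pubkey_cache"), Ok(0));
    assert!(count_entries("").is_err());
}
```
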
@@ -3,9 +3,7 @@ use crate::validator_pubkey_cache::DatabasePubkey;
use slog::{info, Logger};
use ssz::{Decode, Encode};
use std::sync::Arc;
use store::{
get_key_for_col, DBColumn, Error, HotColdDB, KeyValueStore, KeyValueStoreOp, StoreItem,
};
use store::{DBColumn, Error, HotColdDB, KeyValueStore, KeyValueStoreOp, StoreItem};
use types::{Hash256, PublicKey};

const LOG_EVERY: usize = 200_000;
@@ -21,7 +19,7 @@ pub fn upgrade_to_v21<T: BeaconChainTypes>(
// Iterate through all pubkeys and decompress them.
for (i, res) in db
.hot_db
.iter_column::<Hash256>(DBColumn::PubkeyCache)
.iter_column::<Hash256>(DBColumn::PubkeyCache)?
.enumerate()
{
let (key, value) = res?;
@@ -53,7 +51,7 @@ pub fn downgrade_from_v21<T: BeaconChainTypes>(
// Iterate through all pubkeys and recompress them.
for (i, res) in db
.hot_db
.iter_column::<Hash256>(DBColumn::PubkeyCache)
.iter_column::<Hash256>(DBColumn::PubkeyCache)?
.enumerate()
{
let (key, value) = res?;
@@ -62,9 +60,10 @@
message: format!("{e:?}"),
})?;

let db_key = get_key_for_col(DBColumn::PubkeyCache.into(), key.as_slice());
let column: &str = DBColumn::PubkeyCache.into();
ops.push(KeyValueStoreOp::PutKeyValue(
db_key,
column.to_owned(),
key.as_slice().to_vec(),
pubkey_bytes.as_ssz_bytes(),
));

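In this migration hunk (and in the `DeleteKey` call sites later in the diff), `KeyValueStoreOp` changes shape: the old code baked the column into a single prefixed byte key via `get_key_for_col`, while the new variants carry the column name and key separately — presumably so a table-oriented backend like redb can route each op to a named table while LevelDB can still concatenate the two. A hedged sketch of the two encodings, reconstructed from the call sites:

```rust
// Assumed shapes, reconstructed from the call sites in this diff; the real
// Lighthouse definitions may differ.

// Old: the column was baked into the key before the op was built.
fn get_key_for_col(column: &str, key: &[u8]) -> Vec<u8> {
    let mut out = column.as_bytes().to_vec();
    out.extend_from_slice(key);
    out
}

enum OldKeyValueStoreOp {
    PutKeyValue(Vec<u8>, Vec<u8>), // (column-prefixed key, value)
    DeleteKey(Vec<u8>),            // (column-prefixed key)
}

// New: column and key travel separately, so each backend decides how to
// combine them (LevelDB can concatenate; redb can pick a named table).
enum KeyValueStoreOp {
    PutKeyValue(String, Vec<u8>, Vec<u8>), // (column, key, value)
    DeleteKey(String, Vec<u8>),            // (column, key)
}

fn main() {
    let _old = OldKeyValueStoreOp::PutKeyValue(
        get_key_for_col("pubkey_cache", b"key"),
        b"value".to_vec(),
    );
    let _new = KeyValueStoreOp::PutKeyValue(
        "pubkey_cache".to_owned(),
        b"key".to_vec(),
        b"value".to_vec(),
    );
    let _del = KeyValueStoreOp::DeleteKey("block_roots".to_owned(), b"chunk".to_vec());
}
```
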
15 changes: 11 additions & 4 deletions beacon_node/beacon_chain/src/test_utils.rs
@@ -56,7 +56,8 @@ use std::str::FromStr;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, LazyLock};
use std::time::Duration;
use store::{config::StoreConfig, HotColdDB, ItemStore, LevelDB, MemoryStore};
use store::database::interface::BeaconNodeBackend;
use store::{config::StoreConfig, HotColdDB, ItemStore, MemoryStore};
use task_executor::TaskExecutor;
use task_executor::{test_utils::TestRuntime, ShutdownReason};
use tree_hash::TreeHash;
@@ -116,7 +117,7 @@ pub fn get_kzg(spec: &ChainSpec) -> Arc<Kzg> {
pub type BaseHarnessType<E, THotStore, TColdStore> =
Witness<TestingSlotClock, CachingEth1Backend<E>, E, THotStore, TColdStore>;

pub type DiskHarnessType<E> = BaseHarnessType<E, LevelDB<E>, LevelDB<E>>;
pub type DiskHarnessType<E> = BaseHarnessType<E, BeaconNodeBackend<E>, BeaconNodeBackend<E>>;
pub type EphemeralHarnessType<E> = BaseHarnessType<E, MemoryStore<E>, MemoryStore<E>>;

pub type BoxedMutator<E, Hot, Cold> = Box<
@@ -299,7 +300,10 @@ impl<E: EthSpec> Builder<EphemeralHarnessType<E>> {

impl<E: EthSpec> Builder<DiskHarnessType<E>> {
/// Disk store, start from genesis.
pub fn fresh_disk_store(mut self, store: Arc<HotColdDB<E, LevelDB<E>, LevelDB<E>>>) -> Self {
pub fn fresh_disk_store(
mut self,
store: Arc<HotColdDB<E, BeaconNodeBackend<E>, BeaconNodeBackend<E>>>,
) -> Self {
let validator_keypairs = self
.validator_keypairs
.clone()
@@ -324,7 +328,10 @@
}

/// Disk store, resume.
pub fn resumed_disk_store(mut self, store: Arc<HotColdDB<E, LevelDB<E>, LevelDB<E>>>) -> Self {
pub fn resumed_disk_store(
mut self,
store: Arc<HotColdDB<E, BeaconNodeBackend<E>, BeaconNodeBackend<E>>>,
) -> Self {
let mutator = move |builder: BeaconChainBuilder<_>| {
builder
.resume_from_db()

5 changes: 3 additions & 2 deletions beacon_node/beacon_chain/tests/op_verification.rs
@@ -14,7 +14,8 @@ use state_processing::per_block_processing::errors::{
AttesterSlashingInvalid, BlockOperationError, ExitInvalid, ProposerSlashingInvalid,
};
use std::sync::{Arc, LazyLock};
use store::{LevelDB, StoreConfig};
use store::database::interface::BeaconNodeBackend;
use store::StoreConfig;
use tempfile::{tempdir, TempDir};
use types::*;

@@ -26,7 +27,7 @@ static KEYPAIRS: LazyLock<Vec<Keypair>> =

type E = MinimalEthSpec;
type TestHarness = BeaconChainHarness<DiskHarnessType<E>>;
type HotColdDB = store::HotColdDB<E, LevelDB<E>, LevelDB<E>>;
type HotColdDB = store::HotColdDB<E, BeaconNodeBackend<E>, BeaconNodeBackend<E>>;

fn get_store(db_path: &TempDir) -> Arc<HotColdDB> {
let spec = Arc::new(test_spec::<E>());

39 changes: 23 additions & 16 deletions beacon_node/beacon_chain/tests/store_tests.rs
@@ -26,12 +26,13 @@ use std::convert::TryInto;
use std::sync::{Arc, LazyLock};
use std::time::Duration;
use store::chunked_vector::Chunk;
use store::database::interface::BeaconNodeBackend;
use store::metadata::{SchemaVersion, CURRENT_SCHEMA_VERSION, STATE_UPPER_LIMIT_NO_RETAIN};
use store::KeyValueStore;
use store::{
chunked_vector::{chunk_key, Field},
get_key_for_col,
iter::{BlockRootsIterator, StateRootsIterator},
BlobInfo, DBColumn, HotColdDB, KeyValueStore, KeyValueStoreOp, LevelDB, StoreConfig,
BlobInfo, DBColumn, HotColdDB, KeyValueStoreOp, StoreConfig,
};
use tempfile::{tempdir, TempDir};
use tokio::time::sleep;
@@ -49,15 +50,15 @@
type E = MinimalEthSpec;
type TestHarness = BeaconChainHarness<DiskHarnessType<E>>;

fn get_store(db_path: &TempDir) -> Arc<HotColdDB<E, LevelDB<E>, LevelDB<E>>> {
fn get_store(db_path: &TempDir) -> Arc<HotColdDB<E, BeaconNodeBackend<E>, BeaconNodeBackend<E>>> {
get_store_generic(db_path, StoreConfig::default(), test_spec::<E>())
}

fn get_store_generic(
db_path: &TempDir,
config: StoreConfig,
spec: ChainSpec,
) -> Arc<HotColdDB<E, LevelDB<E>, LevelDB<E>>> {
) -> Arc<HotColdDB<E, BeaconNodeBackend<E>, BeaconNodeBackend<E>>> {
let hot_path = db_path.path().join("hot_db");
let cold_path = db_path.path().join("cold_db");
let blobs_path = db_path.path().join("blobs_db");
@@ -76,7 +77,7 @@
}

fn get_harness(
store: Arc<HotColdDB<E, LevelDB<E>, LevelDB<E>>>,
store: Arc<HotColdDB<E, BeaconNodeBackend<E>, BeaconNodeBackend<E>>>,
validator_count: usize,
) -> TestHarness {
// Most tests expect to retain historic states, so we use this as the default.
@@ -88,7 +89,7 @@
}

fn get_harness_generic(
store: Arc<HotColdDB<E, LevelDB<E>, LevelDB<E>>>,
store: Arc<HotColdDB<E, BeaconNodeBackend<E>, BeaconNodeBackend<E>>>,
validator_count: usize,
chain_config: ChainConfig,
) -> TestHarness {
@@ -269,10 +270,12 @@ async fn heal_freezer_block_roots_at_split() {
let chunk_index = <store::chunked_vector::BlockRoots as Field<E>>::chunk_index(
last_restore_point_slot.as_usize(),
);
let key_chunk = get_key_for_col(DBColumn::BeaconBlockRoots.as_str(), &chunk_key(chunk_index));
store
.cold_db
.do_atomically(vec![KeyValueStoreOp::DeleteKey(key_chunk)])
.do_atomically(vec![KeyValueStoreOp::DeleteKey(
DBColumn::BeaconBlockRoots.as_str().to_owned(),
chunk_key(chunk_index).to_vec(),
)])
.unwrap();

let block_root_err = store
@@ -348,10 +351,12 @@ async fn heal_freezer_block_roots_with_skip_slots() {
let chunk_index = <store::chunked_vector::BlockRoots as Field<E>>::chunk_index(
last_restore_point_slot.as_usize(),
);
let key_chunk = get_key_for_col(DBColumn::BeaconBlockRoots.as_str(), &chunk_key(chunk_index));
store
.cold_db
.do_atomically(vec![KeyValueStoreOp::DeleteKey(key_chunk)])
.do_atomically(vec![KeyValueStoreOp::DeleteKey(
DBColumn::BeaconBlockRoots.as_str().to_owned(),
chunk_key(chunk_index).to_vec(),
)])
.unwrap();

let block_root_err = store
@@ -493,7 +498,6 @@ async fn full_participation_no_skips() {
AttestationStrategy::AllValidators,
)
.await;

check_finalization(&harness, num_blocks_produced);
check_split_slot(&harness, store);
check_chain_dump(&harness, num_blocks_produced + 1);
@@ -2388,7 +2392,7 @@ async fn garbage_collect_temp_states_from_failed_block_on_startup() {
.unwrap_err();

assert_eq!(
store.iter_temporary_state_roots().count(),
store.iter_temporary_state_roots().unwrap().count(),
block_slot.as_usize() - 1
);
store
@@ -2407,7 +2411,7 @@

// On startup, the store should garbage collect all the temporary states.
let store = get_store(&db_path);
assert_eq!(store.iter_temporary_state_roots().count(), 0);
assert_eq!(store.iter_temporary_state_roots().unwrap().count(), 0);
}

#[tokio::test]
@@ -2443,7 +2447,7 @@ async fn garbage_collect_temp_states_from_failed_block_on_finalization() {
.unwrap_err();

assert_eq!(
store.iter_temporary_state_roots().count(),
store.iter_temporary_state_roots().unwrap().count(),
block_slot.as_usize() - 1
);

@@ -2462,7 +2466,7 @@
assert_ne!(store.get_split_slot(), 0);

// Check that temporary states have been pruned.
assert_eq!(store.iter_temporary_state_roots().count(), 0);
assert_eq!(store.iter_temporary_state_roots().unwrap().count(), 0);
}

#[tokio::test]
Expand Down Expand Up @@ -3745,7 +3749,10 @@ fn check_finalization(harness: &TestHarness, expected_slot: u64) {
}

/// Check that the HotColdDB's split_slot is equal to the start slot of the last finalized epoch.
fn check_split_slot(harness: &TestHarness, store: Arc<HotColdDB<E, LevelDB<E>, LevelDB<E>>>) {
fn check_split_slot(
harness: &TestHarness,
store: Arc<HotColdDB<E, BeaconNodeBackend<E>, BeaconNodeBackend<E>>>,
) {
let split_slot = store.get_split_slot();
assert_eq!(
harness