Check current signers still validators #1097

Merged: 35 commits, Oct 24, 2024 (diff shown from 24 commits)

Commits:
454c9ad  Check current signers still validators (JesseAbram, Oct 3, 2024)
940b650  update (JesseAbram, Oct 3, 2024)
a27a9ec  tests (JesseAbram, Oct 4, 2024)
0fea1bf  end of day push (JesseAbram, Oct 4, 2024)
7f2346d  working reshare (JesseAbram, Oct 9, 2024)
a54dd56  clean (JesseAbram, Oct 9, 2024)
ae41b50  remove mutable (JesseAbram, Oct 9, 2024)
ee8c1fa  test (JesseAbram, Oct 10, 2024)
2d91051  update (JesseAbram, Oct 11, 2024)
f5daa0f  threhsold limit (JesseAbram, Oct 16, 2024)
ed32eba  fix test (JesseAbram, Oct 16, 2024)
4482fd6  remove old signer (JesseAbram, Oct 16, 2024)
b1d93ac  clean (JesseAbram, Oct 16, 2024)
84cf720  Merge branch 'master' of github.com:entropyxyz/entropy-core into chec… (JesseAbram, Oct 16, 2024)
97f54a9  metadata (JesseAbram, Oct 16, 2024)
0e8c7b4  fmt (JesseAbram, Oct 16, 2024)
4ec4f36  test (JesseAbram, Oct 16, 2024)
3a58960  update benchmarks (JesseAbram, Oct 16, 2024)
1864b4d  working benchmarks (JesseAbram, Oct 17, 2024)
4f7167b  fix benches (JesseAbram, Oct 17, 2024)
3328151  Update crates/shared/src/types.rs (JesseAbram, Oct 18, 2024)
118fd90  correct truncation (JesseAbram, Oct 18, 2024)
516fa7b  remove hash comparison (JesseAbram, Oct 18, 2024)
3d9bd90  Merge branch 'master' of github.com:entropyxyz/entropy-core into chec… (JesseAbram, Oct 21, 2024)
4154e88  Apply suggestions from code review (JesseAbram, Oct 22, 2024)
75e73f5  fix (JesseAbram, Oct 22, 2024)
3880ff1  fix (JesseAbram, Oct 22, 2024)
afe674e  fix benchmark (JesseAbram, Oct 22, 2024)
1154f1e  fix benchmark (JesseAbram, Oct 23, 2024)
72c89e5  fmt (JesseAbram, Oct 23, 2024)
b5d2ebd  fix (JesseAbram, Oct 23, 2024)
7312376  comment (JesseAbram, Oct 23, 2024)
537f1b4  Update crates/threshold-signature-server/src/validator/api.rs (JesseAbram, Oct 23, 2024)
f44a7e3  fix type (JesseAbram, Oct 23, 2024)
0c1dc4b  Merge branch 'master' (JesseAbram, Oct 24, 2024)
Binary file modified crates/client/entropy_metadata.scale
Binary file not shown.
3 changes: 2 additions & 1 deletion crates/protocol/src/execute_protocol.rs
@@ -334,6 +334,7 @@ pub async fn execute_reshare(
chans: Channels,
threshold_pair: &sr25519::Pair,
inputs: KeyResharingInputs<KeyParams, PartyId>,
verifiers: &BTreeSet<PartyId>,
ameba23 (Contributor) commented, Oct 18, 2024:
    I'm not sure we need this argument, as it looks like whenever we call this function `verifiers` is the same as `inputs.new_holders`.

JesseAbram (Member, Author) replied:
    Yes, originally I changed it to allow the old holders to be a larger set not contained in the new holders, but I ran into the #1114 issue. I didn't change it back because this is still more correct imo and adds no overhead, and it will hopefully be needed when that issue is resolved.

aux_info_option: Option<AuxInfo<KeyParams, PartyId>>,
) -> Result<
(ThresholdKeyShare<KeyParams, PartyId>, AuxInfo<KeyParams, PartyId>),
Expand All @@ -350,7 +351,7 @@ pub async fn execute_reshare(
&mut OsRng,
SynedrionSessionId::from_seed(session_id_hash.as_slice()),
pair,
&inputs.new_holders,
verifiers,
inputs.clone(),
)
.map_err(ProtocolExecutionErr::SessionCreation)?;
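For orientation, here is a minimal sketch of the call-site shape after this change. This is not code from the PR; it assumes the crate's `execute_reshare`, `KeyResharingInputs`, and `PartyId` are in scope, and it reflects the review observation above that every current caller passes `inputs.new_holders` as the verifier set:

```rust
use std::collections::BTreeSet;

// Sketch: every party in the reshare session currently also acts as a
// verifier, so callers hand the new-holder set in as `verifiers`.
let verifiers: BTreeSet<PartyId> = inputs.new_holders.clone();
let (key_share, aux_info) =
    execute_reshare(session_id, channels, signer, inputs, &verifiers, None).await?;
```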
3 changes: 2 additions & 1 deletion crates/protocol/tests/helpers/mod.rs
@@ -144,7 +144,8 @@ pub async fn server(
new_threshold: old_key.threshold(),
};

let new_keyshare = execute_reshare(session_id, channels, &pair, inputs, None).await?;
let new_keyshare =
execute_reshare(session_id, channels, &pair, inputs, &party_ids, None).await?;
Ok(ProtocolOutput::Reshare(new_keyshare.0))
},
SessionId::Dkg { .. } => {
4 changes: 2 additions & 2 deletions crates/shared/src/types.rs
@@ -55,8 +55,8 @@ pub struct OcwMessageDkg {
#[cfg_attr(feature = "std", derive(Serialize, Deserialize))]
#[derive(Clone, Encode, Decode, Debug, Eq, PartialEq, TypeInfo)]
pub struct OcwMessageReshare {
// Stash address of new signer
pub new_signer: Vec<u8>,
// Stash addresses of new signers
pub new_signers: Vec<Vec<u8>>,
Contributor comment:
    A few lines above, in `ValidatorInfo`, `Vec`s are provided with the full path, `codec::alloc::vec::Vec<u8>`. Should we be consistent? The short form here seems better.

pub block_number: BlockNumber,
}

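A sketch of building the widened message (hypothetical values, not code from the PR; it assumes `BlockNumber` is a `u32` alias and that stash addresses are 32 bytes, as elsewhere in this diff):

```rust
// Sketch: the old `new_signer: Vec<u8>` carried exactly one stash address;
// `new_signers` can now carry several in a single reshare message.
let msg = OcwMessageReshare {
    new_signers: vec![[1u8; 32].to_vec(), [2u8; 32].to_vec()],
    block_number: 100,
};
```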
3 changes: 2 additions & 1 deletion crates/threshold-signature-server/src/signing_client/api.rs
@@ -205,7 +205,8 @@ pub async fn do_proactive_refresh(
.await?;

let result =
execute_reshare(session_id, channels, signer.signer(), inputs, Some(aux_info)).await?;
execute_reshare(session_id, channels, signer.signer(), inputs, &party_ids, Some(aux_info))
.await?;
Ok(result)
}

53 changes: 29 additions & 24 deletions crates/threshold-signature-server/src/validator/api.rs
@@ -106,7 +106,7 @@ pub async fn new_reshare(
.map_err(|e| ValidatorErr::UserError(e.to_string()))?;

let old_holder: Option<OldHolder<KeyParams, PartyId>> =
if data.new_signer == my_stash_address.encode() {
if data.new_signers.contains(&my_stash_address.encode()) {
None
} else {
let kvdb_result = app_state.kv_store.kv().get(&hex::encode(NETWORK_PARENT_KEY)).await?;
@@ -116,14 +116,15 @@
Some(OldHolder { key_share: key_share.0 })
};

let party_ids: BTreeSet<PartyId> =
// new_holders -> From chain next_signers (old_holders (currently forced to be t) + new_holders)
// also acts as verifiers as is everyone in the party
let new_holders: BTreeSet<PartyId> =
validators_info.iter().cloned().map(|x| PartyId::new(x.tss_account)).collect();

let pruned_old_holders =
prune_old_holders(&api, &rpc, data.new_signer, validators_info.clone()).await?;

// old holders -> next_signers - new_signers (will be at least t)
let old_holders =
&prune_old_holders(&api, &rpc, data.new_signers, validators_info.clone()).await?;
Contributor comment on lines +123 to +125:
    I can't find where the "(will be at least t)" invariant is checked; it doesn't seem to happen in prune_old_holders. (Also, unrelated to this PR, that method seems to be cloning wildly, but I'm not sure that's needed.)
let old_holders: BTreeSet<PartyId> =
pruned_old_holders.into_iter().map(|x| PartyId::new(x.tss_account)).collect();
old_holders.iter().map(|x| PartyId::new(x.tss_account.clone())).collect();

let new_holder = NewHolder {
verifying_key: decoded_verifying_key,
@@ -139,7 +140,7 @@
let inputs = KeyResharingInputs {
old_holder,
new_holder: Some(new_holder),
new_holders: party_ids.clone(),
new_holders: new_holders.clone(),
new_threshold: threshold as usize,
};

@@ -157,7 +158,6 @@
converted_validator_info.push(validator_info.clone());
tss_accounts.push(validator_info.tss_account.clone());
}

let channels = get_channels(
&app_state.listener_state,
converted_validator_info,
Expand All @@ -169,7 +169,8 @@ pub async fn new_reshare(
.await?;

let (new_key_share, aux_info) =
execute_reshare(session_id.clone(), channels, signer.signer(), inputs, None).await?;
execute_reshare(session_id.clone(), channels, signer.signer(), inputs, &new_holders, None)
Contributor comment:
    Maybe I've misunderstood something, but it looks like new_holders is the same as inputs.new_holders, so why add the extra argument to the execute_reshare function?

JesseAbram (Member, Author) replied:
    Explained lower down (see the thread on `execute_reshare` in execute_protocol.rs above).

.await?;

let serialized_key_share = key_serialize(&(new_key_share, aux_info))
.map_err(|_| ProtocolErr::KvSerialize("Kv Serialize Error".to_string()))?;
@@ -273,8 +274,8 @@ pub async fn validate_new_reshare(
.await?
.ok_or_else(|| ValidatorErr::ChainFetch("Not Currently in a reshare"))?;

if reshare_data.new_signer != chain_data.new_signer
|| chain_data.block_number != reshare_data.block_number
if chain_data.new_signers != reshare_data.new_signers
|| chain_data.block_number != reshare_data.block_number.saturating_sub(1)
{
return Err(ValidatorErr::InvalidData);
}
@@ -365,20 +366,24 @@ pub fn check_forbidden_key(key: &str) -> Result<(), ValidatorErr> {
pub async fn prune_old_holders(
api: &OnlineClient<EntropyConfig>,
rpc: &LegacyRpcMethods<EntropyConfig>,
new_signer: Vec<u8>,
new_signers: Vec<Vec<u8>>,
validators_info: Vec<ValidatorInfo>,
) -> Result<Vec<ValidatorInfo>, ValidatorErr> {
Ok(if !new_signer.is_empty() {
let address_slice: &[u8; 32] = &new_signer.clone().try_into().unwrap();
let new_signer_address = AccountId32(*address_slice);
let new_signer_info = &get_validators_info(api, rpc, vec![new_signer_address])
.await
.map_err(|e| ValidatorErr::UserError(e.to_string()))?[0];
validators_info
.iter()
.filter(|x| x.tss_account != new_signer_info.tss_account)
.cloned()
.collect()
Ok(if !new_signers.is_empty() {
let mut filtered_validators_info = vec![];
for new_signer in new_signers {
Contributor comment:
    This works fine, but since we are anyway about to convert this to a BTreeSet, we can use .difference().

let address_slice: &[u8; 32] = &new_signer.clone().try_into().unwrap();
let new_signer_address = AccountId32(*address_slice);
let new_signer_info = &get_validators_info(api, rpc, vec![new_signer_address])
.await
.map_err(|e| ValidatorErr::UserError(e.to_string()))?[0];
filtered_validators_info = validators_info
.iter()
.filter(|x| x.tss_account != new_signer_info.tss_account)
.cloned()
.collect::<Vec<_>>();
}
filtered_validators_info
} else {
validators_info.clone()
})
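Worth noting: the loop above rebuilds `filtered_validators_info` from the original `validators_info` on every iteration, so with more than one new signer only the last one appears to be filtered out; the reviewer's `.difference()` suggestion avoids this. A standalone sketch of that suggestion (hypothetical `AccountId` stand-in; the real code filters `ValidatorInfo` entries by TSS account):

```rust
use std::collections::BTreeSet;

// Stand-in for the chain's account type.
type AccountId = [u8; 32];

// Old holders are the next signers minus the incoming new signers,
// expressed as a single set difference instead of repeated filtering.
fn old_holders(
    next_signers: &BTreeSet<AccountId>,
    new_signers: &BTreeSet<AccountId>,
) -> BTreeSet<AccountId> {
    next_signers.difference(new_signers).copied().collect()
}
```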
20 changes: 12 additions & 8 deletions crates/threshold-signature-server/src/validator/tests.rs
@@ -67,7 +67,7 @@ use synedrion::k256::ecdsa::VerifyingKey;

#[tokio::test]
#[serial]
async fn test_reshare() {
async fn test_reshare_foo() {
initialize_test_logger().await;
clean_tests();

@@ -76,7 +76,7 @@
let (_validator_ips, _validator_ids) =
spawn_testing_validators(ChainSpecType::Integration).await;

let validator_ports = vec![3001, 3002, 3003, 3004];
let validator_ports = vec![3002, 3003, 3004];
let api = get_api(&cxt.ws_url).await.unwrap();
let rpc = get_rpc(&cxt.ws_url).await.unwrap();

@@ -113,13 +113,15 @@
let new_signer = all_validators.iter().find(|v| !signer_stash_accounts.contains(v)).unwrap();

let block_number = TEST_RESHARE_BLOCK_NUMBER;
let onchain_reshare_request =
OcwMessageReshare { new_signer: new_signer.0.to_vec(), block_number };
let onchain_reshare_request = OcwMessageReshare {
new_signers: vec![new_signer.0.to_vec()],
block_number: block_number - 1,
Collaborator comment:
    Why change this instead of keeping the +1 in run_to_block?

JesseAbram (Member, Author) replied:
    Because this needs to match the on-chain info or it will not pass the validate check.
};

run_to_block(&rpc, block_number + 1).await;
run_to_block(&rpc, block_number).await;
// Send the OCW message to all TS servers who don't have a chain node
let response_results = join_all(
validator_ports[1..]
validator_ports
.iter()
.map(|port| {
client
@@ -323,7 +325,8 @@ async fn test_reshare_validation_fail() {
let kv = setup_client().await;

let block_number = rpc.chain_get_header(None).await.unwrap().unwrap().number + 1;
let mut ocw_message = OcwMessageReshare { new_signer: dave.public().encode(), block_number };
let mut ocw_message =
OcwMessageReshare { new_signers: vec![dave.public().encode()], block_number };

let err_stale_data =
validate_new_reshare(&api, &rpc, &ocw_message, &kv).await.map_err(|e| e.to_string());
@@ -361,7 +364,8 @@ async fn test_reshare_validation_fail_not_in_reshare() {
let kv = setup_client().await;

let block_number = rpc.chain_get_header(None).await.unwrap().unwrap().number + 1;
let ocw_message = OcwMessageReshare { new_signer: alice.public().encode(), block_number };
let ocw_message =
OcwMessageReshare { new_signers: vec![alice.public().encode()], block_number };

run_to_block(&rpc, block_number + 1).await;

@@ -153,19 +153,16 @@ async fn do_reshare(api: &OnlineClient<EntropyConfig>, rpc: &LegacyRpcMethods<EntropyConfig>
signers.push(server_info);
}

// Get all validators
let validators_query = entropy::storage().session().validators();
let all_validators = query_chain(&api, &rpc, validators_query, None).await.unwrap().unwrap();

// Get stash account of a non-signer, to become the new signer
// Since we only have 4 nodes in our test setup, this will be the same one the chain chooses
let new_signer = all_validators.iter().find(|v| !signer_stash_accounts.contains(v)).unwrap();
let reshare_data_query = entropy::storage().staking_extension().reshare_data();
let reshare_data = query_chain(&api, &rpc, reshare_data_query, None).await.unwrap().unwrap();

let block_number = TEST_RESHARE_BLOCK_NUMBER;
let onchain_reshare_request =
OcwMessageReshare { new_signer: new_signer.0.to_vec(), block_number };
let onchain_reshare_request = OcwMessageReshare {
new_signers: reshare_data.new_signers.into_iter().map(|s| s.to_vec()).collect(),
block_number: block_number - 1,
Contributor comment:
    Does this blow up on block 0? Maybe checked_sub?

JesseAbram (Member, Author) replied:
    The block number is hard-coded one line above to a number well over 0.

};

run_to_block(&rpc, block_number + 1).await;
run_to_block(&rpc, block_number).await;
// Send the OCW message to all TS servers who don't have a chain node
let client = reqwest::Client::new();
let response_results = join_all(
2 changes: 1 addition & 1 deletion node/cli/src/chain_spec/integration_tests.rs
@@ -218,7 +218,7 @@ pub fn integration_tests_genesis_config(
],
vec![EVE_VERIFYING_KEY.to_vec(), DAVE_VERIFYING_KEY.to_vec()],
),
mock_signer_rotate: (true, mock_signer_rotate_data, vec![get_account_id_from_seed::<sr25519::Public>("Charlie//stash")],),
mock_signer_rotate: (true, mock_signer_rotate_data, vec![get_account_id_from_seed::<sr25519::Public>("Charlie//stash")]),
},
"elections": ElectionsConfig {
members: endowed_accounts
2 changes: 1 addition & 1 deletion pallets/propagation/src/lib.rs
@@ -174,7 +174,7 @@ pub mod pallet {
BlockNumberFor::<T>::try_into(block_number).unwrap_or_default();

let req_body = OcwMessageReshare {
new_signer: reshare_data.new_signer,
new_signers: reshare_data.new_signers,
// subtract 1 from blocknumber since the request is from the last block
block_number: converted_block_number.saturating_sub(1),
};
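A standalone sketch of the off-by-one convention that this hunk and the `saturating_sub(1)` check in `validate_new_reshare` agree on (hypothetical helper names; block numbers assumed `u32`):

```rust
// Pallet side: ReshareInfo is recorded on chain at block B; the off-chain
// worker fires afterwards and reports B - 1 ("the request is from the last
// block").
fn reported_block(onchain_reshare_block: u32) -> u32 {
    onchain_reshare_block.saturating_sub(1)
}

// TSS side: validate_new_reshare accepts a request only if its block number
// matches the on-chain reshare block minus one (alongside the new_signers
// comparison).
fn block_number_matches(request_block: u32, onchain_reshare_block: u32) -> bool {
    request_block == onchain_reshare_block.saturating_sub(1)
}
```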
4 changes: 2 additions & 2 deletions pallets/propagation/src/tests.rs
@@ -59,7 +59,7 @@ fn knows_how_to_mock_several_http_calls() {
uri: "http://localhost:3001/validator/reshare".into(),
sent: true,
response: Some([].to_vec()),
body: [32, 1, 0, 0, 0, 0, 0, 0, 0, 6, 0, 0, 0].to_vec(),
body: [4, 32, 1, 0, 0, 0, 0, 0, 0, 0, 6, 0, 0, 0].to_vec(),
..Default::default()
});
state.expect_request(testing::PendingRequest {
Expand Down Expand Up @@ -94,7 +94,7 @@ fn knows_how_to_mock_several_http_calls() {
Propagation::post_reshare(7).unwrap();
pallet_staking_extension::ReshareData::<Test>::put(ReshareInfo {
block_number: 7,
new_signer: 1u64.encode(),
new_signers: vec![1u64.encode()],
});
// now triggers
Propagation::post_reshare(7).unwrap();
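Why the mocked body gains a leading `4`: SCALE prefixes a `Vec` with its compact-encoded length, and `compact(1)` encodes as `0x04`, so wrapping the single signer in a `Vec<Vec<u8>>` adds exactly one byte. A sketch of verifying this (assumes the `parity-scale-codec` crate and that `block_number` here is a `u32`; encoding the fields as a tuple matches the derived struct encoding):

```rust
use parity_scale_codec::Encode;

fn main() {
    // One new signer: the 8-byte SCALE encoding of 1u64, as in the test.
    let new_signers: Vec<Vec<u8>> = vec![1u64.encode()];
    let block_number: u32 = 6;
    let body = (new_signers, block_number).encode();
    // 4  = compact length of the outer Vec (one element)
    // 32 = compact length of the inner Vec (eight bytes)
    // then 1u64 little-endian, then 6u32 little-endian
    assert_eq!(body, vec![4, 32, 1, 0, 0, 0, 0, 0, 0, 0, 6, 0, 0, 0]);
}
```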
39 changes: 22 additions & 17 deletions pallets/staking/src/benchmarking.rs
@@ -29,11 +29,13 @@ use frame_support::{
use frame_system::{EventRecord, RawOrigin};
use pallet_parameters::{SignersInfo, SignersSize};
use pallet_staking::{
Event as FrameStakingEvent, MaxNominationsOf, Nominations, Pallet as FrameStaking,
RewardDestination, ValidatorPrefs,
Event as FrameStakingEvent, MaxNominationsOf, MaxValidatorsCount, Nominations,
Pallet as FrameStaking, RewardDestination, ValidatorPrefs,
};
use sp_std::{vec, vec::Vec};

// type MaxValidators<T> = <<T as Config>::BenchmarkingConfig as BenchmarkingConfig>::MaxValidators;

const NULL_ARR: [u8; 32] = [0; 32];
const SEED: u32 = 0;

@@ -427,13 +429,22 @@
new_session {
let c in 1 .. MAX_SIGNERS as u32 - 1;
let l in 0 .. MAX_SIGNERS as u32;
let v in 0 .. MAX_SIGNERS as u32;
Collaborator comment:
    Shouldn't this value be higher than MAX_SIGNERS?

let caller: T::AccountId = whitelisted_caller();
let validator_ids = create_validators::<T>(100, 1);
let second_signer: T::AccountId = account("second_signer", 0, SEED);
let second_signer_id =
<T as pallet_session::Config>::ValidatorId::try_from(second_signer.clone())
.or(Err(Error::<T>::InvalidValidatorId))
.unwrap();
// let mut signers = //create_validators::<T>((MAX_SIGNERS - 1) as u32, SEED);
let mut signers = vec![second_signer_id.clone(); c as usize];

// For the purpose of the bench these values don't actually matter, we just care that there's a
// storage entry available
SignersInfo::<T>::put(SignersSize {
total_signers: MAX_SIGNERS,
total_signers: 5,
threshold: 3,
last_session_change: 0,
});
@@ -442,23 +453,17 @@
.or(Err(Error::<T>::InvalidValidatorId))
.unwrap();

let second_signer: T::AccountId = account("second_signer", 0, SEED);
let second_signer_id =
<T as pallet_session::Config>::ValidatorId::try_from(second_signer.clone())
.or(Err(Error::<T>::InvalidValidatorId))
.unwrap();

// full signer list leaving room for one extra validator
let mut signers = vec![second_signer_id.clone(); c as usize];

Signers::<T>::put(signers.clone());
signers.push(second_signer_id.clone());

// place new signer in the signers struct in different locations to calculate random selection
// re-run
signers[l as usize % c as usize] = validator_id.clone();
// as well validators may be dropped before chosen
signers[l as usize % c as usize] = validator_ids[l as usize % c as usize].clone();

for i in 0 .. c {
signers[i as usize] = validator_ids[i as usize].clone();
}
Signers::<T>::put(signers.clone());
}: {
let _ = Staking::<T>::new_session_handler(&signers);
let _ = Staking::<T>::new_session_handler(&validator_ids);
}
verify {
assert!(NextSigners::<T>::get().is_some());
Expand Down