Unit tests for pallet EPM-MB #6332

Merged: 16 commits from zebedeusz/pallet-epm-mb-unit-tests into gpestana/pallet-epm-mb on Nov 21, 2024

Conversation

Zebedeusz (Contributor):

No description provided.

gpestana (Contributor) left a comment:

A few comments and nits, especially around merging test cases that are similar and simplifying the test code.

Other than that, this is great, thanks for the help!

// let some_bn = 0;
// let some_page_index = 0;

let account_id = 99;
Contributor:

Nit, but since the account_id is not so important for the tests, there's no need to define it as a separate var. Just use 99 directly in the calls.

Contributor (Author):

I prefer to use a variable rather than a number that looks a little magical. This way, wherever this variable is used, one can see what it actually represents.
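
For illustration, the two styles under discussion look roughly like this; the register call and its arguments are hypothetical, not the pallet's confirmed API:

  // Inline literal, as the reviewer suggests:
  assert_ok!(Pallet::<T>::register(RuntimeOrigin::signed(99)));

  // Named binding, as the author prefers; the value's role is visible at the call site:
  let account_id = 99;
  assert_ok!(Pallet::<T>::register(RuntimeOrigin::signed(account_id)));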

}

#[test]
fn force_clear_submission_fails_if_called_by_account_none() {
Contributor:

I would refactor this test to check that all origins are correct when calling bail, i.e.:

  • Origin none fails
  • Origin signed fails if there's no registration
  • Origin signed is OK if there's a registration
  • etc.

This would cover all the cases, and there'd be no need for the test above (bail_while_having_no_submissions_does_not_modify_balances).
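
A rough sketch of such a consolidated origin test, assuming the crate's usual mock helpers; the bail/register extrinsics and the NoSubmission error are illustrative guesses, not the pallet's confirmed API:

  // assumes: use crate::mock::*; use frame_support::{assert_noop, assert_ok};
  // and use sp_runtime::DispatchError;
  #[test]
  fn bail_origin_checks() {
      ExtBuilder::default().build_and_execute(|| {
          // Origin none fails outright.
          assert_noop!(Pallet::<T>::bail(RuntimeOrigin::none()), DispatchError::BadOrigin);
          // Origin signed fails while there is no registration.
          assert_noop!(Pallet::<T>::bail(RuntimeOrigin::signed(99)), Error::<T>::NoSubmission);
          // Origin signed succeeds once a registration exists.
          assert_ok!(Pallet::<T>::register(RuntimeOrigin::signed(99)));
          assert_ok!(Pallet::<T>::bail(RuntimeOrigin::signed(99)));
      })
  }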

Contributor (Author):

This is a test for force_clear, but I guess you mean the one for bail.
Anyway, if I understand correctly, you would put all the test cases for the "bail" function in one test function.
If so, I would be against it. It would surely provide the coverage we need and verify everything, but it would result in overloaded tests checking lots of things, instead of many tests each checking a very specific outcome.
So let me understand your reasoning here.

Contributor:

Quick 2-cents from me: in this case, testing different origins is a simple procedure, so I get @Zebedeusz's point of having different #[test]s.

If testing different origins required a (more or less) similar setup every time, doing it all in the same test would make more sense, as @gpestana suggests.

Example: a verbose integration test for Westend's people parachain checking for different wrong origins.

}

#[test]
fn force_clear_submission_fails_if_submitter_done_no_submissions_at_all() {
Contributor:

This test is useful, but I'd try to integrate more test cases to cover all the possible edge cases in one test alone. This will help keep the number of tests low while covering everything.

Contributor (Author):

Is there a specific reason why you're suggesting the approach of having few tests with lots of checks inside each one?
My preferred approach is exactly the opposite, so I'd like to understand your point of view.

Contributor:

The reasons why I'd merge these small test cases are:

  1. Simplicity - we can re-use setup code across test cases;
  2. Context - the tests are basically testing the behaviour of the same unit (force_clear);
  3. Follow-along/readability - it's easier to understand all the invariants of force_clear etc. if they are all in the same test; we can use comments to explain what is being tested and why errors happen, etc.;
  4. Low verbosity - especially due to 1.

See, for example, this test case in staking: https://github.com/paritytech/polkadot-sdk/blob/master/substrate/frame/staking/src/tests.rs#L1283

We test, within a single test, the whole flow of Call::unbond. We could split it into different tests, but the setup/boilerplate code would be repeated all over the individual tests. Also, through that test you can read and understand the whole unbond flow, requirements, invariants, etc.
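
In that spirit, a flow-style test for force_clear could be structured along these lines; the call names, arguments, and error variants here are assumptions for illustration only:

  #[test]
  fn force_clear_submission_flow() {
      ExtBuilder::default().build_and_execute(|| {
          // Clearing before any submission exists fails; assert_noop!
          // also guarantees that storage (e.g. balances) is untouched.
          assert_noop!(
              Pallet::<T>::force_clear_submission(RuntimeOrigin::root(), 99),
              Error::<T>::NoSubmission
          );
          // After a registration, force-clearing succeeds...
          assert_ok!(Pallet::<T>::register(RuntimeOrigin::signed(99)));
          assert_ok!(Pallet::<T>::force_clear_submission(RuntimeOrigin::root(), 99));
          // ...and a second clear of the same submitter fails again.
          assert_noop!(
              Pallet::<T>::force_clear_submission(RuntimeOrigin::root(), 99),
              Error::<T>::NoSubmission
          );
      })
  }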

Contributor:

https://github.com/paritytech/polkadot-sdk/blob/master/substrate/frame/staking/src/tests.rs#L1495 is another good example, where assert_noop! is also used to ensure that errors do not change the storage root (and state).

Contributor (Author):

I think our dispute boils down to the level at which we see these tests.
You (and maybe it's the way in the whole repo, I can't know yet) advocate for higher-level tests, where a single test function runs a validation flow through the tested function with some failure checks alongside. This makes the test a functionality test, or flow test, or something like that. So it reads like a story.
My initial approach is in general to test the smallest possible units of functionality first, then go to flow tests.
So each unit test function is called fn_results_with_x_when_z, where the conditions and expected results are clear from the name of the test. This way, expectations for a given function are set in the unit tests.
Then I go on to write functional/flow tests, and I do them as you would, just with fewer failure checks.

As to the verbosity and boilerplate, it can often be packed into setup functions (to avoid duplication), but it's rather good to be explicit in test setup so as to describe the initial conditions.

So to sum up, I think we actually align on the way to write tests; it's just that I went lower in the test pyramid, which I would argue is a good thing, and to improve readability we could arrange the tests differently.
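
As a small illustration of that naming convention, two focused tests whose names encode condition and expected outcome (the names and bodies are hypothetical):

  #[test]
  fn bail_fails_with_no_submission_when_account_never_registered() {
      // setup: no registration for the account
      // assert: bail returns an error and state is unchanged
  }

  #[test]
  fn bail_releases_deposit_when_submission_exists() {
      // setup: one registration with a held deposit
      // assert: bail succeeds and the deposit is returned
  }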

})
}

#[test]
Contributor:

This test could be merged with the test above. Calling it get_wrong_solution_page_works and then testing all the cases where None is returned due to a bad page index is a good call!

Contributor (Author):

As above. I'm not in favour of merged scenarios, so let's discuss that bigger topic first.

Zebedeusz requested a review from gpestana on November 6, 2024 at 08:19
@@ -66,6 +67,14 @@ pub type T = Runtime;
pub type Block = frame_system::mocking::MockBlock<Runtime>;
pub(crate) type Solver = SequentialPhragmen<AccountId, sp_runtime::PerU16, ()>;

pub struct Weighter;

impl Get<RuntimeDbWeight> for Weighter {
Contributor:

Do we really need this? The type DbWeight can be set to unit in the configs in test mocks, i.e.

 type DbWeight = ();

Contributor (Author):

Take a look at the test called on_initialize_returns_specific_weight_in_off_phase.
If DbWeight is set to (), it fails.
So I defined it as such, for the reason explained in the test.
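
For context, such a getter would look roughly like this; the concrete read/write values are illustrative, the real ones live in the mock:

  use frame_support::{traits::Get, weights::RuntimeDbWeight};

  pub struct Weighter;
  impl Get<RuntimeDbWeight> for Weighter {
      fn get() -> RuntimeDbWeight {
          // Non-zero weights, so that the on_initialize test can assert
          // a concrete, non-trivial weight instead of zero.
          RuntimeDbWeight { read: 10, write: 100 }
      }
  }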

@@ -80,6 +89,7 @@ frame_election_provider_support::generate_solution_type!(
impl frame_system::Config for Runtime {
type Block = Block;
type AccountData = pallet_balances::AccountData<Balance>;
type DbWeight = Weighter;
Contributor:

Suggested change:
- type DbWeight = Weighter;
+ type DbWeight = ();

self
}

pub(crate) fn no_desired_targets(self) -> Self {
Contributor:

Why do we need this? I think we should remove it; also, setting desired targets to Err("none") doesn't seem correct to me.

Contributor (Author):

In short, to make some actions fail due to desired targets not being configured. It's needed for:

  1. The test worker_fails_to_mine_solution
  2. The test desired_targets_not_in_snapshot (note: this one I will simplify a bit)

Are these not valid scenarios to test? They seemed like valuable checks to me.
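
For reference, the builder method presumably flips a knob in the mock along these lines (a sketch; the actual mechanism in the mock may differ):

  use std::cell::RefCell;

  thread_local! {
      // What the mock's data provider returns from desired_targets().
      static DESIRED_TARGETS: RefCell<Result<u32, &'static str>> = RefCell::new(Ok(2));
  }

  impl ExtBuilder {
      // Make desired_targets() fail, so mining/snapshot paths that
      // depend on it error out as the two tests above expect.
      pub(crate) fn no_desired_targets(self) -> Self {
          DESIRED_TARGETS.with(|dt| *dt.borrow_mut() = Err("none"));
          self
      }
  }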

})
}

#[test]
Contributor:

So: given there are existing submissions, if someone bails from a submission that they didn't make, then the balances of all the participants (the one that had a submission, and especially the one that bailed but didn't have one) will remain unchanged.

This can be tested using the assert_noop! macro. Basically, with assert_noop!(call(), Error), the macro checks a couple of things:

  1. The call() will return the expected `Error`
  2. The Ext storage was not changed after calling call(); this means that the balances and other state remain the same.

I'd just remove the whole setup of the registration etc. and call assert_noop!(bail_for(1), Err). This tests that the bailing does not work in this case and that the storage root remained the same, so no state has changed (+ simplifies the tests considerably).
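
Concretely, the suggestion reduces the test to something like this; bail's exact signature and the error variant are assumptions:

  #[test]
  fn bail_without_submission_is_noop() {
      ExtBuilder::default().build_and_execute(|| {
          // assert_noop! checks the returned error AND that the storage
          // root is unchanged, so unchanged balances come for free.
          assert_noop!(
              Pallet::<T>::bail(RuntimeOrigin::signed(1)),
              Error::<T>::NoSubmission
          );
      })
  }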

}

#[test]
fn bail_while_having_no_submissions_does_not_modify_balances() {
Contributor:

Same as above: assert_noop!() will make the test case both more comprehensive and simpler.

gpestana marked this pull request as ready for review on November 21, 2024 at 16:28
gpestana requested a review from a team as a code owner on November 21, 2024 at 16:28
gpestana merged commit 7cea73e into gpestana/pallet-epm-mb on Nov 21, 2024
88 of 160 checks passed
gpestana deleted the zebedeusz/pallet-epm-mb-unit-tests branch on November 21, 2024 at 16:29
paritytech-workflow-stopper:

All GitHub workflows were cancelled due to the failure of one of the required jobs.
Failed workflow url: https://github.com/paritytech/polkadot-sdk/actions/runs/11957490817
Failed job name: build-runtimes-polkavm

3 participants