
Robust + Move breakdown to bucket generic to take any size even if not a power of 2 #794

Merged: 7 commits, Oct 12, 2023

Conversation

@richajaindce (Contributor) commented Oct 6, 2023

adding tests for aggregation + move bucket

@richajaindce richajaindce changed the title Making move breakdown to bucket generic to take any size even if not a power of 2 and adding tests for aggregation Move breakdown to bucket generic to take any size even if not a power of 2 and adding tests for aggregation Oct 9, 2023
let result: Vec<_> = world
.semi_honest(
(
get_bits::<Fp32BitPrime>(breakdown_key.try_into().unwrap(), Gf5Bit::BITS),
Collaborator:

This is very hard to read. Can you make this easier to understand?

@richajaindce (Contributor, Author) replied:

extracted to a separate variable

Comment on lines 1074 to 1075
fn semi_honest_move_value_to_single_bucket_not_power_two() {
const BREAKDOWN_COUNT: usize = 50;
Collaborator:

There is too much duplication between semi_honest_move_value_to_single_bucket and semi_honest_move_value_to_single_bucket_not_power_two.

I recommend just merging them into a single test that loops many times, trying many combinations of "num_breakdowns" (some of which are powers of two, and some of which are not), and value.

We do not have good test coverage as it is, since you're only testing 50, and only for a single value.
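The merged-test structure the reviewer describes could be sketched roughly as below. This is a hypothetical plain-Rust model: `move_value_to_bucket` here operates on clear values purely to show the loop structure, whereas the real code in this PR runs the protocol on secret shares via `world.semi_honest`; all names in this sketch are illustrative, not the PR's API.

```rust
// Hypothetical clear-value model of "move value to bucket": the value lands
// in the slot named by the breakdown key, all other slots stay zero.
fn move_value_to_bucket(value: u32, target_bk: usize, breakdown_count: usize) -> Vec<u32> {
    let mut row_contribution = vec![0u32; breakdown_count];
    if target_bk < breakdown_count {
        row_contribution[target_bk] = value;
    }
    row_contribution
}

fn main() {
    // One test, many combinations: breakdown counts that are powers of two
    // and counts that are not, crossed with several values.
    for &breakdown_count in &[1usize, 3, 8, 32, 50, 127, 128] {
        for &value in &[0u32, 1, 255, 512] {
            for target_bk in 0..breakdown_count {
                let row = move_value_to_bucket(value, target_bk, breakdown_count);
                assert_eq!(row.len(), breakdown_count);
                assert_eq!(row[target_bk], value);
                // Exactly one bucket receives the value.
                assert_eq!(row.iter().sum::<u32>(), value);
            }
        }
    }
    println!("all combinations passed");
}
```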

@richajaindce (Contributor, Author) replied:

Combined them into a single test that runs the loop 10 times.

@benjaminsavage (Collaborator) left a comment:

Left some comments; mainly, I'm not content with the tests.

@richajaindce richajaindce changed the title Move breakdown to bucket generic to take any size even if not a power of 2 and adding tests for aggregation Robust + Move breakdown to bucket generic to take any size even if not a power of 2 Oct 10, 2023
@benjaminsavage (Collaborator) left a comment:

I love it. Ship it.

@benjaminsavage benjaminsavage merged commit c75dfd7 into private-attribution:main Oct 12, 2023
3 checks passed
-    let mut row_contribution = vec![value; 1 << BK::BITS];
+    assert!(breakdown_count <= 1 << BK::BITS);
+    let mut row_contribution = vec![value; breakdown_count];
Member:

It might be more efficient to initialize the vector with zero values instead, but that means putting "value" at index 0 as an extra step.
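The trade-off can be sketched with hypothetical stand-alone code. Note the caveats: in the real protocol each slot holds a secret share, so cloning `value` into every slot costs more than this clear-value model suggests, and the two variants below produce different vectors, so the zero-init version only works if the bucket-selection algorithm is restructured to start with the whole value at index 0.

```rust
// Hypothetical illustration of the two initialization strategies.
// NOTE: these are NOT interchangeable as-is; the zero-init variant assumes
// the algorithm is adapted to begin with the full value in slot 0.
fn init_by_cloning(value: u32, breakdown_count: usize) -> Vec<u32> {
    // Current approach: clone `value` into every slot.
    vec![value; breakdown_count]
}

fn init_with_zeros(value: u32, breakdown_count: usize) -> Vec<u32> {
    // Suggested alternative: allocate zeros, then write `value` at index 0
    // as an extra step.
    let mut row = vec![0u32; breakdown_count];
    row[0] = value;
    row
}

fn main() {
    assert_eq!(init_by_cloning(9, 4), vec![9, 9, 9, 9]);
    assert_eq!(init_with_zeros(9, 4), vec![9, 0, 0, 0]);
}
```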

fn semi_honest_move_value_to_single_bucket_out_of_range() {
const MAX_BREAKDOWN_COUNT: usize = 127;
for robust_for_breakdown_key_gt_count in [true, false] {
for _ in 1..10 {
Member:

We should not be running multiple iterations of a test like this.
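One way to follow this advice is to replace the repeated iterations with a deterministic enumeration of boundary cases. The sketch below is a hypothetical clear-value model, not the PR's MPC test; in particular, the semantics of the `robust_for_breakdown_key_gt_count` flag (out-of-range keys contribute nothing when robust) is my assumption from its name.

```rust
// Hypothetical clear-value model of the out-of-range behavior. Assumption:
// with the "robust" flag set, an out-of-range breakdown key contributes
// nothing; without it, the caller must guarantee keys are in range.
fn move_value_out_of_range(value: u32, bk: usize, count: usize, robust: bool) -> Option<Vec<u32>> {
    if bk >= count {
        return if robust { Some(vec![0; count]) } else { None };
    }
    let mut row = vec![0; count];
    row[bk] = value;
    Some(row)
}

fn main() {
    const MAX_BREAKDOWN_COUNT: usize = 127; // mirrors the constant in the test
    // Deterministic boundary cases instead of `for _ in 1..10` iterations.
    for robust in [true, false] {
        for bk in [0, 1, MAX_BREAKDOWN_COUNT - 1, MAX_BREAKDOWN_COUNT, MAX_BREAKDOWN_COUNT + 1] {
            let result = move_value_out_of_range(1, bk, MAX_BREAKDOWN_COUNT, robust);
            if bk < MAX_BREAKDOWN_COUNT {
                assert_eq!(result.unwrap()[bk], 1);
            } else if robust {
                // Out-of-range key contributes to no bucket.
                assert_eq!(result.unwrap().iter().sum::<u32>(), 0);
            } else {
                assert!(result.is_none());
            }
        }
    }
    println!("boundary cases pass");
}
```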

3 participants