
Hashing functionality needed to support Fiat-Shamir #993

Merged: 11 commits into main from fiat_shamir, Apr 2, 2024

Conversation

benjaminsavage
Collaborator

In our zero-knowledge proof generation, we will employ "Fiat-Shamir", which basically entails having the prover generate the random challenge point by hashing the proof, rather than waiting for the verifier to generate a totally random point.

In principle, this is just as secure, since the prover cannot select the random challenge point specifically to cheat. This does rest on a few assumptions, chiefly that the hash function behaves like a really good (ideally random-oracle-like) hash.

The idea is that the verifier can generate the same hash of the proof, and thus generate the same challenge point.

The overall benefit of this approach is that it turns a multi-round protocol into a single-round protocol: the prover can iterate through all the steps, computing random challenge points along the way (since they are just derived from hashes of each proof), and then send one big batch of proofs to the verifiers.
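As a rough illustration of the idea above (not the code in this PR), the prover derives its challenge by hashing the proof bytes, and the verifier recomputes the same hash to get the same challenge. This sketch uses std's `DefaultHasher` purely as a stand-in; a real Fiat-Shamir instantiation would use a cryptographic hash such as SHA-256.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Derive a "challenge" from the proof bytes themselves, instead of
// waiting for the verifier to pick a random one. NOTE: DefaultHasher
// is NOT cryptographic; it only stands in for SHA-256 here.
fn derive_challenge(proof: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    proof.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let proof = b"round-1-proof-bytes";
    // Prover and verifier hash the same bytes...
    let prover_challenge = derive_challenge(proof);
    let verifier_challenge = derive_challenge(proof);
    // ...so they agree on the challenge with no extra round trip.
    assert_eq!(prover_challenge, verifier_challenge);
}
```

Because each round's challenge is a deterministic function of the proof so far, the prover can run every round locally and ship the whole batch at once, which is exactly the multi-round-to-single-round collapse described above.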


codecov bot commented Mar 27, 2024

Codecov Report

Attention: Patch coverage is 98.54545%, with 4 lines in your changes missing coverage. Please review.

Project coverage is 89.75%. Comparing base (365bb1e) to head (578eb2c).

Files Patch % Lines
...src/protocol/ipa_prf/malicious_security/hashing.rs 97.47% 3 Missing ⚠️
.../src/protocol/ipa_prf/malicious_security/prover.rs 99.35% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #993      +/-   ##
==========================================
+ Coverage   89.65%   89.75%   +0.09%     
==========================================
  Files         168      169       +1     
  Lines       22828    23029     +201     
==========================================
+ Hits        20467    20670     +203     
+ Misses       2361     2359       -2     


akoshelev added a commit to akoshelev/raw-ipa that referenced this pull request Mar 27, 2024
Motivated by private-attribution#993; the main reason is to bring `sha2` closer to the latest `generic-array` version (`0.13` uses 1.0).
};

#[derive(Clone, Copy, Debug, PartialEq)]
pub struct Hash(pub(crate) GenericArray<u8, U32>);
Collaborator

Suggested change
pub struct Hash(pub(crate) GenericArray<u8, U32>);
pub struct Hash(GenericArray<u8, U32>);

Allowing arbitrary mutations to the internal value is probably not what we want here.

Hash(*GenericArray::<u8, U32>::from_slice(&sha.finalize()[0..32]))
}

/// This function allows to hash a vector of field elements into a single field element
Collaborator

This description does not match the reality imo.

Collaborator Author

Haha, yeah, you're right. Copy-pasted from @danielmasny 's draft and I didn't update the comment. Fixing.

sha.update(buf);
}
// compute hash
Hash(*GenericArray::<u8, U32>::from_slice(&sha.finalize()[0..32]))
Collaborator

I stumbled across this because digest also returns a GenericArray, so you shouldn't need these conversions. It turns out the version of sha2 we use still points to a GenericArray version < 1.0, while we've migrated to 1.0 already.

I tried to upgrade sha2 and failed miserably: there is a strong dependency between the hkdf and sha2 crates, and they must be upgraded together. The hkdf crate moved to hybrid-array (RustCrypto/traits#1319) while we still use generic-array.

Long story short, I think there is a way to make this code look nicer. Here is what I suggest:

use sha2::digest::{Output, OutputSizeUser};

#[derive(Clone, Debug, PartialEq)]
pub struct Hash(Output<Sha256>);

impl Serializable for Hash {
    type Size = <Sha256 as OutputSizeUser>::OutputSize;

    type DeserializationError = Infallible;

    fn serialize(&self, buf: &mut GenericArray<u8, Self::Size>) {
        buf.copy_from_slice(&self.0);
    }

    fn deserialize(buf: &GenericArray<u8, Self::Size>) -> Result<Self, Self::DeserializationError> {
        Ok(Hash(*Output::<Sha256>::from_slice(buf)))
    }
}

pub fn compute_hash<'a, I, S>(input: I) -> Hash
where
    I: IntoIterator<Item = &'a S>,
    S: Serializable + 'a,
{
    // set up hash
    let mut sha = Sha256::new();
    let mut buf = GenericArray::default();

    // set state
    for x in input {
        x.serialize(&mut buf);
        sha.update(&buf);
    }
    // compute hash
    Hash(sha.finalize())
}

In this implementation we don't need to assume the size of the hash. More importantly, it opens the door to upgrading the digest and sha2 versions later, as it is compatible with hybrid-array.

Let me know what you think

Collaborator Author

Sounds good to me =). I'll use this.

Comment on lines 37 to 43
let mut sha = Sha256::new();
// set state
for x in input {
let mut buf = GenericArray::default();
x.serialize(&mut buf);
sha.update(buf);
}
Collaborator

Suggested change
let mut sha = Sha256::new();
// set state
for x in input {
let mut buf = GenericArray::default();
x.serialize(&mut buf);
sha.update(buf);
}
let mut sha = Sha256::new();
let mut buf = GenericArray::default();
// set state
for x in input {
x.serialize(&mut buf);
sha.update(&buf);
}

ff::{Field, Serializable}, helpers::Message, protocol::prss::FromRandomU128
};

#[derive(Clone, Copy, Debug, PartialEq)]
Collaborator

Are you sure about Copy? I think we kinda settled on using Clone for relatively large structs; for example, AdditiveShare<Fp31> is not Copy despite being just 2 bytes wide.
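The convention being discussed can be sketched with a hypothetical `Share` type (not the repo's actual `AdditiveShare`): deriving only `Clone` keeps duplication explicit at call sites, rather than letting values be copied silently.

```rust
// Deriving Clone but not Copy forces explicit `.clone()` calls,
// making the cost of duplicating larger values visible in the code.
#[derive(Clone, Debug, PartialEq)]
struct Share {
    left: u8,
    right: u8,
}

// Takes ownership: without Copy, passing a Share by value moves it.
fn consume(s: Share) -> u8 {
    s.left.wrapping_add(s.right)
}

fn main() {
    let share = Share { left: 3, right: 4 };
    // We clone explicitly because we still need `share` afterwards;
    // with a Copy derive this cost would be hidden.
    let sum = consume(share.clone());
    assert_eq!(sum, 7);
    assert_eq!(share, Share { left: 3, right: 4 });
}
```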

Collaborator Author

I just copied and pasted from @danielmasny 's draft PR =). I will just remove them all for now, and only add these back in when we need them.

@@ -1,3 +1,4 @@
pub mod hashing;
Collaborator

You probably want to keep it private, as there may not be a need to use it outside of the malicious security module.

Collaborator

@danielmasny danielmasny left a comment

Looks good to me. The hashing looks correct and is as we discussed it. There are some other parts that are not directly related to the hashing; I left some comments anyway. Since I haven't reviewed your previous PRs, I am lacking some context, so I asked some clarification questions.

let denominator = CanonicalLagrangeDenominator::<F, λ>::new();
let lagrange_table_r = LagrangeTable::<F, λ, U1>::new(&denominator, &r);
let lagrange_table = LagrangeTable::<F, λ, <λ as Sub<U1>>::Output>::from(denominator);
Collaborator

The denominator and table only depend on lambda, so you don't need to recompute them for every proof part; you could add a reference to the Lagrange table to the input.
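The suggested refactor can be sketched as follows, with hypothetical `Table` and `prove_part` names standing in for the real `LagrangeTable` and proof code: the lambda-dependent precomputation is hoisted out of the per-proof loop and borrowed by each call.

```rust
// Stand-in for the Lagrange table: precomputed weights that depend
// only on the fixed parameter lambda, not on any particular proof.
struct Table {
    weights: Vec<u64>,
}

impl Table {
    fn new(lambda: usize) -> Self {
        // Imagine this construction being expensive; it only
        // depends on lambda, so it can be done once.
        Table { weights: (1..=lambda as u64).collect() }
    }
}

// Each proof part borrows the shared table instead of rebuilding it.
fn prove_part(table: &Table, part: &[u64]) -> u64 {
    part.iter().zip(&table.weights).map(|(p, w)| p * w).sum()
}

fn main() {
    let table = Table::new(4); // built once, outside the per-part loop
    let parts = [[1u64, 2, 3, 4], [5, 6, 7, 8]];
    let outputs: Vec<u64> =
        parts.iter().map(|p| prove_part(&table, p)).collect();
    assert_eq!(outputs, vec![30, 70]);
}
```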

Collaborator Author

You're right. I was planning to do that but forgot. Thanks for the reminder! I'll add it.

let mut v = v.into_iter();

assert_eq!(u.len(), λ::USIZE); // We should pad with zeroes eventually
assert_eq!(v.len(), λ::USIZE); // We should pad with zeroes eventually

// We need a table of size `λ + 1` since we add a random point at x=0
let denominator = CanonicalLagrangeDenominator::<F, Sum<λ, U1>>::new();
let lagrange_table = LagrangeTable::<F, Sum<λ, U1>, λ>::from(denominator);
Collaborator

You could use compute proof here as well to reduce redundant code. You could even reuse the same Lagrange table by submitting only lambda minus 1 real u, v values, leaving one entry of space for the masks. Then for the final proof you just need to slightly adapt the proof generator / u, v by appending the masks when calling generate proof the last time.

Collaborator Author

That's a good point. I'll think about this. Maybe I can just remove this function entirely.

Collaborator Author

Oh man... I tried, I really did, but the trait bounds are WILD! I just couldn't figure out how to make the compiler understand that it should work.

@benjaminsavage benjaminsavage merged commit fc791b2 into main Apr 2, 2024
20 checks passed
@benjaminsavage benjaminsavage deleted the fiat_shamir branch April 2, 2024 05:28
3 participants