* Add DigestBuilder.

* Make digest and claims private.

* refactor: Refactor DigestBuilder

- Refactored `src/digest.rs` to replace `Vec<u8>` buffer accumulation with writes into an `io::Write` sink.
- Removed the optional `hasher` field and introduced a dedicated factory method.
- Reworked digest computation and field mapping into separate functions.
- Merged building and digest computation into a single step for coherence.
- Improved type safety by propagating errors through `Result`.

* Propagate DigestBuilder changes.

* Fix tests.

* Correct assertion for OutputSize scale.

* Remove commented-out code.

* Remove dbg!.

* Fixup rebase.

---------

Co-authored-by: porcuquine <[email protected]>
Co-authored-by: François Garillot <[email protected]>

feat: add a digest to R1CSShape (privacy-scaling-explorations#49)

* refactor: Refactor Digestible trait

- Removed the `to_bytes` method from the `Digestible` trait in `src/digest.rs`.

* fix: Make bincode serialization in digest.rs more rigorous

- Updated `bincode::serialize_into(byte_sink, self)` to go through a configured `bincode::Options` value that enables the "little endian" and "fixint encoding" options (a standalone sketch of the effect follows this list).
- Added a comment in `src/digest.rs` about `bincode`'s recursive length-prefixing during serialization.
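
For illustration only (not part of the diff; the `Example` struct and `to_bytes` helper are hypothetical), a standalone sketch of what this configuration buys: fixed-width, little-endian integers give a stable, platform-independent byte layout, which is what makes the resulting digest reproducible.

```rust
use bincode::Options;
use serde::Serialize;

#[derive(Serialize)]
struct Example {
    a: u32,
    b: Vec<u8>,
}

fn to_bytes<T: Serialize>(value: &T) -> Vec<u8> {
    // The same options the diff configures in `write_bytes`: fixed-width, little-endian integers.
    bincode::DefaultOptions::new()
        .with_little_endian()
        .with_fixint_encoding()
        .serialize(value)
        .expect("serialization failed")
}

fn main() {
    let bytes = to_bytes(&Example { a: 1, b: vec![2, 3] });
    assert_eq!(
        bytes,
        vec![
            1, 0, 0, 0, // `a`: u32, fixed width, little-endian
            2, 0, 0, 0, 0, 0, 0, 0, // length prefix of `b`, encoded as a fixed-width u64
            2, 3, // contents of `b`
        ]
    );
}
```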

* refactor: Refactor digest computation using `OnceCell` and `DigestComputer`

This gives up on a generic builder and instead uses an idempotent `OnceCell`
plus a generic digest computer to populate the digest of a structure. This setup ensures that:

- the digest computation does not depend on the digest field,
- the digest can't be set twice (see the `OnceCell` sketch after the detail list below),
- an erroneous digest can't be injected through the serialized data.

In detail:

- Overhauled digest functionality across multiple files by replacing `DigestBuilder` with `DigestComputer`, significantly changing how hashes are handled.
- Incorporated the `once_cell::sync::OnceCell` and `ff::PrimeField` dependencies to improve performance and simplify code.
- Modified the `VerifierKey` and `RunningClaims` structures to hold the digest in a `OnceCell`, with corresponding changes to call sites and procedures.
- Simplified `setup_running_claims` by removing error handling and directly returning a `RunningClaims`.
- Adapted test functions accordingly, including removing unnecessary unwrapping in some scenarios.
- Updated Cargo.toml with the new dependency `once_cell` version `1.18.0`.
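
As a standalone illustration (not taken from the diff), the `once_cell::sync::OnceCell` behavior relied on above: the first successful initialization wins and later attempts are ignored, which is what makes the stored digest effectively write-once.

```rust
use once_cell::sync::OnceCell;

fn main() {
    let cell: OnceCell<u64> = OnceCell::new();

    // The first initialization populates the cell.
    let first = *cell.get_or_try_init(|| Ok::<_, ()>(42)).unwrap();
    assert_eq!(first, 42);

    // Later initializers are ignored; the cached value is returned unchanged.
    let second = *cell.get_or_try_init(|| Ok::<_, ()>(7)).unwrap();
    assert_eq!(second, 42);

    // Direct `set` also fails once the cell holds a value.
    assert!(cell.set(7).is_err());
}
```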

* refactor: rename pp digest in VerifierKey to pp_digest

* feat: add a digest to R1CSShape

* fix: Small issues

- Introduced a new assertion in the `write_bytes` method of `src/supernova/mod.rs` to validate whether the `claims` are empty.
- Improved the clarity of the code comment about creating a running claim in `src/supernova/mod.rs`.

Co-authored-by: porcuquine <[email protected]>
huitseeker and porcuquine authored Sep 26, 2023
1 parent 13577b3 commit 284c985
Showing 7 changed files with 308 additions and 90 deletions.
1 change: 1 addition & 0 deletions Cargo.toml
@@ -34,6 +34,7 @@ byteorder = "1.4.3"
thiserror = "1.0"
halo2curves = { version = "0.4.0", features = ["derive_serde"] }
group = "0.13.0"
once_cell = "1.18.0"

[target.'cfg(any(target_arch = "x86_64", target_arch = "aarch64"))'.dependencies]
pasta-msm = { version = "0.1.4" }
166 changes: 166 additions & 0 deletions src/digest.rs
@@ -0,0 +1,166 @@
use bincode::Options;
use ff::PrimeField;
use serde::Serialize;
use sha3::{Digest, Sha3_256};
use std::io;
use std::marker::PhantomData;

use crate::constants::NUM_HASH_BITS;

/// Trait for components with potentially discrete digests to be included in their container's digest.
pub trait Digestible {
/// Write the byte representation of Self in a byte buffer
fn write_bytes<W: Sized + io::Write>(&self, byte_sink: &mut W) -> Result<(), io::Error>;
}

/// Marker trait to be implemented for types that implement `Digestible` and `Serialize`.
/// Their instances will be serialized to bytes then digested.
pub trait SimpleDigestible: Serialize {}

impl<T: SimpleDigestible> Digestible for T {
fn write_bytes<W: Sized + io::Write>(&self, byte_sink: &mut W) -> Result<(), io::Error> {
let config = bincode::DefaultOptions::new()
.with_little_endian()
.with_fixint_encoding();
// Note: bincode recursively length-prefixes every field!
config
.serialize_into(byte_sink, self)
.map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))
}
}

/// Computes the digest of its inner `Digestible` value as an element of the field `F`.
pub struct DigestComputer<'a, F: PrimeField, T> {
inner: &'a T,
_phantom: PhantomData<F>,
}

impl<'a, F: PrimeField, T: Digestible> DigestComputer<'a, F, T> {
fn hasher() -> Sha3_256 {
Sha3_256::new()
}

fn map_to_field(digest: &mut [u8]) -> F {
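// Take only the low NUM_HASH_BITS bits of the hash output, reading each byte least-significant bit first.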
let bv = (0..NUM_HASH_BITS).map(|i| {
let (byte_pos, bit_pos) = (i / 8, i % 8);
let bit = (digest[byte_pos] >> bit_pos) & 1;
bit == 1
});

// turn the bit vector into a scalar
let mut digest = F::ZERO;
let mut coeff = F::ONE;
for bit in bv {
if bit {
digest += coeff;
}
coeff += coeff;
}
digest
}

/// Create a new DigestComputer
pub fn new(inner: &'a T) -> Self {
DigestComputer {
inner,
_phantom: PhantomData,
}
}

/// Compute the digest of a `Digestible` instance.
pub fn digest(&self) -> Result<F, io::Error> {
let mut hasher = Self::hasher();
self.inner.write_bytes(&mut hasher)?;
let mut bytes: [u8; 32] = hasher.finalize().into();
Ok(Self::map_to_field(&mut bytes))
}
}

#[cfg(test)]
mod tests {
use ff::Field;
use once_cell::sync::OnceCell;
use pasta_curves::pallas;
use serde::{Deserialize, Serialize};

use crate::traits::Group;

use super::{DigestComputer, SimpleDigestible};

#[derive(Serialize, Deserialize)]
struct S<G: Group> {
i: usize,
#[serde(skip, default = "OnceCell::new")]
digest: OnceCell<G::Scalar>,
}

impl<G: Group> SimpleDigestible for S<G> {}

impl<G: Group> S<G> {
fn new(i: usize) -> Self {
S {
i,
digest: OnceCell::new(),
}
}

fn digest(&self) -> G::Scalar {
self
.digest
.get_or_try_init(|| DigestComputer::new(self).digest())
.cloned()
.unwrap()
}
}

type G = pallas::Point;

#[test]
fn test_digest_field_not_ingested_in_computation() {
let s1 = S::<G>::new(42);

// let's set up a struct with a weird digest field to make sure the digest computation does not depend on it
let oc = OnceCell::new();
oc.set(<G as Group>::Scalar::ONE).unwrap();

let s2: S<G> = S { i: 42, digest: oc };

assert_eq!(
DigestComputer::<<G as Group>::Scalar, _>::new(&s1)
.digest()
.unwrap(),
DigestComputer::<<G as Group>::Scalar, _>::new(&s2)
.digest()
.unwrap()
);

// note: because of the semantics of `OnceCell::get_or_try_init`, the above
// equality will not result in `s1.digest() == s2.digest()`
assert_ne!(
s2.digest(),
DigestComputer::<<G as Group>::Scalar, _>::new(&s2)
.digest()
.unwrap()
);
}

#[test]
fn test_digest_impervious_to_serialization() {
let good_s = S::<G>::new(42);

// let's set up a struct with a weird digest field to confuse deserializers
let oc = OnceCell::new();
oc.set(<G as Group>::Scalar::ONE).unwrap();

let bad_s: S<G> = S { i: 42, digest: oc };
// this justifies the adjective "bad"
assert_ne!(good_s.digest(), bad_s.digest());

let naughty_bytes = bincode::serialize(&bad_s).unwrap();

let retrieved_s: S<G> = bincode::deserialize(&naughty_bytes).unwrap();
assert_eq!(good_s.digest(), retrieved_s.digest())
}
}
3 changes: 3 additions & 0 deletions src/errors.rs
@@ -56,4 +56,7 @@ pub enum NovaError {
/// returned when there is an error during synthesis
#[error("SynthesisError")]
SynthesisError,
/// returned when there is an error creating a digest
#[error("DigestError")]
DigestError,
}