Client implementation #1

Merged: 29 commits, Jan 8, 2025
Changes from 28 commits
Commits
d49c23e Initial commit (gianbelinche, Nov 29, 2024)
0b367c5 Fix compilation (gianbelinche, Nov 29, 2024)
7f3959e Fix tests (gianbelinche, Nov 29, 2024)
3943dba Fix client tests (gianbelinche, Nov 29, 2024)
25ada9c Add thiserror (gianbelinche, Nov 29, 2024)
a44decf Fix warnings (gianbelinche, Nov 29, 2024)
6a439ae Fix remaining warnings (gianbelinche, Nov 29, 2024)
3ac1f62 Fix comments (gianbelinche, Dec 2, 2024)
c925859 Fix PR comments (gianbelinche, Dec 2, 2024)
850b125 Sync blob info (gianbelinche, Dec 12, 2024)
47b956e Sync verifier (gianbelinche, Dec 12, 2024)
261b6f8 Sync config (gianbelinche, Dec 12, 2024)
a5c1340 Sync rest of files (gianbelinche, Dec 12, 2024)
847e533 Fix tests (gianbelinche, Dec 12, 2024)
f8d1bab Remove rlp (gianbelinche, Dec 12, 2024)
30fe3f1 Add docs (gianbelinche, Dec 13, 2024)
44df161 Add documentation (gianbelinche, Dec 16, 2024)
beb238d Add pub crate (gianbelinche, Dec 16, 2024)
cabdb56 Update with new changes (gianbelinche, Dec 19, 2024)
402ec48 Fix comments (gianbelinche, Dec 23, 2024)
4974c95 Merge branch 'client-implementation' into thiserror (gianbelinche, Dec 23, 2024)
9c2c39d Merge branch 'thiserror' into sync-latest-changes (gianbelinche, Jan 3, 2025)
b1e7baa Fix merge (gianbelinche, Jan 3, 2025)
5ac66b3 Format code (gianbelinche, Jan 3, 2025)
10d95ce Change full for fs (gianbelinche, Jan 3, 2025)
26278cb Remove zksync comment (gianbelinche, Jan 3, 2025)
48d9de2 Merge pull request #1 from lambdaclass/thiserror (juanbono, Jan 3, 2025)
01935ad Merge pull request #2 from lambdaclass/sync-latest-changes (juanbono, Jan 3, 2025)
3703873 Fix comments (#4) (gianbelinche, Jan 7, 2025)
36 changes: 36 additions & 0 deletions Cargo.toml
@@ -0,0 +1,36 @@
[package]
name = "eigen-client"
description = "Eigen Client"
version = "0.1.0"
edition = "2021"
license = "MIT OR Apache-2.0"

[dependencies]
tokio-stream = "0.1.16"
rust-kzg-bn254 = "0.2.1"
ark-bn254 = "0.5.0"
num-bigint = "0.4.6"
rand = "0.8"
sha3 = "0.10.8"
tiny-keccak = "2"
ethabi = "18.0.0"
thiserror = "1"
ethereum-types = { version = "0.14.1", features = ["serialize"] }
prost = "0.12.6"
tonic = { version = "0.11.0", features = ["tls-roots", "prost", "codegen"] }
secp256k1 = { version = "0.27.0", features = ["recovery", "global-context"] }
subxt-signer = { version = "0.34", features = ["sr25519", "native"] }
bytes = {version= "1", features = ["serde"]}
reqwest = {version = "0.12", features = ["json"] }
serde = "1"
serde_json = "1"
backon = "0.4.4"
tokio = {version = "1", features = ["fs"]}
async-trait = "0.1"
hex = "0.4"
secrecy = "0.8.0"
byteorder = "1.5.0"
url = "2.5.2"

[dev-dependencies]
serial_test = "3.1.1"
Binary file added resources/g1.point
Binary file not shown.
Binary file added resources/g2.point.powerOf2
Binary file not shown.
175 changes: 175 additions & 0 deletions src/blob_info.rs
Review comment (Collaborator):
Why is this file needed? It seems like it's just implementing RLP serde on top of the generated/common.rs structs. Why are we reimplementing the same structs, though? Can't we implement only the serialization and rename this file rlp_serde.rs or something? I might be missing something, but in any case it would be helpful to add a comment at the top of the file indicating its purpose.

@@ -0,0 +1,175 @@
use crate::errors::ConversionError;

use super::{
common::G1Commitment as DisperserG1Commitment,
disperser::{
BatchHeader as DisperserBatchHeader, BatchMetadata as DisperserBatchMetadata,
BlobHeader as DisperserBlobHeader, BlobInfo as DisperserBlobInfo,
BlobQuorumParam as DisperserBlobQuorumParam,
BlobVerificationProof as DisperserBlobVerificationProof,
},
};

/// Internal representation used by `BlobInfo`.
/// Contains the KZG commitment.
#[derive(Debug, PartialEq, Clone)]
pub(crate) struct G1Commitment {
pub(crate) x: Vec<u8>,
pub(crate) y: Vec<u8>,
}

impl From<DisperserG1Commitment> for G1Commitment {
fn from(value: DisperserG1Commitment) -> Self {
Self {
x: value.x,
y: value.y,
}
}
}
Comment on lines +21 to +28 (Collaborator):
Is there any point in redefining G1Commitment here? It's literally the same struct as the one in generated/common.rs. Perhaps we can just use that one directly?


/// Internal representation used by `BlobInfo`.
/// Contains data related to the blob quorums.
#[derive(Debug, PartialEq, Clone)]
pub(crate) struct BlobQuorumParam {
pub(crate) quorum_number: u32,
pub(crate) adversary_threshold_percentage: u32,
pub(crate) confirmation_threshold_percentage: u32,
pub(crate) chunk_length: u32,
}

impl From<DisperserBlobQuorumParam> for BlobQuorumParam {
fn from(value: DisperserBlobQuorumParam) -> Self {
Self {
quorum_number: value.quorum_number,
adversary_threshold_percentage: value.adversary_threshold_percentage,
confirmation_threshold_percentage: value.confirmation_threshold_percentage,
chunk_length: value.chunk_length,
}
}
}

/// Internal representation used by `BlobInfo`.
/// Contains the blob header data.
#[derive(Debug, PartialEq, Clone)]
pub(crate) struct BlobHeader {
pub(crate) commitment: G1Commitment,
pub(crate) data_length: u32,
pub(crate) blob_quorum_params: Vec<BlobQuorumParam>,
}

impl TryFrom<DisperserBlobHeader> for BlobHeader {
type Error = ConversionError;
fn try_from(value: DisperserBlobHeader) -> Result<Self, Self::Error> {
let blob_quorum_params: Vec<BlobQuorumParam> = value
.blob_quorum_params
.iter()
.map(|param| BlobQuorumParam::from(param.clone()))
.collect();
Ok(Self {
commitment: G1Commitment::from(
value
.commitment
.ok_or(ConversionError::NotPresent("BlobHeader".to_string()))?,
),
data_length: value.data_length,
blob_quorum_params,
})
}
}

/// Internal representation used by `BlobInfo`.
#[derive(Debug, PartialEq, Clone)]
pub(crate) struct BatchHeader {
pub(crate) batch_root: Vec<u8>,
pub(crate) quorum_numbers: Vec<u8>,
pub(crate) quorum_signed_percentages: Vec<u8>,
pub(crate) reference_block_number: u32,
}

impl From<DisperserBatchHeader> for BatchHeader {
fn from(value: DisperserBatchHeader) -> Self {
Self {
batch_root: value.batch_root,
quorum_numbers: value.quorum_numbers,
quorum_signed_percentages: value.quorum_signed_percentages,
reference_block_number: value.reference_block_number,
}
}
}

/// Internal representation used by `BlobInfo`.
#[derive(Debug, PartialEq, Clone)]
pub(crate) struct BatchMetadata {
pub(crate) batch_header: BatchHeader,
pub(crate) signatory_record_hash: Vec<u8>,
pub(crate) fee: Vec<u8>,
pub(crate) confirmation_block_number: u32,
pub(crate) batch_header_hash: Vec<u8>,
}

impl TryFrom<DisperserBatchMetadata> for BatchMetadata {
type Error = ConversionError;
fn try_from(value: DisperserBatchMetadata) -> Result<Self, Self::Error> {
Ok(Self {
batch_header: BatchHeader::from(
value
.batch_header
.ok_or(ConversionError::NotPresent("BatchMetadata".to_string()))?,
),
signatory_record_hash: value.signatory_record_hash,
fee: value.fee,
confirmation_block_number: value.confirmation_block_number,
batch_header_hash: value.batch_header_hash,
})
}
}

/// Internal representation used by `BlobInfo`.
#[derive(Debug, PartialEq, Clone)]
pub(crate) struct BlobVerificationProof {
pub(crate) batch_id: u32,
pub(crate) blob_index: u32,
pub(crate) batch_medatada: BatchMetadata,
pub(crate) inclusion_proof: Vec<u8>,
pub(crate) quorum_indexes: Vec<u8>,
}

impl TryFrom<DisperserBlobVerificationProof> for BlobVerificationProof {
type Error = ConversionError;
fn try_from(value: DisperserBlobVerificationProof) -> Result<Self, Self::Error> {
Ok(Self {
batch_id: value.batch_id,
blob_index: value.blob_index,
batch_medatada: BatchMetadata::try_from(value.batch_metadata.ok_or(
ConversionError::NotPresent("BlobVerificationProof".to_string()),
)?)?,
inclusion_proof: value.inclusion_proof,
quorum_indexes: value.quorum_indexes,
})
}
}

/// Data returned by the disperser when a blob is dispersed
#[derive(Debug, PartialEq, Clone)]
pub(crate) struct BlobInfo {
pub(crate) blob_header: BlobHeader,
pub(crate) blob_verification_proof: BlobVerificationProof,
}

impl TryFrom<DisperserBlobInfo> for BlobInfo {
type Error = ConversionError;
fn try_from(value: DisperserBlobInfo) -> Result<Self, Self::Error> {
Ok(Self {
blob_header: BlobHeader::try_from(
value
.blob_header
.ok_or(ConversionError::NotPresent("BlobInfo".to_string()))?,
)?,
blob_verification_proof: BlobVerificationProof::try_from(
value
.blob_verification_proof
.ok_or(ConversionError::NotPresent("BlobInfo".to_string()))?,
)?,
})
}
}
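
As a usage sketch (not part of the diff): the conversion from the protobuf-generated disperser types into these internal structs goes through the `TryFrom` impls above, with missing optional fields surfacing as `ConversionError::NotPresent`. The module paths and the `disperser_reply` value below are assumptions for illustration only.

```rust
use crate::blob_info::BlobInfo;
use crate::errors::ConversionError;

// Crate-internal sketch: `BlobInfo` is `pub(crate)`, so this conversion can
// only happen inside the crate, e.g. right after a disperser gRPC response.
fn into_internal(
    disperser_reply: crate::disperser::BlobInfo, // hypothetical value from the disperser client
) -> Result<BlobInfo, ConversionError> {
    // Nested messages in the generated struct are `Option`s; any missing
    // field maps to `ConversionError::NotPresent`.
    BlobInfo::try_from(disperser_reply)
}
```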
68 changes: 68 additions & 0 deletions src/client.rs
@@ -0,0 +1,68 @@
use crate::errors::{ConfigError, EigenClientError};

use super::{
config::{EigenConfig, EigenSecrets},
sdk::RawEigenClient,
};
use async_trait::async_trait;
use secp256k1::SecretKey;
use secrecy::ExposeSecret;
use std::error::Error;
use std::{str::FromStr, sync::Arc};

/// This trait provides a method that, given a blob id, returns the blob data or `None`.
/// If you don't need it, just return `None` and it will be as if it didn't exist.
/// It can be used as extra verification if you also store the blob yourself.
#[async_trait]
pub trait GetBlobData: std::fmt::Debug + Send + Sync {
async fn get_blob_data(
&self,
input: &str,
) -> Result<Option<Vec<u8>>, Box<dyn Error + Send + Sync>>;

fn clone_boxed(&self) -> Box<dyn GetBlobData>;
}

/// EigenClient is a client for the Eigen DA service.
#[derive(Debug, Clone)]
pub struct EigenClient {
pub(crate) client: Arc<RawEigenClient>,
}

impl EigenClient {
/// Creates a new EigenClient
pub async fn new(
config: EigenConfig,
secrets: EigenSecrets,
get_blob_data: Box<dyn GetBlobData>,
) -> Result<Self, EigenClientError> {
let private_key = SecretKey::from_str(secrets.private_key.0.expose_secret().as_str())
.map_err(ConfigError::Secp)?;

let client = RawEigenClient::new(private_key, config, get_blob_data).await?;
Ok(Self {
client: Arc::new(client),
})
}

/// Dispatches a blob to the Eigen DA service
pub async fn dispatch_blob(&self, data: Vec<u8>) -> Result<String, EigenClientError> {
let blob_id = self.client.dispatch_blob(data).await?;

Ok(blob_id)
}

/// Gets the inclusion data for a blob
pub async fn get_inclusion_data(
&self,
blob_id: &str,
) -> Result<Option<Vec<u8>>, EigenClientError> {
let inclusion_data = self.client.get_inclusion_data(blob_id).await?;
Ok(inclusion_data)
}

/// Returns the blob size limit
pub fn blob_size_limit(&self) -> Option<usize> {
Some(RawEigenClient::blob_size_limit())
}
}
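
As a usage sketch (not part of the diff): a minimal no-op `GetBlobData` implementation and the dispatch/inclusion round trip exposed by `EigenClient`. The crate-root re-export paths and the already-built `EigenConfig`/`EigenSecrets` (defined in `src/config.rs`, which is not shown in this diff) are assumptions.

```rust
use std::error::Error;

use async_trait::async_trait;
// Assumed re-exports; the actual module layout may differ.
use eigen_client::{EigenClient, EigenClientError, EigenConfig, EigenSecrets, GetBlobData};

/// A no-op `GetBlobData`: we don't store blobs ourselves, so we always
/// return `None` and skip the extra verification.
#[derive(Debug, Clone)]
struct NoBlobStore;

#[async_trait]
impl GetBlobData for NoBlobStore {
    async fn get_blob_data(
        &self,
        _input: &str,
    ) -> Result<Option<Vec<u8>>, Box<dyn Error + Send + Sync>> {
        Ok(None)
    }

    fn clone_boxed(&self) -> Box<dyn GetBlobData> {
        Box::new(self.clone())
    }
}

/// Disperses a blob and asks once for its inclusion data.
async fn disperse_and_check(
    config: EigenConfig,
    secrets: EigenSecrets,
    payload: Vec<u8>,
) -> Result<Option<Vec<u8>>, EigenClientError> {
    let client = EigenClient::new(config, secrets, Box::new(NoBlobStore)).await?;

    // `dispatch_blob` returns the blob id assigned by the disperser.
    let blob_id = client.dispatch_blob(payload).await?;

    // `get_inclusion_data` returns `None` until the blob is confirmed.
    client.get_inclusion_data(&blob_id).await
}
```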