Enable payload continue (#8)
* Add KIP-10 Transaction Introspection Opcodes, 8-byte arithmetic and Hard Fork Support (kaspanet#487)

* implement new opcodes

* example of mutual tx

* add docs describing scenario

* introduce feature gate for new features

* introduce hf feature that enables txscript hf feature

* style: fmt and clippy fix

* implement new opcodes

* example of mutual tx

* add docs describing scenario

* introduce feature gate for new features

* style: fmt and clippy fix

* remove unused feature

* fmt

* make opcode invalid in case of feature disabled

* feature gate test

* change test set based on feature
add CI/CD test

* rename InputSPK -> InputSpk

* enable kip10 opcodes based on daa_score in runtime

* use dummy kip10 activation daa score in params

* use dummy kip10 activation daa score in params

* suppress clippy lint

* add example with shared key

* fix clippy

* remove useless check from example

* add one-time borrowing example

* Implement one-time and two-times threshold borrowing scenarios

- Add threshold_scenario_limited_one_time function
- Add threshold_scenario_limited_2_times function
- Create generate_limited_time_script for reusable script generation
- Implement nested script structure for two-times borrowing
- Update documentation for both scenarios
- Add tests for owner spending, borrowing, and invalid attempts in both cases
- Ensure consistent error handling and logging across scenarios
- Refactor to use more generic script generation approach

* fix: fix incorrect sig-op count

* correct error description

* style: fmt

* pass kip-10 flag in constructor params

* remove borrow scenario from tests.
run tests against both kip10-enabled and kip10-disabled engines

* introduce method that converts spk to bytes.
add tests covering new opcodes

* return comment describing where invalid opcodes start from.
add comments describing why 2 files are used.

* fix wrong error messages

* support introspection by index

* test input spk

* test output spk

* tests refactor

* support 8-byte arithmetic

* Standardize fork activation logic (kaspanet#588)

* Use ForkActivation for all fork activations

* Avoid using negation in some ifs

* Add is_within_range_from_activation

* Move 'is always' check inside is_within_range_from_activation

* lints
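The ForkActivation refactor listed above can be sketched roughly as follows. The type and method names come from the commit messages; the field layout and the exact semantics of the range check are assumptions, not the actual rusty-kaspa implementation:

```rust
/// Hypothetical sketch of a ForkActivation-style type (details assumed).
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct ForkActivation(u64);

impl ForkActivation {
    /// A fork that never activates.
    pub const fn never() -> Self {
        Self(u64::MAX)
    }

    /// A fork that has always been active (no activation point).
    pub const fn always() -> Self {
        Self(0)
    }

    pub const fn new(daa_score: u64) -> Self {
        Self(daa_score)
    }

    /// Callers ask `is_active`, avoiding negations like `!is_inactive` in ifs.
    pub fn is_active(self, current_daa_score: u64) -> bool {
        current_daa_score >= self.0
    }

    /// True only within `range` scores after activation; the "is always"
    /// check lives inside and returns false, since such forks have no
    /// meaningful activation point.
    pub fn is_within_range_from_activation(self, current_daa_score: u64, range: u64) -> bool {
        self.0 != 0 && self.is_active(current_daa_score) && current_daa_score < self.0.saturating_add(range)
    }
}

fn main() {
    let kip10 = ForkActivation::new(1000);
    assert!(!kip10.is_active(999));
    assert!(kip10.is_active(1000));
    assert!(kip10.is_within_range_from_activation(1005, 10));
    assert!(!kip10.is_within_range_from_activation(1015, 10));
    println!("ok");
}
```

Centralizing all fork gates behind one type like this keeps activation comparisons uniform across consensus code.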

* Refactoring for cleaner pruning proof module (kaspanet#589)

* Cleanup manual block level calc

There were two areas in pruning proof mod that
manually calculated block level.

This replaces those with a call to calc_block_level

* Refactor pruning proof build functions

* Refactor apply pruning proof functions

* Refactor validate pruning functions

* Add comments for clarity

* only enable 8-byte arithmetic for kip10

* use i64 value in 9-byte tests

* fix tests covering kip10 and i64 deserialization

* fix test according to 8-byte math

* finish test covering kip10 opcodes: input/output/amount/spk

* fix kip10 examples

* rename test

* feat: add input index op

* feat: add input/output opcodes

* reserve opcodes.
reorder kip10 opcodes.
update script tests accordingly

* fix example

* introspection opcodes are reserved, not disabled

* use ForkActivation type

* cicd: run kip-10 example

* move spk encoding to txscript module

* rework bound check of input/output index

* fix tests by importing spkencoding trait

* replace todo in descriptions of over[under]flow errors

* reorder new opcodes, reserve script sig opcode, remove txid

* fix bitcoin script tests

* add simple opcode tests

* rename id(which represents input index) to idx

* fix comments

* add input spk tests

* refactor test cases

* refactor(txscript): Enforce input index invariant via assertion

Change TxScriptEngine::from_transaction_input to assert valid input index
instead of returning Result. This better reflects that an invalid index is a
caller's (transaction validation) error rather than a script engine error,
since the input must be part of the transaction being validated.

An invalid index signifies a mismatch between the transaction and the input being
validated - this is a programming error in the transaction validator layer, not
a script engine concern. The script engine should be able to assume it receives
valid inputs from its caller.

The change simplifies error handling by enforcing this invariant early,
while maintaining identical behavior for valid inputs. The function is
now documented to panic on malformed inputs.

This is a breaking change for code that previously handled
InvalidIndex errors, though such handling was likely incorrect
as it indicated an inconsistency in transaction validation.
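A minimal sketch of the constructor change described above, using stand-in types rather than the real TxScriptEngine API:

```rust
/// Stand-in types; the real engine and transaction have many more fields.
struct Transaction {
    inputs: Vec<u64>, // placeholder for real TransactionInput values
}

struct ScriptEngine<'a> {
    tx: &'a Transaction,
    input_idx: usize,
}

impl<'a> ScriptEngine<'a> {
    /// Previously this returned `Result<Self, _>`; an out-of-bounds index is
    /// a caller bug (transaction validator layer), so the invariant is now
    /// enforced early with an assert.
    ///
    /// # Panics
    /// Panics if `input_idx` does not reference an input of `tx`.
    fn from_transaction_input(tx: &'a Transaction, input_idx: usize) -> Self {
        assert!(input_idx < tx.inputs.len(), "input index must reference an input of the validated tx");
        Self { tx, input_idx }
    }
}

fn main() {
    let tx = Transaction { inputs: vec![0, 1, 2] };
    let engine = ScriptEngine::from_transaction_input(&tx, 1);
    assert_eq!(engine.input_idx, 1);
    assert_eq!(engine.tx.inputs.len(), 3);
    println!("ok");
}
```

The behavior for valid inputs is identical; only the malformed-input path changes from a recoverable error to a panic.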

* refactor error types to contain correct info

* rename id to idx

* rename opcode

* make construction of TxScriptEngine from transaction input infallible

* style: format combinators chain

* add integration test covering activation of kip10

* rename kip10_activation_daa_score to kip10_activation

* Update crypto/txscript/src/lib.rs

refactor vector filling

* rework assert

* verify that a block containing a tx with a kip10 opcode is disqualified before the activation daa score and accepted after it has been reached

* revert change to infallible api

* add doc comments

* Update crypto/txscript/src/opcodes/mod.rs

Fallible conversion of output amount (usize -> i64)

* Update crypto/txscript/src/opcodes/mod.rs

Fallible conversion of input amount (usize -> i64)

* add required import

* refactor: SigHashReusedValuesUnsync doesn't need to be mutable

* fix test description

* rework example

* 9-byte integers must fail to serialize

* add todo

* rewrite todo

* remove redundant code

* remove redundant mut in example

* remove redundant mut in example

* remove redundant mut in example

* cicd: apply lint to examples

---------

Co-authored-by: Ori Newman <[email protected]>

* Some simplification to script number types (kaspanet#594)

* Some simplification to script number types

* Add TODO

* Address review comments
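The 4-byte to 8-byte script-number change (and the companion rule that 9-byte integers must fail to serialize) can be illustrated with a Bitcoin-style minimal encoding. This is a hedged sketch of the general technique, not the crate's actual script-number implementation:

```rust
/// Minimally encode an i64 as little-endian bytes with a sign bit in the
/// most significant byte (Bitcoin-script-style number encoding).
fn serialize_num(v: i64) -> Vec<u8> {
    if v == 0 {
        return vec![];
    }
    let negative = v < 0;
    let mut abs = v.unsigned_abs();
    let mut out = Vec::new();
    while abs > 0 {
        out.push((abs & 0xff) as u8);
        abs >>= 8;
    }
    // If the top byte's high bit is already set, append a dedicated sign byte.
    if out.last().unwrap() & 0x80 != 0 {
        out.push(if negative { 0x80 } else { 0x00 });
    } else if negative {
        *out.last_mut().unwrap() |= 0x80;
    }
    out
}

/// Decode, enforcing a byte-length limit (4 pre-KIP-10, 8 after).
fn deserialize_num(data: &[u8], max_len: usize) -> Result<i64, String> {
    if data.len() > max_len {
        return Err(format!("number exceeds {max_len} bytes"));
    }
    if data.is_empty() {
        return Ok(0);
    }
    let mut v: i64 = 0;
    for (i, &b) in data.iter().enumerate() {
        // Mask the sign bit out of the most significant byte.
        let byte = if i == data.len() - 1 { b & 0x7f } else { b };
        v |= (byte as i64) << (8 * i);
    }
    if data.last().unwrap() & 0x80 != 0 {
        v = -v;
    }
    Ok(v)
}

fn main() {
    let bytes = serialize_num(i64::MAX);
    assert_eq!(bytes.len(), 8);
    // Accepted under an 8-byte limit, rejected under the old 4-byte limit,
    assert_eq!(deserialize_num(&bytes, 8).unwrap(), i64::MAX);
    assert!(deserialize_num(&bytes, 4).is_err());
    // and 9-byte encodings must always fail to deserialize.
    assert!(deserialize_num(&[1u8; 9], 8).is_err());
    println!("ok");
}
```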

* feat: add signMessage noAuxRand option for kaspa wasm (kaspanet#587)

* feat: add signMessageWithoutRand method for kaspa wasm

* enhance: sign message api

* fix: unit test fail

* chore: update noAuxRand of ISignMessage

* chore: add sign message demo for noAuxRand

* test reflects enabling payload

* Enhance benchmarking: add payload size variations

Refactored `mock_tx` to `mock_tx_with_payload` to support custom payload sizes. Introduced new benchmark function `benchmark_check_scripts_with_payload` to test performance with varying payload sizes. Commented out the old benchmark function to focus on payload-based tests.

* Enhance script checking benchmarks

Added benchmarks to evaluate script checking performance with varying payload sizes and input counts. This helps in understanding the impact of transaction payload size on validation and the relationship between input count and payload processing overhead.

* Add new test case for transaction hashing and refactor code

This commit introduces a new test case to verify that transaction IDs and hashes change with payload modifications. Additionally, code readability and consistency are improved by refactoring multi-line expressions into single lines where appropriate.
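The property this test case asserts, namely that the payload participates in the transaction id and hash, can be demonstrated with a toy digest. Here std's DefaultHasher stands in for the real consensus hashers, purely for illustration:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Toy transaction digest: hashes a couple of fields plus the payload.
fn tx_digest(version: u16, payload: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    version.hash(&mut h);
    payload.hash(&mut h); // once payload is hashed, changing it changes the digest
    h.finish()
}

fn main() {
    let empty = tx_digest(2, &[]);
    let with_payload = tx_digest(2, &[1, 2, 3]);
    // A payload modification must be reflected in the digest.
    assert_ne!(empty, with_payload);
    println!("ok");
}
```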

* Add payload activation test for transactions

This commit introduces a new integration test to validate the enforcement of payload activation rules at a specified DAA score. The test ensures that transactions with large payloads are rejected before activation and accepted afterward, maintaining consensus integrity.
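The activation rule that the integration test exercises can be gated roughly like this; the function name and the pre-activation limit of zero are assumptions for illustration, not the actual consensus code:

```rust
/// Pre-activation, non-coinbase transactions may not carry a payload
/// (assumed limit for this sketch).
const PRE_ACTIVATION_MAX_PAYLOAD: usize = 0;

/// Reject large payloads before the activation DAA score, accept them after.
fn check_payload(payload_len: usize, daa_score: u64, payload_activation_score: u64) -> Result<(), String> {
    let activated = daa_score >= payload_activation_score;
    if !activated && payload_len > PRE_ACTIVATION_MAX_PAYLOAD {
        return Err("non-empty payload before activation".into());
    }
    Ok(())
}

fn main() {
    let activation = 5000;
    // Rejected one score before activation...
    assert!(check_payload(100, 4999, activation).is_err());
    // ...accepted at and after the activation score.
    assert!(check_payload(100, 5000, activation).is_ok());
    println!("ok");
}
```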

* style: fmt

* test: add test that checks that payload change reflects sighash

* rename test

---------

Co-authored-by: Ori Newman <[email protected]>
Co-authored-by: witter-deland <[email protected]>
3 people authored Nov 24, 2024
1 parent ebaf7fa commit e9711ac
Showing 28 changed files with 7,923 additions and 310 deletions.
5 changes: 4 additions & 1 deletion .github/workflows/ci.yaml
@@ -128,6 +128,9 @@ jobs:

- name: Run cargo doc
run: cargo doc --release --no-deps --workspace
- name: Run kip-10 example
run: cargo run --example kip-10


# test-release:
# name: Test Suite Release
@@ -210,7 +213,7 @@ jobs:
run: cargo fmt --all -- --check

- name: Run cargo clippy
run: cargo clippy --workspace --tests --benches -- -D warnings
run: cargo clippy --workspace --tests --benches --examples -- -D warnings


check-wasm32:
4 changes: 3 additions & 1 deletion cli/src/modules/message.rs
@@ -1,5 +1,6 @@
use kaspa_addresses::Version;
use kaspa_bip32::secp256k1::XOnlyPublicKey;
use kaspa_wallet_core::message::SignMessageOptions;
use kaspa_wallet_core::{
account::{BIP32_ACCOUNT_KIND, KEYPAIR_ACCOUNT_KIND},
message::{sign_message, verify_message, PersonalMessage},
@@ -87,8 +88,9 @@ impl Message {

let pm = PersonalMessage(message);
let privkey = self.get_address_private_key(&ctx, kaspa_address).await?;
let sign_options = SignMessageOptions { no_aux_rand: false };

let sig_result = sign_message(&pm, &privkey);
let sig_result = sign_message(&pm, &privkey, &sign_options);

match sig_result {
Ok(signature) => {
63 changes: 50 additions & 13 deletions consensus/benches/check_scripts.rs
@@ -13,8 +13,10 @@ use kaspa_utils::iter::parallelism_in_power_steps;
use rand::{thread_rng, Rng};
use secp256k1::Keypair;

// You may need to add more detailed mocks depending on your actual code.
fn mock_tx(inputs_count: usize, non_uniq_signatures: usize) -> (Transaction, Vec<UtxoEntry>) {
fn mock_tx_with_payload(inputs_count: usize, non_uniq_signatures: usize, payload_size: usize) -> (Transaction, Vec<UtxoEntry>) {
let mut payload = vec![0u8; payload_size];
thread_rng().fill(&mut payload[..]);

let reused_values = SigHashReusedValuesUnsync::new();
let dummy_prev_out = TransactionOutpoint::new(kaspa_hashes::Hash::from_u64_word(1), 1);
let mut tx = Transaction::new(
@@ -24,10 +26,11 @@
0,
SubnetworkId::from_bytes([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]),
0,
vec![],
payload,
);
let mut utxos = vec![];
let mut kps = vec![];

for _ in 0..inputs_count - non_uniq_signatures {
let kp = Keypair::new(secp256k1::SECP256K1, &mut thread_rng());
tx.inputs.push(TransactionInput { previous_outpoint: dummy_prev_out, signature_script: vec![], sequence: 0, sig_op_count: 1 });
@@ -40,6 +43,7 @@
});
kps.push(kp);
}

for _ in 0..non_uniq_signatures {
let kp = kps.last().unwrap();
tx.inputs.push(TransactionInput { previous_outpoint: dummy_prev_out, signature_script: vec![], sequence: 0, sig_op_count: 1 });
@@ -51,31 +55,32 @@
is_coinbase: false,
});
}

for (i, kp) in kps.iter().enumerate().take(inputs_count - non_uniq_signatures) {
let mut_tx = MutableTransaction::with_entries(&tx, utxos.clone());
let sig_hash = calc_schnorr_signature_hash(&mut_tx.as_verifiable(), i, SIG_HASH_ALL, &reused_values);
let msg = secp256k1::Message::from_digest_slice(sig_hash.as_bytes().as_slice()).unwrap();
let sig: [u8; 64] = *kp.sign_schnorr(msg).as_ref();
// This represents OP_DATA_65 <SIGNATURE+SIGHASH_TYPE> (since signature length is 64 bytes and SIGHASH_TYPE is one byte)
tx.inputs[i].signature_script = std::iter::once(65u8).chain(sig).chain([SIG_HASH_ALL.to_u8()]).collect();
}

let length = tx.inputs.len();
for i in (inputs_count - non_uniq_signatures)..length {
let kp = kps.last().unwrap();
let mut_tx = MutableTransaction::with_entries(&tx, utxos.clone());
let sig_hash = calc_schnorr_signature_hash(&mut_tx.as_verifiable(), i, SIG_HASH_ALL, &reused_values);
let msg = secp256k1::Message::from_digest_slice(sig_hash.as_bytes().as_slice()).unwrap();
let sig: [u8; 64] = *kp.sign_schnorr(msg).as_ref();
// This represents OP_DATA_65 <SIGNATURE+SIGHASH_TYPE> (since signature length is 64 bytes and SIGHASH_TYPE is one byte)
tx.inputs[i].signature_script = std::iter::once(65u8).chain(sig).chain([SIG_HASH_ALL.to_u8()]).collect();
}

(tx, utxos)
}

fn benchmark_check_scripts(c: &mut Criterion) {
for inputs_count in [100, 50, 25, 10, 5, 2] {
for non_uniq_signatures in [0, inputs_count / 2] {
let (tx, utxos) = mock_tx(inputs_count, non_uniq_signatures);
let (tx, utxos) = mock_tx_with_payload(inputs_count, non_uniq_signatures, 0);
let mut group = c.benchmark_group(format!("inputs: {inputs_count}, non uniq: {non_uniq_signatures}"));
group.sampling_mode(SamplingMode::Flat);

@@ -84,7 +89,7 @@ fn benchmark_check_scripts(c: &mut Criterion) {
let cache = Cache::new(inputs_count as u64);
b.iter(|| {
cache.clear();
check_scripts_sequential(black_box(&cache), black_box(&tx.as_verifiable())).unwrap();
check_scripts_sequential(black_box(&cache), black_box(&tx.as_verifiable()), false).unwrap();
})
});

@@ -93,21 +98,20 @@
let cache = Cache::new(inputs_count as u64);
b.iter(|| {
cache.clear();
check_scripts_par_iter(black_box(&cache), black_box(&tx.as_verifiable())).unwrap();
check_scripts_par_iter(black_box(&cache), black_box(&tx.as_verifiable()), false).unwrap();
})
});

// Iterate powers of two up to available parallelism
for i in parallelism_in_power_steps() {
if inputs_count >= i {
group.bench_function(format!("rayon, custom thread pool, thread count {i}"), |b| {
let tx = MutableTransaction::with_entries(tx.clone(), utxos.clone());
// Create a custom thread pool with the specified number of threads
let pool = rayon::ThreadPoolBuilder::new().num_threads(i).build().unwrap();
let cache = Cache::new(inputs_count as u64);
b.iter(|| {
cache.clear();
check_scripts_par_iter_pool(black_box(&cache), black_box(&tx.as_verifiable()), black_box(&pool)).unwrap();
check_scripts_par_iter_pool(black_box(&cache), black_box(&tx.as_verifiable()), black_box(&pool), false)
.unwrap();
})
});
}
@@ -116,11 +120,44 @@
}
}

/// Benchmarks script checking performance with different payload sizes and input counts.
///
/// This benchmark evaluates the performance impact of transaction payload size
/// on script validation, testing multiple scenarios:
///
/// * Payload sizes: 0KB, 16KB, 32KB, 64KB, 128KB
/// * Input counts: 1, 2, 10, 50 transactions
///
/// The benchmark helps understand:
/// 1. How payload size affects validation performance
/// 2. The relationship between input count and payload processing overhead
fn benchmark_check_scripts_with_payload(c: &mut Criterion) {
let payload_sizes = [0, 16_384, 32_768, 65_536, 131_072]; // 0, 16KB, 32KB, 64KB, 128KB
let input_counts = [1, 2, 10, 50];
let non_uniq_signatures = 0;

for inputs_count in input_counts {
for &payload_size in &payload_sizes {
let (tx, utxos) = mock_tx_with_payload(inputs_count, non_uniq_signatures, payload_size);
let mut group = c.benchmark_group(format!("script_check/inputs_{}/payload_{}_kb", inputs_count, payload_size / 1024));
group.sampling_mode(SamplingMode::Flat);

group.bench_function("parallel_validation", |b| {
let tx = MutableTransaction::with_entries(tx.clone(), utxos.clone());
let cache = Cache::new(inputs_count as u64);
b.iter(|| {
cache.clear();
check_scripts_par_iter(black_box(&cache), black_box(&tx.as_verifiable()), false).unwrap();
})
});
}
}
}

criterion_group! {
name = benches;
// This can be any expression that returns a `Criterion` object.
config = Criterion::default().with_output_color(true).measurement_time(std::time::Duration::new(20, 0));
targets = benchmark_check_scripts
targets = benchmark_check_scripts, benchmark_check_scripts_with_payload
}

criterion_main!(benches);
2 changes: 1 addition & 1 deletion consensus/client/src/signing.rs
@@ -178,7 +178,7 @@ pub fn calc_schnorr_signature_hash(
let utxo = cctx::UtxoEntry::from(utxo.as_ref());

let hash_type = SIG_HASH_ALL;
let mut reused_values = SigHashReusedValues::new();
let reused_values = SigHashReusedValuesUnsync::new();

// let input = verifiable_tx.populated_input(input_index);
// let tx = verifiable_tx.tx();
18 changes: 17 additions & 1 deletion consensus/core/src/config/params.rs
@@ -110,6 +110,18 @@ pub struct Params {
/// DAA score from which storage mass calculation and transaction mass field are activated as a consensus rule
pub storage_mass_activation: ForkActivation,

/// DAA score from which tx engine:
/// 1. Supports 8-byte integer arithmetic operations (previously limited to 4 bytes)
/// 2. Supports transaction introspection opcodes:
/// - OpTxInputCount (0xb3): Get number of inputs
/// - OpTxOutputCount (0xb4): Get number of outputs
/// - OpTxInputIndex (0xb9): Get current input index
/// - OpTxInputAmount (0xbe): Get input amount
/// - OpTxInputSpk (0xbf): Get input script public key
/// - OpTxOutputAmount (0xc2): Get output amount
/// - OpTxOutputSpk (0xc3): Get output script public key
pub kip10_activation: ForkActivation,

/// DAA score after which the pre-deflationary period switches to the deflationary period
pub deflationary_phase_daa_score: u64,

@@ -383,6 +395,7 @@ pub const MAINNET_PARAMS: Params = Params {

storage_mass_parameter: STORAGE_MASS_PARAMETER,
storage_mass_activation: ForkActivation::never(),
kip10_activation: ForkActivation::never(),

// deflationary_phase_daa_score is the DAA score after which the pre-deflationary period
// switches to the deflationary period. This number is calculated as follows:
@@ -448,7 +461,7 @@ pub const TESTNET_PARAMS: Params = Params {

storage_mass_parameter: STORAGE_MASS_PARAMETER,
storage_mass_activation: ForkActivation::never(),

kip10_activation: ForkActivation::never(),
// deflationary_phase_daa_score is the DAA score after which the pre-deflationary period
// switches to the deflationary period. This number is calculated as follows:
// We define a year as 365.25 days
@@ -520,6 +533,7 @@ pub const TESTNET11_PARAMS: Params = Params {

storage_mass_parameter: STORAGE_MASS_PARAMETER,
storage_mass_activation: ForkActivation::always(),
kip10_activation: ForkActivation::never(),

skip_proof_of_work: false,
max_block_level: 250,
@@ -575,6 +589,7 @@ pub const SIMNET_PARAMS: Params = Params {

storage_mass_parameter: STORAGE_MASS_PARAMETER,
storage_mass_activation: ForkActivation::always(),
kip10_activation: ForkActivation::never(),

skip_proof_of_work: true, // For simnet only, PoW can be simulated by default
max_block_level: 250,
@@ -623,6 +638,7 @@ pub const DEVNET_PARAMS: Params = Params {

storage_mass_parameter: STORAGE_MASS_PARAMETER,
storage_mass_activation: ForkActivation::never(),
kip10_activation: ForkActivation::never(),

// deflationary_phase_daa_score is the DAA score after which the pre-deflationary period
// switches to the deflationary period. This number is calculated as follows:
11 changes: 8 additions & 3 deletions consensus/core/src/hashing/sighash.rs
@@ -209,9 +209,6 @@ pub fn payload_hash(tx: &Transaction, reused_values: &impl SigHashReusedValues)
return ZERO_HASH;
}

// TODO: Right now this branch will never be executed, since payload is disabled
// for all non coinbase transactions. Once payload is enabled, the payload hash
// should be cached to make it cost O(1) instead of O(tx.inputs.len()).
let mut hasher = TransactionSigningHash::new();
hasher.write_var_bytes(&tx.payload);
hasher.finalize()
@@ -632,6 +629,14 @@ mod tests {
action: ModifyAction::NoAction,
expected_hash: "846689131fb08b77f83af1d3901076732ef09d3f8fdff945be89aa4300562e5f", // should change the hash
},
TestVector {
name: "native-all-0-modify-payload",
populated_tx: &native_populated_tx,
hash_type: SIG_HASH_ALL,
input_index: 0,
action: ModifyAction::Payload,
expected_hash: "72ea6c2871e0f44499f1c2b556f265d9424bfea67cca9cb343b4b040ead65525", // should change the hash
},
// subnetwork transaction
TestVector {
name: "subnetwork-all-0",
7 changes: 7 additions & 0 deletions consensus/core/src/hashing/tx.rs
@@ -157,6 +157,13 @@ mod tests {
expected_hash: "31da267d5c34f0740c77b8c9ebde0845a01179ec68074578227b804bac306361",
});

// Test #8, same as 7 but with a non-zero payload. The test checks id and hash are affected by payload change
tests.push(Test {
tx: Transaction::new(2, inputs.clone(), outputs.clone(), 54, subnets::SUBNETWORK_ID_REGISTRY, 3, vec![1, 2, 3]),
expected_id: "1f18b18ab004ff1b44dd915554b486d64d7ebc02c054e867cc44e3d746e80b3b",
expected_hash: "a2029ebd66d29d41aa7b0c40230c1bfa7fe8e026fb44b7815dda4e991b9a5fad",
});

for (i, test) in tests.iter().enumerate() {
assert_eq!(test.tx.id(), Hash::from_str(test.expected_id).unwrap(), "transaction id failed for test {}", i + 1);
assert_eq!(
14 changes: 14 additions & 0 deletions consensus/core/src/tx.rs
@@ -293,6 +293,8 @@ pub trait VerifiableTransaction {
fn id(&self) -> TransactionId {
self.tx().id()
}

fn utxo(&self, index: usize) -> Option<&UtxoEntry>;
}

/// A custom iterator written only so that `populated_inputs` has a known return type and can be defined on the trait level
@@ -342,6 +344,10 @@ impl<'a> VerifiableTransaction for PopulatedTransaction<'a> {
fn populated_input(&self, index: usize) -> (&TransactionInput, &UtxoEntry) {
(&self.tx.inputs[index], &self.entries[index])
}

fn utxo(&self, index: usize) -> Option<&UtxoEntry> {
self.entries.get(index)
}
}

/// Represents a validated transaction with populated UTXO entry data and a calculated fee
@@ -370,6 +376,10 @@ impl<'a> VerifiableTransaction for ValidatedTransaction<'a> {
fn populated_input(&self, index: usize) -> (&TransactionInput, &UtxoEntry) {
(&self.tx.inputs[index], &self.entries[index])
}

fn utxo(&self, index: usize) -> Option<&UtxoEntry> {
self.entries.get(index)
}
}

impl AsRef<Transaction> for Transaction {
@@ -507,6 +517,10 @@ impl<T: AsRef<Transaction>> VerifiableTransaction for MutableTransactionVerifiab
self.inner.entries[index].as_ref().expect("expected to be called only following full UTXO population"),
)
}

fn utxo(&self, index: usize) -> Option<&UtxoEntry> {
self.inner.entries.get(index).and_then(Option::as_ref)
}
}

/// Specialized impl for `T=Arc<Transaction>`
1 change: 1 addition & 0 deletions consensus/src/consensus/services.rs
@@ -146,6 +146,7 @@ impl ConsensusServices {
tx_script_cache_counters,
mass_calculator.clone(),
params.storage_mass_activation,
params.kip10_activation,
params.payload_activation,
);

26 changes: 18 additions & 8 deletions consensus/src/consensus/test_consensus.rs
@@ -13,11 +13,8 @@ use kaspa_hashes::Hash;
use kaspa_notify::subscription::context::SubscriptionContext;
use parking_lot::RwLock;

use kaspa_database::create_temp_db;
use kaspa_database::prelude::ConnBuilder;
use std::future::Future;
use std::{sync::Arc, thread::JoinHandle};

use super::services::{DbDagTraversalManager, DbGhostdagManager, DbWindowManager};
use super::Consensus;
use crate::pipeline::virtual_processor::test_block_builder::TestBlockBuilder;
use crate::processes::window::WindowManager;
use crate::{
@@ -35,9 +32,10 @@
pipeline::{body_processor::BlockBodyProcessor, virtual_processor::VirtualStateProcessor, ProcessingCounters},
test_helpers::header_from_precomputed_hash,
};

use super::services::{DbDagTraversalManager, DbGhostdagManager, DbWindowManager};
use super::Consensus;
use kaspa_database::create_temp_db;
use kaspa_database::prelude::ConnBuilder;
use std::future::Future;
use std::{sync::Arc, thread::JoinHandle};

pub struct TestConsensus {
params: Params,
@@ -138,6 +136,12 @@ impl TestConsensus {
self.validate_and_insert_block(self.build_block_with_parents(hash, parents).to_immutable()).virtual_state_task
}

/// Adds a valid block with the given transactions and parents to the consensus.
///
/// # Panics
///
/// Panics if block builder validation rules are violated.
/// See `kaspa_consensus_core::errors::block::RuleError` for the complete list of possible validation rules.
pub fn add_utxo_valid_block_with_parents(
&self,
hash: Hash,
@@ -149,6 +153,12 @@
.virtual_state_task
}

/// Builds a valid block with the given transactions, parents, and miner data.
///
/// # Panics
///
/// Panics if block builder validation rules are violated.
/// See `kaspa_consensus_core::errors::block::RuleError` for the complete list of possible validation rules.
pub fn build_utxo_valid_block_with_parents(
&self,
hash: Hash,
