
feat(Gas Oracle): Move gas prices to be U256 to avoid overflows #1324

Conversation

@juan518munoz (Contributor) commented Mar 1, 2024

What ❔

Replace variable types from u64 to U256.

Why ❔

Prevent possible overflows.

Checklist

  • PR title corresponds to the body of PR (we generate changelog entries from PRs).
  • Code has been formatted via zk fmt and zk lint.

@juan518munoz juan518munoz marked this pull request as ready for review March 1, 2024 21:46
@popzxc (Member) left a comment:

Not sure if this PR is ready to be reviewed, but I think that if we're going to switch to U256 because u64 can now overflow with custom base token, it is certainly in scope to address all the places where an overflow can occur (otherwise there isn't much value in the change).

@jrchatruc (Contributor) replied:

> Not sure if this PR is ready to be reviewed, but I think that if we're going to switch to U256 because u64 can now overflow with custom base token, it is certainly in scope to address all the places where an overflow can occur (otherwise there isn't much value in the change).

That makes sense, do you want to add someone else to review this PR or are you checking it internally?

```diff
-header.l1_gas_price as i64,
-header.l2_fair_gas_price as i64,
+(header.l1_gas_price).as_u64() as i64,
+(header.l2_fair_gas_price).as_u64() as i64, // TODO: this might overflow
```
Contributor:

Fixing this overflow requires a database migration moving the field from bigint to numeric(80,0). What is the policy for Postgres migrations? Do we change the existing field, or should we create a new one and start using it now?

Contributor:

Policy:

  • The migration shouldn't be breaking, i.e. an old server version should be able to work with a DB that has the migration applied.
  • The migration shouldn't take much time. We don't have an exact number set, but I would say everything up to 5s is acceptable. cc @Deniallugo @RomanBrodetski, maybe you have a different opinion.

So, if this holds for changing the type of the existing column, then we can go this way. It should be easy to check whether such a migration is breaking. For measuring time, you can ask someone who has access to the EN DB (me, @Deniallugo, @RomanBrodetski) and we can measure the migration time on real mainnet data.
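For context, the in-place variant under discussion would be a single type change per column, roughly as follows (a sketch; table/column names are taken from the diff above, and the real change would touch several columns across l1_batches and miniblocks):

```sql
-- Changing bigint -> numeric forces a full table rewrite, which is what the
-- ~5s budget above would have to cover on mainnet-sized tables.
ALTER TABLE miniblocks
    ALTER COLUMN l1_gas_price TYPE NUMERIC(80,0);
```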

@jrchatruc (Contributor) commented Mar 8, 2024:

I added a migration creating new columns for the values that need to be moved to U256, here: 518ee35. I don't believe it's possible to have an old server version work with a DB with the migration applied if we simply modify the column type.

Essentially, the migration creates new columns with a _u256 suffix and copies all existing values from the non-_u256 columns over to them. The Rust dal/sqlx code is updated so that selects read the _u256 version and map it to the existing struct fields, and inserts write to the new _u256 columns. An SQL trigger on inserts and updates to those tables also writes the same value to the old column. The only issue is what happens when the inserted/updated value does not fit in a bigint/i64; the trigger implementation I pushed sets the old column value to -1 in that case to signify an error.

Let me know what you think of this. In theory that scenario should not be a problem, as in the currently deployed environments the value should always fit in a bigint, but it might be better to error out in those environments to make absolutely sure nothing weird happens.
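A minimal sketch of that scheme for a single column, assuming the table and column names used elsewhere in this PR (the actual migration, covering all affected columns, is in 518ee35):

```sql
ALTER TABLE miniblocks ADD COLUMN l1_gas_price_u256 NUMERIC(80,0);
UPDATE miniblocks SET l1_gas_price_u256 = l1_gas_price;

CREATE OR REPLACE FUNCTION mirror_l1_gas_price() RETURNS trigger AS $$
BEGIN
    -- Keep the legacy bigint column in sync with the new numeric one;
    -- -1 is the sentinel for values that no longer fit in an i64.
    IF NEW.l1_gas_price_u256 > 9223372036854775807 THEN
        NEW.l1_gas_price := -1;
    ELSE
        NEW.l1_gas_price := NEW.l1_gas_price_u256;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER mirror_l1_gas_price
    BEFORE INSERT OR UPDATE ON miniblocks
    FOR EACH ROW EXECUTE FUNCTION mirror_l1_gas_price();
```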

```diff
@@ -474,8 +475,7 @@ impl BlocksDal<'_, '_> {
         // Serialization should always succeed.
         let used_contract_hashes = serde_json::to_value(&header.used_contract_hashes)
             .expect("failed to serialize used_contract_hashes to JSON value");
-        let base_fee_per_gas = BigDecimal::from_u64(header.base_fee_per_gas.as_u64()) // TODO: this might overflow
-            .context("block.base_fee_per_gas should fit in u64")?;
+        let base_fee_per_gas = BigDecimal::from_str(&header.base_fee_per_gas.to_string())?;
```
Author:

Fixes the possible overflow by converting directly from U256 to BigDecimal.

@popzxc popzxc requested a review from perekopskiy March 5, 2024 07:13
@popzxc (Member) commented Mar 5, 2024:

@jrchatruc I've asked @perekopskiy to review and assist you with this PR.

@perekopskiy (Contributor) left a comment:

Some general questions/comments:

  • Do I get it right that base_fee_per_gas/l2_fair_gas_price may overflow u64 if the base token price is low?
  • Do I get it right that l1_gas_price may overflow u64 if L1 is not Ethereum but some other chain with a cheap base token?
  • u64 is sufficient for gas_per_pubdata, because all VM gas fits into u32, so even u32 would be enough. So please revert all the changes you've made to gas_per_pubdata.


```rust
base_fee_per_gas: U256::from(
    l1_batch
        .base_fee_per_gas
        .to_u64()
```
Contributor:

You should avoid casting to u64 here
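One way to do that is to round-trip through the decimal string representation, e.g. (a sketch; `bigdecimal_to_u256` is a hypothetical helper, and `U256` is assumed to be the zksync_basic_types re-export):

```rust
use bigdecimal::BigDecimal;
use zksync_basic_types::U256;

/// Hypothetical helper: lossless BigDecimal -> U256 conversion that avoids
/// the truncating `to_u64()` intermediate. Assumes the stored value is a
/// non-negative integer, which holds for gas prices.
fn bigdecimal_to_u256(value: &BigDecimal) -> U256 {
    U256::from_dec_str(&value.to_string())
        .expect("gas price should be a non-negative integer")
}
```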

Author:

Addressed here 37fccd5

```diff
@@ -559,7 +564,7 @@ impl From<StorageMiniblockHeader> for MiniblockHeader {
             hash: H256::from_slice(&row.hash),
             l1_tx_count: row.l1_tx_count as u16,
             l2_tx_count: row.l2_tx_count as u16,
-            base_fee_per_gas: row.base_fee_per_gas.to_u64().unwrap(),
+            base_fee_per_gas: U256::from(row.base_fee_per_gas.to_u64().unwrap()),
```
Contributor:

Same

Author:

Same as above, addressed here 37fccd5

Comment on lines 41 to 44:

```diff
-pub l1_gas_price: u64,
+pub l1_gas_price: U256,
 pub l2_fair_gas_price: u64,
 pub fair_pubdata_price: Option<u64>,
```
Contributor:

Why was only the type of l1_gas_price changed?

Author:

Yeah, you're right; we will also update l2_fair_gas_price and fair_pubdata_price. I also realized the price types need to be updated for sync blocks, along with the protobuf .proto files used for external node sync messages (writing it here to track it).

Author:

We have updated both l2_fair_gas_price and fair_pubdata_price, as well as the .proto files, but I realized they are not actually used for syncing, as that's done through the regular RPC API the operator exposes.

Protobuf types were updated here a478592 and a simple unit test for the change was added here 2c75075. But to make sure: what is the whole protobuf/storage sync machinery used for? How do we know it's working beyond some basic unit tests?
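For reference, protobuf has no native 256-bit integer, so a change like this typically widens the field to `bytes`. A sketch of the Rust-side round-trip under that assumption (the encoding actually chosen is in a478592):

```rust
use zksync_basic_types::U256;

/// Encode a U256 as 32 big-endian bytes for a protobuf `bytes` field.
fn u256_to_proto_bytes(value: U256) -> Vec<u8> {
    let mut buf = [0u8; 32];
    value.to_big_endian(&mut buf);
    buf.to_vec()
}

/// Decode the `bytes` field back, rejecting payloads longer than 32 bytes.
fn u256_from_proto_bytes(bytes: &[u8]) -> Option<U256> {
    (bytes.len() <= 32).then(|| U256::from_big_endian(bytes))
}
```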

```diff
@@ -36,11 +36,10 @@ impl<H: HistoryMode, S: WriteStorage> VmInstance<S, H> {
         // For now, bootloader charges only for base fee.
         let effective_gas_price = self.block_context.base_fee;

         let bootloader_eth_price_per_pubdata_byte = U256::from(effective_gas_price)
```
Contributor:

You should avoid changing multivm versions other than the latest one. Instead, you should glue the results.

Author:

The problem here is that we have updated the L1PeggedBatchFeeModelInput struct to move gas prices to U256. This struct is defined in the zksync_types crate under the fee model module, outside the vm/multivm code. The change breaks all the multivm methods/functions that use it, because they expect its fields to still be u64 (for example, the derive_base_fee_and_gas_per_pubdata function).

I'm not sure what the best way to proceed here is; we can introduce a new struct (something like L1PeggedBatchFeeModelInputU256) that the latest VM uses, while the old versions keep using the non-U256 version, and make the derive_base_fee_and_gas_per_pubdata function in utils dispatch accordingly. Is that reasonable?

Contributor:

Yes! IMO it's better to update L1PeggedBatchFeeModelInput to use U256 and introduce a struct L1PeggedBatchFeeModelInputLegacy with u64, describing why we need it in a doc comment. In old VM version modules you can then do `use ...::L1PeggedBatchFeeModelInputLegacy as L1PeggedBatchFeeModelInput`, so the diff will be really small.
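A minimal sketch of that layout (names from the suggestion above; the field set is illustrative rather than the full struct):

```rust
use zksync_basic_types::U256;

/// Fee input used by the latest VM version.
#[derive(Debug, Clone, Copy)]
pub struct L1PeggedBatchFeeModelInput {
    pub l1_gas_price: U256,
    pub fair_l2_gas_price: U256,
}

/// Legacy u64 variant, kept so that frozen multivm versions compile
/// unchanged; their fee math predates custom base tokens and cannot
/// overflow u64.
#[derive(Debug, Clone, Copy)]
pub struct L1PeggedBatchFeeModelInputLegacy {
    pub l1_gas_price: u64,
    pub fair_l2_gas_price: u64,
}

// In an old VM version module, an alias keeps the diff minimal:
// use zksync_types::fee_model::L1PeggedBatchFeeModelInputLegacy as L1PeggedBatchFeeModelInput;
```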

Author:

After trying out the changes, I realized that doing this ends up requiring going all the way up to the Vm struct and the VmInterface trait, because they both contain an L1BatchEnv, which holds the gas prices that need to be changed from u64 to U256.

That is a lot of structs/traits (at least 7) that would now have a Legacy version, which is a lot of changes. We can go that route, but I want to double-check whether that's reasonable or whether there are other alternatives.

Author:

@perekopskiy @popzxc, can you please check the previous message? Thanks!

Comment on lines 28 to 31:

```diff
-pub l1_gas_price: u64,
+pub l1_gas_price: U256,
 /// L2 gas price used as VM parameter for the L1 batch corresponding to this L2 block.
 pub l2_fair_gas_price: u64,
 /// The pubdata price used as VM parameter for the L1 batch corresponding to this L2 block.
```
Contributor:

Same here: why did you change only l1_gas_price?

Author:

Same as the comment above.

@jrchatruc (Contributor) replied:

> Some general questions/comments:
>
>   • Do I get it right that base_fee_per_gas/l2_fair_gas_price may overflow u64 if the base token price is low?
>   • Do I get it right that l1_gas_price may overflow u64 if L1 is not Ethereum but some other chain with a cheap base token?
>   • u64 is sufficient for gas_per_pubdata, because all VM gas fits into u32, so even u32 would be enough. So please revert all the changes you've made to gas_per_pubdata.

  • Indeed, both base_fee_per_gas and l2_fair_gas_price may overflow 64 bits if the base token price is low.
  • Yes. I believe gas prices on Ethereum are also technically U256, so I think it's a good idea to change the type so it matches.
  • Makes sense, we'll revert them.

Comment on lines 8 to 13:

```sql
UPDATE l1_batches SET l1_gas_price_u256=l1_gas_price;
UPDATE l1_batches SET l2_fair_gas_price_u256=l2_fair_gas_price;

UPDATE miniblocks SET l1_gas_price_u256=l1_gas_price;
UPDATE miniblocks SET l2_fair_gas_price_u256=l2_fair_gas_price;
UPDATE miniblocks SET fair_pubdata_price_u256=fair_pubdata_price;
```
Author:

Got it. I'll remove the updates from the migration and instead add a task that, every few seconds, updates some of the rows in batches of around 5-10k until it's done. Is it reasonable to write the task in Rust/Tokio inside the operator, or do you have another method of doing it? We couldn't find an example in the codebase.

Contributor:

Check fee_address_migration as an example
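For the record, a sketch of what such a batched backfill could look like as a Tokio task (column/table names as in the migration above; `fee_address_migration` in the codebase is the pattern to actually follow):

```rust
use std::time::Duration;
use sqlx::PgPool;

/// Backfills `l1_gas_price_u256` in bounded batches so the table is never
/// locked for long; sleeps between batches to limit DB load.
async fn backfill_u256_columns(pool: PgPool, batch_size: i64) -> sqlx::Result<()> {
    loop {
        let updated = sqlx::query(
            "UPDATE miniblocks SET l1_gas_price_u256 = l1_gas_price \
             WHERE number IN ( \
                 SELECT number FROM miniblocks \
                 WHERE l1_gas_price_u256 IS NULL \
                 ORDER BY number \
                 LIMIT $1 \
             )",
        )
        .bind(batch_size)
        .execute(&pool)
        .await?
        .rows_affected();

        if updated == 0 {
            break; // all rows migrated
        }
        tokio::time::sleep(Duration::from_secs(1)).await;
    }
    Ok(())
}
```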

Comment on lines -1577 to +1584:

```diff
-let raw_batches = sqlx::query_as!(
-    StorageL1Batch,
+let raw_batches = sqlx::query_as::<_, StorageL1Batch>(
```
Contributor:

Hmm, why are you changing the macro to a function?

Author:

The reason is that the query_as! macro needs the struct field names to exactly match the column names in Postgres, but with the new columns mapped to the old struct fields the names no longer match. The query_as function allows for this.
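A sketch of the difference (minimal illustrative struct; the runtime `query_as` maps rows through a `FromRow` impl and skips sqlx's compile-time checks, which is what makes the alias-based column-to-field mapping possible):

```rust
use sqlx::{types::BigDecimal, FromRow, PgPool};

#[derive(FromRow)]
struct StorageL1Batch {
    // Populated from the `l1_gas_price_u256` column via the SQL alias below.
    l1_gas_price: BigDecimal,
}

async fn load_batches(pool: &PgPool) -> sqlx::Result<Vec<StorageL1Batch>> {
    // `query_as!` would check this query against the live schema and require
    // matching names at compile time; the plain function defers it to runtime.
    sqlx::query_as::<_, StorageL1Batch>(
        "SELECT l1_gas_price_u256 AS l1_gas_price FROM l1_batches",
    )
    .fetch_all(pool)
    .await
}
```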

@jrchatruc (Contributor) commented:

The change required for this ticket (related to this discussion) was merged into this PR (here: a11e12c), since it depends on the changes made here.

RomanBrodetski and others added 16 commits April 17, 2024 12:22
matter-labs#1711)

So far we used a simpler interface where the state keeper was provided the next miniblock only after it persisted the previous one. With the current state keeper implementation this severely limited the throughput. I've updated the consensus API to allow storing certificates asynchronously to improve the throughput.
## What ❔

Fixes EVM-561


## Why ❔


## Checklist


- [ ] PR title corresponds to the body of PR (we generate changelog
entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `zk spellcheck`.
- [ ] Linkcheck has been run via `zk linkcheck`.
## What ❔

Dedups system requirements in the External Node docs and (hopefully) makes them clearer.

## Why ❔

Users are being confused by unclear system requirements mentioned in two
different places in docs.

## Checklist

- [x] PR title corresponds to the body of PR (we generate changelog
entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `zk spellcheck`.
- [x] Linkcheck has been run via `zk linkcheck`.
…#1284)

## What ❔

Adding miniblock sealing according to max payload size.

## Why ❔

Compliance with max payload size imposed in consensus modules.

## Notes
* Building the payload incrementally isn't feasible, because the miniblock's transactions are flushed to the DB before being reconstructed and encoded as the payload. So the payload size accumulator added to `MiniblockUpdates` is just a heuristic: every executed transaction is encoded only to estimate its encoding size.
* `miniblock_max_payload_size` was added to `StateKeeperConfig`, decoupled from `max_payload_size` in `consensus::config::Config`. It is possible to use only the latter, or to make the latter define/override the former. The state keeper max size should be less than half the consensus max size, because the state keeper never rejects the latest transaction and doesn't account for some metadata fields. The actual value was not set yet, because I wasn't sure about the values used for different non-test environments, or whether we even want to keep this approach.
* `MiniblockMaxPayloadSizeSealer` was added, implementing
`should_seal_miniblock()` alone (without
`should_seal_l1_batch_unconditionally`, as in trait `IoSealCriteria`).
This isn’t an ideal solution, so other ideas are welcome.

The applied configs are:
* State keeper threshold: 1 MB
* Consensus threshold: 2.5 MB

---

Closing
[BFT-417](https://linear.app/matterlabs/issue/BFT-417/limit-l2-block-size)

## Checklist


- [ ] PR title corresponds to the body of PR (we generate changelog
entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `zk spellcheck`.
- [ ] Linkcheck has been run via `zk linkcheck`.
## What ❔

* Fixing docker build and scripts for the local node/setup.


## Why ❔

* This will allow the local node to work again, which is crucial before the bridgehub update.
* This PR only fixes it in the 'main' branch; a follow-up PR to the bridgehub branch will come later.
## What ❔

Extract `zksync_core::house_keeper` to separate crate in the `node`
folder

## Why ❔

Part of the "modularize the codebase" process, required for legolizer
and publishing on crates.io.

## Checklist

- [x] PR title corresponds to the body of PR (we generate changelog
entries from PRs).
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `zk spellcheck`.
- [x] Linkcheck has been run via `zk linkcheck`.
…-kl-factory-2

chore: Merge main to kl factory
## What ❔
Updated the bytecode compression explanation example, to be valid and
working.

In the connecting contracts PR:
matter-labs/era-contracts#193 I added a test
that checks the bytecode and runs the `publishCompressBytecode` function
with the bytecode from this example.

## Why ❔
The previous example was not valid and could confuse newcomers, because it had different bytecode compared to what was described.

## Checklist
- [x] PR title corresponds to the body of PR (we generate changelog
entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `zk spellcheck`.
- [x] Linkcheck has been run via `zk linkcheck`.
## What ❔

- Allows to configure concurrency during DB recovery and restricts it to
10 by default.
- Sets the "affected" health status while the recovery is in progress.

## Why ❔

- DB recovery is very I/O-heavy, so having concurrency less than the DB
pool size can utilize node resources more efficiently.
- "Affected" health status is more idiomatic; the node with such a
status still passes readiness checks, but it signals that it isn't fully
functional.

## Checklist

- [x] PR title corresponds to the body of PR (we generate changelog
entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `zk spellcheck`.
- [x] Linkcheck has been run via `zk linkcheck`.
…labs#1696)

## What ❔

Renames "miniblock" to "L2 block" in types, DAL and state keeper.

## Why ❔

- "Miniblock" is an internal term that is somewhat difficult to
understand; "L2 block" is better in this regard.
- It makes sense for renaming to be complete (i.e., not only concern
logs, but also the codebase), so that naming is reasonably consistent.

## Checklist

- [x] PR title corresponds to the body of PR (we generate changelog
entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `zk spellcheck`.
- [x] Linkcheck has been run via `zk linkcheck`.
…s` (matter-labs#1717)

## What ❔

don't panic in `BlockOutputWithProofs::verify_proofs` and rather return
a `Result`

## Why ❔

So `BlockOutputWithProofs::verify_proofs` can be used by other
components.

## Checklist

- [x] PR title corresponds to the body of PR (we generate changelog
entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `zk spellcheck`.
- [ ] Linkcheck has been run via `zk linkcheck`.

Signed-off-by: Harald Hoyer <[email protected]>
…-fix

fix(yarn.lock): zksync-ethers commit fix
slowli and others added 28 commits May 17, 2024 14:45
## What ❔

Fixes minor issues with the EN:

- Transient error detection for RPC clients is suboptimal, leading to
what really is a transient error crashing consistency checker.
- Since refactoring signal handling, `ManagedTasks::wait_single()` on EN may inappropriately log task termination errors even after a signal has been received.
- Since recent refactoring (?), block fetcher doesn't tag its L2 client.
- Block fetcher may produce many rate limiting logs, which currently
have WARN level.
- Initializing L2 client multiple times (e.g., in a load test) produces
many WARN logs.

## Why ❔

Improves EN UX.

## Checklist

- [x] PR title corresponds to the body of PR (we generate changelog
entries from PRs).
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `zk spellcheck`.
…ter-labs#1948)

## What ❔

Reworks `EthInterface` to be an extension trait for L1 RPC client
instead of a separate entity.

## Why ❔

Allows to eliminate L1 client / `EthInterface` duplication and ensure
that the `EthInterface` wrapper code is covered by tests.

## Checklist

- [x] PR title corresponds to the body of PR (we generate changelog
entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `zk spellcheck`.
…#1988)

## What ❔

Runs `MainNodeFeeParamsFetcher` in `run_api`.
Removes it from `run_core`.

## Why ❔

The EN's core doesn't use `MainNodeFeeParamsFetcher`; the API does. The API operated normally only if core and API were launched in the same process, which is not true for distributed nodes. This PR fixes that.

## Checklist


- [ ] PR title corresponds to the body of PR (we generate changelog
entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `zk spellcheck`.
…al` (#220)

* initial commit

* replace err matching with unwrap && use only std mutex
## What ❔

Add check to `set_tx_id` query, that doesn't allow to update tx_id if it
is not NULL.

## Why ❔

To prevent issues where one batch could be committed twice.

## Checklist


- [x] PR title corresponds to the body of PR (we generate changelog
entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `zk spellcheck`.
)

This PR adds a check to the batch status command to show that a batch
does not exist or that the proving process for it has already finished.

## Checklist

- [x] PR title corresponds to the body of PR (we generate changelog
entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `zk spellcheck`.
…abs#1995)

## What ❔

- Adds foundry installation to zk-environment Dockerfile

## Why ❔

- This is necessary for adding CI to zk_toolbox


## Checklist


- [x] PR title corresponds to the body of PR (we generate changelog
entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `zk spellcheck`.
## What ❔

Integrate GPU proof compressors into monorepo.
GPU compressors can be run with `gpu` feature.

## Why ❔

Running compressors with a GPU significantly improves efficiency:
- CPU compressor average proving time: 15 minutes
- GPU compressor average proving time: 2 minutes

## Checklist


- [x] PR title corresponds to the body of PR (we generate changelog
entries from PRs).
- [x] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
- [x] Spellcheck has been run via `zk spellcheck`.
- [x] Linkcheck has been run via `zk linkcheck`.
🤖 I have created a release *beep* *boop*
---


##
[24.4.0](matter-labs/zksync-era@core-v24.3.0...core-v24.4.0)
(2024-05-21)


### Features

* **prover:** add GPU feature for compressor
([matter-labs#1838](matter-labs#1838))
([e9a2213](matter-labs@e9a2213))
* **pruning:** remove manual vacuum; add migration configuring
autovacuum
([matter-labs#1983](matter-labs#1983))
([3d98072](matter-labs@3d98072))
* **tests:** Move all env calls to one place in ts-tests
([matter-labs#1968](matter-labs#1968))
([3300047](matter-labs@3300047))


### Bug Fixes

* Disallow non null updates for transactions
([matter-labs#1951](matter-labs#1951))
([a603ac8](matter-labs@a603ac8))
* **en:** Minor node fixes
([matter-labs#1978](matter-labs#1978))
([74144e8](matter-labs@74144e8))
* **en:** run `MainNodeFeeParamsFetcher` in API component
([matter-labs#1988](matter-labs#1988))
([b62677e](matter-labs@b62677e))
* **merkle-tree:** Fix tree API health check status
([matter-labs#1973](matter-labs#1973))
([6235561](matter-labs@6235561))

---
This PR was generated with [Release
Please](https://github.com/googleapis/release-please). See
[documentation](https://github.com/googleapis/release-please#release-please).

---------

Co-authored-by: zksync-era-bot <[email protected]>
I need matter-labs/era-consensus#114 to be included, so we can start testing EN validator mode (on stage).
…tter-labs#1893)

Prover queue metrics jobs are annotated with the protocol version. These metrics are used for autoscaling jobs. These changes will enable autoscaling different pools of provers at the same time, with different protocol versions.

The change is necessary for a better protocol upgrade procedure. Flow in
the past: Start protocol upgrade, finalize all old provers, deploy new
provers, finish protocol upgrade.
Flow in the future: Deploy both new and old provers. Be independent of
protocol upgrade and require no manual timing of deployment.

The metrics will be used in autoscaler to tell what versions to have up.
Autoscaler will have a configuration similar to:
deployment green - metric protocol_version=22 - tag prover-v13.0.0
deployment blue - metric protocol_version=24 - tag prover-v14.0.0

The metrics will inform how many instances of each component will be
needed per version.

NOTE: There are some refactorings involved for the upcoming core/prover
house_keeper split. Whilst they could've been separated in a different
PR, they've been done together given CI's condition.
NOTE2: We should really migrate to `vise`. Left it out for this PR as it
was growing enough as is (+ adds operational risks).

---------

Co-authored-by: Artem Fomiuk <[email protected]>
@juan518munoz juan518munoz deleted the gas_oracle_move_gas_to_u256 branch May 22, 2024 19:37