
Cap the size of the transaction to reduce strain on network #3686

Closed
cmwaters opened this issue Jul 16, 2024 · 18 comments · Fixed by #3909
Labels: nice to have (item is not blocking or required), WS: V3 3️⃣ (item is directly relevant to the v3 hardfork)

Comments

@cmwaters
Contributor

cmwaters commented Jul 16, 2024

Summary

Similar to how Celestia has constraints on block and square size to ensure that the network does not fall over under excess demand, we should also consider adding constraints on the max tx size and the max number of transactions.

Currently we do have an antehandler that sets the max tx size to the current max block size (note that nodes can locally set a smaller max tx size for their mempool). However, as the block size jumps to 8MB, having to gossip an 8MB transaction without any chunking could be detrimental to the network. As a precaution we should set a max tx size that is no longer equal to the block size. This also makes it easier for submitters to know how large their transaction batches should be.
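For illustration, such a cap would live in the ante handler chain. A minimal sketch of a size-checking decorator in the Cosmos SDK (the decorator name and the 2MB constant are hypothetical, not the existing celestia-app handler):

```go
package ante

import (
	errorsmod "cosmossdk.io/errors"
	sdk "github.com/cosmos/cosmos-sdk/types"
	sdkerrors "github.com/cosmos/cosmos-sdk/types/errors"
)

// maxTxBytes is an illustrative cap (2 MiB); a real value would come from
// a parameter or versioned constant rather than a hardcoded literal.
const maxTxBytes = 2 * 1024 * 1024

// TxSizeDecorator rejects any transaction whose raw encoded size exceeds
// maxTxBytes, before the rest of the ante chain runs.
type TxSizeDecorator struct{}

func (d TxSizeDecorator) AnteHandle(
	ctx sdk.Context, tx sdk.Tx, simulate bool, next sdk.AnteHandler,
) (sdk.Context, error) {
	// ctx.TxBytes() holds the raw bytes of the tx being processed.
	if size := len(ctx.TxBytes()); size > maxTxBytes {
		return ctx, errorsmod.Wrapf(sdkerrors.ErrTxTooLarge,
			"tx size %d bytes exceeds the %d byte cap", size, maxTxBytes)
	}
	return next(ctx, tx, simulate)
}
```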

@cmwaters cmwaters added the proposal (item is not yet actionable and is suggesting a change that must first be agreed upon) and WS: V3 3️⃣ (item is directly relevant to the v3 hardfork) labels Jul 16, 2024
@cmwaters
Contributor Author

Does this require a CIP? (It is technically a consensus-breaking rule.)

What would we want to set this size to? 1MB? 2MB?

@cmwaters cmwaters added the needs:discussion (item needs to be discussed as a group in the next sync; if marking an item, pls be prepped to talk) label Jul 18, 2024
@evan-forbes
Member

Yeah, this would require at least one CIP, but as touched on above, the reasoning for the two constraints is quite distinct, so we might want to do two separate things.

The cap on tx size is difficult, as users have indicated that larger blobs can help things like "super blobs". The protocol also might not need such a limit until blocks are increased to meaningfully larger than 4MB. It could also just be that configurations and expectations need to change for larger txs: larger txs take longer to gossip, so users should not expect them to be included within the typical two-block period. For the same reason, proposers using any flavor of compact blocks would need to change their calculations for including large txs. If that's the case, then maybe we don't need a limit.

For the cap on the number of txs, it seems much less difficult to find an effective number and to see the urgency, at least for the medium term. This might be needed sooner rather than later as well, considering 5000 txs can already fit in a 2MB block.

@cmwaters
Contributor Author

> For the cap on the number of txs, it seems much less difficult to find an effective number and to see the urgency, at least for the medium term. This might be needed sooner rather than later as well, considering 5000 txs can already fit in a 2MB block.

But if the problem here is with execution costs, then don't we want to go about this by setting a max gas?

@cmwaters
Contributor Author

> The protocol also might not need such a limit until blocks are increased to meaningfully larger than 4MB

That could be the case, but that's on the horizon. Ideally we add the cap before we increase the block size, so that it has less impact on current users.

@evan-forbes
Member

> setting a max gas?

By this, are you referring to setting the max gas value to something other than -1? Adjusting resource pricing is a great alternative to simply hardcoding the max number of txs! That seems like a somewhat more complex problem, though, if we continue to price blobs in the same unit as state operations.

@cmwaters
Contributor Author

> By this, are you referring to setting the max gas value to something other than -1?

Yup, I think this would be better if we can calculate a reasonable upper bound.
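For context, the max gas discussed here is CometBFT's block-level consensus parameter, which defaults to -1 (unlimited). A minimal sketch of what flipping it to an explicit bound looks like (the 100M figure is purely illustrative, not a proposed value):

```go
package main

import (
	"fmt"

	cmttypes "github.com/cometbft/cometbft/types"
)

func main() {
	// Block.MaxGas caps the total gas of all txs in a block; -1 (the
	// CometBFT default) disables the check entirely.
	params := cmttypes.DefaultConsensusParams()
	fmt.Println("default MaxGas:", params.Block.MaxGas) // -1

	// Setting an explicit bound makes gas a limiting resource, which
	// indirectly caps the number of txs that fit in a block.
	params.Block.MaxGas = 100_000_000 // illustrative only
	fmt.Println("capped MaxGas:", params.Block.MaxGas)
}
```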

@evan-forbes
Member

After a sync discussion: we can first have a soft cap by overriding the mempool config in a patch release, and then add a consensus-critical hardcoded parameter pending a CIP.

@rootulp
Collaborator

rootulp commented Aug 1, 2024

> we can first have a soft cap by overriding the mempool config in a patch release

A soft cap on the number of transactions in the mempool or the size of an individual transaction?

@rootulp
Collaborator

rootulp commented Aug 5, 2024

Clarified during a call: the soft cap is proposed for the size of an individual transaction:

```toml
[mempool]
max_tx_bytes = 7897088
```

and not for the number of transactions in the mempool.
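For reference, this key sits in the `[mempool]` section of each node's CometBFT `config.toml` and bounds only what that node's mempool will accept; it is local operator policy, not a consensus rule. A sketch of the relevant section (comments are explanatory; the value is the one quoted above):

```toml
[mempool]
# Maximum size in bytes of a single transaction accepted into this node's
# mempool. Local policy only; consensus does not enforce it.
max_tx_bytes = 7897088
```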

@adlerjohn
Member

adlerjohn commented Aug 5, 2024

Strongly approve of this proposal (both the max transaction size and the max number of transactions, or more generally a max on resource consumption by the Celestia state machine).

I'd suggest limiting the maximum number of bytes for a single transaction to the current ~2 MB, and increasing it only after carefully studying the effects of larger transaction sizes, since the current benchmarking etc. is more focused on larger block sizes. The max transaction size can be increased later as needed, but by setting it to ~2 MB to start, users can at least continue submitting any blob they can submit today.

@musalbas
Member

musalbas commented Aug 5, 2024

I'm not sure a tx size limit needs to be consensus-critical; perhaps it can just be a mempool param.

For example, if a validator makes a block with a 32MB tx they received out of band, I don't think that causes any issues.

@cmwaters cmwaters changed the title Block validity rule to cap the size of the transaction and the number of txs in the block Block validity rule to cap the size of the transaction Aug 26, 2024
@cmwaters cmwaters changed the title Block validity rule to cap the size of the transaction Cap the size of the transaction to reduce strain on network Aug 26, 2024
@cmwaters
Contributor Author

EDIT: I have split this out into two different issues. You can find the discussion on limiting the amount of computation here: #3820

@rootulp rootulp added this to the v3 milestone Sep 12, 2024
@evan-forbes evan-forbes added the required (issue is required to be closed before workstream can be closed) label Sep 12, 2024
@evan-forbes evan-forbes removed the needs:discussion label Sep 23, 2024
@ninabarbakadze
Member

I'll write a CIP for this. @rootulp @cmwaters @evan-forbes

@musalbas
Member

Can't it just be a mempool parameter, rather than consensus-critical?

@evan-forbes
Member

For posterity, cross-posting this here:

We know we want to do celestiaorg/celestia-core#1446. Fortunately, the difference in terms of implementation is very small, so we can move forward with implementing a versioned constant for max tx size, and then make it a tx validity rule afterwards.

The reason to be defensive here is that once users have the ability to post blobs > 2MB, there's no going back, as removing that ability could break users.

Without the ability to chunk, an 8MB tx would take longer to propagate; how much longer depends on many factors. The existing block propagation inherently chunks block data, so there the only effect would be the tx taking longer to be picked up by a validator.
The effect of an 8MB vs a 2MB tx on compact blocks is more complex, and the same goes for vacuum. This means we'd have to implement chunking for both, which complicates the initial implementations.

The main issue mentioned by John was the potential for spam on the p2p layer: we are unable to cut off a peer until we have downloaded the entire 8MB message. Fortunately, fixing this does not require a tx validity rule, only a versioned constant.

To summarize: we should be able to handle 8MB txs, but we'd then need to add chunking to compact blocks and vacuum. We want a versioned constant no matter what.

I'm supportive of simply adding the versioned constant at 2MB in the mempool, with a plan to increase it to 8MB after chunking is added to vacuum or compact blocks. Users can still get an 8MB tx included in a block if they submit it directly to a validator and wait for them to produce a block. If this occurs, then users clearly want 8MB txs, and we need to add chunking to vacuum and compact blocks to support those users.
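To make the shape of this concrete, here is a minimal sketch of a versioned constant plus the check that could first guard the mempool and later be promoted to a tx validity rule (names, version numbers, and values are hypothetical, not the #3909 implementation):

```go
package appconsts

import "fmt"

// MaxTxSize returns a hypothetical per-version cap on the raw size of a
// single transaction in bytes. Keying it on the app version lets the cap
// change at a hard fork without affecting older heights.
func MaxTxSize(appVersion uint64) int {
	if appVersion >= 3 {
		return 2 * 1024 * 1024 // illustrative 2 MiB cap from v3 onwards
	}
	return 0 // no explicit cap before v3
}

// CheckTxSize enforces the cap. Called from the mempool's CheckTx first,
// it could later also run in ProcessProposal to become consensus-critical
// without changing the constant itself.
func CheckTxSize(appVersion uint64, rawTx []byte) error {
	if cap := MaxTxSize(appVersion); cap > 0 && len(rawTx) > cap {
		return fmt.Errorf("tx is %d bytes, exceeding the versioned cap of %d bytes",
			len(rawTx), cap)
	}
	return nil
}
```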

@evan-forbes evan-forbes added the nice to have (item is not blocking or required) label and removed the proposal and required labels Oct 2, 2024
@evan-forbes evan-forbes removed this from the v3 milestone Oct 2, 2024
@rootulp
Collaborator

rootulp commented Oct 4, 2024

> Can't it just be a mempool parameter, rather than consensus-critical?

I'm wondering the same thing. Earlier in this thread, we discussed addressing this issue via the existing mempool config:

```toml
[mempool]
max_tx_bytes = 7897088
```

Why does #3909 introduce a versioned constant and a tx validity rule instead of relying on the mempool parameter? Perhaps this can be clarified in the CIP.

@adlerjohn
Member

@rootulp there are a few motivations, but non-exhaustively: if people start using large blobs, then the max blob size can't be decreased later. If throughput is the bottleneck, and large blobs also require large squares, then that could actually prevent decreasing block size and block time at the same time while keeping throughput constant. For this and other unknown-unknown cases, it's better to be safe than sorry; otherwise we might paint ourselves into a corner.

ninabarbakadze added a commit that referenced this issue Oct 11, 2024
@rootulp
Collaborator

rootulp commented Oct 17, 2024

I understand that we don't want to claim support for 8 MiB blobs now because we may need to decrease that later. I was wondering why we chose to limit tx size via a consensus-breaking change instead of changing the default mempool MaxTxBytes. IMO if we change the default max tx bytes to 2 MiB (and ask validators to configure that locally), then I expect most blob submitters to continue submitting blobs < 2 MiB, because larger blobs will be rejected by most nodes' mempools.
