SIMD-0186: Loaded Transaction Data Size Specification #186
Conversation
> must be loaded, including any programs it may call. The amount of data a
> transaction is allowed to load is capped, and if it exceeds that limit, loading
> is aborted. This functionality is already implemented in the validator. The
> purpose of this SIMD is to explicitly define how transaction size is calculated.
I think this should be more precise here: "transaction size" => "transaction account data size" or "transaction loaded data size".
> enshrine the current Agave behavior in the protocol. This is undesirable because
> the current behavior is highly idiosyncratic, and LoaderV3 program sizes are
> routinely undercounted.
> * Builtin programs are backed by accounts that only contain the program name as
It's also in the works to make (some/most) builtins just normal programs, so adding any special cases here would probably be nullified in the future anyway.
> ## Backwards Compatibility
>
> Transactions that call LoaderV3 programs via CPI and are extremely close to the
Do you happen to know if there are any such programs on mnb?
I'll edit this to be clearer, but it's a property of the transaction, not the program.

For instance, if I have a transaction that normally uses 63 1 MiB accounts and a 64th account which is a 1 MiB program used for CPI, right now (ignoring loader sizes as not relevant) the total transaction size would be 63 MiB + 36 bytes, because we incorrectly don't charge for the program data. With this change, the calculated transaction size would become 63 MiB + 1 MiB + 36 bytes and the transaction would fail.

It seems extremely unreasonable to me that any legitimate transaction would actually use >60 MiB of data, but I don't know how we would be able to find that out.
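A quick sketch of that arithmetic, using hypothetical account sizes purely for illustration:

```rust
// Hypothetical illustration of the example above: 63 one-MiB data accounts plus
// a 1 MiB LoaderV3 program that is only invoked via CPI.
const MIB: u64 = 1024 * 1024;

fn main() {
    let data_accounts = 63 * MIB; // 63 accounts of 1 MiB each
    let program_account = 36;     // the LoaderV3 program account itself
    let programdata = MIB;        // programdata behind the CPI'd program

    // Current behavior: programdata of a CPI-only program is not charged.
    let current = data_accounts + program_account;
    // Proposed behavior: programdata is always charged for LoaderV3 programs.
    let proposed = data_accounts + program_account + programdata;

    println!("current:  {current} bytes");  // 63 MiB + 36
    println!("proposed: {proposed} bytes"); // 64 MiB + 36, which exceeds the 64 MiB default limit
}
```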
Shoutout to tao for help with metrics. Looks like in the past 30 days, grouped by hour, we saw one max size of 110MB (?!), one 65MB, one 57MB, three in the 40s, and everything else 20s and below. The change seems safe to me; this shouldn't break any real workflows. For a point of comparison, jupiter v6 is 3MB, so with a 30MB transaction you would need to CPI into a dozen different jupiter-sized programs to even be near the limit.
@brooksprumo adding you as a second reviewer since you've done research on transaction sizes.

@topointon-jump I would also add you, but it doesn't seem like GitHub lets me add people not in the suggested list (unless this ping fixes that).
Looks good to me. Please consider this approval as "soft", since I am not an expert in the whole runtime code. Defining and simplifying this counting of account data size is much appreciated.
> * The size of a program owned by LoaderV3 may or may not include the size of its
> programdata depending on how the program account is used on the transaction.
> Programdata is also itself counted if included in the transaction accounts list.
> This means programdata may be counted zero, one, or two times per transaction.
> This means programdata may be counted zero, one, or two times per transaction.
🫠
I added another existing issue in the current implementation to the problem description. The proposal itself is unchanged.
> transaction is allowed to load is capped, and if it exceeds that limit, loading
> is aborted. This functionality is already implemented in the validator. The
> purpose of this SIMD is to explicitly define how loaded transaction data size is
> calculated.
The last 2 sentences here make it sound like the SIMD just documents the existing behavior. Let's change the last sentence so it's clear there is a proposal for a new algorithm for calculating loaded account size.
> once and only once.
> 2. A valid program owned by LoaderV3 also includes the size of its programdata.
> 3. Other than point 2, no accounts are implicitly added to the total data size.
Edge case for consideration: does it matter if the account is used as a top-level or instruction account? Realistically we should not see these if people construct txs reasonably, but if an account is:
- not the fee-payer
- not passed to any instructions
- not used as a program

we could reasonably choose to not load it at all? With the proposed algorithm we would still need to load it in order to find the size.
As written it wouldn't matter; all accounts are treated the same.

But that's an interesting idea. I wasn't thinking about account loading with this SIMD; I was only focused on how to arrive at a correct total given a set of accounts. But you're right, I can't think of a reason why we should load these kinds of accounts at all, other than the fact that we already do it now.

If we do that (I'm in favor), there are several formulas that seem plausible to me:

- Option 1a: sum the sizes of the unique set of accounts: instruction accounts + program IDs + fee-payer. Any account in that set which is a LoaderV3 program account also includes its programdata.
- Option 1b: the same as above, but programdata is only added to the program size if the program is valid for execution in the current slot.
- Option 2: sum the sizes of the unique set of accounts: instruction accounts + fee-payer. Add it to the sum of the sizes of the unique set of program IDs, which include programdata sizes for LoaderV3 program IDs.

I think Option 1 is preferable. Option 2 fails to count CPI programs, but it sidesteps any issues with tombstoned programs because (after fee-only transactions... I say that a lot lately) the sizes don't matter; transactions that use them as program IDs would fail anyway.

Option 1a and 1b behave the same for closed programs, since there is no programdata. They differ in handling programs which fail validation and programs that are changed in-slot. This is especially germane to SIMD-0083, since transactions will be able to mess with programs in the same batch.

All the options can double-count LoaderV3-owned accounts, but I think this is appropriate: the compiled program and the account data, if accessed in an instruction, are basically their own separate things.

I'm undecided between 1a and 1b. Do you have an opinion?
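A minimal sketch of Option 1a, assuming simplified, hypothetical types (`LoadedAccount`, `Instruction`) rather than the real Agave data structures:

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical, simplified types for illustration only.
type Pubkey = [u8; 32];

struct LoadedAccount {
    data_len: u64,
    /// For LoaderV3 program accounts, the byte length of the programdata
    /// account they reference; `None` for every other account.
    loader_v3_programdata_len: Option<u64>,
}

struct Instruction {
    program_id: Pubkey,
    account_keys: Vec<Pubkey>,
}

/// Option 1a: sum the sizes of the unique set of
/// {instruction accounts, program IDs, fee-payer}. Any LoaderV3 program
/// account in that set also contributes the size of its programdata.
fn option_1a_size(
    fee_payer: Pubkey,
    instructions: &[Instruction],
    accounts: &HashMap<Pubkey, LoadedAccount>,
) -> u64 {
    let mut keys: HashSet<Pubkey> = HashSet::from([fee_payer]);
    for ix in instructions {
        keys.insert(ix.program_id);
        keys.extend(ix.account_keys.iter().copied());
    }

    keys.iter()
        .filter_map(|key| accounts.get(key))
        .map(|acct| acct.data_len + acct.loader_v3_programdata_len.unwrap_or(0))
        .sum()
}
```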
The difference between 1a and 1b is just the programdata of valid LoaderV3 programs? A bit of a detail, but can we determine whether it's valid without loading the programdata? In the general case, assuming it's not already in the cache.
No, you need programdata, because once the program account is deployed it never changes.

1a is probably safer than 1b, on reflection. I was thinking 1b is easy because we can just check program cache tombstones (and Firedancer has to have something similar to conform), but the `FailedVerification` case means both our ELF verifiers would need to produce identical results to maintain consensus.
Yeah, 1a seems like the better option to me in that case.
Agreed on 1a. And I think I agree that it's best to double count programdata accounts if they are included in the ix account list since the SVM would effectively keep 2 copies in memory anyways.
> ---
> simd: '0186'
> title: Transaction Data Size Specification
This title reads as if the proposal specifies the size of a serialized transaction rather than the size of the account data loaded for processing a transaction. How about `Transaction Account Data Size Specification`?
Went with `Loaded Transaction Data Size Specification`.
> * The fee-payer.
> 2. Each account's size is determined solely by the byte length of its data prior
> to transaction execution.
> 3. For any `LoaderV3` program account, add the size of the programdata account
nit: On the first read I missed the fact that any loaded account which happened to be a `LoaderV3` program account would have its programdata account size counted as well. I assumed it was instruction programs that would be handled this way. Can you reword slightly to say something like:

> For any loaded account which is identified as a `LoaderV3` program account, add the size of the programdata account it references, regardless of whether it is directly invoked by a transaction instruction.
> 2. Each account's size is determined solely by the byte length of its data prior
> to transaction execution.
Should we include an overhead amount of bytes as well? There is still accounts db / SVM overhead when loading a bunch of accounts with no data. Rent calculations use this constant: `pub const ACCOUNT_STORAGE_OVERHEAD: u64 = 128;`, but the size of `AccountSharedData` is 64 bytes, so that seems like a reasonable value to use.
I feel like we should include everything that is serialized, not just data. This is the first time I'm realizing the checks only count the data field's length. @tao-stones any reason why we shouldn't just make this a `loaded_size` instead of `loaded_data_size`?
IIRC, the original motivation was primarily focused on memory footprint and how that chunk of memory is used; loading from accounts db and serializing wasn't discussed much then. It might make sense now.
How about 96 bytes? From https://github.com/anza-xyz/agave/blob/4e7f7f76f453e126b171c800bbaca2cb28637535/programs/bpf_loader/src/serialization.rs#L417, I see we have to serialize 3 bytes for `is_writable`, `is_signer`, and `is_executable`, 4 bytes for the original data length, 8 bytes each for `data length`, `lamports`, and `rent_epoch`, 32 bytes for the `account_key`, and 32 bytes for the `owner`, and we adjust for alignment to the nearest multiple of 8. cc @Lichtso
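For reference, a quick check of that arithmetic; the field sizes are the ones listed in the comment above, and the round-up-to-8 alignment rule is an assumption:

```rust
// Sketch of the per-account serialized metadata arithmetic discussed above.
fn main() {
    let flags = 3u64;          // is_writable + is_signer + is_executable, 1 byte each
    let original_data_len = 4; // u32
    let data_len = 8;          // u64
    let lamports = 8;          // u64
    let rent_epoch = 8;        // u64
    let account_key = 32;
    let owner = 32;

    let raw = flags + original_data_len + data_len + lamports + rent_epoch + account_key + owner;
    let aligned = (raw + 7) & !7; // round up to the nearest multiple of 8

    println!("raw = {raw}, aligned = {aligned}"); // raw = 95, aligned = 96
}
```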
Yes that is the size of the serialized account metadata. But have you thought about the resize / realloc padding (`MAX_PERMITTED_DATA_INCREASE`)?
Yes, serialization will stay ABIv1 in loader-v4.
ABIv2 is its own SIMD.
Realloc headroom is quite different conceptually from "loaded data" though, isn't it? Memory is reserved in case it's needed, but no data is loaded.
Sounds like we need a (loose?) definition for what "loaded data" is intended to mean. It could be the number of bytes read from accounts db (in that case maybe programdata is only counted once?), or it could be roughly the number of bytes loaded into memory before tx execution. In either case I think it makes sense to include the account metadata overhead of approx. 64 bytes and exclude the overhead of the account key itself and the realloc buffer, since they're not really loaded in any sense.
I'm equally fine counting programdata once or twice, and after this PR lets us refactor, we should only have to actually load it once. My main concerns are that the algorithm is described unambiguously, is very simple to implement correctly, and always counts programdata if you use the program.

Maybe a new definition (counting programdata once) like:

- Given a transaction, take the unique set of account keys which are used as:
  - An instruction account.
  - A program ID for an instruction.
  - The fee-payer.
- For all LoaderV3 program accounts, add the key of the programdata account it references to the set of account keys, if it does not already exist.
- Each account's size is defined as the byte length of its data prior to transaction execution, plus 64 bytes to account for metadata. This is irrespective of how the account is used in the transaction.
- The total transaction loaded account data size is the sum of these sizes.
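A minimal sketch of that candidate definition, again with hypothetical types; the 64-byte metadata overhead is the figure proposed in the discussion, not a settled constant:

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical types; programdata is counted once, each account is charged
// a fixed metadata overhead.
type Pubkey = [u8; 32];

const ACCOUNT_METADATA_OVERHEAD: u64 = 64; // assumption from the discussion above

struct LoadedAccount {
    data_len: u64,
    /// For LoaderV3 program accounts, the programdata account they reference.
    loader_v3_programdata_key: Option<Pubkey>,
}

fn loaded_transaction_data_size(
    referenced_keys: &HashSet<Pubkey>, // instruction accounts + program IDs + fee-payer
    accounts: &HashMap<Pubkey, LoadedAccount>,
) -> u64 {
    // Extend the set with programdata keys of LoaderV3 programs so each
    // programdata account is counted at most once.
    let mut keys = referenced_keys.clone();
    for key in referenced_keys {
        if let Some(account) = accounts.get(key) {
            if let Some(programdata_key) = account.loader_v3_programdata_key {
                keys.insert(programdata_key);
            }
        }
    }

    // Each account contributes its data length plus the fixed metadata overhead.
    keys.iter()
        .filter_map(|key| accounts.get(key))
        .map(|account| account.data_len + ACCOUNT_METADATA_OVERHEAD)
        .sum()
}
```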
I'm also fine either way, with a slight bias to counting it once now if we only need to load it once from accounts-db. Do we agree on 64 bytes of overhead?
> `ComputeBudgetInstruction::SetLoadedAccountsDataSizeLimit` instruction to define
> a data size limit for the transaction. Otherwise, the default limit is 64MiB
Might be worth calling out that the default limit of 64MiB is also the max limit. The limit can only be lowered right now
Suggested change:

> `ComputeBudgetInstruction::SetLoadedAccountsDataSizeLimit` instruction to define
> a lower data size limit for the transaction. Otherwise, the default limit is 64MiB
Just FYI, anza-xyz/agave#1355 attempted to introduce `DEFAULT_LOADED_ACCOUNTS_DATA_SIZE_BYTES`. The leader currently uses the "actual" loaded accounts size after execution in block packing, so a lower DEFAULT is critical atm.
I'm definitely on board with reducing the default, but it should be done separately from this SIMD. After we implement the new algorithm, we can run it against ledger history to collect new data about what transaction sizes would look like with it enabled. Calculated sizes will increase nontrivially, and we don't want to pick a number that will cause significant breakage.

For example, in #1355, 2MiB is suggested as a good number that only causes 5% breakage. However, the jupiter v4 program is 2.5MiB and jupiter v6 is close to 3MiB. This means they must be passing their program as an instruction account, which, due to the current algorithm, means the program is only counted as 36 bytes. It also means that under the new algorithm, which properly counts programs, a 2MiB default limit would break 100% of jupiter transactions.

Once we fix the size calculations, we will be in a better position to judge what the real sizes of transactions actually are.
> 2. Each account's size is determined solely by the byte length of its data prior
> to transaction execution, irrespective of it is used on the transaction.
Suggested change:

> 2. Add each account's size, which is determined solely by the byte length of its data prior
> to transaction execution, irrespective of how it is used on the transaction.
In my head I imagine 1 as defining a list of accounts, 2 and 3 as defining how each one maps to a size, and 4 as saying to sum them, to avoid any ambiguity over how many times programdata is counted.
> 3. For any loaded account identified as a `LoaderV3` program account, add the
> size of the programdata account it references to its own size, irrespective of
> how the program account is used on the transaction.
Suggested change:

> 3. For any loaded account identified as a `LoaderV3` program account, add the
> size of the programdata account it references, irrespective of
> how the program account is used in the transaction.
> 3. For any loaded account identified as a `LoaderV3` program account, add the
> size of the programdata account it references to its own size, irrespective of
> how the program account is used on the transaction.
> 4. The total transaction size is the sum of these sizes.
Suggested change:

> 4. The total transaction loaded account data size is the sum of these sizes.
> 1. Given a transaction, take the unique set of account keys which are used as:
>    * An instruction account.
>    * A program ID for an instruction.
>    * The fee-payer.
Yes, it is generally bad to try to have extra conditions for TX accounts depending on how they are used. That is how we got to the write-lock demotion and the "only top-level instructions count loader-v3 programdata accounts" mess we have.
Cool, I'll revert that change.

Also, I love this SIMD; I was thinking the other week it would be so nice if you could specify accounts you might want to execute but don't need to see instruction data for.
Nice catch!
I've updated the language for the algorithm again, and I think I finally got the declarative rather than procedural description I've been aiming for.
> 1. The set of accounts that determine loaded transaction data size is defined as
> the unique intersection of:
>    * The set of account keys explicitly specified on the transaction,
>    irrespective of how they are used.
>    * The set of programdata accounts referenced by the LoaderV3 program
>    accounts specified on the transaction.
Oh, just realized we should probably also include ALT account sizes as well.
Hmm, but ALTs are not read- or write-locked by the transaction, so they could be extended in parallel and we can't use the actual size. Maybe just use the max size of `LOOKUP_TABLE_META_SIZE + LOOKUP_TABLE_MAX_ADDRESSES * 32`?
Is it possible to know whether an ALT was used at all in transaction processing? My understanding is that resolution happens too early and all the information is erased. I'm not sure if there is an easy solution to this, or if we should defer adding it to transaction data size until after the ALT async execution SIMD changes how that code works.
It's possible, we could simply include that information in static meta and make it available to the account loader. The ALT SIMD changes will most likely only really cover failure cases so I don't think we need to defer this decision.
If we're just using the max size of a table, we don't need to store any additional information in static meta. We have the number of ALTs in the message itself, and I (accidentally) have already exposed getting that information in the `SVMMessage` trait, so we should be able to get it in our specific implementation easily.
(based on discussion we are considering a flat 8192 byte cost for ALT usage, pending feedback from @tao-stones)
> (based on discussion we are considering a flat 8192 byte cost for ALT usage ...

It should be an 8248-byte cost, because there is space in the ALT accounts for 56 bytes of metadata as well.
`8248 * 8 / 32000 = 2 CU`, that doesn't sound bad at all.
Cool, let's go forward with that in the proposal for now then
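A quick sketch of the ALT numbers discussed above; the constants and the CU conversion factor are the ones quoted in the thread, not normative values:

```rust
// Sketch of the ALT arithmetic quoted above. The 8-CU-per-32k conversion
// factor is taken from the comment, not from a normative spec.
const LOOKUP_TABLE_META_SIZE: u64 = 56;
const LOOKUP_TABLE_MAX_ADDRESSES: u64 = 256;

fn main() {
    let max_alt_bytes = LOOKUP_TABLE_META_SIZE + LOOKUP_TABLE_MAX_ADDRESSES * 32;
    println!("flat per-ALT charge: {max_alt_bytes} bytes"); // 8248

    // Rough CU impact using the conversion quoted in the thread.
    let approx_cu = max_alt_bytes * 8 / 32_000;
    println!("approx CU cost: {approx_cu}"); // 2
}
```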
> As a consequence of 1 and 3, for LoaderV3 programs, programdata is counted twice
> if a transaction explicitly references the program account and its programdata
> account. This is done partly for simplicity, and partly to account for the cost
> of maintaining the compiled program in addition to the actual bytes of
> the programdata account.
I think we settled on just counting them once, right?
Yes, I forgot to delete this.
> * We considered loading and counting sizes for accounts on the transaction
> account list which are not used for any purpose. This is the current behavior,
> but there is no reason to load such accounts at all.
This section is no longer relevant
Great to simplify things here! 🎉
> If a transaction exceeds its data size limit, the transaction is failed. Fees
> will be charged once `enable_transaction_loading_failure_fees` is enabled.
Where will this be checked? Do transactions that fail this check make it into blocks? Do you mind clarifying this in the SIMD? 🙏
OK, I updated the wording to:

> If a transaction exceeds its data size limit, a loading failure occurs. This SIMD does not change any aspect of how such a failure is handled. At the time of writing, such a transaction would be excluded from the ledger. When `enable_transaction_loading_failure_fees` is enabled, it will be written to the ledger and charged fees as a processed, failed transaction.

The idea is that we aren't changing any of the logic about how loading works here; the existing flow stays the same (loading happens at the same time in the same place of transaction processing, what kind of error you get if you exceed the limit, how that error is handled, how it's reflected in ledger history, etc.). The only thing that changes is the number of bytes you arrive at and how you determine that number.
@jacobcreech I believe we have consensus.