Cancel RefundRequested transactions #198
Do you know if every call to `next` is a new storage read? If that's the case, we probably need to subtract it from the remaining weight. I also wonder if there's a better way to go through the list of tasks, since we could potentially spend quite some time every block just skipping tasks 🤔
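The concern can be sketched in plain Rust (illustrative only, not the pallet's actual code; `READ_WEIGHT` and the tuple shape are assumptions): each `next()` on the iterator is charged as one read against the remaining weight, so even skipped not-yet-due tasks consume budget.

```rust
type Weight = u64;

// Hypothetical benchmarked cost of a single storage read.
const READ_WEIGHT: Weight = 25;

/// Walk the task list the way a storage iterator would: every `next()` is
/// charged as one read against the remaining weight, even for skipped tasks.
fn drain_with_weight(
    tasks: &[(u32, u64)], // (cancel_block, task id), in storage order
    now: u32,
    mut remaining: Weight,
) -> (Vec<u64>, Weight) {
    let mut cancelled = Vec::new();
    for &(cancel_block, id) in tasks {
        if remaining < READ_WEIGHT {
            break; // can't even afford the next read
        }
        remaining -= READ_WEIGHT; // pay for the read whether we act or not
        if cancel_block <= now {
            cancelled.push(id);
        }
        // a skipped task still cost READ_WEIGHT: this is the wasted work
    }
    (cancelled, remaining)
}
```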
Sorry for being indecisive, but I'm thinking maybe we should mix `on_initialize` and `on_idle`? In a not-so-good case, imagine merchants have a week or a month to react to the refund, and suppose there's a bunch of releases queued but only the few that can actually be executed are at the end of the iteration, meaning we can often run out of weight before we get to do any refunds.

We could instead have "two queues": first, a map that indexes by "future block number" -> `[(from, to), ... (MAX)]`, where MAX is the bound representing the maximum refunds that can be requested in a block, likely a small number. If it gets full we can fail in `request_refund` with a specific error; it will be up to the front-end to try again and make it nice for the user. This queue is processed in `on_initialize` for each block, where we keep work to a minimum, basically just moving everything in the bounded vec to the second "ready queue", which is the double map you have here. That way, when we iterate here, we don't need to check the block number, because every task is ready to be processed.
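A minimal model of the two-queue idea, assuming simplified types (`Scheduler`, `MAX_PER_BLOCK`, and the error string are illustrative placeholders, not the pallet's actual storage items):

```rust
use std::collections::{BTreeMap, VecDeque};

// Illustrative bound on how many refunds can be requested per block.
const MAX_PER_BLOCK: usize = 8;

type AccountId = u64;
type BlockNumber = u32;
type Transfer = (AccountId, AccountId); // (from, to)

#[derive(Default)]
struct Scheduler {
    // First queue: "future block number" -> bounded list of transfers.
    pending: BTreeMap<BlockNumber, Vec<Transfer>>,
    // Second "ready" queue: everything here can be processed immediately.
    ready: VecDeque<Transfer>,
}

impl Scheduler {
    /// Models `request_refund`: fails with a specific error when the bucket
    /// for that block is full, leaving the retry to the front-end.
    fn request_refund(&mut self, at: BlockNumber, t: Transfer) -> Result<(), &'static str> {
        let bucket = self.pending.entry(at).or_default();
        if bucket.len() >= MAX_PER_BLOCK {
            return Err("RefundQueueFull");
        }
        bucket.push(t);
        Ok(())
    }

    /// Models `on_initialize`: minimal work, just move the whole bounded vec
    /// for the current block into the ready queue.
    fn on_initialize(&mut self, now: BlockNumber) {
        if let Some(batch) = self.pending.remove(&now) {
            self.ready.extend(batch);
        }
    }

    /// Models `on_idle`: process ready tasks until the weight runs out; no
    /// block-number check is needed because every task here is due.
    fn on_idle(&mut self, mut remaining: u64, cost_per_task: u64) -> Vec<Transfer> {
        let mut done = Vec::new();
        while remaining >= cost_per_task {
            match self.ready.pop_front() {
                Some(t) => {
                    done.push(t);
                    remaining -= cost_per_task;
                }
                None => break,
            }
        }
        done
    }
}
```

Anything not executed in `on_idle` stays in the ready queue for the next block, so lagging behind only delays refunds rather than skipping them.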
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Is the issue the cancel_block timing? Could you take a look at 4d2e0cc? I've reduced it to one DB read, then built a Vec sorted by cancel_block, so if the first one cannot be cancelled we can return them all right away; this also helps when we lag behind because the blocks are full. I've benchmarked the weights for reading and writing the `Attributes` storage, and will add that to the expected weights if this looks ok.
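The sorted-Vec approach could look roughly like this (a sketch with hypothetical names; the real pallet reads and writes the list as a single storage item):

```rust
type BlockNumber = u32;

// Illustrative task shape; field names are assumptions.
#[derive(Clone, Debug, PartialEq)]
struct Task {
    cancel_block: BlockNumber,
    id: u64,
}

/// Takes the whole task list (one storage read in the real pallet), assumed
/// sorted ascending by `cancel_block`. Returns the tasks due now and the
/// untouched tail to write back; if the first task is not yet due, the whole
/// vec comes back unchanged.
fn process_due(mut tasks: Vec<Task>, now: BlockNumber) -> (Vec<Task>, Vec<Task>) {
    let split = tasks
        .iter()
        .position(|t| t.cancel_block > now)
        .unwrap_or(tasks.len());
    let remaining = tasks.split_off(split);
    (tasks, remaining) // (cancel now, write back)
}
```

Because the list is sorted, the early-exit case costs one read and no iteration past the first not-yet-due task.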
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Added the cost to read/write the scheduled tasks in 75ee30e, wdyt?