Cancel RefundRequested transactions #198
Conversation
pallets/payment/src/lib.rs
Outdated
/// be executed and will process them.
fn on_idle(now: T::BlockNumber, mut remaining_weight: Weight) -> Weight {
    while remaining_weight > T::WeightInfo::cancel() {
        let task = ScheduledTasks::<T>::iter().next();
Do you know if every call to `next` is a new read to storage? If that's the case, we probably need to subtract it from the remaining weight. I also wonder if there's a better way to go through the list of tasks, since we could potentially spend quite some time every block just skipping tasks 🤔
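Not the pallet code itself, but a minimal plain-Rust sketch of the accounting being suggested here, assuming every `iter().next()` costs one storage read (the weight values and names are made up):

```rust
// Simplified model of the on_idle loop: every `iter().next()` is assumed to
// cost one storage read, so that read weight is deducted on every iteration,
// not only when a task is actually cancelled. Numbers are illustrative.
type Weight = u64;

const READ_WEIGHT: Weight = 25;    // assumed cost of one storage read
const CANCEL_WEIGHT: Weight = 100; // assumed cost of executing one cancel

fn on_idle(mut remaining_weight: Weight, mut tasks: Vec<u32>) -> Weight {
    let initial = remaining_weight;
    // Stop as soon as we can no longer afford both the read and the cancel.
    while remaining_weight >= READ_WEIGHT + CANCEL_WEIGHT {
        // Charge the read even if the task turns out not to be executable.
        remaining_weight -= READ_WEIGHT;
        match tasks.pop() {
            Some(_task) => {
                // Execute the cancel and charge its weight.
                remaining_weight -= CANCEL_WEIGHT;
            }
            None => break, // nothing left to process
        }
    }
    initial - remaining_weight // weight actually consumed
}

fn main() {
    let consumed = on_idle(500, vec![1, 2, 3, 4, 5]);
    println!("consumed weight: {consumed}");
}
```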
Sorry for being indecisive, but I'm thinking maybe we should mix `on_initialize` and `on_idle`? In a not-so-good case, imagine merchants have a week or a month to react to the refund, and suppose there's a bunch of releases queued but only the few that can actually be executed are at the end of the iteration, meaning we could often run out of weight before we get to do any refunds.
We could instead have "two queues". The first is a map indexed by "future block number" -> [(from, to), ...(MAX)], where MAX is the bound on the maximum refunds that can be requested in a block, which will likely be a small number; if it gets full we can fail in `request_refund` with a specific error, and it will be up to the front-end to try again and make it nice for the user. This queue is processed in `on_initialize` for each block, where we keep work to a minimum, basically just moving everything in the bounded vec to the second "ready queue", which is the double map you have here. That way, when we iterate here, we don't need to check the block number because every task is ready to be processed. A simplified sketch of the scheme follows below.
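A self-contained model of this two-queue idea (the type names, the bound, and the weight budget are illustrative, not the pallet's actual storage items):

```rust
use std::collections::BTreeMap;

type Account = &'static str;
type BlockNumber = u32;

const MAX_PER_BLOCK: usize = 4; // bound on refunds schedulable for one block

#[derive(Default)]
struct Queues {
    // First queue: future block number -> bounded list of (from, to) pairs.
    scheduled: BTreeMap<BlockNumber, Vec<(Account, Account)>>,
    // Second "ready" queue: every entry here may be processed immediately.
    ready: Vec<(Account, Account)>,
}

impl Queues {
    // request_refund: fail with a specific error if the per-block bound is full.
    fn request_refund(&mut self, at: BlockNumber, from: Account, to: Account) -> Result<(), &'static str> {
        let slot = self.scheduled.entry(at).or_default();
        if slot.len() >= MAX_PER_BLOCK {
            return Err("RefundQueueFull"); // front-end retries and handles it nicely
        }
        slot.push((from, to));
        Ok(())
    }

    // on_initialize: cheap - just move everything due at `now` to the ready queue.
    fn on_initialize(&mut self, now: BlockNumber) {
        if let Some(due) = self.scheduled.remove(&now) {
            self.ready.extend(due);
        }
    }

    // on_idle: process ready tasks while the budget lasts; no block-number checks needed.
    fn on_idle(&mut self, mut budget: usize) {
        while budget > 0 {
            match self.ready.pop() {
                Some((from, to)) => {
                    println!("cancelling refund {from} -> {to}");
                    budget -= 1;
                }
                None => break,
            }
        }
    }
}

fn main() {
    let mut q = Queues::default();
    q.request_refund(10, "alice", "bob").unwrap();
    q.request_refund(10, "carol", "dave").unwrap();
    q.on_initialize(10); // moves both tasks to the ready queue
    q.on_idle(1);        // only enough weight for one task this block
    q.on_idle(5);        // the rest get processed next time on_idle runs
}
```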
Is the issue the cancel_block timing? Could you take a look at 4d2e0cc? I've reduced it to one DB read and now keep a Vec sorted by cancel_block, so if the first one cannot be cancelled yet we can return right away; that also helps when we lag behind because the blocks are full. I've benchmarked the weights for reading and writing the Attributes storage and will add that to the expected weights if this looks ok.
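Roughly the shape being described, as a standalone sketch rather than the actual code in 4d2e0cc: keep the tasks sorted by `cancel_block`, read the whole list once, and return as soon as the first entry is not yet due (names are illustrative).

```rust
struct ScheduledTask {
    cancel_block: u32,
    from: &'static str,
    to: &'static str,
}

fn process_due(now: u32, tasks: &mut Vec<ScheduledTask>) {
    // Assumes `tasks` is sorted ascending by cancel_block.
    while let Some(first) = tasks.first() {
        if first.cancel_block > now {
            // Earliest task is still in the future, so nothing else is due either.
            return;
        }
        let task = tasks.remove(0);
        println!("cancelling {} -> {} (due at {})", task.from, task.to, task.cancel_block);
    }
}

fn main() {
    let mut tasks = vec![
        ScheduledTask { cancel_block: 5, from: "alice", to: "bob" },
        ScheduledTask { cancel_block: 20, from: "carol", to: "dave" },
    ];
    process_due(10, &mut tasks); // cancels only the first task
}
```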
Added the cost to read/write the scheduled tasks in 75ee30e wdyt?
resolves virto-network/open-runtime-module-library#12