Feebump algorithm #3
I'm not sure how your proposal helps to mitigate paying too much during a short fee-spike. Selecting an arbitrary subset of blocks to enable fee-bumping doesn't necessarily reduce the likelihood that you are fee-bumping during a fee-spike. Instead, I think we should rely on `estimatesmartfee` with a longer target (if the remaining lock-time is large) so as not to overpay during short fee-spikes. Please correct me if `estimatesmartfee` doesn't work this way.

In the following I define a function that returns the amount to fee-bump by, rather than a boolean, since I wanted to distinguish different cases. I have probably made mistakes with units in fee calculations; it's only pseudocode, and concrete values are only given as an example.

```python
from bitcoinlib import getblock, getblockheight, estimatesmartfee

PANIC_THRESHOLD = 20
ATTACK_THRESHOLD = 5
SMALLEST_FEEBUMP_COIN_VALUE = 20  # 20 sats/vbyte

# Search the persistent data store for the fee-reserve allocated
# to the vault associated with this tx
def get_fee_reserve_for_vault(tx):
    return db.get_fee_reserve_for_vault(tx)

# Get the number of times this tx was censored from a block
def get_attack_score(tx):
    return db.get_attack_score(tx)

# Set the number of times this tx was censored from a block
def set_attack_score(tx, val):
    db.set_attack_score(tx, val)

# Run at each new block connection
def update_feerate(tx, remaining_locktime, height_at_broadcast) -> int:  # Return number of sats to bump by
    # If not in panic mode
    if remaining_locktime > PANIC_THRESHOLD:
        # If our feerate was sufficient for inclusion
        if tx.feerate > sorted([t.feerate for t in getblock(getblockheight()).transactions])[-1]:
            # Allow one block for the tx to propagate to all miners (in case we
            # broadcast just before a new block is mined). Otherwise consider
            # this a censorship or network-layer attack signal
            if getblockheight() - height_at_broadcast != 1:
                set_attack_score(tx, get_attack_score(tx) + 1)
            # Active censorship or network-layer attack
            if get_attack_score(tx) >= ATTACK_THRESHOLD:
                # Use full fee-reserve in an attempt to outbid the attacker
                return get_fee_reserve_for_vault(tx) - tx.feerate
        # Else, our feerate was insufficient for inclusion
        else:
            # Can't determine whether the fee-market increase will persist,
            # so just estimate again and bump if there's a significant
            # difference. To reduce over-sensitivity to short-term spikes,
            # use a longer target if remaining_locktime is large
            if remaining_locktime > 144:  # ~1 day
                df = estimatesmartfee(target=5) - tx.feerate
            elif remaining_locktime > 72:  # ~1/2 day
                df = estimatesmartfee(target=3) - tx.feerate
            else:
                df = estimatesmartfee() - tx.feerate
            # If the difference is significant (assuming our coin creation is reasonable)
            if df >= SMALLEST_FEEBUMP_COIN_VALUE:
                return df
            # Otherwise don't fee-bump
            else:
                return 0
    # If in panic mode
    if remaining_locktime <= PANIC_THRESHOLD:
        # Use the full fee-reserve to maximise the likelihood of inclusion
        return get_fee_reserve_for_vault(tx) - tx.feerate
```
Right, what you describe is basically the same but with less chance of getting confirmed. It's probably cheaper, though. However, your following proposal isn't quite as general as mine above. Given a
Sure. Just a few opportunistic comments on the logic, as it's a good showcase of what you have in mind.

```python
# Active censorship or network-layer attack
if get_attack_score() >= ATTACK_THRESHOLD:
    # Use full fee-reserve in attempt to outbid attacker
    return get_fee_reserve_for_vault(tx) - tx.feerate
```

Interesting. I don't think this is of any value for most attacks, and having this trigger could rather incentivize some other attacks. For the two attacks mentioned in your comment specifically:

I would therefore argue that it is effectively burning money.

```python
# To reduce over-sensitivity to short-term spikes, if remaining_locktime is large,
# use a longer target
```

Oh, you are still bumping at each block. I think it's strictly worse than my initial proposal then, as it increases the load on you and on the network (and makes the heuristics to find out you are the originator of the tx much, much worse).

```python
# If the difference is significant (assuming our coin creation is reasonable)
if df >= SMALLEST_FEEBUMP_COIN_VALUE:
    return df
# Otherwise don't fee-bump
else:
    return 0
```

I would rather fee-bump at any cost if we need to. I.e. if:
1. we need to feebump for the next block,
2. our feerate is less than the next-block feerate,
3. we still have a coin left,

then just feebump, no matter if we overpay.

```python
# If in panic mode
if remaining_locktime <= PANIC_THRESHOLD:
    # Use full fee-reserve to try maximise likelihood of inclusion
    return get_fee_reserve_for_vault(tx) - tx.feerate
```

Same, I think that it's just burning money (when you only have a fee-bumping wallet, every problem looks like a fee-paying one :p). I think we should always feebump to
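The three-condition "feebump at any cost" rule above can be sketched as a predicate. This is a minimal sketch: the function and parameter names are hypothetical, and the feerate inputs are assumed to come from the watchtower's own fee estimator.

```python
def must_feebump(need_next_block, tx_feerate, next_block_feerate, coins_left):
    """Bump whenever (1) we need confirmation in the next block,
    (2) our current feerate is below the estimated next-block feerate,
    and (3) we still have a fee-bumping coin left, regardless of
    whether we overpay."""
    return need_next_block and tx_feerate < next_block_feerate and coins_left > 0
```

For instance, with 3 coins left and a feerate of 10 sat/vb against a next-block estimate of 25 sat/vb, the rule fires; once no coins remain, it cannot.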
That's the point: to smooth out sensitivity to short fee-spikes. Not reacting to a spike == less chance of getting confirmed.
By this definition, isn't the target always 1?

We should also consider that not all deployments will want to optimise for minimum fees on Cancels; some may want to prioritise business continuity (catching the invalid spend, and revaulting quickly so the correct spend can go through asap).
I think making the
Right. I think the detection of an attack is fairly sound. But distinguishing which type of attack, and how best to respond, is an open question.
No, I don't think so either. I think it primarily serves as stronger proof that censorship occurred.
No. Only bumping if the difference in the
"if we need to"... I think the above method specifies this need.
Think about this: 20 or 100 blocks have passed and all our previous "rationalised feebumping methods" have failed. We now have 12 blocks left and the Cancel has still not been mined. Why not use the full per-vault fee-reserve that was allocated to it? The purpose of the buffer is partly for unexpected failures, and at this stage we should optimise for pushing the Cancel through, not for saving moderate amounts on fees.
Whoops. Typo, I meant
Yeah, I should have made that explicit. In practice I also think you misread the suggestion: it's a function of the remaining locktime, not of the CSV. In Python pseudocode:

```python
if current_height >= start_height + CSV/4:
    if tx_feerate < next_block_feerate:
        feebump()
```
Yes, I know.. I meant "you are checking if you need to feebump at each block". I think doing this is more sensible:

```python
if current_height >= start_height + CSV/4:
    if tx_feerate < next_block_feerate:
        feebump(estimatesmartfee(1))
elif never_feebumped:
    target = start_height + CSV/4 - current_height
    feebump(estimatesmartfee(target))
```
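To make the trigger concrete, here is a runnable sketch of when the check above activates (the helper name is hypothetical; only the CSV/4 rule is from the proposal):

```python
def check_active(current_height, start_height, csv):
    """The rule above only starts considering a next-block bump once
    a quarter of the timelock has elapsed since broadcast."""
    return current_height >= start_height + csv / 4

# With CSV=12, checks begin 3 blocks after broadcast;
# with CSV=144, they begin 36 blocks after broadcast.
```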
I don't care much, to be honest; we can put it in, I just don't want us to say it's helping whatsoever. It's basically like saying "Bitcoin mining is centralized and pools get regulated, I'm going to solo mine with my laptop to help censorship resistance because it's the only thing I'm able to do."
Hard to say without knowing the deployment context. If the feebump wallet amounts are negligible compared to the amounts being spent per vault, and the business depends on spending for their operations, I can easily see them prioritizing business continuity (which would require figuring out what went wrong, and who can still be trusted, before allowing managers to continue).
Ok, nice!
I don't understand how so.

```python
if current_height >= start_height + CSV/4:
    if tx_feerate < next_block_feerate:
        feebump(estimatesmartfee(1))
```

If one deployment has CSV = 12 and another has CSV = 144, then one starts to rush (target=1) with 9 blocks remaining while the other still has 108 blocks remaining. If instead we say:

```python
if tx_feerate < next_block_feerate:
    if remaining_locktime >= 72:
        feebump(estimatesmartfee(10))
    elif remaining_locktime >= 24:
        feebump(estimatesmartfee(1))
    else:
        feebump(full_vault_reserve)
```

then we have the same behaviour regardless of the CSV.

Going back to the point about different optimizations for different deployment use-cases, we may add another criterion: "parametrise our algorithm to enable optimisation for either finality or operational costs". This helps us address the inherent trade-off by saying, "you choose what matters to you". So they set
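The tiered thresholds quoted above can be expressed as a small target-selection function. This is a sketch: the `(mode, target)` return shape is illustrative, not from the discussion.

```python
def choose_bump(remaining_locktime):
    """Absolute remaining-locktime tiers from the proposal above:
    a relaxed estimate while >= 72 blocks remain, a next-block
    estimate while >= 24 remain, and the full vault reserve after
    that. Behaviour is the same regardless of the CSV."""
    if remaining_locktime >= 72:
        return ("estimatesmartfee", 10)
    elif remaining_locktime >= 24:
        return ("estimatesmartfee", 1)
    else:
        return ("full_vault_reserve", None)
```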
```python
if tx_feerate < next_block_feerate:
    if remaining_locktime >= 72:
        feebump(estimatesmartfee(10))
    elif remaining_locktime >= 24:
        feebump(estimatesmartfee(1))
    else:
        feebump(full_vault_reserve)
```

Then you try generalising it and end up with my first message :p.
You don't want to have the same actually, that was my point:
With a larger CSV, you allocate 1/4 of its capacity to cost optimization and 3/4 to increasing your probability of getting confirmed. So yeah, it's arbitrary and not very general, but we can probably tweak it..
ACK. So we'd end up with:

```python
if bypass_cost_optimization or current_height >= start_height + CSV/4:
    if tx_feerate < next_block_feerate:
        feebump()
```

Or, even more general, with a variable like `COST_OPTI_FACTOR`:

```python
if current_height >= start_height + (CSV / (1 + (COST_OPTI_FACTOR * CSV))) // 1:
    if tx_feerate < next_block_feerate:
        feebump()
```
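To see what the `COST_OPTI_FACTOR` formula does numerically, here is a small runnable sketch (the helper name is hypothetical; the factor values are just examples):

```python
def activation_offset(csv, cost_opti_factor):
    """Number of blocks after start_height at which fee-bump checks
    begin, per the formula above. A factor of 0 degenerates to waiting
    out the whole CSV; larger factors start checking earlier."""
    return (csv / (1 + (cost_opti_factor * csv))) // 1

# E.g. with CSV=12 and a factor of 0.25, checks begin 3 blocks in.
```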
Also, after thinking more about this (and discussing it with @kloaec and @danielabrozzoni) I don't think it's safe to implement. That's because the factor
For the "feebumping algorithm" we could go for something simpler. The goal is to not overpay for inclusion before the deadline: overpaying fees increases the burden on the WT operator, and indirectly increases risk by reducing the reserves when it isn't necessary. The trade-offs are a potentially increased risk of the transaction not confirming before the timelock matures, and a fund-availability cost to operations. At each block connection where the Cancel transaction is still unconfirmed:
This addresses the trade-offs, as for any short deadline (
Your latest proposal is OK. Interesting that it doesn't have any "panic mode" functionality. If the vault is high-value, it would be rational to use significantly more than the

Counter-intuitively, it may be that having a "panic mode" decreases the likelihood of early confirmations of Cancel transactions. So I'm starting to think something like you've proposed is ideal.
Kevin's input: waiting more might actually make us overpay more if we miss a low fee period at the time of broadcast because we underpaid.
Jacob points out that it's very unlikely to happen. TL;DR: might not be worth the complexity.
- Goals: adapt to increasing feerates to ensure Cancel confirmation.
- Constraints: don't overpay on fees, don't create too much risk.
- Trade-off: overpayment <---> finality (risk & continuity).
- Miner risk: avoiding processing Cancels because replacement transactions will have significantly higher fees.
- Time-lock safety (CSV length) is not necessarily how long managers want to wait for confirmation.
- Replace at each block: sensitive to feerate spikes & always incurs a replacement fee.
- `estimatesmartfee` doesn't consider feerate trends, so waiting to replace exposes the WT to feerate increases as well as decreases. It's not necessarily true that the feerate will decrease, so waiting is not necessarily good. Waiting through high-volatility events could be beneficial if they can be accurately identified and there is still sufficient time before time-lock expiry.
Before even considering if we should based on current fee estimates, and if so by how much we should, we need to decide whether to feebump in the first place.
A quick answer is "lol, every time the fee estimation is updated (each block connection), what else" but this conservative answer is really expensive in most cases. What if:
We would in the above example feebump at an insane feerate for no good reason (feerate would turn back to normal after the 3 blocks connections and we'd have a day anyways).
However, we must for sure update our feerate (if needed) each time the fee estimates change when we have very few blocks left! In other words: start lazily, then enter panic mode.
So here is my simple suggestion:
with `X` being IMO well set to `6`.
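A minimal sketch of the "start lazily, then enter panic mode" split. Only the X=6 value is from the suggestion above; the function shape, and the assumption that X is a threshold on remaining blocks, are illustrative:

```python
X = 6  # panic threshold suggested above

def feebump_mode(remaining_locktime, x=X):
    """Start lazily, then panic: only re-evaluate fee estimates at
    every block connection once few blocks remain before timelock
    maturity."""
    return "panic" if remaining_locktime <= x else "lazy"
```

E.g. with 100 blocks left we stay lazy; with 4 blocks left we panic.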
We could further tweak the formula to exponentially decrease the threshold for high CSVs, thereby retaining the "additional breathing room" gained by the conservative choice of a longer CSV:
This enters "panic mode" depending on a fraction of the CSV, but is exponentially decreasing (you won't try to re-feebump every block if you have 2016 blocks in front of you!)