[Batch Rewards] Orderbook Query for Batch Rewards #26
Conversation
Left one comment for discussion.
Other than that, LGTM!
LG, I am just wondering about the `log_index`.
```sql
    fee,
    -- auction_transaction
    auction_id
FROM settlement_observations so
```
I guess this is a question for the PR that introduced the settlement_observations
table, but just wanted to clarify here. Does this table also contain the settlements that reverted?
Not expected to, with the current backend infrastructure: there would be nothing to observe. But this is a great point. We could get that information from Dune, but I think there is a tendency to try to compute this entirely from our own DB. In that case one would have to monitor all calls to the contract (i.e. failed transactions), and this would have to be part of the backend services, which it definitely is not currently.
If there are parts of the table populated that don't join, we would have some partial information but could not link it back to block numbers (which we will need). This means the orderbook will need to index failed settlements too.
> but I think there is a tendency to try to compute this entirely from our own DB. In that case one would have to monitor all calls to the contract (i.e. failed transactions) and this would have to be part of the backend services, which it definitely is not currently.
Yes, I would expect that we do everything from our own db. And to me, although I might be completely wrong here, it makes sense that for every auction id that has a winning solution, we store all onchain details around it, if any. This would imply storing a tx hash (and whatever relevant information we need) even in the case of a revert, as this maps our offchain auction to the corresponding onchain activity, if any.
For reverted transactions we would populate only `block_number`, `effective_gas_price`, and `gas_used`, right?
Is this a MUST HAVE for CIP20? The backend relies on emitted events, so currently there is no way to index reverted transactions; implementing this would probably delay CIP20 by a few days. I would rather do it as a follow-up.
As discussed earlier today, it would suffice to add a column `block_deadline` to the `settlement_scores` table, from which we can infer these non-existent values and still be able to assign a block number to the batch.
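A minimal sketch of that fallback, assuming hypothetical column names (`block_deadline` on `settlement_scores`, `block_number` and `auction_id` on `settlement_observations`):

```sql
-- Hedged sketch: schema/column names are assumptions, not the final design.
-- For reverted (unindexed) settlements no observation row exists, so we
-- fall back to the auction's block_deadline as its assigned block number.
SELECT
    ss.auction_id,
    COALESCE(so.block_number, ss.block_deadline) AS block_number
FROM settlement_scores ss
LEFT OUTER JOIN settlement_observations so
    ON ss.auction_id = so.auction_id;
```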
Ok so this query now contains all the requirements (assuming several things about the backend).
Note that this query may not actually be fully syntactically correct, but it can be fixed once this data exists. Here is a test query on a Postgres DB extracting the last element from a byte array:

```sql
drop table if exists test;
create table test
(
    my_array bytea[]
);
insert into test (my_array)
values (ARRAY ['\x1212'::bytea, '\x2312'::bytea, '\x3412'::bytea]);

select
    my_array,
    array_length(my_array, 1) as arr_length,
    my_array[array_length(my_array, 1)] as last_element
from test;
```
As part of #27 we implement the raw orderbook query that will be used to extract the data to be synced with Dune. The follow-up to this PR will implement the Python script that executes this query and transforms the results into the JSON files uploaded to Dune's AWS bucket.
Solvers are expected to change their rewards to the following, where

    observedQuality = Surplus + Fee

Furthermore, the reward per batch is planned to be capped to the interval

    [-E, E + executionCosts]  (E = 0.01 ETH)
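As a hedged sketch of how that cap could look in SQL (column names and the `batch_rewards` relation are assumptions; amounts in wei, so E = 0.01 ETH = 10^16 wei):

```sql
-- Hedged sketch with hypothetical column names; all amounts in wei.
-- Raw payment is observedQuality - referenceScore = (surplus + fee) - reference_score,
-- clamped to the interval [-E, E + execution_cost] with E = 10^16 wei (0.01 ETH).
SELECT
    auction_id,
    LEAST(
        GREATEST(surplus + fee - reference_score, -10000000000000000),
        10000000000000000 + execution_cost
    ) AS capped_payment
FROM batch_rewards;  -- hypothetical joined result of the tables above
```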
This query joins several tables in order to provide all terms required to evaluate the reward (namely surplus, fee, execution_cost, reference_score). winning_score is included too; it is not strictly necessary, but adds transparency. Furthermore, it is expected that if the total allocated rewards for an accounting period are not exhausted, the remaining funds are distributed according to solver participation (also a field returned by this query).
For this, we might query something like the following (but this can come later).
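One hedged sketch of such a participation count (table and column names are purely hypothetical):

```sql
-- Hedged sketch: hypothetical table and column names.
-- Count, per solver, the auctions it participated in during an accounting
-- period, as a basis for distributing the remaining funds pro rata.
SELECT
    solver,
    COUNT(DISTINCT auction_id) AS participation
FROM participation_data  -- hypothetical
WHERE block_number BETWEEN :period_start_block AND :period_end_block
GROUP BY solver;
```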
Trying to add @fhenneke as a reviewer to this, but can't.
Note that this is all based on the assumption that the following tables exist in the orderbook (introduced in cowprotocol/services#1166)