Get token first trade block API #3197
base: main
Conversation
Reminder: Please update the DB Readme and comment whether migrations are reversible (include rollback scripts if applicable).
Reminder: Please consider backward compatibility when modifying the API specification.
Resolved
@@ -0,0 +1,3 @@
CREATE INDEX orders_sell_buy_tokens ON orders (sell_token, buy_token);
A similar composite index already exists:
create index order_quoting_parameters
on orders (sell_token, buy_token, sell_amount);
But adding a separate (sell_token, buy_token) index still improved the execution time by more than 2x.
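For reference, one way to check which index the planner actually picks for the token filter; this is a sketch only, the token address is an arbitrary example, and the PR's actual query may differ:

-- Illustrative plan check; not the query from this PR.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM orders
WHERE sell_token = '\xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2'  -- example token address (WETH)
   OR buy_token  = '\xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2';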
Could you please check how two indices (one on sell_token and one on buy_token) behave?
There is almost no difference between having the two separate indexes and having no extra indexes at all.
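For context, the two separate single-column indexes discussed above would look roughly like this; the index names are made up for illustration:

-- Illustrative names; not taken from this PR.
CREATE INDEX orders_sell_token ON orders (sell_token);
CREATE INDEX orders_buy_token ON orders (buy_token);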
crates/orderbook/openapi.yml
Outdated
@@ -308,6 +308,29 @@ paths:
description: No liquidity was found.
"500":
description: Unexpected error.
"/api/v1/token/{token}/first_trade_block": |
I think for now I'd prefer not to advertise this endpoint, as external people should not rely on it yet IMO.
Also, creating individual endpoints for every piece of metadata we might want to compute for a token doesn't seem so nice.
Maybe having a single endpoint that acts as a catch-all for different metadata would be nicer?
Maybe having a single endpoint that acts as a catch-all for different metadata would be nicer?
Yeah, makes sense. Does something like the following look good?
GET /api/v1/token/{token}/metadata?fields=first_trade_block,native_price
I like the idea, but I think we should not introduce this filtering mechanism unnecessarily. If this endpoint ends up being used a lot and some fields are very expensive to compute, we can introduce it then IMO.
Ok, reworked. Now it returns token metadata.
@@ -0,0 +1,3 @@
CREATE INDEX orders_sell_buy_tokens ON orders (sell_token, buy_token);
I think we might have enough indices already to write a fast query for this request.
This reverts commit 20a9984.
Seems like you put a lot of thought into the query so I assume it's good. Not that it matters too much given that this endpoint is currently just intended to be used with the alerting system.
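For readers without the diff at hand, a first-trade-block lookup could look roughly like the sketch below. It assumes the usual orders/trades layout (orders.uid joined to trades.order_uid, with trades.block_number) and is not necessarily the query used in this PR:

-- Sketch only: table/column names are assumed; the PR's actual query may differ.
SELECT MIN(t.block_number) AS first_trade_block
FROM trades t
JOIN orders o ON o.uid = t.order_uid
WHERE o.sell_token = '\xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2'  -- example token address
   OR o.buy_token  = '\xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2';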
get_native_prices_request().and_then(move |token: H160| {
    let db = db.clone();
    async move {
        Result::<_, Infallible>::Ok(
This code is unnecessarily hard to read IMO. If you split it up you can make it easier to parse:
let result = db.token_metadata(&token).await;
let response = match result {
    Ok(metadata) => reply::with_status(reply::json(&metadata), StatusCode::OK),
    Err(err) => {
        tracing::error!(?err, ?token, "Failed to fetch token's first trade block");
        crate::api::internal_error_reply()
    }
};
Result::<_, Infallible>::Ok(response)
Also, I don't think the .context() does anything here since the error gets printed immediately after adding the context.
Description
This is a pre-requisite for improving the internal buffers at risk alert by eliminating false positives. The idea is to provide the token's first trade timestamp so https://github.com/cowprotocol/tenderly-alerts/ can then decide whether the token should be ignored. Currently, there is no way to return the timestamp, since the order_events table gets cleaned up periodically. Technically, it can be used to get a timestamp, but the result will be useless if a token was deployed more than 30 days ago and wasn't traded for the last 30 days. Instead, the block number is returned, so Tenderly RPC must be used to fetch the block's timestamp. Otherwise, this would require access to an archive node from the orderbook side.
Changes
For some reason, this query takes around 20s on mainnet-prod for moderately popular tokens. Unfortunately, this couldn't be reproduced locally; I assume it is related to the available resources on prod. I added indexes that improved the query speed by ~2.5x.
Caching
To avoid querying the DB for the same tokens too often, I would consider introducing caching on the NGINX side rather than in memory, since we often have multiple orderbook instances. Also, the first trade block never changes and can be kept in the NGINX cache forever. Errors won't be cached. Requires an infra PR.
How to test
The query was tested on prod and locally. This would require further testing on prod by collecting metrics and adjusting resources, potentially work_mem, max_parallel_workers_per_gather, etc.
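If tuning turns out to be necessary, a session-level experiment with the parameters mentioned above could look like the following; the values are placeholders rather than recommendations:

-- Placeholder values for experimentation only; production settings belong in infra configuration.
SET work_mem = '64MB';
SET max_parallel_workers_per_gather = 4;
-- Re-run the query under EXPLAIN (ANALYZE) afterwards to compare timings.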