
CachedTokenInfoFetcher hogs cache Mutex while requests are in flight #1923

Closed
fleupold opened this issue Oct 6, 2023 · 1 comment · Fixed by #1931
fleupold commented Oct 6, 2023

The CachedTokenInfoFetcher holds on to a mutex while waiting for an RPC roundtrip to complete:

```rust
impl TokenInfoFetching for CachedTokenInfoFetcher {
    async fn get_token_infos(&self, addresses: &[H160]) -> HashMap<H160, TokenInfo> {
        let mut cache = self.cache.lock().await;
        // Compute set of requested addresses that are not in cache.
        let to_fetch: Vec<H160> = addresses
            .iter()
            .filter(|address| !cache.contains_key(address))
            .cloned()
            .collect();
        // Fetch token infos not yet in cache.
        if !to_fetch.is_empty() {
            let fetched = self.inner.get_token_infos(to_fetch.as_slice()).await;
            // ...
```

This can significantly delay price estimates if one or more of them require fetching token info from the RPC node. I reproduced this locally and saw significant delays in sending out the Paraswap requests (compared to the point at which the price estimation competition started). If a token has no decimals, the result isn't cached and will thus repeatedly cause a roundtrip to the node.

This is very likely the root cause of the high p95 latencies we have seen for the Paraswap and Quasimodo price estimators (as those are the only ones using token info).

Solution

We should release the lock while the request is in flight. To avoid request duplication, we can use the RequestSharing abstraction and split the list of token infos to be fetched into individual requests (they will get batched into a single RPC request in our BatchedTransport implementation).

@fleupold fleupold self-assigned this Oct 8, 2023

fleupold commented Oct 9, 2023

#1931
