The CachedTokenInfoFetcher holds on to a mutex while waiting for an RPC roundtrip to complete

services/crates/shared/src/token_info.rs
Lines 86 to 100 in f9f5efd
```rust
// Compute set of requested addresses that are not in cache.
let to_fetch: Vec<H160> = addresses
    .iter()
    .filter(|address| !cache.contains_key(address))
    .cloned()
    .collect();
// Fetch token infos not yet in cache.
if !to_fetch.is_empty() {
    let fetched = self.inner.get_token_infos(to_fetch.as_slice()).await;
    // ...
```
This can significantly delay price estimates if one (or more) of them requires fetching token info from the RPC node. I reproduced this locally and observed significant delays in sending out the Paraswap requests (measured from the point at which the price estimation competition started). If a token has no decimals, it is not cached and will therefore cause a roundtrip to the node on every request.

This is very likely the root cause of the high p95 latencies we have seen for the Paraswap and Quasimodo price estimators, as those are the only ones using token info.
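To make the contention concrete, here is a minimal, self-contained sketch of what holding a tokio::sync::Mutex across an await does to concurrent callers. This is illustrative only, not the actual CachedTokenInfoFetcher code, and it assumes the tokio crate with the full feature set:

```rust
use std::{sync::Arc, time::{Duration, Instant}};
use tokio::sync::Mutex;

// Stand-in for the cached fetcher: the lock is taken before the slow call and
// only released when the whole function returns.
async fn get_token_infos(cache: Arc<Mutex<Vec<u64>>>) {
    let mut cache = cache.lock().await;               // lock acquired ...
    tokio::time::sleep(Duration::from_secs(2)).await; // ... held across the simulated RPC roundtrip ...
    cache.push(42);
}                                                     // ... and released only here

#[tokio::main]
async fn main() {
    let cache = Arc::new(Mutex::new(Vec::new()));
    let start = Instant::now();
    // Two concurrent "price estimators": the second one is serialized behind the
    // first one's mutex, so this takes ~4s instead of ~2s.
    tokio::join!(get_token_infos(cache.clone()), get_token_infos(cache.clone()));
    println!("elapsed: {:?}", start.elapsed());
}
```

This is exactly the pattern in the snippet above: every concurrent estimate that needs token info queues up behind whichever call is currently waiting for the node.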
Solution

We should release the lock while the request is in flight. To avoid request duplication, we can use the RequestSharing abstraction and split the list of token infos to be fetched into individual requests (they will be batched into a single RPC request by our BatchedTransport implementation).
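A rough sketch of the lock handling, under assumptions: H160 and TokenInfo are simplified stand-ins, the trait is reduced to a single method, the RequestSharing/BatchedTransport plumbing is only indicated by comments, and the tokio and async-trait crates are available. It is not the actual implementation:

```rust
use std::collections::HashMap;
use tokio::sync::Mutex;

// Simplified stand-ins for the real types; only the lock scoping matters here.
type H160 = [u8; 20];
#[derive(Clone)]
struct TokenInfo {
    decimals: Option<u8>,
}

#[async_trait::async_trait]
trait TokenInfoFetching: Send + Sync {
    async fn get_token_infos(&self, addresses: &[H160]) -> HashMap<H160, TokenInfo>;
}

struct CachedTokenInfoFetcher {
    inner: Box<dyn TokenInfoFetching>,
    cache: Mutex<HashMap<H160, TokenInfo>>,
}

#[async_trait::async_trait]
impl TokenInfoFetching for CachedTokenInfoFetcher {
    async fn get_token_infos(&self, addresses: &[H160]) -> HashMap<H160, TokenInfo> {
        // 1. Hold the lock only long enough to find the addresses that are missing.
        let to_fetch: Vec<H160> = {
            let cache = self.cache.lock().await;
            addresses
                .iter()
                .copied()
                .filter(|address| !cache.contains_key(address))
                .collect()
        }; // lock dropped here, before any RPC roundtrip

        // 2. Fetch the missing infos without holding the lock. In the real fix each
        //    address would go through RequestSharing so that concurrent callers await
        //    the same in-flight future, and the per-address requests would still end
        //    up in a single RPC call via the BatchedTransport.
        let fetched = if to_fetch.is_empty() {
            HashMap::new()
        } else {
            self.inner.get_token_infos(&to_fetch).await
        };

        // 3. Re-acquire the lock briefly to store the results and build the reply.
        let mut cache = self.cache.lock().await;
        cache.extend(fetched);
        addresses
            .iter()
            .map(|address| {
                let info = cache
                    .get(address)
                    .cloned()
                    .unwrap_or(TokenInfo { decimals: None });
                (*address, info)
            })
            .collect()
    }
}
```

Splitting the fetch into per-address requests is what lets RequestSharing deduplicate work between concurrent callers, while the BatchedTransport still collapses them into a single RPC roundtrip.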