We're doing the regex filtering client-side, and we're doing it optimistically: when an asset has more than 10000 timeseries attached (the limit hard-coded in our code), anything beyond that is never considered.
A quick solution is to paginate through all timeseries for the given asset instead of giving up after the first 10000.
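A minimal sketch of what that could look like, assuming a page-based list call that returns `{ items, nextCursor }`. The names below (`Timeseries`, `Page`, `fetchPage`) are placeholders, not the actual SDK types or request shape:

```ts
interface Timeseries {
  name: string;
}

interface Page<T> {
  items: T[];
  nextCursor?: string;
}

// Hypothetical wrapper around whatever list endpoint we call today.
type FetchPage = (limit: number, cursor?: string) => Promise<Page<Timeseries>>;

// Follow cursors until the API stops returning one, instead of stopping
// after the first 10000 items.
async function fetchAllTimeseries(fetchPage: FetchPage): Promise<Timeseries[]> {
  const all: Timeseries[] = [];
  let cursor: string | undefined;
  do {
    const page = await fetchPage(1000, cursor);
    all.push(...page.items);
    cursor = page.nextCursor;
  } while (cursor);
  return all;
}

// The regex filtering stays client-side, but now runs over the complete set.
const filterByName = (ts: Timeseries[], pattern: RegExp) =>
  ts.filter(t => pattern.test(t.name));
```

The requests here are still sequential, so the total time grows linearly with the number of pages; that is the performance concern raised below.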
That's definitely something we would want, but I'm worried about how long it would take, especially if we include subassets and then fetch all the timeseries under a large root node. I have some of the pagination in place for the v1 update, but because of performance issues I decided to keep limiting those calls to the first 10000 timeseries, as before.
Being able to fetch all cursors would help introduce a bit more concurrency, but I still expect some issues, since browsers limit the number of requests they keep open at a time.
So we have partitions now, and considering the limit on how many connections a browser can keep open, I'd suggest splitting the listing into 4 partitions and fetching them concurrently, as sketched below.
I don't really like that we do this stuff in the frontend anyway...
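A rough sketch of the partition idea: split the listing into 4 partitions ("1/4" through "4/4") and fetch them concurrently, paging each partition to the end with its own cursor. The `FetchPartitionPage` signature and the "i/n" partition string are assumptions here; the exact parameter format depends on the API version we target:

```ts
interface Timeseries {
  name: string;
}

interface Page<T> {
  items: T[];
  nextCursor?: string;
}

// Hypothetical: one request for a page of the given partition, e.g. "2/4".
type FetchPartitionPage = (
  partition: string,
  cursor?: string
) => Promise<Page<Timeseries>>;

// Page a single partition to exhaustion with its own cursor.
async function fetchPartition(
  fetchPage: FetchPartitionPage,
  partition: string
): Promise<Timeseries[]> {
  const items: Timeseries[] = [];
  let cursor: string | undefined;
  do {
    const page = await fetchPage(partition, cursor);
    items.push(...page.items);
    cursor = page.nextCursor;
  } while (cursor);
  return items;
}

// Four partitions in flight stays under typical browser per-host
// connection limits (usually 6 concurrent requests over HTTP/1.1).
async function fetchAllPartitioned(
  fetchPage: FetchPartitionPage,
  partitions = 4
): Promise<Timeseries[]> {
  const results = await Promise.all(
    Array.from({ length: partitions }, (_, i) =>
      fetchPartition(fetchPage, `${i + 1}/${partitions}`)
    )
  );
  return results.flat();
}
```

This cuts wall-clock time roughly by the partition count compared to a single sequential cursor walk, while keeping the request fan-out bounded.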
Thoughts @hulien22 ?