Cached feeds #161
Conversation
It was actually good because there was a conflict from the windows patches stuff.
# count: int = 20,  # NOTE: any more and we'll overrun underlying buffer
count: int = 6,  # NOTE: any more and we'll overrun the underlying buffer
count: int = 10,  # NOTE: any more and we'll overrun the underlying buffer
IIRC this was < 6 to avoid TWS throttling on another system I tried.
We might want to include this change as a default?
piker/data/_buffer.py
Outdated
if total_s % delay_s != 0:
    continue

# TODO: numa this!
Ahh right, that was the last thing we should do. Stick this in a new _sampling.py module instead of the current data._buffer, which is a lame name.
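For readers following along, here is a rough, self-contained sketch (not the actual piker code) of what the modulo-based skip in the hunk above amounts to: a single task ticks every ``delay_s`` seconds and only pushes an index update to subscribers whose sample period has elapsed. The ``subscribers`` registry and channel plumbing are illustrative assumptions.

```python
import trio


async def increment_ohlc_buffer(
    subscribers: dict[int, list[trio.MemorySendChannel]],
    delay_s: int = 1,  # base tick period in seconds
) -> None:
    # total seconds elapsed since the sampler started
    total_s = 0

    while True:
        await trio.sleep(delay_s)
        total_s += delay_s

        for period_s, chans in subscribers.items():
            # skip any sample period which hasn't elapsed yet
            # (the ``total_s % delay_s != 0`` check in the hunk above)
            if total_s % period_s != 0:
                continue

            # notify every subscriber registered for this period
            for chan in chans:
                await chan.send(total_s)
```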
subs.remove(ctx)


async def sample_and_broadcast(
Here I think isolating these shm sampling routines makes things much easier for code readers.
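To make that concrete, a hedged sketch of what an isolated, feed-agnostic broadcast routine could look like; the real ``sample_and_broadcast()`` in the data sub-package differs in detail, and the ``subs`` list and ``send()`` interface here are assumptions:

```python
from typing import Any


async def sample_and_broadcast(
    subs: list,             # subscriber streams/contexts for one symbol
    quote: dict[str, Any],  # latest quote pulled off the brokerd feed
) -> None:
    # iterate a copy so we can prune dead subscribers while looping
    for sub in subs.copy():
        try:
            await sub.send(quote)
        except Exception:
            # subscriber disconnected; stop broadcasting to it
            # (the ``subs.remove(ctx)`` seen in the hunk above)
            subs.remove(sub)
```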
Move all feed/stream agnostic logic and shared mem writing into a new set of routines inside the ``data`` sub-package. This lets us move toward a more standard API for broker and data backends to provide cache-able persistent streams to client apps.

The data layer now takes care of:
- starting a single background brokerd task to start a stream for a symbol if none yet exists, and registering that stream for later lookups
- always re-using the existing broker backend actor when it can be found in a service tree
- synchronizing with the brokerd stream's startup sequence, oriented around fast startup concurrency such that client code gets a handle to historical data and the quote schema as fast as possible
- delegating historical data loading to the backend more formally by starting a ``backfill_bars()`` task
- writing shared mem in the brokerd task and only destructing it once requested either from the parent actor or further clients
- fully de-duplicating stream data by using a dynamic pub-sub strategy where new clients register for copies of the same quote set per symbol

This new API is entirely working with the IB backend; others will need to be ported. That's to come shortly.
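As a rough illustration of the caching strategy described above (names like ``_feeds``, ``maybe_open_feed()`` and ``brokerd_stream_task()`` are made up for this sketch, not piker's actual API): the first client for a symbol spawns a single background stream task, and later clients simply re-use it.

```python
import trio

# symbol -> already-running cached stream (illustrative registry)
_feeds: dict[str, trio.MemoryReceiveChannel] = {}


async def brokerd_stream_task(symbol: str, to_clients: trio.MemorySendChannel):
    # stand-in for the real brokerd task: it would write the shm buffer
    # and forward live quotes to all registered clients
    async with to_clients:
        while True:
            await trio.sleep(1)
            await to_clients.send({'symbol': symbol, 'last': 0.0})


async def maybe_open_feed(
    symbol: str,
    nursery: trio.Nursery,
) -> trio.MemoryReceiveChannel:
    try:
        # re-use the existing background stream if one is already running
        return _feeds[symbol]
    except KeyError:
        send, recv = trio.open_memory_channel(100)
        # first client: start the single background task for this symbol
        nursery.start_soon(brokerd_stream_task, symbol, send)
        _feeds[symbol] = recv
        return recv
```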
Avoid bothering with a trio event and expect the caller to do manual shm registering with the write loop. Provide OHLC sample period indexing through a re-branded pub-sub func ``iter_ohlc_periods()``.
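A guess at the shape of that pub-sub func, purely from the commit message above (the ``_subscribers`` registry and channel plumbing are assumptions): a consumer registers under a sample period, async-iterates index updates, and un-registers itself manually when done.

```python
import trio

# sample period (s) -> send channels of registered consumers (illustrative)
_subscribers: dict[int, list[trio.MemorySendChannel]] = {}


async def iter_ohlc_periods(delay_s: int):
    '''Async-iterate shm index updates for the given OHLC sample period.'''
    send, recv = trio.open_memory_channel(1)
    subs = _subscribers.setdefault(delay_s, [])
    subs.append(send)
    try:
        async with recv:
            async for index in recv:
                yield index
    finally:
        # manual un-registration once the consumer goes away
        subs.remove(send)
```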
Copy of #158 but merging into master 😂

This is a rework of the data feed machinery to allow persistent real-time feeds to remain running after being created by some data client, where that client detaches but we want to keep the feed live for later use.
For now this means spawning a background feed task which also writes to shared memory buffers that can be accessed by any (returning) data client. This has the immediately useful benefit of allowing super fast chart loading for pre-existing feeds.
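For example, a returning client attaching to a cached feed might look roughly like the following; the import path, signature and ``feed.stream`` attribute are assumptions based on the description in this PR, and a running pikerd/brokerd setup is required:

```python
import trio
from piker.data import open_feed


async def main():
    # if a brokerd stream for this symbol already exists in the service
    # tree, this should attach to its shm buffer almost instantly
    async with open_feed('kraken', ['XBTUSD']) as feed:
        async for quote in feed.stream:
            print(quote)


trio.run(main)
```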
Try it out by,
Both the ib and kraken backends have been ported thus far. Questrade can likely come in a new PR since the single-connection-multi-symbol-subscription style can also be accomplished with kraken's websocket feed, and we'll probably want to iron out the details of how to request such things with both backends in view.
Some other stuff got slapped in here, namely engaging the Qt hidpi detection on Windows (which reportedly works?).