Cached feeds #161

Merged — 37 commits merged from `cached_feeds` into `master` on Apr 10, 2021
Conversation

goodboy (Contributor) commented on Apr 6, 2021

Copy of #158 but merging into master 😂

This is a rework of the data feed machinery that allows persistent real-time feeds to keep running after the data client that created them detaches, so the feed stays live for later use.

For now this means spawning a background feed task which also writes to shared memory buffers and can be accessed by any (returning) data client. The immediately useful benefit is super fast chart loading for pre-existing feeds.
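The gist of the caching, as a minimal sketch (plain `asyncio` stands in for piker's actual `trio`/`tractor` machinery here, and every name below is made up):

```python
# Illustrative sketch only: piker itself uses trio + tractor and shm
# buffers, not asyncio queues; all names here are hypothetical.
import asyncio
import itertools

# symbol -> {'task': asyncio.Task, 'subs': set[asyncio.Queue]}
_feeds: dict[str, dict] = {}


async def _stream_quotes(symbol: str, subs: set) -> None:
    # stand-in for a real brokerd stream; in this PR the analogous
    # task also writes each tick into a shm OHLC buffer
    for i in itertools.count():
        await asyncio.sleep(0.1)
        quote = {'symbol': symbol, 'last': 100.0 + i}
        for q in subs:
            q.put_nowait(quote)  # fan out one copy per attached client


async def attach_feed(symbol: str) -> asyncio.Queue:
    """Attach to a live feed, spawning its background task only once."""
    feed = _feeds.get(symbol)
    if feed is None:
        subs: set = set()
        feed = {
            'task': asyncio.create_task(_stream_quotes(symbol, subs)),
            'subs': subs,
        }
        _feeds[symbol] = feed
    q: asyncio.Queue = asyncio.Queue()
    feed['subs'].add(q)
    return q


def detach_feed(symbol: str, q: asyncio.Queue) -> None:
    # the stream task keeps running after detach so a returning
    # client can re-attach (and reload history) instantly
    _feeds[symbol]['subs'].discard(q)
```

The key detail is that `detach_feed()` leaves the background task running, which is what makes the re-attach in the steps below feel instant.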

Try it out by:

- in your first terminal, running `pikerd`
- in a 2nd terminal, running `piker -b kraken XBTUSD`
- killing your chart
- re-running the chart command; it should load faster and attach to the existing shm history

Both the ib and kraken backends have been ported thus far.

Questrade can likely come in a new PR since the single-connection, multi-symbol-subscription style can also be accomplished with kraken's websocket feed, and we'll probably want to iron out the details of how to request such things with both backends in view.

Some other stuff got slapped in here, namely engaging the Qt HiDPI detection on Windows (which reportedly works?).

goodboy requested a review from guilledk on April 6, 2021 17:14
goodboy (Contributor, Author) commented on Apr 6, 2021

Was actually good 'cause there was a conflict from the Windows patches stuff.

    # count: int = 20,  # NOTE: any more and we'll overrun underlying buffer
    count: int = 6,   # NOTE: any more and we'll overrun the underlying buffer
    count: int = 10,  # NOTE: any more and we'll overrun the underlying buffer
goodboy (Contributor, Author) commented:

iirc this was < 6 to avoid TWS throttling on another system I tried.

We might want to include this change as the default?
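If we do, a sketch of baking it in might look like this (the `backfill_bars()` name appears in the commit log below; the signature here is assumed):

```python
# Hypothetical signature; only the func name shows up elsewhere in
# this PR's commit messages.
async def backfill_bars(
    symbol: str,
    # < 6 avoids TWS pacing/throttling on some systems and keeps us
    # from overrunning the underlying shm buffer
    count: int = 6,
) -> None:
    ...
```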

    if total_s % delay_s != 0:
        continue

    # TODO: numa this!
goodboy (Contributor, Author) commented:

Ahh right, that was the last thing we should do. Stick this in a new `_sampling.py` module instead of the current `data._buffer`, which is a lame name.

    subs.remove(ctx)


    async def sample_and_broadcast(
goodboy (Contributor, Author) commented:

I think isolating these shm sampling routines there makes things much easier for code readers.
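For reference, roughly the shape these excerpts point at (an assumed sketch, not this PR's verbatim code):

```python
async def sample_and_broadcast(quote_stream, shm, subs: list) -> None:
    # read the brokerd quote stream, persist each tick into the shm
    # buffer, then fan the quote out to every subscribed client ctx
    async for quote in quote_stream:
        shm.push(quote)  # hypothetical shm writer; keeps history warm
        for ctx in subs.copy():
            try:
                await ctx.send(quote)
            except Exception:
                subs.remove(ctx)  # drop dead subscribers, as above
```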

goodboy added 23 commits April 10, 2021 14:18
Move all feed/stream agnostic logic and shared mem writing into a new
set of routines inside the ``data`` sub-package. This lets us move
toward a more standard API for broker and data backends to provide
cache-able persistent streams to client apps.

The data layer now takes care of
- starting a single background brokerd task to start a stream for a
  symbol if none yet exists and register that stream for later lookups
- the existing broker backend actor is now always re-used if possible
  if it can be found in a service tree
- synchronization with the brokerd stream's startup sequence is now
  oriented around fast startup concurrency such that client code gets
  a handle to historical data and quote schema as fast as possible
- historical data loading is delegated to the backend more formally by
  starting a ``backfill_bars()`` task
- write shared mem in the brokerd task and only destruct it once requested
  either from the parent actor or further clients
- fully de-duplicate stream data by using a dynamic pub-sub strategy
  where new clients register for copies of the same quote set per symbol

This new API is entirely working with the IB backend; others will need
to be ported. That's to come shortly.
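As a sketch of the client-side shape this aims at (`open_feed`, `.shm`, and `.stream` are assumed names, not the verbatim API):

```python
# Hypothetical usage; history is already sitting in shm on entry.
async with open_feed('kraken', ['XBTUSD']) as feed:
    bars = feed.shm.array        # pre-loaded historical data
    async for quote in feed.stream:
        update_chart(quote)      # stand-in consumer
```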
Avoid bothering with a trio event and expect the caller to do manual shm
registering with the write loop. Provide OHLC sample period indexing
through a re-branded pub-sub func ``iter_ohlc_periods()``.
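A sketch of what such a pub-sub indexer could look like (only the `iter_ohlc_periods()` name and the `total_s % delay_s` check come from this PR; the rest, including the clock stream, is assumed):

```python
import asyncio
import itertools


async def _clock_ticks():
    # hypothetical 1-second clock; piker drives this from the shm
    # write loop rather than a bare sleep
    for total_s in itertools.count(1):
        await asyncio.sleep(1)
        yield total_s


async def iter_ohlc_periods(queue: asyncio.Queue, delay_s: int) -> None:
    # publish an index each time a ``delay_s``-seconds OHLC period
    # completes so subscribers can sample the shm buffer at that rate
    async for total_s in _clock_ticks():
        if total_s % delay_s != 0:
            continue
        await queue.put(total_s // delay_s)
```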
goodboy merged commit 54d272e into master on Apr 10, 2021
goodboy deleted the cached_feeds branch on April 10, 2021 18:34