This repository was archived by the owner on Jul 19, 2023. It is now read-only.

Avoid missed records in a high volume scenario #96

Open
wants to merge 14 commits into master

Conversation

daniel-bray-sonalake

A fairly sizeable rewrite of how the sincedb works; full details are in ARCHITECTURE.md.

This resolves #74, where records were going missing in high-volume situations.

Rather than keeping a single "last seen timestamp" per log group, the plugin now maintains a window of the last N minutes' worth of events and uses it to (see the sketch after this list):

  • Pick a start time for 'filter_log_events' so we won't skip over records
  • Avoid reprocessing the same records twice
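Here is a minimal sketch of that idea, not the PR's actual code: a sincedb that remembers event identifiers inside an N-minute window, rewinds the `filter_log_events` start time by the window size, and skips identifiers it has already seen. The class name `WindowedSinceDB` and its methods are hypothetical, assumed for illustration only.

```ruby
# Sketch of a window-based sincedb (hypothetical names, not the PR's implementation).
class WindowedSinceDB
  def initialize(window_minutes: 15)
    @window_ms = window_minutes * 60 * 1000
    @seen = {}       # event_id => timestamp (ms) for events still inside the window
    @latest_ts = 0   # newest event timestamp observed so far
  end

  # Start time for the next filter_log_events call: rewind by the window
  # size so late-arriving events are not skipped.
  def start_time
    [@latest_ts - @window_ms, 0].max
  end

  # Returns true if the event is new; records it so it is not processed twice.
  def record(event_id, timestamp_ms)
    return false if @seen.key?(event_id)
    @seen[event_id] = timestamp_ms
    @latest_ts = timestamp_ms if timestamp_ms > @latest_ts
    prune
    true
  end

  private

  # Drop identifiers that have fallen out of the window.
  def prune
    cutoff = @latest_ts - @window_ms
    @seen.delete_if { |_id, ts| ts < cutoff }
  end
end
```

Keeping only identifiers within the window bounds memory, while rewinding the start time by the same window means any event whose identifier was pruned is also too old to be returned again.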

It replaces my last PR for this, #92, which still had some problems.

Development

Successfully merging this pull request may close these issues.

Skips large blocks of events during import from CloudWatch?