Add logs to identify when assumptions of log queuing behaviour are violated #12846
Conversation
// clean up enqueued block counts
for block := range b.enqueuedBlocks {
	if block < blockThreshold-reorgBuffer {
Not fully sure why we need reorgBuffer here? I think the other functions in the buffer simply work on the blockThreshold-lookback to blockThreshold range; reorg handling is done at the provider level.
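For context, here is a minimal sketch of the cleanup being discussed, under assumed names and an assumed reorgBuffer value (this is not the PR's exact code):

```go
// reorgBuffer widens the retention window: a block's count is pruned only
// once it falls reorgBuffer blocks below blockThreshold, so a shallow
// reorg that re-delivers logs for a recent block still finds the block's
// earlier count in the map.
const reorgBuffer = 32 // assumed depth; the real constant lives in the buffer package

// enqueuedBlocks maps block number -> upkeep ID -> number of enqueues.
func prune(enqueuedBlocks map[int64]map[string]int, blockThreshold int64) {
	for block := range enqueuedBlocks {
		if block < blockThreshold-reorgBuffer {
			delete(enqueuedBlocks, block)
		}
	}
}
```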
blockNumbers[uid.String()] = upkeepBlockInstances + 1
b.lggr.Debugw("enqueuing logs again for a previously seen block for this upkeep", "blockNumber", blockNumber, "numberOfEnqueues", b.enqueuedBlocks[blockNumber], "upkeepID", uid.String())
I think this logic will change with reorg handling?
Yes; based on what I have in the spike for reorgs, we should have a set of block numbers available to us to perform this check against 👍
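For illustration only, a sketch of how a set of recently seen blocks could drive that check; the names and shape here are assumptions, and the spike's actual design may differ:

```go
// seenBlockHashes maps block number -> last observed block hash. If a
// block number comes back with a different hash, the chain reorged, so a
// second enqueue for that number is expected rather than a violation of
// the single-enqueue assumption.
func isReorg(seenBlockHashes map[int64]string, blockNumber int64, blockHash string) bool {
	prevHash, ok := seenBlockHashes[blockNumber]
	return ok && prevHash != blockHash
}
```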
@@ -50,6 +51,96 @@ func TestLogEventBufferV1_SyncFilters(t *testing.T) {
	require.Equal(t, 1, buf.NumOfUpkeeps())
}

type readableLogger struct {
Why do we need this?
Just for the purposes of catching and validating log messages within the tests
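A minimal sketch of such a helper (the `readableLogger` name comes from the diff; the callback shape and the reduced logger surface are assumptions):

```go
// readableLogger forwards every Debugw call to a callback so a test can
// capture the emitted messages and assert on them. The real helper likely
// wraps the package's full logger interface; only Debugw matters here.
type readableLogger struct {
	fn func(msg string, keysAndValues ...interface{})
}

func (l *readableLogger) Debugw(msg string, keysAndValues ...interface{}) {
	l.fn(msg, keysAndValues...)
}
```

A test could then pass in `&readableLogger{fn: func(msg string, _ ...interface{}) { seen = append(seen, msg) }}` and assert whether the duplicate-enqueue message was, or was not, logged.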
This PR was squash-merged as "Add logs to identify when assumptions of log queuing behaviour are violated (#12846)" with the following commits:

* Add logs to identify when assumptions of log queuing behaviour are violated
* Add tests
* go import
* Add changeset
* Update enqueuing assumption
* Update tests
* Extract block tracking into a separate function
* Clean up outdated enqueued blocks
* Clean up imports
* Ignore reorg buffer in cleanup
* Cleanup test name

The change was later reverted by Revert "Add logs to identify when assumptions of log queuing behaviour are violated" (#13173), which reverts commit 6a342ae.
https://smartcontract-it.atlassian.net/browse/AUTO-10164
With this PR, we update some comments within the buffer implementation to explicitly declare our assumptions about how the buffer will be used.
We also add logs to identify scenarios where these assumptions are violated. To achieve this, we use a map to track the number of times we enqueue upkeep logs for a particular block number.
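As a self-contained sketch of that tracking (the names are assumed, and the PR emits the message via b.lggr.Debugw, with fmt standing in here):

```go
package main

import "fmt"

// enqueueTracker maps block number -> upkeep ID -> number of enqueues.
type enqueueTracker map[int64]map[string]int

// track records one enqueue of logs for (blockNumber, upkeepID) and
// reports when the pair has been enqueued more than once, i.e. when the
// single-enqueue assumption is violated.
func (t enqueueTracker) track(blockNumber int64, upkeepID string) {
	byUpkeep, ok := t[blockNumber]
	if !ok {
		byUpkeep = map[string]int{}
		t[blockNumber] = byUpkeep
	}
	byUpkeep[upkeepID]++
	if byUpkeep[upkeepID] > 1 {
		fmt.Printf("enqueuing logs again for a previously seen block for this upkeep: block=%d upkeep=%s enqueues=%d\n",
			blockNumber, upkeepID, byUpkeep[upkeepID])
	}
}

func main() {
	t := enqueueTracker{}
	t.track(100, "upkeep-1")
	t.track(100, "upkeep-1") // second enqueue for the same block triggers the log
}
```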
Testing
Running the load test, we did not see logs indicating that upkeep logs were enqueued for the same block number more than once.