diff --git a/.changeset/popular-stingrays-float.md b/.changeset/popular-stingrays-float.md new file mode 100644 index 0000000000..781f07dab8 --- /dev/null +++ b/.changeset/popular-stingrays-float.md @@ -0,0 +1,5 @@ +--- +"ccip": patch +--- + +Fix slice bounds out of range error in performBatchCall. #bugfix diff --git a/.changeset/red-balloons-repeat.md b/.changeset/red-balloons-repeat.md new file mode 100644 index 0000000000..674cae9602 --- /dev/null +++ b/.changeset/red-balloons-repeat.md @@ -0,0 +1,5 @@ +--- +"ccip": patch +--- + +Commit NewReportingPlugin retries on error diff --git a/.changeset/sour-owls-grab.md b/.changeset/sour-owls-grab.md new file mode 100644 index 0000000000..a45cd3da66 --- /dev/null +++ b/.changeset/sour-owls-grab.md @@ -0,0 +1,7 @@ +--- +"chainlink": patch +--- + +Added config option `HeadTracker.FinalityTagBypass` to force `HeadTracker` to track blocks up to `FinalityDepth` even if `FinalityTagsEnabled = true`. This option is a temporary measure to address high CPU usage on chains with extremely large actual finality depth (gap between the current head and the latest finalized block). #added + +Added config option `HeadTracker.MaxAllowedFinalityDepth` maximum gap between current head to the latest finalized block that `HeadTracker` considers healthy. #added diff --git a/CHANGELOG.md b/CHANGELOG.md index 314626a0bd..b5ba043d0e 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,32 +1,19 @@ # Changelog Chainlink Core -## 2.12.0 - UNRELEASED +## 2.12.0 - 2024-06-05 ### Minor Changes -- [#13000](https://github.com/smartcontractkit/chainlink/pull/13000) [`1b994043b0`](https://github.com/smartcontractkit/chainlink/commit/1b994043b00cad9e0c900b6d12173dd1008480a5) Thanks [@ettec](https://github.com/ettec)! - #internal changes to core required by change BCF3168 in common to add relayer set +- [#13246](https://github.com/smartcontractkit/chainlink/pull/13246) [`119df08eec`](https://github.com/smartcontractkit/chainlink/commit/119df08eec3609a41880a5b471c466e90fff36f8) Thanks [@ilija42](https://github.com/ilija42)! - Added a mechanism to validate forwarders for OCR2 and fallback to EOA if necessary #added - [#12867](https://github.com/smartcontractkit/chainlink/pull/12867) [`27d9413286`](https://github.com/smartcontractkit/chainlink/commit/27d941328655e0cde608c1eff47de736c11e2e58) Thanks [@dhaidashenko](https://github.com/dhaidashenko)! - Added a new CLI command, `blocks find-lca,` which finds the latest block that is available in both the database and on the chain for the specified chain. Added a new CLI command, `node remove-blocks,` which removes all blocks and logs greater than or equal to the specified block number. #nops #added -- [#12914](https://github.com/smartcontractkit/chainlink/pull/12914) [`28df745115`](https://github.com/smartcontractkit/chainlink/commit/28df74511568df989944ee92cfd625a5d22a2840) Thanks [@krehermann](https://github.com/krehermann)! - #internal Add script to create test database user and update docs - -- [#12837](https://github.com/smartcontractkit/chainlink/pull/12837) [`f7982fa718`](https://github.com/smartcontractkit/chainlink/commit/f7982fa718cd9dc6563019acd8dfc5a40475df9e) Thanks [@cedric-cordenier](https://github.com/cedric-cordenier)! - Add support for workflow jobs to Operator UI #wip #added - - [#12686](https://github.com/smartcontractkit/chainlink/pull/12686) [`2e768c150b`](https://github.com/smartcontractkit/chainlink/commit/2e768c150b44eb3ac8e41e7bafdd46911be57397) Thanks [@nolag](https://github.com/nolag)! 
- Add a comment to Chain Reader Service constructor that specifies that anonymous events are not supported. -- [#12650](https://github.com/smartcontractkit/chainlink/pull/12650) [`6991af26d9`](https://github.com/smartcontractkit/chainlink/commit/6991af26d9fa0e048b72a05f4f9c13f2306c0328) Thanks [@silaslenihan](https://github.com/silaslenihan)! - #internal Gas Estimator L1Oracles to be chain specific - #removed cmd/arbgas - -- [#12857](https://github.com/smartcontractkit/chainlink/pull/12857) [`d90229e7a7`](https://github.com/smartcontractkit/chainlink/commit/d90229e7a7011f8dc1c331dffb0ad1eeaddba46f) Thanks [@ettec](https://github.com/ettec)! - #internal Updates required to work with chainlink-common changes to support grpc streams for capabilities - - [#12605](https://github.com/smartcontractkit/chainlink/pull/12605) [`1d9dd466e2`](https://github.com/smartcontractkit/chainlink/commit/1d9dd466e2933b7558949554b882f29f63d90b9f) Thanks [@reductionista](https://github.com/reductionista)! - core/chains/evm/logpoller: Stricter finality checks in LogPoller, to be more robust during rpc failover events #updated -- [#12968](https://github.com/smartcontractkit/chainlink/pull/12968) [`c97781582b`](https://github.com/smartcontractkit/chainlink/commit/c97781582bbe0333332b985fb10a06edeaafa524) Thanks [@dimriou](https://github.com/dimriou)! - Moved test functions under evm package to support evm extraction #internal - -- [#12456](https://github.com/smartcontractkit/chainlink/pull/12456) [`78dd3e026a`](https://github.com/smartcontractkit/chainlink/commit/78dd3e026a81cb656b99ac62ce552369573ca736) Thanks [@jmank88](https://github.com/jmank88)! - Use sqlutil instead of pg.Opts/Q/Queryer #internal - - [#12533](https://github.com/smartcontractkit/chainlink/pull/12533) [`ccb8cd85fe`](https://github.com/smartcontractkit/chainlink/commit/ccb8cd85fef8e3bbe3fb5580277a7bd7f477e6bb) Thanks [@DylanTinianov](https://github.com/DylanTinianov)! - #added : Re-enable abandoned transaction tracker - [#12760](https://github.com/smartcontractkit/chainlink/pull/12760) [`3f4573479c`](https://github.com/smartcontractkit/chainlink/commit/3f4573479c32dedf44f04261f9d5d4905f2542c7) Thanks [@DylanTinianov](https://github.com/DylanTinianov)! - #nops : Enable configurable client error regexes for error classification @@ -37,71 +24,47 @@ - [#12767](https://github.com/smartcontractkit/chainlink/pull/12767) [`8db5ccfb39`](https://github.com/smartcontractkit/chainlink/commit/8db5ccfb39f86c9817fcad28292dbe6500821810) Thanks [@pavel-raykov](https://github.com/pavel-raykov)! - Validate user email before asking for a password in the chainlink CLI. -- [#12851](https://github.com/smartcontractkit/chainlink/pull/12851) [`40064f0dfe`](https://github.com/smartcontractkit/chainlink/commit/40064f0dfecda6e404993dff056e7a666cca7d26) Thanks [@amit-momin](https://github.com/amit-momin)! - #internal Updated FindTxesWithAttemptsAndReceiptsByIdsAndState method signature to accept int64 for tx ID instead of big.Int - ### Patch Changes -- [#12907](https://github.com/smartcontractkit/chainlink/pull/12907) [`f0439ec840`](https://github.com/smartcontractkit/chainlink/commit/f0439ec8408b39456a74c37df9a264782ed4725c) Thanks [@ilija42](https://github.com/ilija42)! - Fix in memory data source cache changes/bug that only allowed pipeline results where none of the data sources failed. 
#bugfix +- [#13327](https://github.com/smartcontractkit/chainlink/pull/13327) [`0abe09d785`](https://github.com/smartcontractkit/chainlink/commit/0abe09d7852cf13970d1bb44b0e570e72be9e1e4) Thanks [@reductionista](https://github.com/reductionista)! - Reducing the scope of 0233 migration to include only 5th word index which is required for CCIP #db_update -- [#12996](https://github.com/smartcontractkit/chainlink/pull/12996) [`0a37c0ed53`](https://github.com/smartcontractkit/chainlink/commit/0a37c0ed5346df509b545c88278c026cb2adf375) Thanks [@DeividasK](https://github.com/DeividasK)! - #wip Keystone contract wrappers updated +- [#13316](https://github.com/smartcontractkit/chainlink/pull/13316) [`4fbcf7d2f8`](https://github.com/smartcontractkit/chainlink/commit/4fbcf7d2f8a51bcbec185f7061ea95078ef0d11c) Thanks [@friedemannf](https://github.com/friedemannf)! - #bugfix allow ChainType to be set to xdai -- [#12923](https://github.com/smartcontractkit/chainlink/pull/12923) [`274a988985`](https://github.com/smartcontractkit/chainlink/commit/274a988985e0530676bdfedbdb35dec4cb9fe8b2) Thanks [@shileiwill](https://github.com/shileiwill)! - use safe lib for approve #bugfix +- [#13260](https://github.com/smartcontractkit/chainlink/pull/13260) [`5daefad14c`](https://github.com/smartcontractkit/chainlink/commit/5daefad14c42011ad0c19d9c21fb1e27d93c649c) Thanks [@dhaidashenko](https://github.com/dhaidashenko)! - Fixed CPU usage issues caused by inefficiencies in HeadTracker. -- [#12991](https://github.com/smartcontractkit/chainlink/pull/12991) [`929312681f`](https://github.com/smartcontractkit/chainlink/commit/929312681fb27529915912e8bd6e4000559ea77f) Thanks [@cds95](https://github.com/cds95)! - generate gethwrappers for updating node operators in capability registry #internal + HeadTracker's support of finality tags caused a drastic increase in the number of tracked blocks on the Arbitrum chain (from 50 to 12,000), which has led to a 30% increase in CPU usage. -- [#12959](https://github.com/smartcontractkit/chainlink/pull/12959) [`e482c79822`](https://github.com/smartcontractkit/chainlink/commit/e482c7982278e232acaaa4b3e9a79165faa35d1c) Thanks [@HenryNguyen5](https://github.com/HenryNguyen5)! - #internal Optimize workflow engine tests + The fix improves the data structure for tracking blocks and makes lookup more efficient. BenchmarkHeadTracker_Backfill shows 40x time reduction. + #bugfix -- [#12754](https://github.com/smartcontractkit/chainlink/pull/12754) [`4d9875ecba`](https://github.com/smartcontractkit/chainlink/commit/4d9875ecba9c7f672a9320d43cdb3d24a529f2ee) Thanks [@amirylm](https://github.com/amirylm)! - Bumping chainlink-automation version to v1.0.3 +- [#13256](https://github.com/smartcontractkit/chainlink/pull/13256) [`d133da44a9`](https://github.com/smartcontractkit/chainlink/commit/d133da44a9bb0a1393363740cbdc7edc18871b4f) Thanks [@samsondav](https://github.com/samsondav)! - Fix panic if mercury server returns error #bugfix -- [#12636](https://github.com/smartcontractkit/chainlink/pull/12636) [`bdc076c139`](https://github.com/smartcontractkit/chainlink/commit/bdc076c1395259298f520d741a3a1b397c3e0037) Thanks [@dimriou](https://github.com/dimriou)! - Removed AppConfig from Evm config #internal +- [#12907](https://github.com/smartcontractkit/chainlink/pull/12907) [`f0439ec840`](https://github.com/smartcontractkit/chainlink/commit/f0439ec8408b39456a74c37df9a264782ed4725c) Thanks [@ilija42](https://github.com/ilija42)! 
- Fix in memory data source cache changes/bug that only allowed pipeline results where none of the data sources failed. #bugfix -- [#12880](https://github.com/smartcontractkit/chainlink/pull/12880) [`8337fc821b`](https://github.com/smartcontractkit/chainlink/commit/8337fc821baf8011c6c73203482db85f5a44d7ae) Thanks [@DeividasK](https://github.com/DeividasK)! - #wip Keystone wrapper regenerate +- [#12923](https://github.com/smartcontractkit/chainlink/pull/12923) [`274a988985`](https://github.com/smartcontractkit/chainlink/commit/274a988985e0530676bdfedbdb35dec4cb9fe8b2) Thanks [@shileiwill](https://github.com/shileiwill)! - use safe lib for approve #bugfix -- [#12807](https://github.com/smartcontractkit/chainlink/pull/12807) [`dd41ee6c1f`](https://github.com/smartcontractkit/chainlink/commit/dd41ee6c1fb79333bfec4e8ef795a859e09e72c8) Thanks [@jmank88](https://github.com/jmank88)! - core/services: update llo & versioning to use sqlutil #internal +- [#12754](https://github.com/smartcontractkit/chainlink/pull/12754) [`4d9875ecba`](https://github.com/smartcontractkit/chainlink/commit/4d9875ecba9c7f672a9320d43cdb3d24a529f2ee) Thanks [@amirylm](https://github.com/amirylm)! - Bumping chainlink-automation version to v1.0.3 - [#12887](https://github.com/smartcontractkit/chainlink/pull/12887) [`e87b83cd78`](https://github.com/smartcontractkit/chainlink/commit/e87b83cd78595c09061c199916c4bb9145e719b7) Thanks [@jinhoonbang](https://github.com/jinhoonbang)! - #bugfix vrf fix replay number of blocks logic and add logging for job specs - [#12848](https://github.com/smartcontractkit/chainlink/pull/12848) [`91698020fb`](https://github.com/smartcontractkit/chainlink/commit/91698020fb695545eeb4befb2d73e36cc3ded0ab) Thanks [@poopoothegorilla](https://github.com/poopoothegorilla)! - bump mockery in makefile #updated -- [#12810](https://github.com/smartcontractkit/chainlink/pull/12810) [`1fce16e735`](https://github.com/smartcontractkit/chainlink/commit/1fce16e735e417553c00680a3fcae2e081353095) Thanks [@jmank88](https://github.com/jmank88)! - core/services/keystore: switch to sqlutil.DataStore #internal - - [#11936](https://github.com/smartcontractkit/chainlink/pull/11936) [`2b38bd8738`](https://github.com/smartcontractkit/chainlink/commit/2b38bd8738b4edf16e9913c90720820bc2b8dbd1) Thanks [@erikburt](https://github.com/erikburt)! - Validate support for postgresql-client 16, and update docker image's bundled postgresql-client from 15 to 16. #nops #updated -- [#12820](https://github.com/smartcontractkit/chainlink/pull/12820) [`e523aa0bc7`](https://github.com/smartcontractkit/chainlink/commit/e523aa0bc7752fbf11dfbb842c8a411d345f30e7) Thanks [@jmank88](https://github.com/jmank88)! - core/services/keeper: switch to sqlutil.DataSource #internal - -- [#12859](https://github.com/smartcontractkit/chainlink/pull/12859) [`44c9b40e0a`](https://github.com/smartcontractkit/chainlink/commit/44c9b40e0a77be0609c33d06c3101d8a7163c3e7) Thanks [@dimriou](https://github.com/dimriou)! - Drop unused queryTimeout config from TXM strategy #internal - -- [#12909](https://github.com/smartcontractkit/chainlink/pull/12909) [`fa5b22773e`](https://github.com/smartcontractkit/chainlink/commit/fa5b22773e52744d3abab1a05cd12ecc2e103d88) Thanks [@vyzaldysanchez](https://github.com/vyzaldysanchez)! 
- #internal Generic Plugin `onchainSigningStrategy` support - - [#12845](https://github.com/smartcontractkit/chainlink/pull/12845) [`63abd08cd5`](https://github.com/smartcontractkit/chainlink/commit/63abd08cd55b6dc31e74c6d3e50597eb8400eeb4) Thanks [@bolekk](https://github.com/bolekk)! - #internal Remote Trigger setup -- [#12961](https://github.com/smartcontractkit/chainlink/pull/12961) [`e50d38b0bd`](https://github.com/smartcontractkit/chainlink/commit/e50d38b0bddc34aa0b97ae6bdf23c355b5619682) Thanks [@HenryNguyen5](https://github.com/HenryNguyen5)! - #internal Rename workflow tags to labels - - [#12997](https://github.com/smartcontractkit/chainlink/pull/12997) [`8c8994e242`](https://github.com/smartcontractkit/chainlink/commit/8c8994e24284236645509b4c49152e6270ce0e35) Thanks [@george-dorin](https://github.com/george-dorin)! - #bugfix Fixed an issue where the `rebroadcast-transactions` commands did not execute config validation. -- [#12888](https://github.com/smartcontractkit/chainlink/pull/12888) [`7c059b2c26`](https://github.com/smartcontractkit/chainlink/commit/7c059b2c26ed6d99a40403b4f690c0f3e08154b4) Thanks [@DeividasK](https://github.com/DeividasK)! - #wip Regenerate Keystone wrappers - -- [#12806](https://github.com/smartcontractkit/chainlink/pull/12806) [`9964dc82e5`](https://github.com/smartcontractkit/chainlink/commit/9964dc82e591f8653adb06f0b149a16e0b6cea40) Thanks [@jmank88](https://github.com/jmank88)! - core/services/ocr2/plugins/ocr2keeper/evmregister/v21/upkeepstate: use sqlutil instead of pg.QOpts #internal - -- [#12818](https://github.com/smartcontractkit/chainlink/pull/12818) [`6a0b4a9b09`](https://github.com/smartcontractkit/chainlink/commit/6a0b4a9b099663e3aed202f48f363afc4d111293) Thanks [@jmank88](https://github.com/jmank88)! - cor/services/relay/evm/mercury: switch to sqlutil.DataStore #internal - -- [#12947](https://github.com/smartcontractkit/chainlink/pull/12947) [`758ffd6da0`](https://github.com/smartcontractkit/chainlink/commit/758ffd6da097adac1f49ceded5e0998cdcb98a29) Thanks [@momentmaker](https://github.com/momentmaker)! - Add check for valid semvar value for changeset file #internal - - [#13026](https://github.com/smartcontractkit/chainlink/pull/13026) [`e21be2a890`](https://github.com/smartcontractkit/chainlink/commit/e21be2a890a50bd3cbac60c450e3c2d68ddefbd3) Thanks [@mateusz-sekara](https://github.com/mateusz-sekara)! - Improving LogPoller read queries by properly sorting by multiple columns #updated - [#12638](https://github.com/smartcontractkit/chainlink/pull/12638) [`bcf7653486`](https://github.com/smartcontractkit/chainlink/commit/bcf76534862b32503f4192e38b7e1cb4dd7e312d) Thanks [@dhaidashenko](https://github.com/dhaidashenko)! - #changed Added prefix `RPCClient returned error ({RPC_NAME})` to RPC errors to simplify filtering of RPC related issues. -- [#12811](https://github.com/smartcontractkit/chainlink/pull/12811) [`6b0a7afe23`](https://github.com/smartcontractkit/chainlink/commit/6b0a7afe235399790c066dd725c437403a47a73e) Thanks [@jmank88](https://github.com/jmank88)! - core/services/functions: switch to sqlutil.DataStore #internal - - [#12786](https://github.com/smartcontractkit/chainlink/pull/12786) [`fbb705c4f1`](https://github.com/smartcontractkit/chainlink/commit/fbb705c4f1338c6e0919d728adee827ec1e2007a) Thanks [@mateusz-sekara](https://github.com/mateusz-sekara)! 
- Narrowing topic, data_word indexes by adding (evm_chain_id, address, event_sig) to the index definition #db_update - [#12747](https://github.com/smartcontractkit/chainlink/pull/12747) [`2729ef76f3`](https://github.com/smartcontractkit/chainlink/commit/2729ef76f34877a2e6e8644b2e67f3e5dfb0c2b6) Thanks [@friedemannf](https://github.com/friedemannf)! - Add support for X Layer (X1) #added -- [#12979](https://github.com/smartcontractkit/chainlink/pull/12979) [`0c4c24ad8c`](https://github.com/smartcontractkit/chainlink/commit/0c4c24ad8c95e505cd2a29be711cc40e612658b0) Thanks [@cds95](https://github.com/cds95)! - update keystone gethwrapper with remove operator function #internal - -- [#12856](https://github.com/smartcontractkit/chainlink/pull/12856) [`0ec92765cc`](https://github.com/smartcontractkit/chainlink/commit/0ec92765ccd419973f4eab5b0cc38df212f4ad21) Thanks [@jmank88](https://github.com/jmank88)! - switch more EVM components to use sqlutil.DataStore #internal - [#12680](https://github.com/smartcontractkit/chainlink/pull/12680) [`f55d8be495`](https://github.com/smartcontractkit/chainlink/commit/f55d8be495a83c97ac5439672563400e12ec2ee7) Thanks [@samsondav](https://github.com/samsondav)! - #added @@ -113,34 +76,25 @@ TransmitTimeout = "5s" # Default ``` -- [#13059](https://github.com/smartcontractkit/chainlink/pull/13059) [`ea08b5f08d`](https://github.com/smartcontractkit/chainlink/commit/ea08b5f08d84d2ff1ddfa2027660ff58a60219c3) Thanks [@HenryNguyen5](https://github.com/HenryNguyen5)! - #internal fix txdb documentation typos - - [#12902](https://github.com/smartcontractkit/chainlink/pull/12902) [`d1845e22d3`](https://github.com/smartcontractkit/chainlink/commit/d1845e22d3b057d9d736bc05c30f0db34c84a7e4) Thanks [@samsondav](https://github.com/samsondav)! - Bump libocr => fd3cab206b2ca3b7ff207996b95673b2d6303ec4 - #internal - -- [#12809](https://github.com/smartcontractkit/chainlink/pull/12809) [`0af4acafbd`](https://github.com/smartcontractkit/chainlink/commit/0af4acafbdf243feea8507e421016933b0e538ca) Thanks [@jmank88](https://github.com/jmank88)! - core/sessions: switch to sqlutil.DataSource #internal - -- [#12808](https://github.com/smartcontractkit/chainlink/pull/12808) [`601c79f891`](https://github.com/smartcontractkit/chainlink/commit/601c79f89120dc0d98db63a528c79644ebb38132) Thanks [@jmank88](https://github.com/jmank88)! - core/bridges: use sqlutil.DataSource #internal - -- [#12903](https://github.com/smartcontractkit/chainlink/pull/12903) [`a293dfe797`](https://github.com/smartcontractkit/chainlink/commit/a293dfe7975b035a71eff7a6197e3ce5a25f1887) Thanks [@shileiwill](https://github.com/shileiwill)! - add getters #internal - - [#12669](https://github.com/smartcontractkit/chainlink/pull/12669) [`3134ce8868`](https://github.com/smartcontractkit/chainlink/commit/3134ce8868ccc22bd4ae670c8b0bfda5fa78a332) Thanks [@leeyikjiun](https://github.com/leeyikjiun)! - vrfv2plus - account for num words in coordinator gas overhead in v2plus wrapper -- [#13022](https://github.com/smartcontractkit/chainlink/pull/13022) [`2805fa6c9b`](https://github.com/smartcontractkit/chainlink/commit/2805fa6c9b469d535edcd3d66c08e1d22bbaa2d0) Thanks [@cds95](https://github.com/cds95)! - #internal - - [#12951](https://github.com/smartcontractkit/chainlink/pull/12951) [`c98ea6413d`](https://github.com/smartcontractkit/chainlink/commit/c98ea6413dcdc02a7d0c82b9b36d3fce97dac94b) Thanks [@ogtownsend](https://github.com/ogtownsend)! 
- #changed Updating the log trigger log provider's readMaxBatchSize to 56 - [#12944](https://github.com/smartcontractkit/chainlink/pull/12944) [`167782c680`](https://github.com/smartcontractkit/chainlink/commit/167782c680b92b1e99ae3e9d1a8b87fd595dd644) Thanks [@shileiwill](https://github.com/shileiwill)! - minor fixes #bugfix -- [#12906](https://github.com/smartcontractkit/chainlink/pull/12906) [`365c38be8b`](https://github.com/smartcontractkit/chainlink/commit/365c38be8b589d5ffa0b21755dcb40e2e4205652) Thanks [@cds95](https://github.com/cds95)! - update keystone gethwrapper #internal - - [#12966](https://github.com/smartcontractkit/chainlink/pull/12966) [`ac7d3409ed`](https://github.com/smartcontractkit/chainlink/commit/ac7d3409ed9bc98af970ca75c3b92e41e4fb01cf) Thanks [@george-dorin](https://github.com/george-dorin)! - #added JuelsPerFeeCoinCache is enabled by default for OCR2 jobs, added `Disable` field under [pluginConfig.JuelsPerFeeCoinCache] tag to disable this feature (e.g. Disable=true) - [#12916](https://github.com/smartcontractkit/chainlink/pull/12916) [`7ec1d5b7ab`](https://github.com/smartcontractkit/chainlink/commit/7ec1d5b7abb51e100f7a6a48662e33703a589ecb) Thanks [@shileiwill](https://github.com/shileiwill)! - offchain settlement fix #bugfix - [#12998](https://github.com/smartcontractkit/chainlink/pull/12998) [`d50936ce38`](https://github.com/smartcontractkit/chainlink/commit/d50936ce3824d7ad6026f630172e9764a34cc08b) Thanks [@mateusz-sekara](https://github.com/mateusz-sekara)! - Support for retention in LogPoller's filters registered by ContractTransmitter #changed +## 2.11.1 - 2024-05-20 + +### Patch Changes +- [#13254](https://github.com/smartcontractkit/chainlink/pull/13254) [`c0d201a9a8`](https://github.com/smartcontractkit/chainlink/commit/c0d201a9a85b66718c5102427c34276e0b61c84e) Thanks [@samsondav!] - Fix panic if mercury server returns error #bugfix + ## 2.11.0 - 2024-04-30 ### Minor Changes @@ -189,7 +143,7 @@ You may disable if this results in excessive log volume. Disable like so: ``` - [Pipeline] + [JobPipeline] VerboseLogging = false ``` @@ -219,7 +173,7 @@ - [#12404](https://github.com/smartcontractkit/chainlink/pull/12404) [`b74079b672`](https://github.com/smartcontractkit/chainlink/commit/b74079b672f36fb0c241f90ea1e875ea3a9524da) Thanks [@HenryNguyen5](https://github.com/HenryNguyen5)! - Add OCR3 capability contract wrapper -- [#12498](https://github.com/smartcontractkit/chainlink/pull/12498) [`1c576d0e34`](https://github.com/smartcontractkit/chainlink/commit/1c576d0e34d93a6298ddcb662ee89fd04eeda53e) Thanks [@samsondav](https://github.com/samsondav)! - Add new config option Pipeline.VerboseLogging +- [#12498](https://github.com/smartcontractkit/chainlink/pull/12498) [`1c576d0e34`](https://github.com/smartcontractkit/chainlink/commit/1c576d0e34d93a6298ddcb662ee89fd04eeda53e) Thanks [@samsondav](https://github.com/samsondav)! - Add new config option JobPipeline.VerboseLogging VerboseLogging enables detailed logging of pipeline execution steps. 
This is disabled by default because it increases log volume for pipeline runs, but can @@ -230,7 +184,7 @@ Set it like the following example: ``` - [Pipeline] + [JobPipeline] VerboseLogging = true ``` diff --git a/common/config/chaintype.go b/common/config/chaintype.go index 73c48960a1..3f3150950d 100644 --- a/common/config/chaintype.go +++ b/common/config/chaintype.go @@ -5,10 +5,8 @@ import ( "strings" ) -// ChainType denotes the chain or network to work with type ChainType string -// nolint const ( ChainArbitrum ChainType = "arbitrum" ChainCelo ChainType = "celo" @@ -18,11 +16,103 @@ const ( ChainOptimismBedrock ChainType = "optimismBedrock" ChainScroll ChainType = "scroll" ChainWeMix ChainType = "wemix" - ChainXDai ChainType = "xdai" // Deprecated: use ChainGnosis instead ChainXLayer ChainType = "xlayer" ChainZkSync ChainType = "zksync" ) +// IsL2 returns true if this chain is a Layer 2 chain. Notably: +// - the block numbers used for log searching are different from calling block.number +// - gas bumping is not supported, since there is no tx mempool +func (c ChainType) IsL2() bool { + switch c { + case ChainArbitrum, ChainMetis: + return true + default: + return false + } +} + +func (c ChainType) IsValid() bool { + switch c { + case "", ChainArbitrum, ChainCelo, ChainGnosis, ChainKroma, ChainMetis, ChainOptimismBedrock, ChainScroll, ChainWeMix, ChainXLayer, ChainZkSync: + return true + } + return false +} + +func ChainTypeFromSlug(slug string) ChainType { + switch slug { + case "arbitrum": + return ChainArbitrum + case "celo": + return ChainCelo + case "gnosis", "xdai": + return ChainGnosis + case "kroma": + return ChainKroma + case "metis": + return ChainMetis + case "optimismBedrock": + return ChainOptimismBedrock + case "scroll": + return ChainScroll + case "wemix": + return ChainWeMix + case "xlayer": + return ChainXLayer + case "zksync": + return ChainZkSync + default: + return ChainType(slug) + } +} + +type ChainTypeConfig struct { + value ChainType + slug string +} + +func NewChainTypeConfig(slug string) *ChainTypeConfig { + return &ChainTypeConfig{ + value: ChainTypeFromSlug(slug), + slug: slug, + } +} + +func (c *ChainTypeConfig) MarshalText() ([]byte, error) { + if c == nil { + return nil, nil + } + return []byte(c.slug), nil +} + +func (c *ChainTypeConfig) UnmarshalText(b []byte) error { + c.slug = string(b) + c.value = ChainTypeFromSlug(c.slug) + return nil +} + +func (c *ChainTypeConfig) Slug() string { + if c == nil { + return "" + } + return c.slug +} + +func (c *ChainTypeConfig) ChainType() ChainType { + if c == nil { + return "" + } + return c.value +} + +func (c *ChainTypeConfig) String() string { + if c == nil { + return "" + } + return string(c.value) +} + var ErrInvalidChainType = fmt.Errorf("must be one of %s or omitted", strings.Join([]string{ string(ChainArbitrum), string(ChainCelo), @@ -35,24 +125,3 @@ var ErrInvalidChainType = fmt.Errorf("must be one of %s or omitted", strings.Joi string(ChainXLayer), string(ChainZkSync), }, ", ")) - -// IsValid returns true if the ChainType value is known or empty. -func (c ChainType) IsValid() bool { - switch c { - case "", ChainArbitrum, ChainCelo, ChainGnosis, ChainKroma, ChainMetis, ChainOptimismBedrock, ChainScroll, ChainWeMix, ChainXDai, ChainXLayer, ChainZkSync: - return true - } - return false -} - -// IsL2 returns true if this chain is a Layer 2 chain. 
Notably: -// - the block numbers used for log searching are different from calling block.number -// - gas bumping is not supported, since there is no tx mempool -func (c ChainType) IsL2() bool { - switch c { - case ChainArbitrum, ChainMetis: - return true - default: - return false - } -} diff --git a/common/headtracker/head_tracker.go b/common/headtracker/head_tracker.go index bc7a4910b3..5191648e71 100644 --- a/common/headtracker/head_tracker.go +++ b/common/headtracker/head_tracker.go @@ -119,7 +119,7 @@ func (ht *headTracker[HTH, S, ID, BLOCK_HASH]) Start(ctx context.Context) error if ctx.Err() != nil { return ctx.Err() } - ht.log.Errorw("Error handling initial head", "err", err) + ht.log.Errorw("Error handling initial head", "err", err.Error()) } ht.wgDone.Add(3) @@ -337,9 +337,23 @@ func (ht *headTracker[HTH, S, ID, BLOCK_HASH]) backfillLoop() { // calculateLatestFinalized - returns latest finalized block. It's expected that currentHeadNumber - is the head of // canonical chain. There is no guaranties that returned block belongs to the canonical chain. Additional verification // must be performed before usage. -func (ht *headTracker[HTH, S, ID, BLOCK_HASH]) calculateLatestFinalized(ctx context.Context, currentHead HTH) (h HTH, err error) { - if ht.config.FinalityTagEnabled() { - return ht.client.LatestFinalizedBlock(ctx) +func (ht *headTracker[HTH, S, ID, BLOCK_HASH]) calculateLatestFinalized(ctx context.Context, currentHead HTH) (latestFinalized HTH, err error) { + if ht.config.FinalityTagEnabled() && !ht.htConfig.FinalityTagBypass() { + latestFinalized, err = ht.client.LatestFinalizedBlock(ctx) + if err != nil { + return latestFinalized, fmt.Errorf("failed to get latest finalized block: %w", err) + } + + if !latestFinalized.IsValid() { + return latestFinalized, fmt.Errorf("failed to get valid latest finalized block") + } + + if currentHead.BlockNumber()-latestFinalized.BlockNumber() > int64(ht.htConfig.MaxAllowedFinalityDepth()) { + return latestFinalized, fmt.Errorf("gap between latest finalized block (%d) and current head (%d) is too large (> %d)", + latestFinalized.BlockNumber(), currentHead.BlockNumber(), ht.htConfig.MaxAllowedFinalityDepth()) + } + + return latestFinalized, nil } // no need to make an additional RPC call on chains with instant finality if ht.config.FinalityDepth() == 0 { diff --git a/common/headtracker/types/config.go b/common/headtracker/types/config.go index 019aa9847d..e0eb422672 100644 --- a/common/headtracker/types/config.go +++ b/common/headtracker/types/config.go @@ -12,4 +12,6 @@ type HeadTrackerConfig interface { HistoryDepth() uint32 MaxBufferSize() uint32 SamplingInterval() time.Duration + FinalityTagBypass() bool + MaxAllowedFinalityDepth() uint32 } diff --git a/common/txmgr/mocks/tx_manager.go b/common/txmgr/mocks/tx_manager.go index 935e731381..974fd45590 100644 --- a/common/txmgr/mocks/tx_manager.go +++ b/common/txmgr/mocks/tx_manager.go @@ -273,9 +273,9 @@ func (_m *TxManager[CHAIN_ID, HEAD, ADDR, TX_HASH, BLOCK_HASH, SEQ, FEE]) FindTx return r0, r1 } -// GetForwarderForEOA provides a mock function with given fields: eoa -func (_m *TxManager[CHAIN_ID, HEAD, ADDR, TX_HASH, BLOCK_HASH, SEQ, FEE]) GetForwarderForEOA(eoa ADDR) (ADDR, error) { - ret := _m.Called(eoa) +// GetForwarderForEOA provides a mock function with given fields: ctx, eoa +func (_m *TxManager[CHAIN_ID, HEAD, ADDR, TX_HASH, BLOCK_HASH, SEQ, FEE]) GetForwarderForEOA(ctx context.Context, eoa ADDR) (ADDR, error) { + ret := _m.Called(ctx, eoa) if len(ret) == 0 { panic("no return value 
specified for GetForwarderForEOA") @@ -283,17 +283,45 @@ func (_m *TxManager[CHAIN_ID, HEAD, ADDR, TX_HASH, BLOCK_HASH, SEQ, FEE]) GetFor var r0 ADDR var r1 error - if rf, ok := ret.Get(0).(func(ADDR) (ADDR, error)); ok { - return rf(eoa) + if rf, ok := ret.Get(0).(func(context.Context, ADDR) (ADDR, error)); ok { + return rf(ctx, eoa) } - if rf, ok := ret.Get(0).(func(ADDR) ADDR); ok { - r0 = rf(eoa) + if rf, ok := ret.Get(0).(func(context.Context, ADDR) ADDR); ok { + r0 = rf(ctx, eoa) } else { r0 = ret.Get(0).(ADDR) } - if rf, ok := ret.Get(1).(func(ADDR) error); ok { - r1 = rf(eoa) + if rf, ok := ret.Get(1).(func(context.Context, ADDR) error); ok { + r1 = rf(ctx, eoa) + } else { + r1 = ret.Error(1) + } + + return r0, r1 +} + +// GetForwarderForEOAOCR2Feeds provides a mock function with given fields: ctx, eoa, ocr2AggregatorID +func (_m *TxManager[CHAIN_ID, HEAD, ADDR, TX_HASH, BLOCK_HASH, SEQ, FEE]) GetForwarderForEOAOCR2Feeds(ctx context.Context, eoa ADDR, ocr2AggregatorID ADDR) (ADDR, error) { + ret := _m.Called(ctx, eoa, ocr2AggregatorID) + + if len(ret) == 0 { + panic("no return value specified for GetForwarderForEOAOCR2Feeds") + } + + var r0 ADDR + var r1 error + if rf, ok := ret.Get(0).(func(context.Context, ADDR, ADDR) (ADDR, error)); ok { + return rf(ctx, eoa, ocr2AggregatorID) + } + if rf, ok := ret.Get(0).(func(context.Context, ADDR, ADDR) ADDR); ok { + r0 = rf(ctx, eoa, ocr2AggregatorID) + } else { + r0 = ret.Get(0).(ADDR) + } + + if rf, ok := ret.Get(1).(func(context.Context, ADDR, ADDR) error); ok { + r1 = rf(ctx, eoa, ocr2AggregatorID) } else { r1 = ret.Error(1) } diff --git a/common/txmgr/txmgr.go b/common/txmgr/txmgr.go index 4d4eabe5c4..44b518fdaa 100644 --- a/common/txmgr/txmgr.go +++ b/common/txmgr/txmgr.go @@ -46,7 +46,8 @@ type TxManager[ services.Service Trigger(addr ADDR) CreateTransaction(ctx context.Context, txRequest txmgrtypes.TxRequest[ADDR, TX_HASH]) (etx txmgrtypes.Tx[CHAIN_ID, ADDR, TX_HASH, BLOCK_HASH, SEQ, FEE], err error) - GetForwarderForEOA(eoa ADDR) (forwarder ADDR, err error) + GetForwarderForEOA(ctx context.Context, eoa ADDR) (forwarder ADDR, err error) + GetForwarderForEOAOCR2Feeds(ctx context.Context, eoa, ocr2AggregatorID ADDR) (forwarder ADDR, err error) RegisterResumeCallback(fn ResumeCallback) SendNativeToken(ctx context.Context, chainID CHAIN_ID, from, to ADDR, value big.Int, gasLimit uint64) (etx txmgrtypes.Tx[CHAIN_ID, ADDR, TX_HASH, BLOCK_HASH, SEQ, FEE], err error) Reset(addr ADDR, abandon bool) error @@ -545,11 +546,20 @@ func (b *Txm[CHAIN_ID, HEAD, ADDR, TX_HASH, BLOCK_HASH, R, SEQ, FEE]) CreateTran } // Calls forwarderMgr to get a proper forwarder for a given EOA. -func (b *Txm[CHAIN_ID, HEAD, ADDR, TX_HASH, BLOCK_HASH, R, SEQ, FEE]) GetForwarderForEOA(eoa ADDR) (forwarder ADDR, err error) { +func (b *Txm[CHAIN_ID, HEAD, ADDR, TX_HASH, BLOCK_HASH, R, SEQ, FEE]) GetForwarderForEOA(ctx context.Context, eoa ADDR) (forwarder ADDR, err error) { if !b.txConfig.ForwardersEnabled() { return forwarder, fmt.Errorf("forwarding is not enabled, to enable set Transactions.ForwardersEnabled =true") } - forwarder, err = b.fwdMgr.ForwarderFor(eoa) + forwarder, err = b.fwdMgr.ForwarderFor(ctx, eoa) + return +} + +// GetForwarderForEOAOCR2Feeds calls forwarderMgr to get a proper forwarder for a given EOA and checks if its set as a transmitter on the OCR2Aggregator contract. 
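+// If forwarding is not enabled, or the EOA has no forwarder registered as a transmitter on
+// that aggregator, an error is returned so the caller can fall back to the EOA itself.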
+func (b *Txm[CHAIN_ID, HEAD, ADDR, TX_HASH, BLOCK_HASH, R, SEQ, FEE]) GetForwarderForEOAOCR2Feeds(ctx context.Context, eoa, ocr2Aggregator ADDR) (forwarder ADDR, err error) { + if !b.txConfig.ForwardersEnabled() { + return forwarder, fmt.Errorf("forwarding is not enabled, to enable set Transactions.ForwardersEnabled =true") + } + forwarder, err = b.fwdMgr.ForwarderForOCR2Feeds(ctx, eoa, ocr2Aggregator) return } @@ -646,9 +656,13 @@ func (n *NullTxManager[CHAIN_ID, HEAD, ADDR, TX_HASH, BLOCK_HASH, SEQ, FEE]) Tri func (n *NullTxManager[CHAIN_ID, HEAD, ADDR, TX_HASH, BLOCK_HASH, SEQ, FEE]) CreateTransaction(ctx context.Context, txRequest txmgrtypes.TxRequest[ADDR, TX_HASH]) (etx txmgrtypes.Tx[CHAIN_ID, ADDR, TX_HASH, BLOCK_HASH, SEQ, FEE], err error) { return etx, errors.New(n.ErrMsg) } -func (n *NullTxManager[CHAIN_ID, HEAD, ADDR, TX_HASH, BLOCK_HASH, SEQ, FEE]) GetForwarderForEOA(addr ADDR) (fwdr ADDR, err error) { +func (n *NullTxManager[CHAIN_ID, HEAD, ADDR, TX_HASH, BLOCK_HASH, SEQ, FEE]) GetForwarderForEOA(ctx context.Context, addr ADDR) (fwdr ADDR, err error) { + return fwdr, err +} +func (n *NullTxManager[CHAIN_ID, HEAD, ADDR, TX_HASH, BLOCK_HASH, SEQ, FEE]) GetForwarderForEOAOCR2Feeds(ctx context.Context, _, _ ADDR) (fwdr ADDR, err error) { return fwdr, err } + func (n *NullTxManager[CHAIN_ID, HEAD, ADDR, TX_HASH, BLOCK_HASH, SEQ, FEE]) Reset(addr ADDR, abandon bool) error { return nil } diff --git a/common/txmgr/types/forwarder_manager.go b/common/txmgr/types/forwarder_manager.go index 4d70b73000..6acb491a1f 100644 --- a/common/txmgr/types/forwarder_manager.go +++ b/common/txmgr/types/forwarder_manager.go @@ -1,14 +1,18 @@ package types import ( + "context" + "github.com/smartcontractkit/chainlink-common/pkg/services" + "github.com/smartcontractkit/chainlink/v2/common/types" ) //go:generate mockery --quiet --name ForwarderManager --output ./mocks/ --case=underscore type ForwarderManager[ADDR types.Hashable] interface { services.Service - ForwarderFor(addr ADDR) (forwarder ADDR, err error) + ForwarderFor(ctx context.Context, addr ADDR) (forwarder ADDR, err error) + ForwarderForOCR2Feeds(ctx context.Context, eoa, ocr2Aggregator ADDR) (forwarder ADDR, err error) // Converts payload to be forwarder-friendly ConvertPayload(dest ADDR, origPayload []byte) ([]byte, error) } diff --git a/common/txmgr/types/mocks/forwarder_manager.go b/common/txmgr/types/mocks/forwarder_manager.go index fe40e7bb5e..b2cf9bc9d3 100644 --- a/common/txmgr/types/mocks/forwarder_manager.go +++ b/common/txmgr/types/mocks/forwarder_manager.go @@ -63,9 +63,9 @@ func (_m *ForwarderManager[ADDR]) ConvertPayload(dest ADDR, origPayload []byte) return r0, r1 } -// ForwarderFor provides a mock function with given fields: addr -func (_m *ForwarderManager[ADDR]) ForwarderFor(addr ADDR) (ADDR, error) { - ret := _m.Called(addr) +// ForwarderFor provides a mock function with given fields: ctx, addr +func (_m *ForwarderManager[ADDR]) ForwarderFor(ctx context.Context, addr ADDR) (ADDR, error) { + ret := _m.Called(ctx, addr) if len(ret) == 0 { panic("no return value specified for ForwarderFor") @@ -73,17 +73,45 @@ func (_m *ForwarderManager[ADDR]) ForwarderFor(addr ADDR) (ADDR, error) { var r0 ADDR var r1 error - if rf, ok := ret.Get(0).(func(ADDR) (ADDR, error)); ok { - return rf(addr) + if rf, ok := ret.Get(0).(func(context.Context, ADDR) (ADDR, error)); ok { + return rf(ctx, addr) } - if rf, ok := ret.Get(0).(func(ADDR) ADDR); ok { - r0 = rf(addr) + if rf, ok := ret.Get(0).(func(context.Context, ADDR) ADDR); ok { + r0 = rf(ctx, 
addr) } else { r0 = ret.Get(0).(ADDR) } - if rf, ok := ret.Get(1).(func(ADDR) error); ok { - r1 = rf(addr) + if rf, ok := ret.Get(1).(func(context.Context, ADDR) error); ok { + r1 = rf(ctx, addr) + } else { + r1 = ret.Error(1) + } + + return r0, r1 +} + +// ForwarderForOCR2Feeds provides a mock function with given fields: ctx, eoa, ocr2Aggregator +func (_m *ForwarderManager[ADDR]) ForwarderForOCR2Feeds(ctx context.Context, eoa ADDR, ocr2Aggregator ADDR) (ADDR, error) { + ret := _m.Called(ctx, eoa, ocr2Aggregator) + + if len(ret) == 0 { + panic("no return value specified for ForwarderForOCR2Feeds") + } + + var r0 ADDR + var r1 error + if rf, ok := ret.Get(0).(func(context.Context, ADDR, ADDR) (ADDR, error)); ok { + return rf(ctx, eoa, ocr2Aggregator) + } + if rf, ok := ret.Get(0).(func(context.Context, ADDR, ADDR) ADDR); ok { + r0 = rf(ctx, eoa, ocr2Aggregator) + } else { + r0 = ret.Get(0).(ADDR) + } + + if rf, ok := ret.Get(1).(func(context.Context, ADDR, ADDR) error); ok { + r1 = rf(ctx, eoa, ocr2Aggregator) } else { r1 = ret.Error(1) } diff --git a/core/chains/evm/client/config_builder.go b/core/chains/evm/client/config_builder.go index d78a981b88..9817879b57 100644 --- a/core/chains/evm/client/config_builder.go +++ b/core/chains/evm/client/config_builder.go @@ -8,6 +8,7 @@ import ( "go.uber.org/multierr" commonconfig "github.com/smartcontractkit/chainlink-common/pkg/config" + "github.com/smartcontractkit/chainlink/v2/common/config" commonclient "github.com/smartcontractkit/chainlink/v2/common/client" evmconfig "github.com/smartcontractkit/chainlink/v2/core/chains/evm/config" @@ -55,7 +56,7 @@ func NewClientConfigs( chainConfig := &evmconfig.EVMConfig{ C: &toml.EVMConfig{ Chain: toml.Chain{ - ChainType: &chainType, + ChainType: config.NewChainTypeConfig(chainType), FinalityDepth: finalityDepth, FinalityTagEnabled: finalityTagEnabled, NoNewHeadsThreshold: commonconfig.MustNewDuration(noNewHeadsThreshold), diff --git a/core/chains/evm/client/pool_test.go b/core/chains/evm/client/pool_test.go index 462aeed43e..71261979cb 100644 --- a/core/chains/evm/client/pool_test.go +++ b/core/chains/evm/client/pool_test.go @@ -169,7 +169,6 @@ func TestPool_Dial(t *testing.T) { if err == nil { t.Cleanup(func() { assert.NoError(t, p.Close()) }) } - assert.True(t, p.ChainType().IsValid()) assert.False(t, p.ChainType().IsL2()) if test.errStr != "" { require.Error(t, err) @@ -333,7 +332,6 @@ func TestUnit_Pool_BatchCallContextAll(t *testing.T) { p := evmclient.NewPool(logger.Test(t), defaultConfig.NodeSelectionMode(), defaultConfig.LeaseDuration(), time.Second*0, nodes, sendonlys, &cltest.FixtureChainID, "") - assert.True(t, p.ChainType().IsValid()) assert.False(t, p.ChainType().IsL2()) require.NoError(t, p.BatchCallContextAll(ctx, b)) } diff --git a/core/chains/evm/config/chain_scoped.go b/core/chains/evm/config/chain_scoped.go index 8f94fef09f..17d4120ddf 100644 --- a/core/chains/evm/config/chain_scoped.go +++ b/core/chains/evm/config/chain_scoped.go @@ -128,7 +128,7 @@ func (e *EVMConfig) ChainType() commonconfig.ChainType { if e.C.ChainType == nil { return "" } - return commonconfig.ChainType(*e.C.ChainType) + return e.C.ChainType.ChainType() } func (e *EVMConfig) ChainID() *big.Int { diff --git a/core/chains/evm/config/chain_scoped_head_tracker.go b/core/chains/evm/config/chain_scoped_head_tracker.go index c46f5b72e6..8bc1ff188a 100644 --- a/core/chains/evm/config/chain_scoped_head_tracker.go +++ b/core/chains/evm/config/chain_scoped_head_tracker.go @@ -21,3 +21,11 @@ func (h *headTrackerConfig) 
MaxBufferSize() uint32 { func (h *headTrackerConfig) SamplingInterval() time.Duration { return h.c.SamplingInterval.Duration() } + +func (h *headTrackerConfig) FinalityTagBypass() bool { + return *h.c.FinalityTagBypass +} + +func (h *headTrackerConfig) MaxAllowedFinalityDepth() uint32 { + return *h.c.MaxAllowedFinalityDepth +} diff --git a/core/chains/evm/config/config.go b/core/chains/evm/config/config.go index 34de754de2..9ee794e770 100644 --- a/core/chains/evm/config/config.go +++ b/core/chains/evm/config/config.go @@ -70,6 +70,8 @@ type HeadTracker interface { HistoryDepth() uint32 MaxBufferSize() uint32 SamplingInterval() time.Duration + FinalityTagBypass() bool + MaxAllowedFinalityDepth() uint32 } type BalanceMonitor interface { diff --git a/core/chains/evm/config/config_test.go b/core/chains/evm/config/config_test.go index 9553f59ad6..69f6ea0875 100644 --- a/core/chains/evm/config/config_test.go +++ b/core/chains/evm/config/config_test.go @@ -376,6 +376,8 @@ func TestChainScopedConfig_HeadTracker(t *testing.T) { assert.Equal(t, uint32(100), ht.HistoryDepth()) assert.Equal(t, uint32(3), ht.MaxBufferSize()) assert.Equal(t, time.Second, ht.SamplingInterval()) + assert.Equal(t, true, ht.FinalityTagBypass()) + assert.Equal(t, uint32(10000), ht.MaxAllowedFinalityDepth()) } func Test_chainScopedConfig_Validate(t *testing.T) { @@ -406,7 +408,7 @@ func Test_chainScopedConfig_Validate(t *testing.T) { t.Run("arbitrum-estimator", func(t *testing.T) { t.Run("custom", func(t *testing.T) { cfg := configWithChains(t, 0, &toml.Chain{ - ChainType: ptr(string(commonconfig.ChainArbitrum)), + ChainType: commonconfig.NewChainTypeConfig(string(commonconfig.ChainArbitrum)), GasEstimator: toml.GasEstimator{ Mode: ptr("BlockHistory"), }, @@ -437,7 +439,7 @@ func Test_chainScopedConfig_Validate(t *testing.T) { t.Run("optimism-estimator", func(t *testing.T) { t.Run("custom", func(t *testing.T) { cfg := configWithChains(t, 0, &toml.Chain{ - ChainType: ptr(string(commonconfig.ChainOptimismBedrock)), + ChainType: commonconfig.NewChainTypeConfig(string(commonconfig.ChainOptimismBedrock)), GasEstimator: toml.GasEstimator{ Mode: ptr("BlockHistory"), }, diff --git a/core/chains/evm/config/toml/config.go b/core/chains/evm/config/toml/config.go index 1b1baf4109..a835c0bec5 100644 --- a/core/chains/evm/config/toml/config.go +++ b/core/chains/evm/config/toml/config.go @@ -294,18 +294,14 @@ func (c *EVMConfig) ValidateConfig() (err error) { } else if c.ChainID.String() == "" { err = multierr.Append(err, commonconfig.ErrEmpty{Name: "ChainID", Msg: "required for all chains"}) } else if must, ok := ChainTypeForID(c.ChainID); ok { // known chain id - if c.ChainType == nil && must != "" { - err = multierr.Append(err, commonconfig.ErrMissing{Name: "ChainType", - Msg: fmt.Sprintf("only %q can be used with this chain id", must)}) - } else if c.ChainType != nil && *c.ChainType != string(must) { - if *c.ChainType == "" { - err = multierr.Append(err, commonconfig.ErrEmpty{Name: "ChainType", - Msg: fmt.Sprintf("only %q can be used with this chain id", must)}) - } else if must == "" { - err = multierr.Append(err, commonconfig.ErrInvalid{Name: "ChainType", Value: *c.ChainType, + // Check if the parsed value matched the expected value + is := c.ChainType.ChainType() + if is != must { + if must == "" { + err = multierr.Append(err, commonconfig.ErrInvalid{Name: "ChainType", Value: c.ChainType.ChainType(), Msg: "must not be set with this chain id"}) } else { - err = multierr.Append(err, commonconfig.ErrInvalid{Name: "ChainType", Value: 
*c.ChainType, + err = multierr.Append(err, commonconfig.ErrInvalid{Name: "ChainType", Value: c.ChainType.ChainType(), Msg: fmt.Sprintf("only %q can be used with this chain id", must)}) } } @@ -345,7 +341,7 @@ type Chain struct { AutoCreateKey *bool BlockBackfillDepth *uint32 BlockBackfillSkip *bool - ChainType *string + ChainType *config.ChainTypeConfig FinalityDepth *uint32 FinalityTagEnabled *bool FlagsContractAddress *types.EIP55Address @@ -375,12 +371,8 @@ type Chain struct { } func (c *Chain) ValidateConfig() (err error) { - var chainType config.ChainType - if c.ChainType != nil { - chainType = config.ChainType(*c.ChainType) - } - if !chainType.IsValid() { - err = multierr.Append(err, commonconfig.ErrInvalid{Name: "ChainType", Value: *c.ChainType, + if !c.ChainType.ChainType().IsValid() { + err = multierr.Append(err, commonconfig.ErrInvalid{Name: "ChainType", Value: c.ChainType.ChainType(), Msg: config.ErrInvalidChainType.Error()}) } @@ -685,9 +677,11 @@ func (e *KeySpecificGasEstimator) setFrom(f *KeySpecificGasEstimator) { } type HeadTracker struct { - HistoryDepth *uint32 - MaxBufferSize *uint32 - SamplingInterval *commonconfig.Duration + HistoryDepth *uint32 + MaxBufferSize *uint32 + SamplingInterval *commonconfig.Duration + MaxAllowedFinalityDepth *uint32 + FinalityTagBypass *bool } func (t *HeadTracker) setFrom(f *HeadTracker) { @@ -700,6 +694,21 @@ func (t *HeadTracker) setFrom(f *HeadTracker) { if v := f.SamplingInterval; v != nil { t.SamplingInterval = v } + if v := f.MaxAllowedFinalityDepth; v != nil { + t.MaxAllowedFinalityDepth = v + } + if v := f.FinalityTagBypass; v != nil { + t.FinalityTagBypass = v + } +} + +func (t *HeadTracker) ValidateConfig() (err error) { + if *t.MaxAllowedFinalityDepth < 1 { + err = multierr.Append(err, commonconfig.ErrInvalid{Name: "MaxAllowedFinalityDepth", Value: *t.MaxAllowedFinalityDepth, + Msg: "must be greater than or equal to 1"}) + } + + return } type ClientErrors struct { diff --git a/core/chains/evm/config/toml/defaults.go b/core/chains/evm/config/toml/defaults.go index 951246eeb2..622ac132e1 100644 --- a/core/chains/evm/config/toml/defaults.go +++ b/core/chains/evm/config/toml/defaults.go @@ -94,10 +94,7 @@ func Defaults(chainID *big.Big, with ...*Chain) Chain { func ChainTypeForID(chainID *big.Big) (config.ChainType, bool) { s := chainID.String() if d, ok := defaults[s]; ok { - if d.ChainType == nil { - return "", true - } - return config.ChainType(*d.ChainType), true + return d.ChainType.ChainType(), true } return "", false } diff --git a/core/chains/evm/config/toml/defaults/Avalanche_Fuji.toml b/core/chains/evm/config/toml/defaults/Avalanche_Fuji.toml index b9fb2f2624..d48bfbea63 100644 --- a/core/chains/evm/config/toml/defaults/Avalanche_Fuji.toml +++ b/core/chains/evm/config/toml/defaults/Avalanche_Fuji.toml @@ -16,3 +16,6 @@ PriceMin = '25 gwei' [GasEstimator.BlockHistory] BlockHistorySize = 24 + +[HeadTracker] +FinalityTagBypass = false \ No newline at end of file diff --git a/core/chains/evm/config/toml/defaults/BSC_Testnet.toml b/core/chains/evm/config/toml/defaults/BSC_Testnet.toml index 52bce72653..6309f39c0d 100644 --- a/core/chains/evm/config/toml/defaults/BSC_Testnet.toml +++ b/core/chains/evm/config/toml/defaults/BSC_Testnet.toml @@ -28,6 +28,7 @@ BlockHistorySize = 24 [HeadTracker] HistoryDepth = 100 SamplingInterval = '1s' +FinalityTagBypass = false [OCR] DatabaseTimeout = '2s' diff --git a/core/chains/evm/config/toml/defaults/Ethereum_Sepolia.toml b/core/chains/evm/config/toml/defaults/Ethereum_Sepolia.toml index 
82c71306e1..281dd51b50 100644 --- a/core/chains/evm/config/toml/defaults/Ethereum_Sepolia.toml +++ b/core/chains/evm/config/toml/defaults/Ethereum_Sepolia.toml @@ -13,3 +13,6 @@ TransactionPercentile = 50 [OCR2.Automation] GasLimit = 10500000 + +[HeadTracker] +FinalityTagBypass = false diff --git a/core/chains/evm/config/toml/defaults/Linea_Sepolia.toml b/core/chains/evm/config/toml/defaults/Linea_Sepolia.toml index 11a70bbf07..ac5e18a09b 100644 --- a/core/chains/evm/config/toml/defaults/Linea_Sepolia.toml +++ b/core/chains/evm/config/toml/defaults/Linea_Sepolia.toml @@ -10,4 +10,4 @@ PriceMin = '1 wei' ResendAfterThreshold = '3m' [HeadTracker] -HistoryDepth = 1000 +HistoryDepth = 1000 \ No newline at end of file diff --git a/core/chains/evm/config/toml/defaults/WeMix_Testnet.toml b/core/chains/evm/config/toml/defaults/WeMix_Testnet.toml index 76b4da4bdb..2c3fc606a7 100644 --- a/core/chains/evm/config/toml/defaults/WeMix_Testnet.toml +++ b/core/chains/evm/config/toml/defaults/WeMix_Testnet.toml @@ -13,3 +13,6 @@ ContractConfirmations = 1 [GasEstimator] EIP1559DynamicFees = true TipCapDefault = '100 gwei' + +[HeadTracker] +FinalityTagBypass = false diff --git a/core/chains/evm/config/toml/defaults/fallback.toml b/core/chains/evm/config/toml/defaults/fallback.toml index eb94ea4a75..a38bf5c901 100644 --- a/core/chains/evm/config/toml/defaults/fallback.toml +++ b/core/chains/evm/config/toml/defaults/fallback.toml @@ -55,6 +55,8 @@ TransactionPercentile = 60 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +FinalityTagBypass = true +MaxAllowedFinalityDepth = 10000 [NodePool] PollFailureThreshold = 5 diff --git a/core/chains/evm/forwarders/forwarder_manager.go b/core/chains/evm/forwarders/forwarder_manager.go index 7a7a274127..b8035d0a62 100644 --- a/core/chains/evm/forwarders/forwarder_manager.go +++ b/core/chains/evm/forwarders/forwarder_manager.go @@ -2,6 +2,8 @@ package forwarders import ( "context" + "errors" + "slices" "sync" "time" @@ -9,6 +11,7 @@ import ( "github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/core/types" pkgerrors "github.com/pkg/errors" + "github.com/smartcontractkit/libocr/gethwrappers2/ocr2aggregator" "github.com/smartcontractkit/chainlink-common/pkg/logger" "github.com/smartcontractkit/chainlink-common/pkg/services" @@ -109,9 +112,9 @@ func FilterName(addr common.Address) string { return evmlogpoller.FilterName("ForwarderManager AuthorizedSendersChanged", addr.String()) } -func (f *FwdMgr) ForwarderFor(addr common.Address) (forwarder common.Address, err error) { +func (f *FwdMgr) ForwarderFor(ctx context.Context, addr common.Address) (forwarder common.Address, err error) { // Gets forwarders for current chain. 
- fwdrs, err := f.ORM.FindForwardersByChain(f.ctx, big.Big(*f.evmClient.ConfiguredChainID())) + fwdrs, err := f.ORM.FindForwardersByChain(ctx, big.Big(*f.evmClient.ConfiguredChainID())) if err != nil { return common.Address{}, err } @@ -128,7 +131,46 @@ func (f *FwdMgr) ForwarderFor(addr common.Address) (forwarder common.Address, er } } } - return common.Address{}, pkgerrors.Errorf("Cannot find forwarder for given EOA") + return common.Address{}, ErrForwarderForEOANotFound +} + +// ErrForwarderForEOANotFound defines the error triggered when no valid forwarders were found for EOA +var ErrForwarderForEOANotFound = errors.New("cannot find forwarder for given EOA") + +func (f *FwdMgr) ForwarderForOCR2Feeds(ctx context.Context, eoa, ocr2Aggregator common.Address) (forwarder common.Address, err error) { + fwdrs, err := f.ORM.FindForwardersByChain(ctx, big.Big(*f.evmClient.ConfiguredChainID())) + if err != nil { + return common.Address{}, err + } + + offchainAggregator, err := ocr2aggregator.NewOCR2Aggregator(ocr2Aggregator, f.evmClient) + if err != nil { + return common.Address{}, err + } + + transmitters, err := offchainAggregator.GetTransmitters(&bind.CallOpts{Context: ctx}) + if err != nil { + return common.Address{}, pkgerrors.Errorf("failed to get ocr2 aggregator transmitters: %s", err.Error()) + } + + for _, fwdr := range fwdrs { + if !slices.Contains(transmitters, fwdr.Address) { + f.logger.Criticalw("Forwarder is not set as a transmitter", "forwarder", fwdr.Address, "ocr2Aggregator", ocr2Aggregator, "err", err) + continue + } + + eoas, err := f.getContractSenders(fwdr.Address) + if err != nil { + f.logger.Errorw("Failed to get forwarder senders", "forwarder", fwdr.Address, "err", err) + continue + } + for _, addr := range eoas { + if addr == eoa { + return fwdr.Address, nil + } + } + } + return common.Address{}, ErrForwarderForEOANotFound } func (f *FwdMgr) ConvertPayload(dest common.Address, origPayload []byte) ([]byte, error) { diff --git a/core/chains/evm/forwarders/forwarder_manager_test.go b/core/chains/evm/forwarders/forwarder_manager_test.go index 3a515e7ab3..be8513f592 100644 --- a/core/chains/evm/forwarders/forwarder_manager_test.go +++ b/core/chains/evm/forwarders/forwarder_manager_test.go @@ -2,20 +2,25 @@ package forwarders_test import ( "math/big" + "slices" "testing" "time" - "github.com/smartcontractkit/chainlink-common/pkg/sqlutil" - + "github.com/ethereum/go-ethereum/accounts/abi/bind" "github.com/ethereum/go-ethereum/accounts/abi/bind/backends" "github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/core" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" + "github.com/smartcontractkit/libocr/gethwrappers2/testocr2aggregator" + "github.com/smartcontractkit/chainlink-common/pkg/logger" + "github.com/smartcontractkit/chainlink-common/pkg/sqlutil" "github.com/smartcontractkit/chainlink-common/pkg/utils" + "github.com/smartcontractkit/chainlink/v2/core/services/ocr2/testhelpers" + "github.com/smartcontractkit/chainlink/v2/core/chains/evm/client" "github.com/smartcontractkit/chainlink/v2/core/chains/evm/forwarders" "github.com/smartcontractkit/chainlink/v2/core/chains/evm/logpoller" @@ -82,7 +87,7 @@ func TestFwdMgr_MaybeForwardTransaction(t *testing.T) { require.Equal(t, lst[0].Address, forwarderAddr) require.NoError(t, fwdMgr.Start(testutils.Context(t))) - addr, err := fwdMgr.ForwarderFor(owner.From) + addr, err := fwdMgr.ForwarderFor(ctx, owner.From) require.NoError(t, err) require.Equal(t, addr.String(), forwarderAddr.String()) 
err = fwdMgr.Close() @@ -144,9 +149,111 @@ func TestFwdMgr_AccountUnauthorizedToForward_SkipsForwarding(t *testing.T) { err = fwdMgr.Start(testutils.Context(t)) require.NoError(t, err) - addr, err := fwdMgr.ForwarderFor(owner.From) - require.ErrorContains(t, err, "Cannot find forwarder for given EOA") + addr, err := fwdMgr.ForwarderFor(ctx, owner.From) + require.ErrorIs(t, err, forwarders.ErrForwarderForEOANotFound) require.True(t, utils.IsZero(addr)) err = fwdMgr.Close() require.NoError(t, err) } + +func TestFwdMgr_InvalidForwarderForOCR2FeedsStates(t *testing.T) { + lggr := logger.Test(t) + db := pgtest.NewSqlxDB(t) + ctx := testutils.Context(t) + cfg := configtest.NewTestGeneralConfig(t) + evmcfg := evmtest.NewChainScopedConfig(t, cfg) + owner := testutils.MustNewSimTransactor(t) + ec := backends.NewSimulatedBackend(map[common.Address]core.GenesisAccount{ + owner.From: { + Balance: big.NewInt(0).Mul(big.NewInt(10), big.NewInt(1e18)), + }, + }, 10e6) + t.Cleanup(func() { ec.Close() }) + linkAddr := common.HexToAddress("0x01BE23585060835E02B77ef475b0Cc51aA1e0709") + operatorAddr, _, _, err := operator_wrapper.DeployOperator(owner, ec, linkAddr, owner.From) + require.NoError(t, err) + + forwarderAddr, _, forwarder, err := authorized_forwarder.DeployAuthorizedForwarder(owner, ec, linkAddr, owner.From, operatorAddr, []byte{}) + require.NoError(t, err) + ec.Commit() + + accessAddress, _, _, err := testocr2aggregator.DeploySimpleWriteAccessController(owner, ec) + require.NoError(t, err, "failed to deploy test access controller contract") + ocr2Address, _, ocr2, err := testocr2aggregator.DeployOCR2Aggregator( + owner, + ec, + linkAddr, + big.NewInt(0), + big.NewInt(10), + accessAddress, + accessAddress, + 9, + "TEST", + ) + require.NoError(t, err, "failed to deploy ocr2 test aggregator") + ec.Commit() + + evmClient := client.NewSimulatedBackendClient(t, ec, testutils.FixtureChainID) + lpOpts := logpoller.Opts{ + PollPeriod: 100 * time.Millisecond, + FinalityDepth: 2, + BackfillBatchSize: 3, + RpcBatchSize: 2, + KeepFinalizedBlocksDepth: 1000, + } + lp := logpoller.NewLogPoller(logpoller.NewORM(testutils.FixtureChainID, db, lggr), evmClient, lggr, lpOpts) + fwdMgr := forwarders.NewFwdMgr(db, evmClient, lp, lggr, evmcfg.EVM()) + fwdMgr.ORM = forwarders.NewORM(db) + + _, err = fwdMgr.ORM.CreateForwarder(ctx, forwarderAddr, ubig.Big(*testutils.FixtureChainID)) + require.NoError(t, err) + lst, err := fwdMgr.ORM.FindForwardersByChain(ctx, ubig.Big(*testutils.FixtureChainID)) + require.NoError(t, err) + require.Equal(t, len(lst), 1) + require.Equal(t, lst[0].Address, forwarderAddr) + + fwdMgr = forwarders.NewFwdMgr(db, evmClient, lp, lggr, evmcfg.EVM()) + require.NoError(t, fwdMgr.Start(testutils.Context(t))) + // cannot find forwarder because it isn't authorized nor added as a transmitter + addr, err := fwdMgr.ForwarderForOCR2Feeds(ctx, owner.From, ocr2Address) + require.ErrorIs(t, err, forwarders.ErrForwarderForEOANotFound) + require.True(t, utils.IsZero(addr)) + + _, err = forwarder.SetAuthorizedSenders(owner, []common.Address{owner.From}) + require.NoError(t, err) + ec.Commit() + + authorizedSenders, err := forwarder.GetAuthorizedSenders(&bind.CallOpts{Context: ctx}) + require.NoError(t, err) + require.Equal(t, owner.From, authorizedSenders[0]) + + // cannot find forwarder because it isn't added as a transmitter + addr, err = fwdMgr.ForwarderForOCR2Feeds(ctx, owner.From, ocr2Address) + require.ErrorIs(t, err, forwarders.ErrForwarderForEOANotFound) + require.True(t, utils.IsZero(addr)) + + 
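+	// Add the forwarder to the OCR2Aggregator's transmitter set. With the owner EOA already
+	// authorized on the forwarder, ForwarderForOCR2Feeds should now resolve the forwarder.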
onchainConfig, err := testhelpers.GenerateDefaultOCR2OnchainConfig(big.NewInt(0), big.NewInt(10)) + require.NoError(t, err) + + _, err = ocr2.SetConfig(owner, + []common.Address{testutils.NewAddress(), testutils.NewAddress(), testutils.NewAddress(), testutils.NewAddress()}, + []common.Address{forwarderAddr, testutils.NewAddress(), testutils.NewAddress(), testutils.NewAddress()}, + 1, + onchainConfig, + 0, + []byte{}) + require.NoError(t, err) + ec.Commit() + + transmitters, err := ocr2.GetTransmitters(&bind.CallOpts{Context: ctx}) + require.NoError(t, err) + require.True(t, slices.Contains(transmitters, forwarderAddr)) + + // create new fwd to have an empty cache that has to fetch authorized forwarders from log poller + fwdMgr = forwarders.NewFwdMgr(db, evmClient, lp, lggr, evmcfg.EVM()) + require.NoError(t, fwdMgr.Start(testutils.Context(t))) + addr, err = fwdMgr.ForwarderForOCR2Feeds(ctx, owner.From, ocr2Address) + require.NoError(t, err, "forwarder should be valid and found because it is both authorized and set as a transmitter") + require.Equal(t, forwarderAddr, addr) + require.NoError(t, fwdMgr.Close()) +} diff --git a/core/chains/evm/gas/block_history_estimator_test.go b/core/chains/evm/gas/block_history_estimator_test.go index f93946d688..38e16dfd5d 100644 --- a/core/chains/evm/gas/block_history_estimator_test.go +++ b/core/chains/evm/gas/block_history_estimator_test.go @@ -994,11 +994,6 @@ func TestBlockHistoryEstimator_Recalculate_NoEIP1559(t *testing.T) { bhe.Recalculate(cltest.Head(0)) require.Equal(t, assets.NewWeiI(80), gas.GetGasPrice(bhe)) - // Same for xDai (deprecated) - cfg.ChainTypeF = string(config.ChainXDai) - bhe.Recalculate(cltest.Head(0)) - require.Equal(t, assets.NewWeiI(80), gas.GetGasPrice(bhe)) - // And for X Layer cfg.ChainTypeF = string(config.ChainXLayer) bhe.Recalculate(cltest.Head(0)) diff --git a/core/chains/evm/gas/chain_specific.go b/core/chains/evm/gas/chain_specific.go index 63477d157f..49551ed5be 100644 --- a/core/chains/evm/gas/chain_specific.go +++ b/core/chains/evm/gas/chain_specific.go @@ -9,7 +9,7 @@ import ( // chainSpecificIsUsable allows for additional logic specific to a particular // Config that determines whether a transaction should be used for gas estimation func chainSpecificIsUsable(tx evmtypes.Transaction, baseFee *assets.Wei, chainType config.ChainType, minGasPriceWei *assets.Wei) bool { - if chainType == config.ChainGnosis || chainType == config.ChainXDai || chainType == config.ChainXLayer { + if chainType == config.ChainGnosis || chainType == config.ChainXLayer { // GasPrice 0 on most chains is great since it indicates cheap/free transactions. // However, Gnosis and XLayer reserve a special type of "bridge" transaction with 0 gas // price that is always processed at top priority. 
Ordinary transactions diff --git a/core/chains/evm/gas/rollups/l1_oracle.go b/core/chains/evm/gas/rollups/l1_oracle.go index 05ceb720ab..4fc1453e9e 100644 --- a/core/chains/evm/gas/rollups/l1_oracle.go +++ b/core/chains/evm/gas/rollups/l1_oracle.go @@ -46,7 +46,7 @@ const ( PollPeriod = 6 * time.Second ) -var supportedChainTypes = []config.ChainType{config.ChainArbitrum, config.ChainOptimismBedrock, config.ChainKroma, config.ChainScroll} +var supportedChainTypes = []config.ChainType{config.ChainArbitrum, config.ChainOptimismBedrock, config.ChainKroma, config.ChainScroll, config.ChainZkSync} func IsRollupWithL1Support(chainType config.ChainType) bool { return slices.Contains(supportedChainTypes, chainType) @@ -62,6 +62,8 @@ func NewL1GasOracle(lggr logger.Logger, ethClient l1OracleClient, chainType conf l1Oracle = NewOpStackL1GasOracle(lggr, ethClient, chainType) case config.ChainArbitrum: l1Oracle = NewArbitrumL1GasOracle(lggr, ethClient) + case config.ChainZkSync: + l1Oracle = NewZkSyncL1GasOracle(lggr, ethClient) default: panic(fmt.Sprintf("Received unspported chaintype %s", chainType)) } diff --git a/core/chains/evm/gas/rollups/l1_oracle_test.go b/core/chains/evm/gas/rollups/l1_oracle_test.go index 6efdda6bcf..31db62a6f5 100644 --- a/core/chains/evm/gas/rollups/l1_oracle_test.go +++ b/core/chains/evm/gas/rollups/l1_oracle_test.go @@ -1,6 +1,7 @@ package rollups import ( + "encoding/hex" "errors" "math/big" "strings" @@ -187,6 +188,43 @@ func TestL1Oracle_GasPrice(t *testing.T) { assert.Equal(t, assets.NewWei(l1BaseFee), gasPrice) }) + + t.Run("Calling GasPrice on started zkSync L1Oracle returns ZkSync l1GasPrice", func(t *testing.T) { + gasPerPubByteL2 := big.NewInt(1100) + gasPriceL2 := big.NewInt(25000000) + ZksyncGasInfo_getGasPriceL2 := "0xfe173b97" + ZksyncGasInfo_getGasPerPubdataByteL2 := "0x7cb9357e" + ethClient := mocks.NewL1OracleClient(t) + + ethClient.On("CallContract", mock.Anything, mock.IsType(ethereum.CallMsg{}), mock.IsType(&big.Int{})).Run(func(args mock.Arguments) { + callMsg := args.Get(1).(ethereum.CallMsg) + blockNumber := args.Get(2).(*big.Int) + var payload []byte + payload, err := hex.DecodeString(ZksyncGasInfo_getGasPriceL2[2:]) + require.NoError(t, err) + require.Equal(t, payload, callMsg.Data) + assert.Nil(t, blockNumber) + }).Return(common.BigToHash(gasPriceL2).Bytes(), nil).Once() + + ethClient.On("CallContract", mock.Anything, mock.IsType(ethereum.CallMsg{}), mock.IsType(&big.Int{})).Run(func(args mock.Arguments) { + callMsg := args.Get(1).(ethereum.CallMsg) + blockNumber := args.Get(2).(*big.Int) + var payload []byte + payload, err := hex.DecodeString(ZksyncGasInfo_getGasPerPubdataByteL2[2:]) + require.NoError(t, err) + require.Equal(t, payload, callMsg.Data) + assert.Nil(t, blockNumber) + }).Return(common.BigToHash(gasPerPubByteL2).Bytes(), nil) + + oracle := NewL1GasOracle(logger.Test(t), ethClient, config.ChainZkSync) + require.NoError(t, oracle.Start(testutils.Context(t))) + t.Cleanup(func() { assert.NoError(t, oracle.Close()) }) + + gasPrice, err := oracle.GasPrice(testutils.Context(t)) + require.NoError(t, err) + + assert.Equal(t, assets.NewWei(new(big.Int).Mul(gasPriceL2, gasPerPubByteL2)), gasPrice) + }) } func TestL1Oracle_GetGasCost(t *testing.T) { diff --git a/core/chains/evm/gas/rollups/zkSync_l1_oracle.go b/core/chains/evm/gas/rollups/zkSync_l1_oracle.go new file mode 100644 index 0000000000..5067d01d46 --- /dev/null +++ b/core/chains/evm/gas/rollups/zkSync_l1_oracle.go @@ -0,0 +1,247 @@ +package rollups + +import ( + "context" + 
"encoding/hex" + "fmt" + "math/big" + "sync" + "time" + + "github.com/ethereum/go-ethereum" + "github.com/ethereum/go-ethereum/common" + + "github.com/smartcontractkit/chainlink-common/pkg/logger" + "github.com/smartcontractkit/chainlink-common/pkg/services" + "github.com/smartcontractkit/chainlink-common/pkg/utils" + + gethtypes "github.com/ethereum/go-ethereum/core/types" + + "github.com/smartcontractkit/chainlink/v2/common/config" + "github.com/smartcontractkit/chainlink/v2/core/chains/evm/assets" + evmclient "github.com/smartcontractkit/chainlink/v2/core/chains/evm/client" +) + +// Reads L2-specific precompiles and caches the l1GasPrice set by the L2. +type zkSyncL1Oracle struct { + services.StateMachine + client l1OracleClient + pollPeriod time.Duration + logger logger.SugaredLogger + chainType config.ChainType + + systemContextAddress string + gasPerPubdataMethod string + gasPerPubdataSelector string + l2GasPriceMethod string + l2GasPriceSelector string + + l1GasPriceMu sync.RWMutex + l1GasPrice priceEntry + + chInitialised chan struct{} + chStop services.StopChan + chDone chan struct{} +} + +const ( + // SystemContextAddress is the address of the "Precompiled contract that calls that holds the current gas per pubdata byte" + // https://sepolia.explorer.zksync.io/address/0x000000000000000000000000000000000000800b#contract + SystemContextAddress = "0x000000000000000000000000000000000000800B" + + // ZksyncGasInfo_GetL2GasPerPubDataBytes is the a hex encoded call to: + // function gasPerPubdataByte() external view returns (uint256 gasPerPubdataByte); + SystemContext_gasPerPubdataByteMethod = "gasPerPubdataByte" + ZksyncGasInfo_getGasPerPubdataByteL2 = "0x7cb9357e" + + // ZksyncGasInfo_GetL2GasPrice is the a hex encoded call to: + // `function gasPrice() external view returns (uint256);` + SystemContext_gasPriceMethod = "gasPrice" + ZksyncGasInfo_getGasPriceL2 = "0xfe173b97" +) + +func NewZkSyncL1GasOracle(lggr logger.Logger, ethClient l1OracleClient) *zkSyncL1Oracle { + return &zkSyncL1Oracle{ + client: ethClient, + pollPeriod: PollPeriod, + logger: logger.Sugared(logger.Named(lggr, "L1GasOracle(zkSync)")), + chainType: config.ChainZkSync, + + systemContextAddress: SystemContextAddress, + gasPerPubdataMethod: SystemContext_gasPerPubdataByteMethod, + gasPerPubdataSelector: ZksyncGasInfo_getGasPerPubdataByteL2, + l2GasPriceMethod: SystemContext_gasPriceMethod, + l2GasPriceSelector: ZksyncGasInfo_getGasPriceL2, + + chInitialised: make(chan struct{}), + chStop: make(chan struct{}), + chDone: make(chan struct{}), + } +} + +func (o *zkSyncL1Oracle) Name() string { + return o.logger.Name() +} + +func (o *zkSyncL1Oracle) Start(ctx context.Context) error { + return o.StartOnce(o.Name(), func() error { + go o.run() + <-o.chInitialised + return nil + }) +} +func (o *zkSyncL1Oracle) Close() error { + return o.StopOnce(o.Name(), func() error { + close(o.chStop) + <-o.chDone + return nil + }) +} + +func (o *zkSyncL1Oracle) HealthReport() map[string]error { + return map[string]error{o.Name(): o.Healthy()} +} + +func (o *zkSyncL1Oracle) run() { + defer close(o.chDone) + + t := o.refresh() + close(o.chInitialised) + + for { + select { + case <-o.chStop: + return + case <-t.C: + t = o.refresh() + } + } +} +func (o *zkSyncL1Oracle) refresh() (t *time.Timer) { + t, err := o.refreshWithError() + if err != nil { + o.SvcErrBuffer.Append(err) + } + return +} + +func (o *zkSyncL1Oracle) refreshWithError() (t *time.Timer, err error) { + t = time.NewTimer(utils.WithJitter(o.pollPeriod)) + + ctx, cancel := 
o.chStop.CtxCancel(evmclient.ContextWithDefaultTimeout()) + defer cancel() + + price, err := o.CalculateL1GasPrice(ctx) + if err != nil { + return t, err + } + + o.l1GasPriceMu.Lock() + defer o.l1GasPriceMu.Unlock() + o.l1GasPrice = priceEntry{price: assets.NewWei(price), timestamp: time.Now()} + return +} + +// For zkSync l2_gas_PerPubdataByte = (blob_byte_price_on_l1 + part_of_l1_verification_cost) / (gas_price_on_l2) +// l2_gas_PerPubdataByte = blob_gas_price_on_l1 * gas_per_byte / gas_price_on_l2 +// blob_gas_price_on_l1 * gas_per_byte ~= gas_price_on_l2 * l2_gas_PerPubdataByte +func (o *zkSyncL1Oracle) CalculateL1GasPrice(ctx context.Context) (price *big.Int, err error) { + l2GasPrice, err := o.GetL2GasPrice(ctx) + if err != nil { + return nil, err + } + l2GasPerPubDataByte, err := o.GetL2GasPerPubDataBytes(ctx) + if err != nil { + return nil, err + } + price = new(big.Int).Mul(l2GasPrice, l2GasPerPubDataByte) + return +} + +func (o *zkSyncL1Oracle) GasPrice(_ context.Context) (l1GasPrice *assets.Wei, err error) { + var timestamp time.Time + ok := o.IfStarted(func() { + o.l1GasPriceMu.RLock() + l1GasPrice = o.l1GasPrice.price + timestamp = o.l1GasPrice.timestamp + o.l1GasPriceMu.RUnlock() + }) + if !ok { + return l1GasPrice, fmt.Errorf("L1GasOracle is not started; cannot estimate gas") + } + if l1GasPrice == nil { + return l1GasPrice, fmt.Errorf("failed to get l1 gas price; gas price not set") + } + // Validate the price has been updated within the pollPeriod * 2 + // Allowing double the poll period before declaring the price stale to give ample time for the refresh to process + if time.Since(timestamp) > o.pollPeriod*2 { + return l1GasPrice, fmt.Errorf("gas price is stale") + } + return +} + +// Gets the L1 gas cost for the provided transaction at the specified block num +// If block num is not provided, the value on the latest block num is used +func (o *zkSyncL1Oracle) GetGasCost(ctx context.Context, tx *gethtypes.Transaction, blockNum *big.Int) (*assets.Wei, error) { + //Unused method, so not implemented + // And its not possible to know gas consumption of a transaction before its executed, since zkSync only posts the state difference + panic("unimplemented") +} + +// GetL2GasPrice calls SystemContract.gasPrice() on the zksync system precompile contract. +// +// @return (The current gasPrice on L2: same as tx.gasPrice) +// function gasPrice() external view returns (uint256); +// +// https://github.com/matter-labs/era-contracts/blob/12a7d3bc1777ae5663e7525b2628061502755cbd/system-contracts/contracts/interfaces/ISystemContext.sol#L34C4-L34C57 + +func (o *zkSyncL1Oracle) GetL2GasPrice(ctx context.Context) (gasPriceL2 *big.Int, err error) { + precompile := common.HexToAddress(o.systemContextAddress) + method, err := hex.DecodeString(ZksyncGasInfo_getGasPriceL2[2:]) + if err != nil { + return common.Big0, fmt.Errorf("cannot decode method: %w", err) + } + b, err := o.client.CallContract(ctx, ethereum.CallMsg{ + To: &precompile, + Data: method, + }, nil) + if err != nil { + return common.Big0, fmt.Errorf("cannot fetch l2GasPrice from zkSync SystemContract: %w", err) + } + + if len(b) != 1*32 { // uint256 gasPrice; + err = fmt.Errorf("return gasPrice (%d) different than expected (%d)", len(b), 3*32) + return + } + gasPriceL2 = new(big.Int).SetBytes(b) + return +} + +// GetL2GasPerPubDataBytes calls SystemContract.gasPerPubdataByte() on the zksync system precompile contract. 
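Plugging the figures from the zkSync test above into the formula makes the arithmetic concrete: an L2 gas price of 25,000,000 wei and 1,100 gas per pubdata byte yield 25,000,000 × 1,100 = 27,500,000,000 wei (27.5 gwei) as the reported L1 gas price. A small sketch of that multiplication; the helper name is illustrative:

```go
package main

import (
	"fmt"
	"math/big"
)

// effectiveL1GasPrice reproduces the zkSync oracle's calculation:
// price = l2GasPrice * l2GasPerPubdataByte.
func effectiveL1GasPrice(l2GasPrice, gasPerPubdataByte *big.Int) *big.Int {
	return new(big.Int).Mul(l2GasPrice, gasPerPubdataByte)
}

func main() {
	gasPriceL2 := big.NewInt(25_000_000) // wei, as returned by SystemContext.gasPrice()
	gasPerPubByteL2 := big.NewInt(1_100) // as returned by SystemContext.gasPerPubdataByte()
	fmt.Println(effectiveL1GasPrice(gasPriceL2, gasPerPubByteL2)) // 27500000000 (27.5 gwei)
}
```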
+// +// @return (The current gas per pubdata byte on L2) +// function gasPerPubdataByte() external view returns (uint256 gasPerPubdataByte); +// +// https://github.com/matter-labs/era-contracts/blob/12a7d3bc1777ae5663e7525b2628061502755cbd/system-contracts/contracts/interfaces/ISystemContext.sol#L58C14-L58C31 + +func (o *zkSyncL1Oracle) GetL2GasPerPubDataBytes(ctx context.Context) (gasPerPubByteL2 *big.Int, err error) { + precompile := common.HexToAddress(o.systemContextAddress) + method, err := hex.DecodeString(ZksyncGasInfo_getGasPerPubdataByteL2[2:]) + if err != nil { + return common.Big0, fmt.Errorf("cannot decode method: %w", err) + } + b, err := o.client.CallContract(ctx, ethereum.CallMsg{ + To: &precompile, + Data: method, + }, nil) + if err != nil { + return common.Big0, fmt.Errorf("cannot fetch gasPerPubdataByte from zkSync SystemContract: %w", err) + } + + if len(b) != 1*32 { // uint256 gasPerPubdataByte; + err = fmt.Errorf("return data length (%d) different than expected (%d)", len(b), 3*32) + return + } + gasPerPubByteL2 = new(big.Int).SetBytes(b) + return +} diff --git a/core/chains/evm/headtracker/head_saver_test.go b/core/chains/evm/headtracker/head_saver_test.go index e53ea0cd62..2deeaa3528 100644 --- a/core/chains/evm/headtracker/head_saver_test.go +++ b/core/chains/evm/headtracker/head_saver_test.go @@ -36,6 +36,13 @@ func (h *headTrackerConfig) MaxBufferSize() uint32 { return uint32(0) } +func (h *headTrackerConfig) FinalityTagBypass() bool { + return false +} +func (h *headTrackerConfig) MaxAllowedFinalityDepth() uint32 { + return 10000 +} + type config struct { finalityDepth uint32 blockEmissionIdleWarningThreshold time.Duration diff --git a/core/chains/evm/headtracker/head_tracker_test.go b/core/chains/evm/headtracker/head_tracker_test.go index bf2b984b54..b0d9a50da5 100644 --- a/core/chains/evm/headtracker/head_tracker_test.go +++ b/core/chains/evm/headtracker/head_tracker_test.go @@ -205,11 +205,27 @@ func TestHeadTracker_Start(t *testing.T) { t.Parallel() const historyDepth = 100 - newHeadTracker := func(t *testing.T) *headTrackerUniverse { + const finalityDepth = 50 + type opts struct { + FinalityTagEnable *bool + MaxAllowedFinalityDepth *uint32 + FinalityTagBypass *bool + } + newHeadTracker := func(t *testing.T, opts opts) *headTrackerUniverse { db := pgtest.NewSqlxDB(t) gCfg := configtest.NewGeneralConfig(t, func(c *chainlink.Config, _ *chainlink.Secrets) { - c.EVM[0].FinalityTagEnabled = ptr[bool](true) + if opts.FinalityTagEnable != nil { + c.EVM[0].FinalityTagEnabled = opts.FinalityTagEnable + } c.EVM[0].HeadTracker.HistoryDepth = ptr[uint32](historyDepth) + c.EVM[0].FinalityDepth = ptr[uint32](finalityDepth) + if opts.MaxAllowedFinalityDepth != nil { + c.EVM[0].HeadTracker.MaxAllowedFinalityDepth = opts.MaxAllowedFinalityDepth + } + + if opts.FinalityTagBypass != nil { + c.EVM[0].HeadTracker.FinalityTagBypass = opts.FinalityTagBypass + } }) config := evmtest.NewChainScopedConfig(t, gCfg) orm := headtracker.NewORM(cltest.FixtureChainID, db) @@ -219,7 +235,7 @@ func TestHeadTracker_Start(t *testing.T) { t.Run("Fail start if context was canceled", func(t *testing.T) { ctx, cancel := context.WithCancel(testutils.Context(t)) - ht := newHeadTracker(t) + ht := newHeadTracker(t, opts{}) ht.ethClient.On("HeadByNumber", mock.Anything, (*big.Int)(nil)).Run(func(args mock.Arguments) { cancel() }).Return(cltest.Head(0), context.Canceled) @@ -227,19 +243,19 @@ func TestHeadTracker_Start(t *testing.T) { require.ErrorIs(t, err, context.Canceled) }) t.Run("Starts even if 
failed to get initialHead", func(t *testing.T) { - ht := newHeadTracker(t) + ht := newHeadTracker(t, opts{}) ht.ethClient.On("HeadByNumber", mock.Anything, (*big.Int)(nil)).Return(cltest.Head(0), errors.New("failed to get init head")) ht.Start(t) tests.AssertLogEventually(t, ht.observer, "Error handling initial head") }) t.Run("Starts even if received invalid head", func(t *testing.T) { - ht := newHeadTracker(t) + ht := newHeadTracker(t, opts{}) ht.ethClient.On("HeadByNumber", mock.Anything, (*big.Int)(nil)).Return(nil, nil) ht.Start(t) tests.AssertLogEventually(t, ht.observer, "Got nil initial head") }) t.Run("Starts even if fails to get finalizedHead", func(t *testing.T) { - ht := newHeadTracker(t) + ht := newHeadTracker(t, opts{FinalityTagEnable: ptr(true), FinalityTagBypass: ptr(false)}) head := cltest.Head(1000) ht.ethClient.On("HeadByNumber", mock.Anything, (*big.Int)(nil)).Return(head, nil).Once() ht.ethClient.On("LatestFinalizedBlock", mock.Anything).Return(nil, errors.New("failed to load latest finalized")).Once() @@ -247,16 +263,31 @@ func TestHeadTracker_Start(t *testing.T) { tests.AssertLogEventually(t, ht.observer, "Error handling initial head") }) t.Run("Starts even if latest finalizedHead is nil", func(t *testing.T) { - ht := newHeadTracker(t) + ht := newHeadTracker(t, opts{FinalityTagEnable: ptr(true), FinalityTagBypass: ptr(false)}) head := cltest.Head(1000) ht.ethClient.On("HeadByNumber", mock.Anything, (*big.Int)(nil)).Return(head, nil).Once() ht.ethClient.On("LatestFinalizedBlock", mock.Anything).Return(nil, nil).Once() + ht.ethClient.On("SubscribeNewHead", mock.Anything, mock.Anything).Return(nil, errors.New("failed to connect")).Maybe() ht.Start(t) tests.AssertLogEventually(t, ht.observer, "Error handling initial head") }) - t.Run("Happy path", func(t *testing.T) { + t.Run("Logs error if finality gap is too big", func(t *testing.T) { + ht := newHeadTracker(t, opts{FinalityTagEnable: ptr(true), FinalityTagBypass: ptr(false), MaxAllowedFinalityDepth: ptr(uint32(10))}) + head := cltest.Head(1000) + ht.ethClient.On("HeadByNumber", mock.Anything, (*big.Int)(nil)).Return(head, nil).Once() + ht.ethClient.On("LatestFinalizedBlock", mock.Anything).Return(cltest.Head(989), nil).Once() + ht.ethClient.On("SubscribeNewHead", mock.Anything, mock.Anything).Return(nil, errors.New("failed to connect")).Maybe() + ht.Start(t) + tests.AssertEventually(t, func() bool { + // must exactly match the error passed to logger + field := zap.String("err", "failed to calculate latest finalized head: gap between latest finalized block (989) and current head (1000) is too large (> 10)") + filtered := ht.observer.FilterMessage("Error handling initial head").FilterField(field) + return filtered.Len() > 0 + }) + }) + t.Run("Happy path (finality tag)", func(t *testing.T) { head := cltest.Head(1000) - ht := newHeadTracker(t) + ht := newHeadTracker(t, opts{FinalityTagEnable: ptr(true), FinalityTagBypass: ptr(false)}) ctx := testutils.Context(t) require.NoError(t, ht.orm.IdempotentInsertHead(ctx, cltest.Head(799))) ht.ethClient.On("HeadByNumber", mock.Anything, (*big.Int)(nil)).Return(head, nil).Once() @@ -265,9 +296,46 @@ func TestHeadTracker_Start(t *testing.T) { ht.ethClient.On("LatestFinalizedBlock", mock.Anything).Return(finalizedHead, nil).Once() // on backfill ht.ethClient.On("LatestFinalizedBlock", mock.Anything).Return(nil, errors.New("backfill call to finalized failed")).Maybe() + ht.ethClient.On("SubscribeNewHead", mock.Anything, mock.Anything).Return(nil, errors.New("failed to 
connect")).Maybe() ht.Start(t) tests.AssertLogEventually(t, ht.observer, "Loaded chain from DB") }) + happyPathFD := func(t *testing.T, opts opts) { + head := cltest.Head(1000) + ht := newHeadTracker(t, opts) + ht.ethClient.On("HeadByNumber", mock.Anything, (*big.Int)(nil)).Return(head, nil).Once() + finalizedHead := cltest.Head(head.Number - finalityDepth) + ht.ethClient.On("HeadByNumber", mock.Anything, big.NewInt(finalizedHead.Number)).Return(finalizedHead, nil).Once() + ctx := testutils.Context(t) + require.NoError(t, ht.orm.IdempotentInsertHead(ctx, cltest.Head(finalizedHead.Number-1))) + // on backfill + ht.ethClient.On("HeadByNumber", mock.Anything, mock.Anything).Return(nil, errors.New("backfill call to finalized failed")).Maybe() + ht.ethClient.On("SubscribeNewHead", mock.Anything, mock.Anything).Return(nil, errors.New("failed to connect")).Maybe() + ht.Start(t) + tests.AssertLogEventually(t, ht.observer, "Loaded chain from DB") + } + testCases := []struct { + Name string + Opts opts + }{ + { + Name: "Happy path (Chain FT is disabled & HeadTracker's FT is disabled)", + Opts: opts{FinalityTagEnable: ptr(false), FinalityTagBypass: ptr(true)}, + }, + { + Name: "Happy path (Chain FT is disabled & HeadTracker's FT is enabled, but ignored)", + Opts: opts{FinalityTagEnable: ptr(false), FinalityTagBypass: ptr(false)}, + }, + { + Name: "Happy path (Chain FT is enabled & HeadTracker's FT is disabled)", + Opts: opts{FinalityTagEnable: ptr(true), FinalityTagBypass: ptr(true)}, + }, + } + for _, tc := range testCases { + t.Run(tc.Name, func(t *testing.T) { + happyPathFD(t, tc.Opts) + }) + } } func TestHeadTracker_CallsHeadTrackableCallbacks(t *testing.T) { diff --git a/core/config/docs/chains-evm.toml b/core/config/docs/chains-evm.toml index 0039c9ac27..745a69b377 100644 --- a/core/config/docs/chains-evm.toml +++ b/core/config/docs/chains-evm.toml @@ -302,6 +302,14 @@ MaxBufferSize = 3 # Default # **ADVANCED** # SamplingInterval means that head tracker callbacks will at maximum be made once in every window of this duration. This is a performance optimisation for fast chains. Set to 0 to disable sampling entirely. SamplingInterval = '1s' # Default +# FinalityTagBypass disables FinalityTag support in HeadTracker and makes it track blocks up to FinalityDepth from the most recent head. +# It should only be used on chains with an extremely large actual finality depth (the number of blocks between the most recent head and the latest finalized block). +# Has no effect if `FinalityTagsEnabled` = false +FinalityTagBypass = true # Default +# MaxAllowedFinalityDepth - defines maximum number of blocks between the most recent head and the latest finalized block. +# If actual finality depth exceeds this number, HeadTracker aborts backfill and returns an error. 
+# Has no effect if `FinalityTagsEnabled` = false +MaxAllowedFinalityDepth = 10000 # Default [[EVM.KeySpecific]] # Key is the account to apply these settings to diff --git a/core/config/docs/docs_test.go b/core/config/docs/docs_test.go index 8c6429fd0d..8cb87976a4 100644 --- a/core/config/docs/docs_test.go +++ b/core/config/docs/docs_test.go @@ -15,6 +15,7 @@ import ( stkcfg "github.com/smartcontractkit/chainlink-starknet/relayer/pkg/chainlink/config" "github.com/smartcontractkit/chainlink-common/pkg/config" + commonconfig "github.com/smartcontractkit/chainlink/v2/common/config" "github.com/smartcontractkit/chainlink/v2/core/chains/evm/assets" evmcfg "github.com/smartcontractkit/chainlink/v2/core/chains/evm/config/toml" "github.com/smartcontractkit/chainlink/v2/core/chains/evm/types" @@ -46,7 +47,7 @@ func TestDoc(t *testing.T) { fallbackDefaults := evmcfg.Defaults(nil) docDefaults := defaults.EVM[0].Chain - require.Equal(t, "", *docDefaults.ChainType) + require.Equal(t, commonconfig.ChainType(""), docDefaults.ChainType.ChainType()) docDefaults.ChainType = nil // clean up KeySpecific as a special case diff --git a/core/scripts/go.mod b/core/scripts/go.mod index fd483e659c..97c3cec2e4 100644 --- a/core/scripts/go.mod +++ b/core/scripts/go.mod @@ -21,7 +21,7 @@ require ( github.com/pkg/errors v0.9.1 github.com/prometheus/client_golang v1.17.0 github.com/shopspring/decimal v1.3.1 - github.com/smartcontractkit/chain-selectors v1.0.16 + github.com/smartcontractkit/chain-selectors v1.0.17 github.com/smartcontractkit/chainlink-automation v1.0.3 github.com/smartcontractkit/chainlink-common v0.1.7-0.20240607202129-4cef984f109f github.com/smartcontractkit/chainlink-vrf v0.0.0-20240222010609-cd67d123c772 diff --git a/core/scripts/go.sum b/core/scripts/go.sum index bef064541b..8ef0d66d93 100644 --- a/core/scripts/go.sum +++ b/core/scripts/go.sum @@ -1176,8 +1176,8 @@ github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMB github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE= github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ= github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ= -github.com/smartcontractkit/chain-selectors v1.0.16 h1:uVoitoL5KVqGbU89b6W9gECwIvcdZh/w8MI/9JfEoy8= -github.com/smartcontractkit/chain-selectors v1.0.16/go.mod h1:d4Hi+E1zqjy9HqMkjBE5q1vcG9VGgxf5VxiRHfzi2kE= +github.com/smartcontractkit/chain-selectors v1.0.17 h1:otOlYUnutS8oQBEAi9RLQICqZP0Nxy0k8vOZuSMJa4w= +github.com/smartcontractkit/chain-selectors v1.0.17/go.mod h1:d4Hi+E1zqjy9HqMkjBE5q1vcG9VGgxf5VxiRHfzi2kE= github.com/smartcontractkit/chainlink-automation v1.0.3 h1:h/ijT0NiyV06VxYVgcNfsE3+8OEzT3Q0Z9au0z1BPWs= github.com/smartcontractkit/chainlink-automation v1.0.3/go.mod h1:RjboV0Qd7YP+To+OrzHGXaxUxoSONveCoAK2TQ1INLU= github.com/smartcontractkit/chainlink-common v0.1.7-0.20240607202129-4cef984f109f h1:S79gMmLymYWPZC/zAOIY3QhgyD2cqPO+FdPernwJq/M= diff --git a/core/services/chainlink/config.go b/core/services/chainlink/config.go index b77a54f39a..9ba83dced0 100644 --- a/core/services/chainlink/config.go +++ b/core/services/chainlink/config.go @@ -12,7 +12,6 @@ import ( "github.com/smartcontractkit/chainlink-solana/pkg/solana" stkcfg "github.com/smartcontractkit/chainlink-starknet/relayer/pkg/chainlink/config" - commoncfg "github.com/smartcontractkit/chainlink/v2/common/config" evmcfg "github.com/smartcontractkit/chainlink/v2/core/chains/evm/config/toml" 
"github.com/smartcontractkit/chainlink/v2/core/config/docs" "github.com/smartcontractkit/chainlink/v2/core/config/env" @@ -79,10 +78,10 @@ func (c *Config) valueWarnings() (err error) { func (c *Config) deprecationWarnings() (err error) { // ChainType xdai is deprecated and has been renamed to gnosis for _, evm := range c.EVM { - if evm.ChainType != nil && *evm.ChainType == string(commoncfg.ChainXDai) { + if evm.ChainType != nil && evm.ChainType.Slug() == "xdai" { err = multierr.Append(err, config.ErrInvalid{ Name: "EVM.ChainType", - Value: *evm.ChainType, + Value: evm.ChainType.Slug(), Msg: "deprecated and will be removed in v2.13.0, use 'gnosis' instead", }) } diff --git a/core/services/chainlink/config_test.go b/core/services/chainlink/config_test.go index 9b2b099353..98e36d764f 100644 --- a/core/services/chainlink/config_test.go +++ b/core/services/chainlink/config_test.go @@ -25,9 +25,10 @@ import ( "github.com/smartcontractkit/chainlink-solana/pkg/solana" solcfg "github.com/smartcontractkit/chainlink-solana/pkg/solana/config" stkcfg "github.com/smartcontractkit/chainlink-starknet/relayer/pkg/chainlink/config" - commonconfig "github.com/smartcontractkit/chainlink/v2/common/config" + commonconfig "github.com/smartcontractkit/chainlink/v2/common/config" "github.com/smartcontractkit/chainlink/v2/core/chains/evm/assets" + "github.com/smartcontractkit/chainlink/v2/core/chains/evm/client" evmcfg "github.com/smartcontractkit/chainlink/v2/core/chains/evm/config/toml" "github.com/smartcontractkit/chainlink/v2/core/chains/evm/types" @@ -496,7 +497,7 @@ func TestConfig_Marshal(t *testing.T) { }, BlockBackfillDepth: ptr[uint32](100), BlockBackfillSkip: ptr(true), - ChainType: ptr("Optimism"), + ChainType: commonconfig.NewChainTypeConfig("Optimism"), FinalityDepth: ptr[uint32](42), FinalityTagEnabled: ptr[bool](false), FlagsContractAddress: mustAddress("0xae4E781a6218A8031764928E88d457937A954fC3"), @@ -571,9 +572,11 @@ func TestConfig_Marshal(t *testing.T) { }, HeadTracker: evmcfg.HeadTracker{ - HistoryDepth: ptr[uint32](15), - MaxBufferSize: ptr[uint32](17), - SamplingInterval: &hour, + HistoryDepth: ptr[uint32](15), + MaxBufferSize: ptr[uint32](17), + SamplingInterval: &hour, + FinalityTagBypass: ptr[bool](false), + MaxAllowedFinalityDepth: ptr[uint32](1500), }, NodePool: evmcfg.NodePool{ @@ -1031,6 +1034,8 @@ TransactionPercentile = 15 HistoryDepth = 15 MaxBufferSize = 17 SamplingInterval = '1h0m0s' +MaxAllowedFinalityDepth = 1500 +FinalityTagBypass = false [[EVM.KeySpecific]] Key = '0x2a3e23c6f242F5345320814aC8a1b4E58707D292' @@ -1263,7 +1268,7 @@ func TestConfig_Validate(t *testing.T) { - WSURL: missing: required for primary nodes - HTTPURL: missing: required for all nodes - 1.HTTPURL: missing: required for all nodes - - 1: 6 errors: + - 1: 7 errors: - ChainType: invalid value (Foo): must not be set with this chain id - Nodes: missing: must have at least one node - ChainType: invalid value (Foo): must be one of arbitrum, celo, gnosis, kroma, metis, optimismBedrock, scroll, wemix, xlayer, zksync or omitted @@ -1271,6 +1276,7 @@ func TestConfig_Validate(t *testing.T) { - GasEstimator: 2 errors: - FeeCapDefault: invalid value (101 wei): must be equal to PriceMax (99 wei) since you are using FixedPrice estimation with gas bumping disabled in EIP1559 mode - PriceMax will be used as the FeeCap for transactions instead of FeeCapDefault - PriceMax: invalid value (1 gwei): must be greater than or equal to PriceDefault + - HeadTracker.MaxAllowedFinalityDepth: invalid value (0): must be greater than or 
equal to 1 - KeySpecific.Key: invalid value (0xde709f2102306220921060314715629080e2fb77): duplicate - must be unique - 2: 5 errors: - ChainType: invalid value (Arbitrum): only "optimismBedrock" can be used with this chain id @@ -1628,7 +1634,7 @@ func TestConfig_warnings(t *testing.T) { { name: "Value warning - ChainType=xdai is deprecated", config: Config{ - EVM: evmcfg.EVMConfigs{{Chain: evmcfg.Chain{ChainType: ptr(string(commonconfig.ChainXDai))}}}, + EVM: evmcfg.EVMConfigs{{Chain: evmcfg.Chain{ChainType: commonconfig.NewChainTypeConfig("xdai")}}}, }, expectedErrors: []string{"EVM.ChainType: invalid value (xdai): deprecated and will be removed in v2.13.0, use 'gnosis' instead"}, }, diff --git a/core/services/chainlink/testdata/config-full.toml b/core/services/chainlink/testdata/config-full.toml index 5fdef3d604..5db9eaef83 100644 --- a/core/services/chainlink/testdata/config-full.toml +++ b/core/services/chainlink/testdata/config-full.toml @@ -333,6 +333,8 @@ TransactionPercentile = 15 HistoryDepth = 15 MaxBufferSize = 17 SamplingInterval = '1h0m0s' +MaxAllowedFinalityDepth = 1500 +FinalityTagBypass = false [[EVM.KeySpecific]] Key = '0x2a3e23c6f242F5345320814aC8a1b4E58707D292' diff --git a/core/services/chainlink/testdata/config-invalid.toml b/core/services/chainlink/testdata/config-invalid.toml index 7d1ed17c3c..40a4419fa2 100644 --- a/core/services/chainlink/testdata/config-invalid.toml +++ b/core/services/chainlink/testdata/config-invalid.toml @@ -63,6 +63,7 @@ PriceMax = 99 [EVM.HeadTracker] HistoryDepth = 30 +MaxAllowedFinalityDepth = 0 [[EVM.KeySpecific]] Key = '0xde709f2102306220921060314715629080e2fb77' diff --git a/core/services/chainlink/testdata/config-multi-chain-effective.toml b/core/services/chainlink/testdata/config-multi-chain-effective.toml index 01beacb0c8..73ea1d075f 100644 --- a/core/services/chainlink/testdata/config-multi-chain-effective.toml +++ b/core/services/chainlink/testdata/config-multi-chain-effective.toml @@ -310,6 +310,8 @@ TransactionPercentile = 50 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [EVM.NodePool] PollFailureThreshold = 5 @@ -401,6 +403,8 @@ TransactionPercentile = 50 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [EVM.NodePool] PollFailureThreshold = 5 @@ -486,6 +490,8 @@ TransactionPercentile = 60 HistoryDepth = 2000 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [EVM.NodePool] PollFailureThreshold = 5 diff --git a/core/services/keeper/delegate.go b/core/services/keeper/delegate.go index 91aacec266..a399ade1b0 100644 --- a/core/services/keeper/delegate.go +++ b/core/services/keeper/delegate.go @@ -94,7 +94,7 @@ func (d *Delegate) ServicesForSpec(ctx context.Context, spec job.Job) (services // In the case of forwarding, the keeper address is the forwarder contract deployed onchain between EOA and Registry. 
effectiveKeeperAddress := spec.KeeperSpec.FromAddress.Address() if spec.ForwardingAllowed { - fwdrAddress, fwderr := chain.TxManager().GetForwarderForEOA(spec.KeeperSpec.FromAddress.Address()) + fwdrAddress, fwderr := chain.TxManager().GetForwarderForEOA(ctx, spec.KeeperSpec.FromAddress.Address()) if fwderr == nil { effectiveKeeperAddress = fwdrAddress } else { diff --git a/core/services/keeper/integration_test.go b/core/services/keeper/integration_test.go index 9e4cf5f904..cbbe89b3f2 100644 --- a/core/services/keeper/integration_test.go +++ b/core/services/keeper/integration_test.go @@ -417,7 +417,7 @@ func TestKeeperForwarderEthIntegration(t *testing.T) { _, err = forwarderORM.CreateForwarder(ctx, fwdrAddress, chainID) require.NoError(t, err) - addr, err := app.GetRelayers().LegacyEVMChains().Slice()[0].TxManager().GetForwarderForEOA(nodeAddress) + addr, err := app.GetRelayers().LegacyEVMChains().Slice()[0].TxManager().GetForwarderForEOA(ctx, nodeAddress) require.NoError(t, err) require.Equal(t, addr, fwdrAddress) diff --git a/core/services/ocr/contract_tracker.go b/core/services/ocr/contract_tracker.go index 1d9076b832..94ad1237e9 100644 --- a/core/services/ocr/contract_tracker.go +++ b/core/services/ocr/contract_tracker.go @@ -400,7 +400,7 @@ func (t *OCRContractTracker) LatestBlockHeight(ctx context.Context) (blockheight // care about the block height; we have no way of getting the L1 block // height anyway return 0, nil - case "", config.ChainArbitrum, config.ChainCelo, config.ChainGnosis, config.ChainKroma, config.ChainOptimismBedrock, config.ChainScroll, config.ChainWeMix, config.ChainXDai, config.ChainXLayer, config.ChainZkSync: + case "", config.ChainArbitrum, config.ChainCelo, config.ChainGnosis, config.ChainKroma, config.ChainOptimismBedrock, config.ChainScroll, config.ChainWeMix, config.ChainXLayer, config.ChainZkSync: // continue } latestBlockHeight := t.getLatestBlockHeight() diff --git a/core/services/ocr/delegate.go b/core/services/ocr/delegate.go index 690e9ad7c7..a47e7ec9e7 100644 --- a/core/services/ocr/delegate.go +++ b/core/services/ocr/delegate.go @@ -14,7 +14,6 @@ import ( ocr "github.com/smartcontractkit/libocr/offchainreporting" ocrtypes "github.com/smartcontractkit/libocr/offchainreporting/types" - commonlogger "github.com/smartcontractkit/chainlink-common/pkg/logger" "github.com/smartcontractkit/chainlink-common/pkg/sqlutil" "github.com/smartcontractkit/chainlink-common/pkg/utils/mailbox" @@ -155,9 +154,10 @@ func (d *Delegate) ServicesForSpec(ctx context.Context, jb job.Job) (services [] v2Bootstrappers = peerWrapper.P2PConfig().V2().DefaultBootstrappers() } - ocrLogger := commonlogger.NewOCRWrapper(lggr, d.cfg.OCR().TraceLogging(), func(msg string) { + ocrLogger := ocrcommon.NewOCRWrapper(lggr, d.cfg.OCR().TraceLogging(), func(ctx context.Context, msg string) { d.jobORM.TryRecordError(ctx, jb.ID, msg) }) + services = append(services, ocrLogger) lc := toLocalConfig(chain.Config().EVM(), chain.Config().EVM().OCR(), d.cfg.Insecure(), *concreteSpec, d.cfg.OCR()) if err = ocr.SanityCheckLocalConfig(lc); err != nil { @@ -216,7 +216,7 @@ func (d *Delegate) ServicesForSpec(ctx context.Context, jb job.Job) (services [] // In the case of forwarding, the transmitter address is the forwarder contract deployed onchain between EOA and OCR contract. 
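Both the keeper and OCR delegates use the same fallback: ask the TxManager for a forwarder and, if the lookup fails, keep transmitting from the EOA. A stripped-down sketch of that decision; `forwarderLookup` and `effectiveSender` are illustrative stand-ins, not the real interfaces:

```go
package main

import (
	"context"
	"errors"
	"fmt"

	"github.com/ethereum/go-ethereum/common"
)

// forwarderLookup stands in for TxManager().GetForwarderForEOA.
type forwarderLookup func(ctx context.Context, eoa common.Address) (common.Address, error)

// effectiveSender returns the forwarder when one is configured and resolvable,
// otherwise it falls back to the EOA itself.
func effectiveSender(ctx context.Context, eoa common.Address, forwardingAllowed bool, lookup forwarderLookup) common.Address {
	if !forwardingAllowed {
		return eoa
	}
	if fwdr, err := lookup(ctx, eoa); err == nil {
		return fwdr
	}
	return eoa // e.g. ErrForwarderForEOANotFound: keep using the EOA
}

func main() {
	eoa := common.HexToAddress("0x7e57000000000000000000000000000000000001")
	noForwarder := func(ctx context.Context, _ common.Address) (common.Address, error) {
		return common.Address{}, errors.New("cannot find forwarder for given EOA")
	}
	fmt.Println(effectiveSender(context.Background(), eoa, true, noForwarder)) // falls back to the EOA
}
```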
effectiveTransmitterAddress := concreteSpec.TransmitterAddress.Address() if jb.ForwardingAllowed { - fwdrAddress, fwderr := chain.TxManager().GetForwarderForEOA(effectiveTransmitterAddress) + fwdrAddress, fwderr := chain.TxManager().GetForwarderForEOA(ctx, effectiveTransmitterAddress) if fwderr == nil { effectiveTransmitterAddress = fwdrAddress } else { diff --git a/core/services/ocr2/delegate.go b/core/services/ocr2/delegate.go index a6ca04a3e3..398c6a94f7 100644 --- a/core/services/ocr2/delegate.go +++ b/core/services/ocr2/delegate.go @@ -401,7 +401,7 @@ func (d *Delegate) ServicesForSpec(ctx context.Context, jb job.Job) ([]job.Servi if err2 != nil { return nil, fmt.Errorf("ServicesForSpec: could not get EVM chain %s: %w", rid.ChainID, err2) } - effectiveTransmitterID, err2 = GetEVMEffectiveTransmitterID(&jb, chain, lggr) + effectiveTransmitterID, err2 = GetEVMEffectiveTransmitterID(ctx, &jb, chain, lggr) if err2 != nil { return nil, fmt.Errorf("ServicesForSpec failed to get evm transmitterID: %w", err2) } @@ -416,10 +416,6 @@ func (d *Delegate) ServicesForSpec(ctx context.Context, jb job.Job) ([]job.Servi return nil, errors.New("peerWrapper is not started. OCR2 jobs require a started and running p2p v2 peer") } - ocrLogger := commonlogger.NewOCRWrapper(lggr, d.cfg.OCR2().TraceLogging(), func(msg string) { - lggr.ErrorIf(d.jobORM.RecordError(ctx, jb.ID, msg), "unable to record error") - }) - lc, err := validate.ToLocalConfig(d.cfg.OCR2(), d.cfg.Insecure(), *spec) if err != nil { return nil, err @@ -457,22 +453,22 @@ func (d *Delegate) ServicesForSpec(ctx context.Context, jb job.Job) ([]job.Servi ctx = lggrCtx.ContextWithValues(ctx) switch spec.PluginType { case types.Mercury: - return d.newServicesMercury(ctx, lggr, jb, bootstrapPeers, kb, ocrDB, lc, ocrLogger) + return d.newServicesMercury(ctx, lggr, jb, bootstrapPeers, kb, ocrDB, lc) case types.LLO: - return d.newServicesLLO(ctx, lggr, jb, bootstrapPeers, kb, ocrDB, lc, ocrLogger) + return d.newServicesLLO(ctx, lggr, jb, bootstrapPeers, kb, ocrDB, lc) case types.Median: - return d.newServicesMedian(ctx, lggr, jb, bootstrapPeers, kb, kvStore, ocrDB, lc, ocrLogger) + return d.newServicesMedian(ctx, lggr, jb, bootstrapPeers, kb, kvStore, ocrDB, lc) case types.DKG: - return d.newServicesDKG(lggr, jb, bootstrapPeers, kb, ocrDB, lc, ocrLogger) + return d.newServicesDKG(lggr, jb, bootstrapPeers, kb, ocrDB, lc) case types.OCR2VRF: return d.newServicesOCR2VRF(ctx, lggr, jb, bootstrapPeers, kb, ocrDB, lc) case types.OCR2Keeper: - return d.newServicesOCR2Keepers(ctx, lggr, jb, bootstrapPeers, kb, ocrDB, lc, ocrLogger) + return d.newServicesOCR2Keepers(ctx, lggr, jb, bootstrapPeers, kb, ocrDB, lc) case types.Functions: const ( @@ -482,10 +478,10 @@ func (d *Delegate) ServicesForSpec(ctx context.Context, jb job.Job) ([]job.Servi ) thresholdPluginDB := NewDB(d.ds, spec.ID, thresholdPluginId, lggr) s4PluginDB := NewDB(d.ds, spec.ID, s4PluginId, lggr) - return d.newServicesOCR2Functions(ctx, lggr, jb, bootstrapPeers, kb, ocrDB, thresholdPluginDB, s4PluginDB, lc, ocrLogger) + return d.newServicesOCR2Functions(ctx, lggr, jb, bootstrapPeers, kb, ocrDB, thresholdPluginDB, s4PluginDB, lc) case types.GenericPlugin: - return d.newServicesGenericPlugin(ctx, lggr, jb, bootstrapPeers, kb, ocrDB, lc, ocrLogger, d.capabilitiesRegistry, + return d.newServicesGenericPlugin(ctx, lggr, jb, bootstrapPeers, kb, ocrDB, lc, d.capabilitiesRegistry, kvStore) case types.CCIPCommit: @@ -499,7 +495,7 @@ func (d *Delegate) ServicesForSpec(ctx context.Context, jb job.Job) 
([]job.Servi } } -func GetEVMEffectiveTransmitterID(jb *job.Job, chain legacyevm.Chain, lggr logger.SugaredLogger) (string, error) { +func GetEVMEffectiveTransmitterID(ctx context.Context, jb *job.Job, chain legacyevm.Chain, lggr logger.SugaredLogger) (string, error) { spec := jb.OCR2OracleSpec if spec.PluginType == types.Mercury || spec.PluginType == types.LLO { return spec.TransmitterID.String, nil @@ -525,14 +521,22 @@ func GetEVMEffectiveTransmitterID(jb *job.Job, chain legacyevm.Chain, lggr logge if chain == nil { return "", fmt.Errorf("job forwarding requires non-nil chain") } - effectiveTransmitterID, err := chain.TxManager().GetForwarderForEOA(common.HexToAddress(spec.TransmitterID.String)) + + var err error + var effectiveTransmitterID common.Address + // Median forwarders need special handling because of OCR2Aggregator transmitters whitelist. + if spec.PluginType == types.Median { + effectiveTransmitterID, err = chain.TxManager().GetForwarderForEOAOCR2Feeds(ctx, common.HexToAddress(spec.TransmitterID.String), common.HexToAddress(spec.ContractID)) + } else { + effectiveTransmitterID, err = chain.TxManager().GetForwarderForEOA(ctx, common.HexToAddress(spec.TransmitterID.String)) + } + if err == nil { return effectiveTransmitterID.String(), nil } else if !spec.TransmitterID.Valid { return "", errors.New("failed to get forwarder address and transmitterID is not set") } lggr.Warnw("Skipping forwarding for job, will fallback to default behavior", "job", jb.Name, "err", err) - // this shouldn't happen unless behaviour above was changed } return spec.TransmitterID.String, nil @@ -550,7 +554,6 @@ func (d *Delegate) newServicesGenericPlugin( kb ocr2key.KeyBundle, ocrDB *db, lc ocrtypes.LocalConfig, - ocrLogger commontypes.Logger, capabilitiesRegistry core.CapabilitiesRegistry, keyValueStore core.KeyValueStore, ) (srvs []job.ServiceCtx, err error) { @@ -680,6 +683,11 @@ func (d *Delegate) newServicesGenericPlugin( synchronization.TelemetryType(pCfg.TelemetryType), ) + ocrLogger := ocrcommon.NewOCRWrapper(lggr, d.cfg.OCR2().TraceLogging(), func(ctx context.Context, msg string) { + lggr.ErrorIf(d.jobORM.RecordError(ctx, jb.ID, msg), "unable to record error") + }) + srvs = append(srvs, ocrLogger) + switch pCfg.OCRVersion { case 2: plugin := reportingplugins.NewLOOPPService(pluginLggr, grpcOpts, cmdFn, pluginConfig, providerClientConn, pr, ta, @@ -748,7 +756,6 @@ func (d *Delegate) newServicesMercury( kb ocr2key.KeyBundle, ocrDB *db, lc ocrtypes.LocalConfig, - ocrLogger commontypes.Logger, ) ([]job.ServiceCtx, error) { if jb.OCR2OracleSpec.FeedID == nil || (*jb.OCR2OracleSpec.FeedID == (common.Hash{})) { return nil, errors.Errorf("ServicesForSpec: mercury job type requires feedID") @@ -800,6 +807,10 @@ func (d *Delegate) newServicesMercury( // https://smartcontract-it.atlassian.net/browse/MERC-3386 lc.ContractConfigTrackerPollInterval = 1 * time.Second // Mercury requires a fast poll interval, this is the fastest that libocr supports. 
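The Median-specific branch above exists because the OCR2Aggregator keeps its own transmitter whitelist, so a forwarder that works for other OCR2 job types may still be rejected by the feed contract. A hedged sketch of the branching, with the two lookups reduced to function values and all names illustrative:

```go
package main

import (
	"context"
	"fmt"

	"github.com/ethereum/go-ethereum/common"
)

// Stand-ins for TxManager().GetForwarderForEOA and GetForwarderForEOAOCR2Feeds.
type (
	lookupForEOA       func(ctx context.Context, eoa common.Address) (common.Address, error)
	lookupForOCR2Feeds func(ctx context.Context, eoa, aggregator common.Address) (common.Address, error)
)

// resolveForwarder mirrors the branching above: Median jobs must validate the forwarder
// against the OCR2Aggregator transmitter whitelist, everything else uses the plain lookup.
func resolveForwarder(ctx context.Context, isMedian bool, eoa, contract common.Address,
	plain lookupForEOA, feeds lookupForOCR2Feeds) (common.Address, error) {
	if isMedian {
		return feeds(ctx, eoa, contract)
	}
	return plain(ctx, eoa)
}

func main() {
	fwdr := common.HexToAddress("0x02")
	plain := func(ctx context.Context, _ common.Address) (common.Address, error) { return fwdr, nil }
	feeds := func(ctx context.Context, _, _ common.Address) (common.Address, error) { return fwdr, nil }
	addr, err := resolveForwarder(context.Background(), true,
		common.HexToAddress("0x01"), common.HexToAddress("0x03"), plain, feeds)
	fmt.Println(addr, err)
}
```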
See: https://github.com/smartcontractkit/offchain-reporting/pull/520 + ocrLogger := ocrcommon.NewOCRWrapper(lggr, d.cfg.OCR2().TraceLogging(), func(ctx context.Context, msg string) { + lggr.ErrorIf(d.jobORM.RecordError(ctx, jb.ID, msg), "unable to record error") + }) + oracleArgsNoPlugin := libocr2.MercuryOracleArgs{ BinaryNetworkEndpointFactory: d.peerWrapper.Peer2, V2Bootstrappers: bootstrapPeers, @@ -828,6 +839,8 @@ func (d *Delegate) newServicesMercury( lggr.Infow("Enhanced telemetry is disabled for mercury job", "job", jb.Name) } + mercuryServices = append(mercuryServices, ocrLogger) + return mercuryServices, err2 } @@ -839,7 +852,6 @@ func (d *Delegate) newServicesLLO( kb ocr2key.KeyBundle, ocrDB *db, lc ocrtypes.LocalConfig, - ocrLogger commontypes.Logger, ) ([]job.ServiceCtx, error) { lggr = logger.Sugared(lggr.Named("LLO")) spec := jb.OCR2OracleSpec @@ -931,6 +943,10 @@ func (d *Delegate) newServicesLLO( lggr.Infof("Using on-chain signing keys for LLO job %d (%s): %v", jb.ID, jb.Name.ValueOrZero(), kbm) kr := llo.NewOnchainKeyring(lggr, kbm) + ocrLogger := ocrcommon.NewOCRWrapper(lggr, d.cfg.OCR2().TraceLogging(), func(ctx context.Context, msg string) { + lggr.ErrorIf(d.jobORM.RecordError(ctx, jb.ID, msg), "unable to record error") + }) + cfg := llo.DelegateConfig{ Logger: lggr, DataSource: d.ds, @@ -959,7 +975,7 @@ func (d *Delegate) newServicesLLO( if err != nil { return nil, err } - return []job.ServiceCtx{provider, oracle}, nil + return []job.ServiceCtx{provider, ocrLogger, oracle}, nil } func (d *Delegate) newServicesMedian( @@ -971,7 +987,6 @@ func (d *Delegate) newServicesMedian( kvStore job.KVStore, ocrDB *db, lc ocrtypes.LocalConfig, - ocrLogger commontypes.Logger, ) ([]job.ServiceCtx, error) { spec := jb.OCR2OracleSpec @@ -980,6 +995,10 @@ func (d *Delegate) newServicesMedian( return nil, ErrJobSpecNoRelayer{Err: err, PluginName: "median"} } + ocrLogger := ocrcommon.NewOCRWrapper(lggr, d.cfg.OCR2().TraceLogging(), func(ctx context.Context, msg string) { + lggr.ErrorIf(d.jobORM.RecordError(ctx, jb.ID, msg), "unable to record error") + }) + oracleArgsNoPlugin := libocr2.OCR2OracleArgs{ BinaryNetworkEndpointFactory: d.peerWrapper.Peer2, V2Bootstrappers: bootstrapPeers, @@ -1013,6 +1032,8 @@ func (d *Delegate) newServicesMedian( lggr.Infow("Enhanced telemetry is disabled for job", "job", jb.Name) } + medianServices = append(medianServices, ocrLogger) + return medianServices, err2 } @@ -1023,7 +1044,6 @@ func (d *Delegate) newServicesDKG( kb ocr2key.KeyBundle, ocrDB *db, lc ocrtypes.LocalConfig, - ocrLogger commontypes.Logger, ) ([]job.ServiceCtx, error) { spec := jb.OCR2OracleSpec rid, err := spec.RelayID() @@ -1053,6 +1073,9 @@ func (d *Delegate) newServicesDKG( if err2 != nil { return nil, err2 } + ocrLogger := ocrcommon.NewOCRWrapper(lggr, d.cfg.OCR2().TraceLogging(), func(ctx context.Context, msg string) { + lggr.ErrorIf(d.jobORM.RecordError(ctx, jb.ID, msg), "unable to record error") + }) noopMonitoringEndpoint := telemetry.NoopAgent{} oracleArgsNoPlugin := libocr2.OCR2OracleArgs{ BinaryNetworkEndpointFactory: d.peerWrapper.Peer2, @@ -1069,7 +1092,12 @@ func (d *Delegate) newServicesDKG( OnchainKeyring: kb, MetricsRegisterer: prometheus.WrapRegistererWith(map[string]string{"job_name": jb.Name.ValueOrZero()}, prometheus.DefaultRegisterer), } - return dkg.NewDKGServices(jb, dkgProvider, lggr, ocrLogger, d.dkgSignKs, d.dkgEncryptKs, chain.Client(), oracleArgsNoPlugin, d.ds, chain.ID(), spec.Relay) + services, err := dkg.NewDKGServices(jb, dkgProvider, lggr, ocrLogger, 
d.dkgSignKs, d.dkgEncryptKs, chain.Client(), oracleArgsNoPlugin, d.ds, chain.ID(), spec.Relay) + if err != nil { + return nil, err + } + services = append(services, ocrLogger) + return services, nil } func (d *Delegate) newServicesOCR2VRF( @@ -1192,12 +1220,10 @@ func (d *Delegate) newServicesOCR2VRF( "jobName", jb.Name.ValueOrZero(), "jobID", jb.ID, ) - vrfLogger := commonlogger.NewOCRWrapper(l.With( - "vrfContractID", spec.ContractID), d.cfg.OCR2().TraceLogging(), func(msg string) { + vrfLogger := ocrcommon.NewOCRWrapper(l.With("vrfContractID", spec.ContractID), d.cfg.OCR2().TraceLogging(), func(ctx context.Context, msg string) { lggr.ErrorIf(d.jobORM.RecordError(ctx, jb.ID, msg), "unable to record error") }) - dkgLogger := commonlogger.NewOCRWrapper(l.With( - "dkgContractID", cfg.DKGContractAddress), d.cfg.OCR2().TraceLogging(), func(msg string) { + dkgLogger := ocrcommon.NewOCRWrapper(l.With("dkgContractID", cfg.DKGContractAddress), d.cfg.OCR2().TraceLogging(), func(ctx context.Context, msg string) { lggr.ErrorIf(d.jobORM.RecordError(ctx, jb.ID, msg), "unable to record error") }) dkgReportingPluginFactoryDecorator := func(wrapped ocrtypes.ReportingPluginFactory) ocrtypes.ReportingPluginFactory { @@ -1258,7 +1284,6 @@ func (d *Delegate) newServicesOCR2Keepers( kb ocr2key.KeyBundle, ocrDB *db, lc ocrtypes.LocalConfig, - ocrLogger commontypes.Logger, ) ([]job.ServiceCtx, error) { spec := jb.OCR2OracleSpec var cfg ocr2keeper.PluginConfig @@ -1272,14 +1297,14 @@ func (d *Delegate) newServicesOCR2Keepers( switch cfg.ContractVersion { case "v2.1": - return d.newServicesOCR2Keepers21(ctx, lggr, jb, bootstrapPeers, kb, ocrDB, lc, ocrLogger, cfg, spec) + return d.newServicesOCR2Keepers21(ctx, lggr, jb, bootstrapPeers, kb, ocrDB, lc, cfg, spec) case "v2.1+": // Future contracts of v2.1 (v2.x) will use the same job spec as v2.1 - return d.newServicesOCR2Keepers21(ctx, lggr, jb, bootstrapPeers, kb, ocrDB, lc, ocrLogger, cfg, spec) + return d.newServicesOCR2Keepers21(ctx, lggr, jb, bootstrapPeers, kb, ocrDB, lc, cfg, spec) case "v2.0": - return d.newServicesOCR2Keepers20(ctx, lggr, jb, bootstrapPeers, kb, ocrDB, lc, ocrLogger, cfg, spec) + return d.newServicesOCR2Keepers20(ctx, lggr, jb, bootstrapPeers, kb, ocrDB, lc, cfg, spec) default: - return d.newServicesOCR2Keepers20(ctx, lggr, jb, bootstrapPeers, kb, ocrDB, lc, ocrLogger, cfg, spec) + return d.newServicesOCR2Keepers20(ctx, lggr, jb, bootstrapPeers, kb, ocrDB, lc, cfg, spec) } } @@ -1291,7 +1316,6 @@ func (d *Delegate) newServicesOCR2Keepers21( kb ocr2key.KeyBundle, ocrDB *db, lc ocrtypes.LocalConfig, - ocrLogger commontypes.Logger, cfg ocr2keeper.PluginConfig, spec *job.OCR2OracleSpec, ) ([]job.ServiceCtx, error) { @@ -1373,6 +1397,9 @@ func (d *Delegate) newServicesOCR2Keepers21( if cfg.ServiceQueueLength != 0 { conf.ServiceQueueLength = cfg.ServiceQueueLength } + ocrLogger := ocrcommon.NewOCRWrapper(lggr, d.cfg.OCR2().TraceLogging(), func(ctx context.Context, msg string) { + lggr.ErrorIf(d.jobORM.RecordError(ctx, jb.ID, msg), "unable to record error") + }) dConf := ocr2keepers21.DelegateConfig{ BinaryNetworkEndpointFactory: d.peerWrapper.Peer2, @@ -1419,6 +1446,7 @@ func (d *Delegate) newServicesOCR2Keepers21( keeperProvider.UpkeepStateStore(), keeperProvider.TransmitEventProvider(), pluginService, + ocrLogger, } if cfg.CaptureAutomationCustomTelemetry != nil && *cfg.CaptureAutomationCustomTelemetry || @@ -1447,7 +1475,6 @@ func (d *Delegate) newServicesOCR2Keepers20( kb ocr2key.KeyBundle, ocrDB *db, lc ocrtypes.LocalConfig, - ocrLogger 
commontypes.Logger, cfg ocr2keeper.PluginConfig, spec *job.OCR2OracleSpec, ) ([]job.ServiceCtx, error) { @@ -1523,6 +1550,10 @@ func (d *Delegate) newServicesOCR2Keepers20( CacheClean: conf.CacheEvictionInterval, } + ocrLogger := ocrcommon.NewOCRWrapper(lggr, d.cfg.OCR2().TraceLogging(), func(ctx context.Context, msg string) { + lggr.ErrorIf(d.jobORM.RecordError(ctx, jb.ID, msg), "unable to record error") + }) + dConf := ocr2keepers20.DelegateConfig{ BinaryNetworkEndpointFactory: d.peerWrapper.Peer2, V2Bootstrappers: bootstrapPeers, @@ -1557,6 +1588,7 @@ func (d *Delegate) newServicesOCR2Keepers20( keeperProvider, rgstry, logProvider, + ocrLogger, pluginService, }, nil } @@ -1571,7 +1603,6 @@ func (d *Delegate) newServicesOCR2Functions( thresholdOcrDB *db, s4OcrDB *db, lc ocrtypes.LocalConfig, - ocrLogger commontypes.Logger, ) ([]job.ServiceCtx, error) { spec := jb.OCR2OracleSpec @@ -1622,6 +1653,10 @@ func (d *Delegate) newServicesOCR2Functions( return nil, err } + ocrLogger := ocrcommon.NewOCRWrapper(lggr, d.cfg.OCR2().TraceLogging(), func(ctx context.Context, msg string) { + lggr.ErrorIf(d.jobORM.RecordError(ctx, jb.ID, msg), "unable to record error") + }) + functionsOracleArgs := libocr2.OCR2OracleArgs{ BinaryNetworkEndpointFactory: d.peerWrapper.Peer2, V2Bootstrappers: bootstrapPeers, @@ -1707,7 +1742,7 @@ func (d *Delegate) newServicesOCR2Functions( return nil, errors.Wrap(err, "error calling NewFunctionsServices") } - return append([]job.ServiceCtx{functionsProvider, thresholdProvider, s4Provider}, functionsServices...), nil + return append([]job.ServiceCtx{functionsProvider, thresholdProvider, s4Provider, ocrLogger}, functionsServices...), nil } func (d *Delegate) newServicesCCIPCommit(ctx context.Context, lggr logger.SugaredLogger, jb job.Job, bootstrapPeers []commontypes.BootstrapperLocator, kb ocr2key.KeyBundle, ocrDB *db, lc ocrtypes.LocalConfig, transmitterID string) ([]job.ServiceCtx, error) { diff --git a/core/services/ocr2/delegate_test.go b/core/services/ocr2/delegate_test.go index bae1f5f3e7..1e4be66c7d 100644 --- a/core/services/ocr2/delegate_test.go +++ b/core/services/ocr2/delegate_test.go @@ -5,10 +5,12 @@ import ( "github.com/ethereum/go-ethereum/common" "github.com/pkg/errors" + "github.com/stretchr/testify/mock" "github.com/stretchr/testify/require" "gopkg.in/guregu/null.v4" "github.com/smartcontractkit/chainlink-common/pkg/types" + evmcfg "github.com/smartcontractkit/chainlink/v2/core/chains/evm/config/toml" txmmocks "github.com/smartcontractkit/chainlink/v2/core/chains/evm/txmgr/mocks" "github.com/smartcontractkit/chainlink/v2/core/chains/evm/utils/big" @@ -27,7 +29,6 @@ import ( ) func TestGetEVMEffectiveTransmitterID(t *testing.T) { - ctx := testutils.Context(t) customChainID := big.New(testutils.NewRandomEVMChainID()) config := configtest.NewGeneralConfig(t, func(c *chainlink.Config, s *chainlink.Secrets) { @@ -41,7 +42,7 @@ func TestGetEVMEffectiveTransmitterID(t *testing.T) { }) db := pgtest.NewSqlxDB(t) keyStore := cltest.NewKeyStore(t, db) - require.NoError(t, keyStore.OCR2().Add(ctx, cltest.DefaultOCR2Key)) + require.NoError(t, keyStore.OCR2().Add(testutils.Context(t), cltest.DefaultOCR2Key)) lggr := logger.TestLogger(t) txManager := txmmocks.NewMockEvmTxManager(t) @@ -67,10 +68,17 @@ func TestGetEVMEffectiveTransmitterID(t *testing.T) { jb.OCR2OracleSpec.RelayConfig["sendingKeys"] = tc.sendingKeys jb.ForwardingAllowed = tc.forwardingEnabled + args := []interface{}{mock.Anything, tc.getForwarderForEOAArg} + getForwarderMethodName := "GetForwarderForEOA" + 
if tc.pluginType == types.Median { + getForwarderMethodName = "GetForwarderForEOAOCR2Feeds" + args = append(args, common.HexToAddress(jb.OCR2OracleSpec.ContractID)) + } + if tc.forwardingEnabled && tc.getForwarderForEOAErr { - txManager.Mock.On("GetForwarderForEOA", tc.getForwarderForEOAArg).Return(common.HexToAddress("0x0"), errors.New("random error")).Once() + txManager.Mock.On(getForwarderMethodName, args...).Return(common.HexToAddress("0x0"), errors.New("random error")).Once() } else if tc.forwardingEnabled { - txManager.Mock.On("GetForwarderForEOA", tc.getForwarderForEOAArg).Return(common.HexToAddress(tc.expectedTransmitterID), nil).Once() + txManager.Mock.On(getForwarderMethodName, args...).Return(common.HexToAddress(tc.expectedTransmitterID), nil).Once() } } @@ -137,13 +145,14 @@ func TestGetEVMEffectiveTransmitterID(t *testing.T) { } t.Run("when sending keys are not defined, the first one should be set to transmitterID", func(t *testing.T) { + ctx := testutils.Context(t) jb, err := ocr2validate.ValidatedOracleSpecToml(testutils.Context(t), config.OCR2(), config.Insecure(), testspecs.GetOCR2EVMSpecMinimal(), nil) require.NoError(t, err) jb.OCR2OracleSpec.TransmitterID = null.StringFrom("some transmitterID string") jb.OCR2OracleSpec.RelayConfig["sendingKeys"] = nil chain, err := legacyChains.Get(customChainID.String()) require.NoError(t, err) - effectiveTransmitterID, err := ocr2.GetEVMEffectiveTransmitterID(&jb, chain, lggr) + effectiveTransmitterID, err := ocr2.GetEVMEffectiveTransmitterID(ctx, &jb, chain, lggr) require.NoError(t, err) require.Equal(t, "some transmitterID string", effectiveTransmitterID) require.Equal(t, []string{"some transmitterID string"}, jb.OCR2OracleSpec.RelayConfig["sendingKeys"].([]string)) @@ -151,13 +160,14 @@ func TestGetEVMEffectiveTransmitterID(t *testing.T) { for _, tc := range testCases { t.Run(tc.name, func(t *testing.T) { + ctx := testutils.Context(t) jb, err := ocr2validate.ValidatedOracleSpecToml(testutils.Context(t), config.OCR2(), config.Insecure(), testspecs.GetOCR2EVMSpecMinimal(), nil) require.NoError(t, err) setTestCase(&jb, tc, txManager) chain, err := legacyChains.Get(customChainID.String()) require.NoError(t, err) - effectiveTransmitterID, err := ocr2.GetEVMEffectiveTransmitterID(&jb, chain, lggr) + effectiveTransmitterID, err := ocr2.GetEVMEffectiveTransmitterID(ctx, &jb, chain, lggr) if tc.expectedError { require.Error(t, err) } else { @@ -169,18 +179,18 @@ func TestGetEVMEffectiveTransmitterID(t *testing.T) { if !jb.ForwardingAllowed { require.Equal(t, jb.OCR2OracleSpec.TransmitterID.String, effectiveTransmitterID) } - }) } t.Run("when forwarders are enabled and chain retrieval fails, error should be handled", func(t *testing.T) { + ctx := testutils.Context(t) jb, err := ocr2validate.ValidatedOracleSpecToml(testutils.Context(t), config.OCR2(), config.Insecure(), testspecs.GetOCR2EVMSpecMinimal(), nil) require.NoError(t, err) jb.ForwardingAllowed = true jb.OCR2OracleSpec.TransmitterID = null.StringFrom("0x7e57000000000000000000000000000000000001") chain, err := legacyChains.Get("not an id") require.Error(t, err) - _, err = ocr2.GetEVMEffectiveTransmitterID(&jb, chain, lggr) + _, err = ocr2.GetEVMEffectiveTransmitterID(ctx, &jb, chain, lggr) require.Error(t, err) }) } diff --git a/core/services/ocr2/plugins/ccip/ccipcommit/factory.go b/core/services/ocr2/plugins/ccip/ccipcommit/factory.go index 03a301ce67..79fd9f1ec8 100644 --- a/core/services/ocr2/plugins/ccip/ccipcommit/factory.go +++ 
b/core/services/ocr2/plugins/ccip/ccipcommit/factory.go @@ -6,6 +6,7 @@ import ( "sync" "github.com/ethereum/go-ethereum/common" + "github.com/smartcontractkit/chainlink/v2/core/services/ocr2/plugins/ccip/internal/ccipcommon" "github.com/smartcontractkit/libocr/offchainreporting2plus/types" cciptypes "github.com/smartcontractkit/chainlink-common/pkg/types/ccip" @@ -64,35 +65,60 @@ func (rf *CommitReportingPluginFactory) UpdateDynamicReaders(ctx context.Context return nil } +type reportingPluginAndInfo struct { + plugin types.ReportingPlugin + pluginInfo types.ReportingPluginInfo +} + // NewReportingPlugin returns the ccip CommitReportingPlugin and satisfies the ReportingPluginFactory interface. func (rf *CommitReportingPluginFactory) NewReportingPlugin(config types.ReportingPluginConfig) (types.ReportingPlugin, types.ReportingPluginInfo, error) { - ctx := context.Background() // todo: consider adding some timeout - - destPriceReg, err := rf.config.commitStore.ChangeConfig(ctx, config.OnchainConfig, config.OffchainConfig) + initialRetryDelay := rf.config.newReportingPluginRetryConfig.InitialDelay + maxDelay := rf.config.newReportingPluginRetryConfig.MaxDelay + + pluginAndInfo, err := ccipcommon.RetryUntilSuccess( + rf.NewReportingPluginFn(config), + initialRetryDelay, + maxDelay, + ) if err != nil { return nil, types.ReportingPluginInfo{}, err } + return pluginAndInfo.plugin, pluginAndInfo.pluginInfo, nil +} - priceRegEvmAddr, err := ccipcalc.GenericAddrToEvm(destPriceReg) - if err != nil { - return nil, types.ReportingPluginInfo{}, err - } - if err = rf.UpdateDynamicReaders(ctx, priceRegEvmAddr); err != nil { - return nil, types.ReportingPluginInfo{}, err - } +// NewReportingPluginFn implements the NewReportingPlugin logic. It is defined as a function so that it can easily be +// retried via RetryUntilSuccess. NewReportingPlugin must return successfully in order for the Commit plugin to +// function, hence why we can only keep retrying it until it succeeds. 
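// Illustrative sketch only: ccipcommon.RetryUntilSuccess is referenced above, but its implementation is not part of
// this diff. Assuming it simply re-invokes the wrapped function, roughly doubling the wait between attempts from
// initialDelay up to a cap of maxDelay, a minimal fragment (with "time" imported) could look like:
func retryUntilSuccess[T any](fn func() (T, error), initialDelay, maxDelay time.Duration) (T, error) {
	delay := initialDelay
	for {
		// Keep calling fn until it returns without error; NewReportingPluginFn is only useful once it succeeds.
		result, err := fn()
		if err == nil {
			return result, nil
		}
		time.Sleep(delay)
		// Back off exponentially, but never wait longer than maxDelay between attempts (assumed policy).
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}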
+func (rf *CommitReportingPluginFactory) NewReportingPluginFn(config types.ReportingPluginConfig) func() (reportingPluginAndInfo, error) { + return func() (reportingPluginAndInfo, error) { + ctx := context.Background() // todo: consider adding some timeout - pluginOffChainConfig, err := rf.config.commitStore.OffchainConfig(ctx) - if err != nil { - return nil, types.ReportingPluginInfo{}, err - } + destPriceReg, err := rf.config.commitStore.ChangeConfig(ctx, config.OnchainConfig, config.OffchainConfig) + if err != nil { + return reportingPluginAndInfo{}, err + } - gasPriceEstimator, err := rf.config.commitStore.GasPriceEstimator(ctx) - if err != nil { - return nil, types.ReportingPluginInfo{}, err - } + priceRegEvmAddr, err := ccipcalc.GenericAddrToEvm(destPriceReg) + if err != nil { + return reportingPluginAndInfo{}, err + } + if err = rf.UpdateDynamicReaders(ctx, priceRegEvmAddr); err != nil { + return reportingPluginAndInfo{}, err + } + + pluginOffChainConfig, err := rf.config.commitStore.OffchainConfig(ctx) + if err != nil { + return reportingPluginAndInfo{}, err + } + + gasPriceEstimator, err := rf.config.commitStore.GasPriceEstimator(ctx) + if err != nil { + return reportingPluginAndInfo{}, err + } + + lggr := rf.config.lggr.Named("CommitReportingPlugin") - lggr := rf.config.lggr.Named("CommitReportingPlugin") - return &CommitReportingPlugin{ + plugin := &CommitReportingPlugin{ sourceChainSelector: rf.config.sourceChainSelector, sourceNative: rf.config.sourceNative, onRampReader: rf.config.onRampReader, @@ -106,8 +132,9 @@ func (rf *CommitReportingPluginFactory) NewReportingPlugin(config types.Reportin offchainConfig: pluginOffChainConfig, metricsCollector: rf.config.metricsCollector, chainHealthcheck: rf.config.chainHealthcheck, - }, - types.ReportingPluginInfo{ + } + + pluginInfo := types.ReportingPluginInfo{ Name: "CCIPCommit", UniqueReports: false, // See comment in CommitStore constructor. Limits: types.ReportingPluginLimits{ @@ -115,5 +142,8 @@ func (rf *CommitReportingPluginFactory) NewReportingPlugin(config types.Reportin MaxObservationLength: ccip.MaxObservationLength, MaxReportLength: MaxCommitReportLength, }, - }, nil + } + + return reportingPluginAndInfo{plugin, pluginInfo}, nil + } } diff --git a/core/services/ocr2/plugins/ccip/ccipcommit/factory_test.go b/core/services/ocr2/plugins/ccip/ccipcommit/factory_test.go new file mode 100644 index 0000000000..0fe268035a --- /dev/null +++ b/core/services/ocr2/plugins/ccip/ccipcommit/factory_test.go @@ -0,0 +1,87 @@ +package ccipcommit + +import ( + "errors" + "testing" + "time" + + "github.com/smartcontractkit/libocr/offchainreporting2plus/types" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/mock" + + "github.com/smartcontractkit/chainlink-common/pkg/types/ccip" + "github.com/smartcontractkit/chainlink/v2/core/logger" + "github.com/smartcontractkit/chainlink/v2/core/services/ocr2/plugins/ccip/internal/ccipdata" + ccipdataprovidermocks "github.com/smartcontractkit/chainlink/v2/core/services/ocr2/plugins/ccip/internal/ccipdata/ccipdataprovider/mocks" + "github.com/smartcontractkit/chainlink/v2/core/services/ocr2/plugins/ccip/internal/ccipdata/mocks" +) + +// Assert that NewReportingPlugin keeps retrying until it succeeds. +// +// NewReportingPlugin makes several calls (e.g. CommitStoreReader.ChangeConfig) that can fail. We use mocks to cause the +// first call to each of these functions to fail, then all subsequent calls succeed. 
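(Each failing expectation is registered with .Once(), so every mocked method returns an error exactly once before its succeeding expectations take over.)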
We assert that NewReportingPlugin +// retries a sufficient number of times to get through the transient errors and eventually succeed. +func TestNewReportingPluginRetriesUntilSuccess(t *testing.T) { + commitConfig := CommitPluginStaticConfig{} + + // For this unit test, ensure that there is no delay between retries + commitConfig.newReportingPluginRetryConfig = ccipdata.RetryConfig{ + InitialDelay: 0 * time.Nanosecond, + MaxDelay: 0 * time.Nanosecond, + } + + // Set up the OffRampReader mock + mockCommitStore := new(mocks.CommitStoreReader) + + // The first call is set to return an error, the following calls return a nil error + mockCommitStore. + On("ChangeConfig", mock.Anything, mock.Anything, mock.Anything). + Return(ccip.Address(""), errors.New("")). + Once() + mockCommitStore. + On("ChangeConfig", mock.Anything, mock.Anything, mock.Anything). + Return(ccip.Address("0x7c6e4F0BDe29f83BC394B75a7f313B7E5DbD2d77"), nil). + Times(5) + + mockCommitStore. + On("OffchainConfig", mock.Anything). + Return(ccip.CommitOffchainConfig{}, errors.New("")). + Once() + mockCommitStore. + On("OffchainConfig", mock.Anything). + Return(ccip.CommitOffchainConfig{}, nil). + Times(3) + + mockCommitStore. + On("GasPriceEstimator", mock.Anything). + Return(nil, errors.New("")). + Once() + mockCommitStore. + On("GasPriceEstimator", mock.Anything). + Return(nil, nil). + Times(2) + + commitConfig.commitStore = mockCommitStore + + priceRegistryProvider := new(ccipdataprovidermocks.PriceRegistry) + priceRegistryProvider. + On("NewPriceRegistryReader", mock.Anything, mock.Anything). + Return(nil, errors.New("")). + Once() + priceRegistryProvider. + On("NewPriceRegistryReader", mock.Anything, mock.Anything). + Return(nil, nil). + Once() + commitConfig.priceRegistryProvider = priceRegistryProvider + + commitConfig.lggr, _ = logger.NewLogger() + + factory := NewCommitReportingPluginFactory(commitConfig) + reportingConfig := types.ReportingPluginConfig{} + reportingConfig.OnchainConfig = []byte{1, 2, 3} + reportingConfig.OffchainConfig = []byte{1, 2, 3} + + // Assert that NewReportingPlugin succeeds despite many transient internal failures (mocked out above) + _, _, err := factory.NewReportingPlugin(reportingConfig) + assert.Equal(t, nil, err) +} diff --git a/core/services/ocr2/plugins/ccip/ccipcommit/initializers.go b/core/services/ocr2/plugins/ccip/ccipcommit/initializers.go index 84af2749dc..8e4c4cb5ce 100644 --- a/core/services/ocr2/plugins/ccip/ccipcommit/initializers.go +++ b/core/services/ocr2/plugins/ccip/ccipcommit/initializers.go @@ -6,6 +6,7 @@ import ( "fmt" "math/big" "strings" + "time" "github.com/Masterminds/semver/v3" "github.com/ethereum/go-ethereum/accounts/abi/bind" @@ -43,6 +44,11 @@ import ( "github.com/smartcontractkit/chainlink/v2/core/services/pipeline" ) +var defaultNewReportingPluginRetryConfig = ccipdata.RetryConfig{ + InitialDelay: time.Second, + MaxDelay: 5 * time.Minute, +} + func NewCommitServices(ctx context.Context, lggr logger.Logger, jb job.Job, chainSet legacyevm.LegacyChainContainer, new bool, pr pipeline.Runner, argsNoPlugin libocr2.OCR2OracleArgs, logError func(string)) ([]job.ServiceCtx, error) { pluginConfig, backfillArgs, chainHealthcheck, err := jobSpecToCommitPluginConfig(ctx, lggr, jb, pr, chainSet) if err != nil { @@ -235,21 +241,22 @@ func jobSpecToCommitPluginConfig(ctx context.Context, lggr logger.Logger, jb job "pluginConfig", params.pluginConfig, "staticConfig", params.commitStoreStaticCfg, // TODO bring back - //"dynamicOnRampConfig", dynamicOnRampConfig, + // 
"dynamicOnRampConfig", dynamicOnRampConfig, "sourceNative", sourceNative, "sourceRouter", sourceRouter.Address()) return &CommitPluginStaticConfig{ - lggr: commitLggr, - onRampReader: onRampReader, - offRamp: offRampReader, - sourceNative: ccipcalc.EvmAddrToGeneric(sourceNative), - priceGetter: priceGetter, - sourceChainSelector: params.commitStoreStaticCfg.SourceChainSelector, - destChainSelector: params.commitStoreStaticCfg.ChainSelector, - commitStore: commitStoreReader, - priceRegistryProvider: ccipdataprovider.NewEvmPriceRegistry(params.destChain.LogPoller(), params.destChain.Client(), commitLggr, ccip.CommitPluginLabel), - metricsCollector: metricsCollector, - chainHealthcheck: chainHealthcheck, + lggr: commitLggr, + newReportingPluginRetryConfig: defaultNewReportingPluginRetryConfig, + onRampReader: onRampReader, + offRamp: offRampReader, + sourceNative: ccipcalc.EvmAddrToGeneric(sourceNative), + priceGetter: priceGetter, + sourceChainSelector: params.commitStoreStaticCfg.SourceChainSelector, + destChainSelector: params.commitStoreStaticCfg.ChainSelector, + commitStore: commitStoreReader, + priceRegistryProvider: ccipdataprovider.NewEvmPriceRegistry(params.destChain.LogPoller(), params.destChain.Client(), commitLggr, ccip.CommitPluginLabel), + metricsCollector: metricsCollector, + chainHealthcheck: chainHealthcheck, }, &ccipcommon.BackfillArgs{ SourceLP: params.sourceChain.LogPoller(), DestLP: params.destChain.LogPoller(), diff --git a/core/services/ocr2/plugins/ccip/ccipcommit/ocr2.go b/core/services/ocr2/plugins/ccip/ccipcommit/ocr2.go index 67f713a544..65357adb8b 100644 --- a/core/services/ocr2/plugins/ccip/ccipcommit/ocr2.go +++ b/core/services/ocr2/plugins/ccip/ccipcommit/ocr2.go @@ -51,7 +51,8 @@ type update struct { } type CommitPluginStaticConfig struct { - lggr logger.Logger + lggr logger.Logger + newReportingPluginRetryConfig ccipdata.RetryConfig // Source onRampReader ccipdata.OnRampReader sourceChainSelector uint64 diff --git a/core/services/ocr2/plugins/ccip/internal/cache/commit_roots.go b/core/services/ocr2/plugins/ccip/internal/cache/commit_roots.go index 9c859dc5f6..201fc9b5bd 100644 --- a/core/services/ocr2/plugins/ccip/internal/cache/commit_roots.go +++ b/core/services/ocr2/plugins/ccip/internal/cache/commit_roots.go @@ -131,38 +131,41 @@ func (s *commitRootsCache) Snooze(merkleRoot [32]byte) { } func (s *commitRootsCache) OldestRootTimestamp() time.Time { - permissionlessExecWindow := time.Now().Add(-s.permissionLessExecutionThresholdDuration) - timestamp, ok := s.pickOldestRootBlockTimestamp(permissionlessExecWindow) - - if ok { - return timestamp - } - - s.rootsQueueMu.Lock() - defer s.rootsQueueMu.Unlock() - - // If rootsSearchFilter is before permissionlessExecWindow, it means that we have roots that are stuck forever and will never be executed - // In that case, we wipe out the entire queue. Next round should start from the permissionlessExecThreshold and rebuild cache from scratch. - s.unexecutedRootsQueue = orderedmap.New[string, time.Time]() - return permissionlessExecWindow + return time.Now().Add(-s.permissionLessExecutionThresholdDuration) + // TODO we can't rely on block timestamps, because in case of re-org they can change and therefore affect + // the logic in the case. 
In the meantime, always fallback to the default behaviour and use permissionlessThresholdWindow + //timestamp, ok := s.pickOldestRootBlockTimestamp(messageVisibilityInterval) + // + //if ok { + // return timestamp + //} + // + //s.rootsQueueMu.Lock() + //defer s.rootsQueueMu.Unlock() + // + //// If rootsSearchFilter is before messageVisibilityInterval, it means that we have roots that are stuck forever and will never be executed + //// In that case, we wipe out the entire queue. Next round should start from the messageVisibilityInterval and rebuild cache from scratch. + //s.unexecutedRootsQueue = orderedmap.New[string, time.Time]() + //return messageVisibilityInterval } -func (s *commitRootsCache) pickOldestRootBlockTimestamp(permissionlessExecWindow time.Time) (time.Time, bool) { - s.rootsQueueMu.RLock() - defer s.rootsQueueMu.RUnlock() - - // If there are no roots in the queue, we can return the permissionlessExecWindow - if s.oldestRootTimestamp.IsZero() { - return permissionlessExecWindow, true - } +//func (s *commitRootsCache) pickOldestRootBlockTimestamp(permissionlessExecWindow time.Time) (time.Time, bool) { +// s.rootsQueueMu.RLock() +// defer s.rootsQueueMu.RUnlock() +// +// // If there are no roots in the queue, we can return the permissionlessExecWindow +// if s.oldestRootTimestamp.IsZero() { +// return permissionlessExecWindow, true +// } +// +// if s.oldestRootTimestamp.After(messageVisibilityInterval) { +// // Query used for fetching roots from the database is exclusive (block_timestamp > :timestamp) +// // so we need to subtract 1 second from the head timestamp to make sure that this root is included in the results +// return s.oldestRootTimestamp.Add(-time.Second), true +// } +// return time.Time{}, false +//} - if s.oldestRootTimestamp.After(permissionlessExecWindow) { - // Query used for fetching roots from the database is exclusive (block_timestamp > :timestamp) - // so we need to subtract 1 second from the head timestamp to make sure that this root is included in the results - return s.oldestRootTimestamp.Add(-time.Second), true - } - return time.Time{}, false -} func (s *commitRootsCache) AppendUnexecutedRoot(merkleRoot [32]byte, blockTimestamp time.Time) { prettyMerkleRoot := merkleRootToString(merkleRoot) diff --git a/core/services/ocr2/plugins/ccip/internal/cache/commit_roots_test.go b/core/services/ocr2/plugins/ccip/internal/cache/commit_roots_test.go index bcb81b3a18..a6be2c98e8 100644 --- a/core/services/ocr2/plugins/ccip/internal/cache/commit_roots_test.go +++ b/core/services/ocr2/plugins/ccip/internal/cache/commit_roots_test.go @@ -67,60 +67,60 @@ func Test_UnexecutedRoots(t *testing.T) { roots: []rootWithTs{}, permissionLessThreshold: 1 * time.Hour, }, - { - name: "returns first root when all are not executed", - roots: []rootWithTs{ - {r1, t1}, - {r2, t2}, - {r3, t3}, - }, - permissionLessThreshold: 10 * time.Hour, - expectedTimestamp: t1, - }, - { - name: "returns first root when tail of queue is executed", - roots: []rootWithTs{ - {r1, t1}, - {r2, t2}, - {r3, t3}, - }, - executedRoots: [][32]byte{r2, r3}, - permissionLessThreshold: 10 * time.Hour, - expectedTimestamp: t1, - }, - { - name: "returns first not executed root", - roots: []rootWithTs{ - {r1, t1}, - {r2, t2}, - {r3, t3}, - }, - executedRoots: [][32]byte{r1, r2}, - permissionLessThreshold: 10 * time.Hour, - expectedTimestamp: t3, - }, - { - name: "returns r2 timestamp when r1 and r3 are executed", - roots: []rootWithTs{ - {r1, t1}, - {r2, t2}, - {r3, t3}, - }, - executedRoots: [][32]byte{r1, r3}, - 
permissionLessThreshold: 10 * time.Hour, - expectedTimestamp: t2, - }, - { - name: "returns oldest root even when all are executed", - roots: []rootWithTs{ - {r1, t1}, - {r2, t2}, - {r3, t3}, - }, - executedRoots: [][32]byte{r1, r2, r3}, - permissionLessThreshold: 10 * time.Hour, - expectedTimestamp: t3, - }, + //{ + // name: "returns first root when all are not executed", + // roots: []rootWithTs{ + // {r1, t1}, + // {r2, t2}, + // {r3, t3}, + // }, + // permissionLessThreshold: 10 * time.Hour, + // expectedTimestamp: t1, + //}, + //{ + // name: "returns first root when tail of queue is executed", + // roots: []rootWithTs{ + // {r1, t1}, + // {r2, t2}, + // {r3, t3}, + // }, + // executedRoots: [][32]byte{r2, r3}, + // permissionLessThreshold: 10 * time.Hour, + // expectedTimestamp: t1, + //}, + //{ + // name: "returns first not executed root", + // roots: []rootWithTs{ + // {r1, t1}, + // {r2, t2}, + // {r3, t3}, + // }, + // executedRoots: [][32]byte{r1, r2}, + // permissionLessThreshold: 10 * time.Hour, + // expectedTimestamp: t3, + //}, + //{ + // name: "returns r2 timestamp when r1 and r3 are executed", + // roots: []rootWithTs{ + // {r1, t1}, + // {r2, t2}, + // {r3, t3}, + // }, + // executedRoots: [][32]byte{r1, r3}, + // permissionLessThreshold: 10 * time.Hour, + // expectedTimestamp: t2, + //}, + //{ + // name: "returns oldest root even when all are executed", + // roots: []rootWithTs{ + // {r1, t1}, + // {r2, t2}, + // {r3, t3}, + // }, + // executedRoots: [][32]byte{r1, r2, r3}, + // permissionLessThreshold: 10 * time.Hour, + // expectedTimestamp: t3, + //}, { name: "returns permissionLessThreshold when all roots ale older that threshold", roots: []rootWithTs{ @@ -161,12 +161,12 @@ func Test_UnexecutedRootsScenario(t *testing.T) { k1 := [32]byte{1} k2 := [32]byte{2} k3 := [32]byte{3} - k4 := [32]byte{4} + //k4 := [32]byte{4} t1 := time.Now().Add(-4 * time.Hour) t2 := time.Now().Add(-3 * time.Hour) t3 := time.Now().Add(-2 * time.Hour) - t4 := time.Now().Add(-1 * time.Hour) + //t4 := time.Now().Add(-1 * time.Hour) // First check should return permissionLessThreshold window commitTs := c.OldestRootTimestamp() @@ -176,42 +176,47 @@ func Test_UnexecutedRootsScenario(t *testing.T) { c.AppendUnexecutedRoot(k2, t2) c.AppendUnexecutedRoot(k3, t3) - // After loading roots it should return the first one - commitTs = c.OldestRootTimestamp() - assert.Equal(t, t1.Add(-time.Second), commitTs) - - // Marking root in the middle as executed shouldn't change the commitTs - c.MarkAsExecuted(k2) - commitTs = c.OldestRootTimestamp() - assert.Equal(t, t1.Add(-time.Second), commitTs) - - // Marking k1 as executed when k2 is already executed should return timestamp of k3 - c.MarkAsExecuted(k1) - commitTs = c.OldestRootTimestamp() - assert.Equal(t, t3.Add(-time.Second), commitTs) - - // Marking all as executed should return timestamp of the latest - c.MarkAsExecuted(k3) commitTs = c.OldestRootTimestamp() - assert.Equal(t, t3.Add(-time.Second), commitTs) - - // Adding k4 should return timestamp of k4 - c.AppendUnexecutedRoot(k4, t4) - commitTs = c.OldestRootTimestamp() - assert.Equal(t, t4.Add(-time.Second), commitTs) - - c.MarkAsExecuted(k4) - commitTs = c.OldestRootTimestamp() - assert.Equal(t, t4.Add(-time.Second), commitTs) + assert.True(t, commitTs.Before(time.Now().Add(-permissionLessThreshold))) - // Appending already executed roots should be ignored - c.AppendUnexecutedRoot(k1, t1) - c.AppendUnexecutedRoot(k2, t2) - commitTs = c.OldestRootTimestamp() - assert.Equal(t, t4.Add(-time.Second), 
commitTs) + //// After loading roots it should return the first one + //commitTs = c.OldestRootTimestamp() + //assert.Equal(t, t1.Add(-time.Second), commitTs) + // + //// Marking root in the middle as executed shouldn't change the commitTs + //c.MarkAsExecuted(k2) + //commitTs = c.OldestRootTimestamp() + //assert.Equal(t, t1.Add(-time.Second), commitTs) + // + //// Marking k1 as executed when k2 is already executed should return timestamp of k3 + //c.MarkAsExecuted(k1) + //commitTs = c.OldestRootTimestamp() + //assert.Equal(t, t3.Add(-time.Second), commitTs) + // + //// Marking all as executed should return timestamp of the latest + //c.MarkAsExecuted(k3) + //commitTs = c.OldestRootTimestamp() + //assert.Equal(t, t3.Add(-time.Second), commitTs) + // + //// Adding k4 should return timestamp of k4 + //c.AppendUnexecutedRoot(k4, t4) + //commitTs = c.OldestRootTimestamp() + //assert.Equal(t, t4.Add(-time.Second), commitTs) + // + //c.MarkAsExecuted(k4) + //commitTs = c.OldestRootTimestamp() + //assert.Equal(t, t4.Add(-time.Second), commitTs) + // + //// Appending already executed roots should be ignored + //c.AppendUnexecutedRoot(k1, t1) + //c.AppendUnexecutedRoot(k2, t2) + //commitTs = c.OldestRootTimestamp() + //assert.Equal(t, t4.Add(-time.Second), commitTs) } func Test_UnexecutedRootsStaleQueue(t *testing.T) { + t.Skip("This test needs caching to properly handle re-orgs") + permissionLessThreshold := 5 * time.Hour c := newCommitRootsCache(logger.TestLogger(t), permissionLessThreshold, 1*time.Hour, 1*time.Millisecond, 1*time.Millisecond) diff --git a/core/services/ocr2/plugins/ccip/internal/pricegetter/evm.go b/core/services/ocr2/plugins/ccip/internal/pricegetter/evm.go index 2a537b5d40..a03d86b194 100644 --- a/core/services/ocr2/plugins/ccip/internal/pricegetter/evm.go +++ b/core/services/ocr2/plugins/ccip/internal/pricegetter/evm.go @@ -143,6 +143,9 @@ func (d *DynamicPriceGetter) performBatchCall(ctx context.Context, chainID uint6 calls = append(calls, batchCalls.latestRoundDataCalls...) results, err := evmCaller.BatchCall(ctx, 0, calls) + if err != nil { + return fmt.Errorf("batch call on chain %d failed: %w", chainID, err) + } // Extract results. 
decimals := make([]uint8, 0, nbDecimalCalls) diff --git a/core/services/ocr2/plugins/ccip/internal/pricegetter/evm_test.go b/core/services/ocr2/plugins/ccip/internal/pricegetter/evm_test.go index 8d1bf67ab1..673b9776c7 100644 --- a/core/services/ocr2/plugins/ccip/internal/pricegetter/evm_test.go +++ b/core/services/ocr2/plugins/ccip/internal/pricegetter/evm_test.go @@ -25,6 +25,7 @@ type testParameters struct { evmClients map[uint64]DynamicPriceGetterClient tokens []common.Address expectedTokenPrices map[common.Address]big.Int + evmCallErr bool invalidConfigErrorExpected bool priceResolutionErrorExpected bool } @@ -58,6 +59,10 @@ func TestDynamicPriceGetter(t *testing.T) { name: "no_aggregator_for_token", param: testParamNoAggregatorForToken(t), }, + { + name: "batchCall_returns_err", + param: testParamBatchCallReturnsErr(t), + }, } for _, test := range tests { @@ -82,6 +87,12 @@ func TestDynamicPriceGetter(t *testing.T) { tokens = append(tokens, tokenAddr) } prices, err := pg.TokenPricesUSD(ctx, tokens) + + if test.param.evmCallErr { + require.Error(t, err) + return + } + if test.param.priceResolutionErrorExpected { require.Error(t, err) return @@ -454,6 +465,50 @@ func testParamNoAggregatorForToken(t *testing.T) testParameters { } } +func testParamBatchCallReturnsErr(t *testing.T) testParameters { + tk1 := utils.RandomAddress() + tk2 := utils.RandomAddress() + tk3 := utils.RandomAddress() + cfg := config.DynamicPriceGetterConfig{ + AggregatorPrices: map[common.Address]config.AggregatorPriceConfig{ + tk1: { + ChainID: 101, + AggregatorContractAddress: utils.RandomAddress(), + }, + tk2: { + ChainID: 102, + AggregatorContractAddress: utils.RandomAddress(), + }, + }, + StaticPrices: map[common.Address]config.StaticPriceConfig{ + tk3: { + ChainID: 103, + Price: big.NewInt(1_234_000), + }, + }, + } + // Real LINK/USD example from OP. + round1 := aggregator_v3_interface.LatestRoundData{ + RoundId: big.NewInt(1000), + Answer: big.NewInt(1396818990), + StartedAt: big.NewInt(1704896575), + UpdatedAt: big.NewInt(1704896575), + AnsweredInRound: big.NewInt(1000), + } + evmClients := map[uint64]DynamicPriceGetterClient{ + uint64(101): mockClient(t, []uint8{8}, []aggregator_v3_interface.LatestRoundData{round1}), + uint64(102): { + BatchCaller: mockErrCaller(t), + }, + } + return testParameters{ + cfg: cfg, + evmClients: evmClients, + tokens: []common.Address{tk1, tk2, tk3}, + evmCallErr: true, + } +} + func mockClient(t *testing.T, decimals []uint8, rounds []aggregator_v3_interface.LatestRoundData) DynamicPriceGetterClient { return DynamicPriceGetterClient{ BatchCaller: mockCaller(t, decimals, rounds), @@ -479,6 +534,12 @@ func mockCaller(t *testing.T, decimals []uint8, rounds []aggregator_v3_interface return caller } +func mockErrCaller(t *testing.T) *rpclibmocks.EvmBatchCaller { + caller := rpclibmocks.NewEvmBatchCaller(t) + caller.On("BatchCall", mock.Anything, uint64(0), mock.Anything).Return(nil, assert.AnError).Maybe() + return caller +} + // multExp returns the result of multiplying x by 10^e. 
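// For example, multExp(big.NewInt(5), 3) returns 5000.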
func multExp(x *big.Int, e int64) *big.Int { return big.NewInt(0).Mul(x, big.NewInt(0).Exp(big.NewInt(10), big.NewInt(e), nil)) diff --git a/core/services/ocr2/plugins/liquiditymanager/bridge/bridge.go b/core/services/ocr2/plugins/liquiditymanager/bridge/bridge.go index 74823941a4..f4469c7fab 100644 --- a/core/services/ocr2/plugins/liquiditymanager/bridge/bridge.go +++ b/core/services/ocr2/plugins/liquiditymanager/bridge/bridge.go @@ -220,7 +220,7 @@ func (f *factory) initBridge(source, dest models.NetworkSelector) (Bridge, error models.NetworkSelector(chainsel.TEST_90000001.Selector), models.NetworkSelector(chainsel.TEST_90000002.Selector), models.NetworkSelector(chainsel.TEST_90000003.Selector), - models.NetworkSelector(chainsel.TEST_2337.Selector): + models.NetworkSelector(chainsel.GETH_DEVNET_2.Selector): // these chains are only ever used for tests // in tests we only ever deploy the MockL1Bridge adapter // so this is an "L1 to L1" bridge setup, but not really diff --git a/core/services/ocr2/plugins/ocr2keeper/integration_test.go b/core/services/ocr2/plugins/ocr2keeper/integration_test.go index 1054c59dd1..c27a1a9dbe 100644 --- a/core/services/ocr2/plugins/ocr2keeper/integration_test.go +++ b/core/services/ocr2/plugins/ocr2keeper/integration_test.go @@ -427,7 +427,7 @@ func setupForwarderForNode( backend *backends.SimulatedBackend, recipient common.Address, linkAddr common.Address) common.Address { - + ctx := testutils.Context(t) faddr, _, authorizedForwarder, err := authorized_forwarder.DeployAuthorizedForwarder(caller, backend, linkAddr, caller.From, recipient, []byte{}) require.NoError(t, err) @@ -444,7 +444,7 @@ func setupForwarderForNode( chain, err := app.GetRelayers().LegacyEVMChains().Get((*big.Int)(&chainID).String()) require.NoError(t, err) - fwdr, err := chain.TxManager().GetForwarderForEOA(recipient) + fwdr, err := chain.TxManager().GetForwarderForEOA(ctx, recipient) require.NoError(t, err) require.Equal(t, faddr, fwdr) diff --git a/core/services/ocrbootstrap/delegate.go b/core/services/ocrbootstrap/delegate.go index 4f927faa00..fdcb68ceec 100644 --- a/core/services/ocrbootstrap/delegate.go +++ b/core/services/ocrbootstrap/delegate.go @@ -9,7 +9,6 @@ import ( ocr "github.com/smartcontractkit/libocr/offchainreporting2plus" - commonlogger "github.com/smartcontractkit/chainlink-common/pkg/logger" "github.com/smartcontractkit/chainlink-common/pkg/loop" "github.com/smartcontractkit/chainlink-common/pkg/sqlutil" "github.com/smartcontractkit/chainlink-common/pkg/types" @@ -163,14 +162,15 @@ func (d *Delegate) ServicesForSpec(ctx context.Context, jb job.Job) (services [] "ContractTransmitterTransmitTimeout", lc.ContractTransmitterTransmitTimeout, "DatabaseTimeout", lc.DatabaseTimeout, ) + ocrLogger := ocrcommon.NewOCRWrapper(lggr.Named("OCRBootstrap"), d.ocr2Cfg.TraceLogging(), func(ctx context.Context, msg string) { + logger.Sugared(lggr).ErrorIf(d.jobORM.RecordError(ctx, jb.ID, msg), "unable to record error") + }) bootstrapNodeArgs := ocr.BootstrapperArgs{ - BootstrapperFactory: d.peerWrapper.Peer2, - ContractConfigTracker: configProvider.ContractConfigTracker(), - Database: NewDB(d.ds, spec.ID, lggr), - LocalConfig: lc, - Logger: commonlogger.NewOCRWrapper(lggr.Named("OCRBootstrap"), d.ocr2Cfg.TraceLogging(), func(msg string) { - logger.Sugared(lggr).ErrorIf(d.jobORM.RecordError(ctx, jb.ID, msg), "unable to record error") - }), + BootstrapperFactory: d.peerWrapper.Peer2, + ContractConfigTracker: configProvider.ContractConfigTracker(), + Database: NewDB(d.ds, spec.ID, lggr), + 
LocalConfig: lc, + Logger: ocrLogger, OffchainConfigDigester: configProvider.OffchainConfigDigester(), } lggr.Debugw("Launching new bootstrap node", "args", bootstrapNodeArgs) @@ -178,7 +178,7 @@ func (d *Delegate) ServicesForSpec(ctx context.Context, jb job.Job) (services [] if err != nil { return nil, errors.Wrap(err, "error calling NewBootstrapNode") } - return []job.ServiceCtx{configProvider, job.NewServiceAdapter(bootstrapper)}, nil + return []job.ServiceCtx{configProvider, ocrLogger, job.NewServiceAdapter(bootstrapper)}, nil } // AfterJobCreated satisfies the job.Delegate interface. diff --git a/core/services/ocrcommon/block_translator.go b/core/services/ocrcommon/block_translator.go index 6ef64499fa..06fd994199 100644 --- a/core/services/ocrcommon/block_translator.go +++ b/core/services/ocrcommon/block_translator.go @@ -21,7 +21,7 @@ func NewBlockTranslator(cfg Config, client evmclient.Client, lggr logger.Logger) switch cfg.ChainType() { case config.ChainArbitrum: return NewArbitrumBlockTranslator(client, lggr) - case "", config.ChainCelo, config.ChainGnosis, config.ChainKroma, config.ChainMetis, config.ChainOptimismBedrock, config.ChainScroll, config.ChainWeMix, config.ChainXDai, config.ChainXLayer, config.ChainZkSync: + case "", config.ChainCelo, config.ChainGnosis, config.ChainKroma, config.ChainMetis, config.ChainOptimismBedrock, config.ChainScroll, config.ChainWeMix, config.ChainXLayer, config.ChainZkSync: fallthrough default: return &l1BlockTranslator{} diff --git a/core/services/ocrcommon/ocr_logger.go b/core/services/ocrcommon/ocr_logger.go new file mode 100644 index 0000000000..50a8c9adc7 --- /dev/null +++ b/core/services/ocrcommon/ocr_logger.go @@ -0,0 +1,30 @@ +package ocrcommon + +import ( + "context" + + ocrtypes "github.com/smartcontractkit/libocr/commontypes" + + "github.com/smartcontractkit/chainlink-common/pkg/logger" + "github.com/smartcontractkit/chainlink-common/pkg/services" +) + +type ocrLoggerService struct { + stopCh services.StopChan + ocrtypes.Logger +} + +func NewOCRWrapper(l logger.Logger, trace bool, saveError func(context.Context, string)) *ocrLoggerService { + stopCh := make(services.StopChan) + return &ocrLoggerService{ + stopCh: stopCh, + Logger: logger.NewOCRWrapper(l, trace, func(s string) { + ctx, cancel := stopCh.NewCtx() + defer cancel() + saveError(ctx, s) + }), + } +} + +func (*ocrLoggerService) Start(context.Context) error { return nil } +func (s *ocrLoggerService) Close() error { close(s.stopCh); return nil } diff --git a/core/services/ocrcommon/transmitter.go b/core/services/ocrcommon/transmitter.go index 423db2316a..f73b6393b9 100644 --- a/core/services/ocrcommon/transmitter.go +++ b/core/services/ocrcommon/transmitter.go @@ -3,11 +3,13 @@ package ocrcommon import ( "context" "math/big" + "slices" "github.com/ethereum/go-ethereum/common" "github.com/pkg/errors" "github.com/smartcontractkit/chainlink/v2/common/txmgr/types" + "github.com/smartcontractkit/chainlink/v2/core/chains/evm/forwarders" "github.com/smartcontractkit/chainlink/v2/core/chains/evm/txmgr" ) @@ -64,6 +66,51 @@ func NewTransmitter( }, nil } +type txManagerOCR2 interface { + CreateTransaction(ctx context.Context, txRequest txmgr.TxRequest) (tx txmgr.Tx, err error) + GetForwarderForEOAOCR2Feeds(ctx context.Context, eoa, ocr2AggregatorID common.Address) (forwarder common.Address, err error) +} + +type ocr2FeedsTransmitter struct { + ocr2Aggregator common.Address + txManagerOCR2 + transmitter +} + +// NewOCR2FeedsTransmitter creates a new eth transmitter that handles OCR2 Feeds 
specific logic surrounding forwarders. +// ocr2FeedsTransmitter validates forwarders before every transmission, enabling smooth onchain config changes without job restarts. +func NewOCR2FeedsTransmitter( + txm txManagerOCR2, + fromAddresses []common.Address, + ocr2Aggregator common.Address, + gasLimit uint64, + effectiveTransmitterAddress common.Address, + strategy types.TxStrategy, + checker txmgr.TransmitCheckerSpec, + chainID *big.Int, + keystore roundRobinKeystore, +) (Transmitter, error) { + // Ensure that a keystore is provided. + if keystore == nil { + return nil, errors.New("nil keystore provided to transmitter") + } + + return &ocr2FeedsTransmitter{ + ocr2Aggregator: ocr2Aggregator, + txManagerOCR2: txm, + transmitter: transmitter{ + txm: txm, + fromAddresses: fromAddresses, + gasLimit: gasLimit, + effectiveTransmitterAddress: effectiveTransmitterAddress, + strategy: strategy, + checker: checker, + chainID: chainID, + keystore: keystore, + }, + }, nil +} + func (t *transmitter) CreateEthTransaction(ctx context.Context, toAddress common.Address, payload []byte, txMeta *txmgr.TxMeta) error { roundRobinFromAddress, err := t.keystore.GetRoundRobinAddress(ctx, t.chainID, t.fromAddresses...) @@ -96,3 +143,65 @@ func (t *transmitter) forwarderAddress() common.Address { } return t.effectiveTransmitterAddress } + +func (t *ocr2FeedsTransmitter) CreateEthTransaction(ctx context.Context, toAddress common.Address, payload []byte, txMeta *txmgr.TxMeta) error { + roundRobinFromAddress, err := t.keystore.GetRoundRobinAddress(ctx, t.chainID, t.fromAddresses...) + if err != nil { + return errors.Wrap(err, "skipped OCR transmission, error getting round-robin address") + } + + forwarderAddress, err := t.forwarderAddress(ctx, roundRobinFromAddress, toAddress) + if err != nil { + return err + } + + _, err = t.txm.CreateTransaction(ctx, txmgr.TxRequest{ + FromAddress: roundRobinFromAddress, + ToAddress: toAddress, + EncodedPayload: payload, + FeeLimit: t.gasLimit, + ForwarderAddress: forwarderAddress, + Strategy: t.strategy, + Checker: t.checker, + Meta: txMeta, + }) + + return errors.Wrap(err, "skipped OCR transmission") +} + +// FromAddress for ocr2FeedsTransmitter returns valid forwarder or effectiveTransmitterAddress if forwarders are not set. +func (t *ocr2FeedsTransmitter) FromAddress() common.Address { + roundRobinFromAddress, err := t.keystore.GetRoundRobinAddress(context.Background(), t.chainID, t.fromAddresses...) + if err != nil { + return t.effectiveTransmitterAddress + } + + forwarderAddress, err := t.GetForwarderForEOAOCR2Feeds(context.Background(), roundRobinFromAddress, t.ocr2Aggregator) + if errors.Is(err, forwarders.ErrForwarderForEOANotFound) { + // if there are no valid forwarders try to fallback to eoa + return roundRobinFromAddress + } else if err != nil { + return t.effectiveTransmitterAddress + } + + return forwarderAddress +} + +func (t *ocr2FeedsTransmitter) forwarderAddress(ctx context.Context, eoa, ocr2Aggregator common.Address) (common.Address, error) { + // If effectiveTransmitterAddress is in fromAddresses, then forwarders aren't set. 
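+	// In that case the zero address is returned, which leaves ForwarderAddress unset on the transaction request.
+	// Otherwise the forwarder registered for this EOA/aggregator pair is looked up; a result that is one of our own
+	// sending keys means no valid forwarder exists, so the zero address is used there as well.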
+ if slices.Contains(t.fromAddresses, t.effectiveTransmitterAddress) { + return common.Address{}, nil + } + + forwarderAddress, err := t.GetForwarderForEOAOCR2Feeds(ctx, eoa, ocr2Aggregator) + if err != nil { + return common.Address{}, err + } + + // if forwarder address is in fromAddresses, then none of the forwarders are valid + if slices.Contains(t.fromAddresses, forwarderAddress) { + forwarderAddress = common.Address{} + } + + return forwarderAddress, nil +} diff --git a/core/services/pipeline/orm.go b/core/services/pipeline/orm.go index 0a96a7e08d..b81dfd133e 100644 --- a/core/services/pipeline/orm.go +++ b/core/services/pipeline/orm.go @@ -744,7 +744,7 @@ func (o *orm) prune(tx sqlutil.DataSource, jobID int32) { } func (o *orm) execPrune(ctx context.Context, jobID int32) { - res, err := o.ds.ExecContext(o.ctx, `DELETE FROM pipeline_runs WHERE pruning_key = $1 AND state = $2 AND id NOT IN ( + res, err := o.ds.ExecContext(ctx, `DELETE FROM pipeline_runs WHERE pruning_key = $1 AND state = $2 AND id NOT IN ( SELECT id FROM pipeline_runs WHERE pruning_key = $1 AND state = $2 ORDER BY id DESC diff --git a/core/services/pipeline/runner.go b/core/services/pipeline/runner.go index 2de27b3d00..1e2f6509a5 100644 --- a/core/services/pipeline/runner.go +++ b/core/services/pipeline/runner.go @@ -232,6 +232,23 @@ func init() { } } +// overtimeContext returns a modified context for overtime work, since tasks are expected to keep running and return +// results, even after context cancellation. +func overtimeContext(ctx context.Context) (context.Context, context.CancelFunc) { + if d, ok := ctx.Deadline(); ok { + // We do not use context.WithDeadline/Timeout in order to prevent the monitor hook from logging noisily, since + // we expect and want these operations to use most of their allotted time. 
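+	// The returned context ignores the parent's cancellation (context.WithoutCancel) but still expires on its own:
+	// a timer cancels it `overtime` after the parent's original deadline, and context.AfterFunc stops that timer if
+	// the overtime context is cancelled first.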
+ // TODO replace with custom thresholds: https://smartcontract-it.atlassian.net/browse/BCF-3252 + var cancel context.CancelFunc + ctx, cancel = context.WithCancel(context.WithoutCancel(ctx)) + t := time.AfterFunc(time.Until(d.Add(overtime)), cancel) + stop := context.AfterFunc(ctx, func() { t.Stop() }) + return ctx, func() { cancel(); stop() } + } + // do not propagate cancellation in any case + return context.WithoutCancel(ctx), func() {} +} + func (r *runner) ExecuteRun( ctx context.Context, spec Spec, @@ -457,18 +474,14 @@ func (r *runner) run(ctx context.Context, pipeline *Pipeline, run *Run, vars Var "run.Inputs", run.Inputs, ) } - if run.HasFatalErrors() { - l = l.With("run.FatalErrors", run.FatalErrors) - } - if run.HasErrors() { - l = l.With("run.AllErrors", run.AllErrors) - } l = l.With("run.State", run.State, "fatal", run.HasFatalErrors(), "runTime", runTime) if run.HasFatalErrors() { // This will also log at error level in OCR if it fails Observe so the // level is appropriate - l.Errorw("Completed pipeline run with fatal errors") + l = l.With("run.FatalErrors", run.FatalErrors) + l.Debugw("Completed pipeline run with fatal errors") } else if run.HasErrors() { + l = l.With("run.AllErrors", run.AllErrors) l.Debugw("Completed pipeline run with errors") } else { l.Debugw("Completed pipeline run successfully") diff --git a/core/services/pipeline/task.bridge.go b/core/services/pipeline/task.bridge.go index 7995cf9929..103e566466 100644 --- a/core/services/pipeline/task.bridge.go +++ b/core/services/pipeline/task.bridge.go @@ -109,7 +109,10 @@ func (t *BridgeTask) Run(ctx context.Context, lggr logger.Logger, vars Vars, inp return Result{Error: errors.Errorf("headers must have an even number of elements")}, runInfo } - url, err := t.getBridgeURLFromName(ctx, name) + overtimeCtx, cancel := overtimeContext(ctx) + defer cancel() + + url, err := t.getBridgeURLFromName(overtimeCtx, name) if err != nil { return Result{Error: err}, runInfo } @@ -181,7 +184,7 @@ func (t *BridgeTask) Run(ctx context.Context, lggr logger.Logger, vars Vars, inp } var cacheErr error - responseBytes, cacheErr = t.orm.GetCachedResponse(ctx, t.dotID, t.specId, cacheDuration) + responseBytes, cacheErr = t.orm.GetCachedResponse(overtimeCtx, t.dotID, t.specId, cacheDuration) if cacheErr != nil { promBridgeCacheErrors.WithLabelValues(t.Name).Inc() if !errors.Is(cacheErr, sql.ErrNoRows) { @@ -217,7 +220,7 @@ func (t *BridgeTask) Run(ctx context.Context, lggr logger.Logger, vars Vars, inp } if !cachedResponse && cacheTTL > 0 { - err := t.orm.UpsertBridgeResponse(ctx, t.dotID, t.specId, responseBytes) + err := t.orm.UpsertBridgeResponse(overtimeCtx, t.dotID, t.specId, responseBytes) if err != nil { lggr.Errorw("Bridge task: failed to upsert response in bridge cache", "err", err) } @@ -241,7 +244,7 @@ func (t *BridgeTask) Run(ctx context.Context, lggr logger.Logger, vars Vars, inp return result, runInfo } -func (t BridgeTask) getBridgeURLFromName(ctx context.Context, name StringParam) (URLParam, error) { +func (t *BridgeTask) getBridgeURLFromName(ctx context.Context, name StringParam) (URLParam, error) { bt, err := t.orm.FindBridge(ctx, bridges.BridgeName(name)) if err != nil { return URLParam{}, errors.Wrapf(err, "could not find bridge with name '%s'", name) diff --git a/core/services/pipeline/task.bridge_test.go b/core/services/pipeline/task.bridge_test.go index e95aef4984..b707aa62dc 100644 --- a/core/services/pipeline/task.bridge_test.go +++ b/core/services/pipeline/task.bridge_test.go @@ -1,6 +1,7 @@ package 
pipeline_test import ( + "context" "encoding/json" "fmt" "io" @@ -1139,3 +1140,74 @@ func TestBridgeTask_AdapterResponseStatusFailure(t *testing.T) { require.False(t, runInfo.IsRetryable) require.False(t, runInfo.IsPending) } + +func TestBridgeTask_AdapterTimeout(t *testing.T) { + t.Parallel() + ctx := testutils.Context(t) + + db := pgtest.NewSqlxDB(t) + cfg := configtest.NewGeneralConfig(t, func(c *chainlink.Config, s *chainlink.Secrets) { + c.WebServer.BridgeCacheTTL = commonconfig.MustNewDuration(1 * time.Minute) + }) + + s1 := httptest.NewServer( + http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + time.Sleep(time.Second) // delay enough to time-out + })) + defer s1.Close() + + feedURL, err := url.ParseRequestURI(s1.URL) + require.NoError(t, err) + + orm := bridges.NewORM(db) + _, bridge := cltest.MustCreateBridge(t, db, cltest.BridgeOpts{URL: feedURL.String()}) + + task := pipeline.BridgeTask{ + BaseTask: pipeline.NewBaseTask(0, "bridge", nil, nil, 0), + Name: bridge.Name.String(), + RequestData: btcUSDPairing, + } + c := clhttptest.NewTestLocalOnlyHTTPClient() + trORM := pipeline.NewORM(db, logger.TestLogger(t), cfg.JobPipeline().MaxSuccessfulRuns()) + specID, err := trORM.CreateSpec(ctx, pipeline.Pipeline{}, *models.NewInterval(5 * time.Minute)) + require.NoError(t, err) + task.HelperSetDependencies(cfg.JobPipeline(), cfg.WebServer(), orm, specID, uuid.UUID{}, c) + + // Insert entry 1m in the past, stale value, should not be used in case of EA failure. + _, err = db.ExecContext(ctx, `INSERT INTO bridge_last_value(dot_id, spec_id, value, finished_at) + VALUES($1, $2, $3, $4) ON CONFLICT ON CONSTRAINT bridge_last_value_pkey + DO UPDATE SET value = $3, finished_at = $4;`, task.DotID(), specID, big.NewInt(9700).Bytes(), time.Now()) + require.NoError(t, err) + + vars := pipeline.NewVarsFrom( + map[string]interface{}{ + "jobRun": map[string]interface{}{ + "meta": map[string]interface{}{ + "shouldFail": true, + }, + }, + }, + ) + + t.Run("pre-cancelled", func(t *testing.T) { + ctx, cancel := context.WithCancel(testutils.Context(t)) + cancel() // pre-cancelled + result, runInfo := task.Run(ctx, logger.TestLogger(t), vars, nil) + + require.NoError(t, result.Error) + require.NotNil(t, result.Value) + require.False(t, runInfo.IsRetryable) + require.False(t, runInfo.IsPending) + }) + + t.Run("short", func(t *testing.T) { + ctx, cancel := context.WithTimeout(testutils.Context(t), time.Millisecond) + t.Cleanup(cancel) + result, runInfo := task.Run(ctx, logger.TestLogger(t), vars, nil) + + require.NoError(t, result.Error) + require.NotNil(t, result.Value) + require.False(t, runInfo.IsRetryable) + require.False(t, runInfo.IsPending) + }) +} diff --git a/core/services/pipeline/task.eth_tx.go b/core/services/pipeline/task.eth_tx.go index 354651acbb..964591cacd 100644 --- a/core/services/pipeline/task.eth_tx.go +++ b/core/services/pipeline/task.eth_tx.go @@ -140,7 +140,7 @@ func (t *ETHTxTask) Run(ctx context.Context, lggr logger.Logger, vars Vars, inpu var forwarderAddress common.Address if t.forwardingAllowed { var fwderr error - forwarderAddress, fwderr = chain.TxManager().GetForwarderForEOA(fromAddr) + forwarderAddress, fwderr = chain.TxManager().GetForwarderForEOA(ctx, fromAddr) if fwderr != nil { lggr.Warnw("Skipping forwarding for job, will fallback to default behavior", "err", fwderr) } diff --git a/core/services/relay/evm/evm.go b/core/services/relay/evm/evm.go index a1ba6ed520..5d5b1fa04e 100644 --- a/core/services/relay/evm/evm.go +++ b/core/services/relay/evm/evm.go @@ 
-522,16 +522,34 @@ func newOnChainContractTransmitter(ctx context.Context, lggr logger.Logger, rarg gasLimit = uint64(*opts.pluginGasLimit) } - transmitter, err := ocrcommon.NewTransmitter( - configWatcher.chain.TxManager(), - fromAddresses, - gasLimit, - effectiveTransmitterAddress, - strategy, - checker, - configWatcher.chain.ID(), - ethKeystore, - ) + var transmitter Transmitter + var err error + + switch commontypes.OCR2PluginType(rargs.ProviderType) { + case commontypes.Median: + transmitter, err = ocrcommon.NewOCR2FeedsTransmitter( + configWatcher.chain.TxManager(), + fromAddresses, + common.HexToAddress(rargs.ContractID), + gasLimit, + effectiveTransmitterAddress, + strategy, + checker, + configWatcher.chain.ID(), + ethKeystore, + ) + default: + transmitter, err = ocrcommon.NewTransmitter( + configWatcher.chain.TxManager(), + fromAddresses, + gasLimit, + effectiveTransmitterAddress, + strategy, + checker, + configWatcher.chain.ID(), + ethKeystore, + ) + } if err != nil { return nil, pkgerrors.Wrap(err, "failed to create transmitter") } diff --git a/core/services/relay/evm/mercury/queue.go b/core/services/relay/evm/mercury/queue.go index 8a89f47302..30a6e5e6ea 100644 --- a/core/services/relay/evm/mercury/queue.go +++ b/core/services/relay/evm/mercury/queue.go @@ -25,7 +25,7 @@ type asyncDeleter interface { AsyncDelete(req *pb.TransmitRequest) } -var _ services.Service = (*TransmitQueue)(nil) +var _ services.Service = (*transmitQueue)(nil) var transmitQueueLoad = promauto.NewGaugeVec(prometheus.GaugeOpts{ Name: "mercury_transmit_queue_load", @@ -40,7 +40,7 @@ const promInterval = 6500 * time.Millisecond // TransmitQueue is the high-level package that everything outside of this file should be using // It stores pending transmissions, yielding the latest (highest priority) first to the caller -type TransmitQueue struct { +type transmitQueue struct { services.StateMachine cond sync.Cond @@ -62,11 +62,20 @@ type Transmission struct { ReportCtx ocrtypes.ReportContext // contains priority information (latest epoch/round wins) } +type TransmitQueue interface { + services.Service + + BlockingPop() (t *Transmission) + Push(req *pb.TransmitRequest, reportCtx ocrtypes.ReportContext) (ok bool) + Init(transmissions []*Transmission) + IsEmpty() bool +} + // maxlen controls how many items will be stored in the queue // 0 means unlimited - be careful, this can cause memory leaks -func NewTransmitQueue(lggr logger.Logger, serverURL, feedID string, maxlen int, asyncDeleter asyncDeleter) *TransmitQueue { +func NewTransmitQueue(lggr logger.Logger, serverURL, feedID string, maxlen int, asyncDeleter asyncDeleter) TransmitQueue { mu := new(sync.RWMutex) - return &TransmitQueue{ + return &transmitQueue{ services.StateMachine{}, sync.Cond{L: mu}, lggr.Named("TransmitQueue"), @@ -80,13 +89,13 @@ func NewTransmitQueue(lggr logger.Logger, serverURL, feedID string, maxlen int, } } -func (tq *TransmitQueue) Init(transmissions []*Transmission) { +func (tq *transmitQueue) Init(transmissions []*Transmission) { pq := priorityQueue(transmissions) heap.Init(&pq) // ensure the heap is ordered tq.pq = &pq } -func (tq *TransmitQueue) Push(req *pb.TransmitRequest, reportCtx ocrtypes.ReportContext) (ok bool) { +func (tq *transmitQueue) Push(req *pb.TransmitRequest, reportCtx ocrtypes.ReportContext) (ok bool) { tq.cond.L.Lock() defer tq.cond.L.Unlock() @@ -111,7 +120,7 @@ func (tq *TransmitQueue) Push(req *pb.TransmitRequest, reportCtx ocrtypes.Report // BlockingPop will block until at least one item is in the heap, and then 
return it // If the queue is closed, it will immediately return nil -func (tq *TransmitQueue) BlockingPop() (t *Transmission) { +func (tq *transmitQueue) BlockingPop() (t *Transmission) { tq.cond.L.Lock() defer tq.cond.L.Unlock() if tq.closed { @@ -126,13 +135,13 @@ func (tq *TransmitQueue) BlockingPop() (t *Transmission) { return t } -func (tq *TransmitQueue) IsEmpty() bool { +func (tq *transmitQueue) IsEmpty() bool { tq.mu.RLock() defer tq.mu.RUnlock() return tq.pq.Len() == 0 } -func (tq *TransmitQueue) Start(context.Context) error { +func (tq *transmitQueue) Start(context.Context) error { return tq.StartOnce("TransmitQueue", func() error { t := time.NewTicker(utils.WithJitter(promInterval)) wg := new(sync.WaitGroup) @@ -148,7 +157,7 @@ func (tq *TransmitQueue) Start(context.Context) error { }) } -func (tq *TransmitQueue) Close() error { +func (tq *transmitQueue) Close() error { return tq.StopOnce("TransmitQueue", func() error { tq.cond.L.Lock() tq.closed = true @@ -159,7 +168,7 @@ func (tq *TransmitQueue) Close() error { }) } -func (tq *TransmitQueue) monitorLoop(c <-chan time.Time, chStop <-chan struct{}, wg *sync.WaitGroup) { +func (tq *transmitQueue) monitorLoop(c <-chan time.Time, chStop <-chan struct{}, wg *sync.WaitGroup) { defer wg.Done() for { @@ -172,25 +181,25 @@ func (tq *TransmitQueue) monitorLoop(c <-chan time.Time, chStop <-chan struct{}, } } -func (tq *TransmitQueue) report() { +func (tq *transmitQueue) report() { tq.mu.RLock() length := tq.pq.Len() tq.mu.RUnlock() tq.transmitQueueLoad.Set(float64(length)) } -func (tq *TransmitQueue) Ready() error { +func (tq *transmitQueue) Ready() error { return nil } -func (tq *TransmitQueue) Name() string { return tq.lggr.Name() } -func (tq *TransmitQueue) HealthReport() map[string]error { +func (tq *transmitQueue) Name() string { return tq.lggr.Name() } +func (tq *transmitQueue) HealthReport() map[string]error { report := map[string]error{tq.Name(): errors.Join( tq.status(), )} return report } -func (tq *TransmitQueue) status() (merr error) { +func (tq *transmitQueue) status() (merr error) { tq.mu.RLock() length := tq.pq.Len() closed := tq.closed @@ -206,7 +215,7 @@ func (tq *TransmitQueue) status() (merr error) { // pop latest Transmission from the heap // Not thread-safe -func (tq *TransmitQueue) pop() *Transmission { +func (tq *transmitQueue) pop() *Transmission { if tq.pq.Len() == 0 { return nil } diff --git a/core/services/relay/evm/mercury/transmitter.go b/core/services/relay/evm/mercury/transmitter.go index 82a76450e5..d8ef981ff0 100644 --- a/core/services/relay/evm/mercury/transmitter.go +++ b/core/services/relay/evm/mercury/transmitter.go @@ -147,10 +147,12 @@ type server struct { c wsrpc.Client pm *PersistenceManager - q *TransmitQueue + q TransmitQueue deleteQueue chan *pb.TransmitRequest + url string + transmitSuccessCount prometheus.Counter transmitDuplicateCount prometheus.Counter transmitConnectionErrorCount prometheus.Counter @@ -262,7 +264,7 @@ func (s *server) runQueueLoop(stopCh services.StopChan, wg *sync.WaitGroup, feed s.transmitDuplicateCount.Inc() s.lggr.Debugw("Transmit report success; duplicate report", "payload", hexutil.Encode(t.Req.Payload), "response", res, "reportCtx", t.ReportCtx) default: - transmitServerErrorCount.WithLabelValues(feedIDHex, fmt.Sprintf("%d", res.Code)).Inc() + transmitServerErrorCount.WithLabelValues(feedIDHex, s.url, fmt.Sprintf("%d", res.Code)).Inc() s.lggr.Errorw("Transmit report failed; mercury server returned error", "response", res, "reportCtx", t.ReportCtx, "err", res.Error, 
"code", res.Code) } } @@ -275,26 +277,31 @@ func (s *server) runQueueLoop(stopCh services.StopChan, wg *sync.WaitGroup, feed } } +func newServer(lggr logger.Logger, cfg TransmitterConfig, client wsrpc.Client, pm *PersistenceManager, serverURL, feedIDHex string) *server { + return &server{ + lggr, + cfg.TransmitTimeout().Duration(), + client, + pm, + NewTransmitQueue(lggr, serverURL, feedIDHex, int(cfg.TransmitQueueMaxSize()), pm), + make(chan *pb.TransmitRequest, int(cfg.TransmitQueueMaxSize())), + serverURL, + transmitSuccessCount.WithLabelValues(feedIDHex, serverURL), + transmitDuplicateCount.WithLabelValues(feedIDHex, serverURL), + transmitConnectionErrorCount.WithLabelValues(feedIDHex, serverURL), + transmitQueueDeleteErrorCount.WithLabelValues(feedIDHex, serverURL), + transmitQueueInsertErrorCount.WithLabelValues(feedIDHex, serverURL), + transmitQueuePushErrorCount.WithLabelValues(feedIDHex, serverURL), + } +} + func NewTransmitter(lggr logger.Logger, cfg TransmitterConfig, clients map[string]wsrpc.Client, fromAccount ed25519.PublicKey, jobID int32, feedID [32]byte, orm ORM, codec TransmitterReportDecoder) *mercuryTransmitter { feedIDHex := fmt.Sprintf("0x%x", feedID[:]) servers := make(map[string]*server, len(clients)) for serverURL, client := range clients { cLggr := lggr.Named(serverURL).With("serverURL", serverURL) pm := NewPersistenceManager(cLggr, serverURL, orm, jobID, int(cfg.TransmitQueueMaxSize()), flushDeletesFrequency, pruneFrequency) - servers[serverURL] = &server{ - cLggr, - cfg.TransmitTimeout().Duration(), - client, - pm, - NewTransmitQueue(cLggr, serverURL, feedIDHex, int(cfg.TransmitQueueMaxSize()), pm), - make(chan *pb.TransmitRequest, int(cfg.TransmitQueueMaxSize())), - transmitSuccessCount.WithLabelValues(feedIDHex, serverURL), - transmitDuplicateCount.WithLabelValues(feedIDHex, serverURL), - transmitConnectionErrorCount.WithLabelValues(feedIDHex, serverURL), - transmitQueueDeleteErrorCount.WithLabelValues(feedIDHex, serverURL), - transmitQueueInsertErrorCount.WithLabelValues(feedIDHex, serverURL), - transmitQueuePushErrorCount.WithLabelValues(feedIDHex, serverURL), - } + servers[serverURL] = newServer(cLggr, cfg, client, pm, serverURL, feedIDHex) } return &mercuryTransmitter{ services.StateMachine{}, diff --git a/core/services/relay/evm/mercury/transmitter_test.go b/core/services/relay/evm/mercury/transmitter_test.go index b0da9bea63..8224adc0d7 100644 --- a/core/services/relay/evm/mercury/transmitter_test.go +++ b/core/services/relay/evm/mercury/transmitter_test.go @@ -3,6 +3,7 @@ package mercury import ( "context" "math/big" + "sync" "testing" "time" @@ -14,6 +15,7 @@ import ( ocrtypes "github.com/smartcontractkit/libocr/offchainreporting2plus/types" commonconfig "github.com/smartcontractkit/chainlink-common/pkg/config" + "github.com/smartcontractkit/chainlink/v2/core/chains/evm/utils" "github.com/smartcontractkit/chainlink/v2/core/internal/testutils" "github.com/smartcontractkit/chainlink/v2/core/internal/testutils/pgtest" "github.com/smartcontractkit/chainlink/v2/core/logger" @@ -55,8 +57,8 @@ func Test_MercuryTransmitter_Transmit(t *testing.T) { require.NoError(t, err) // ensure it was added to the queue - require.Equal(t, mt.servers[sURL].q.pq.Len(), 1) - assert.Subset(t, mt.servers[sURL].q.pq.Pop().(*Transmission).Req.Payload, report) + require.Equal(t, mt.servers[sURL].q.(*transmitQueue).pq.Len(), 1) + assert.Subset(t, mt.servers[sURL].q.(*transmitQueue).pq.Pop().(*Transmission).Req.Payload, report) }) t.Run("v2 report transmission successfully enqueued", 
func(t *testing.T) { report := sampleV2Report @@ -69,8 +71,8 @@ func Test_MercuryTransmitter_Transmit(t *testing.T) { require.NoError(t, err) // ensure it was added to the queue - require.Equal(t, mt.servers[sURL].q.pq.Len(), 1) - assert.Subset(t, mt.servers[sURL].q.pq.Pop().(*Transmission).Req.Payload, report) + require.Equal(t, mt.servers[sURL].q.(*transmitQueue).pq.Len(), 1) + assert.Subset(t, mt.servers[sURL].q.(*transmitQueue).pq.Pop().(*Transmission).Req.Payload, report) }) t.Run("v3 report transmission successfully enqueued", func(t *testing.T) { report := sampleV3Report @@ -83,8 +85,8 @@ func Test_MercuryTransmitter_Transmit(t *testing.T) { require.NoError(t, err) // ensure it was added to the queue - require.Equal(t, mt.servers[sURL].q.pq.Len(), 1) - assert.Subset(t, mt.servers[sURL].q.pq.Pop().(*Transmission).Req.Payload, report) + require.Equal(t, mt.servers[sURL].q.(*transmitQueue).pq.Len(), 1) + assert.Subset(t, mt.servers[sURL].q.(*transmitQueue).pq.Pop().(*Transmission).Req.Payload, report) }) }) @@ -105,12 +107,12 @@ func Test_MercuryTransmitter_Transmit(t *testing.T) { require.NoError(t, err) // ensure it was added to the queue - require.Equal(t, mt.servers[sURL].q.pq.Len(), 1) - assert.Subset(t, mt.servers[sURL].q.pq.Pop().(*Transmission).Req.Payload, report) - require.Equal(t, mt.servers[sURL2].q.pq.Len(), 1) - assert.Subset(t, mt.servers[sURL2].q.pq.Pop().(*Transmission).Req.Payload, report) - require.Equal(t, mt.servers[sURL3].q.pq.Len(), 1) - assert.Subset(t, mt.servers[sURL3].q.pq.Pop().(*Transmission).Req.Payload, report) + require.Equal(t, mt.servers[sURL].q.(*transmitQueue).pq.Len(), 1) + assert.Subset(t, mt.servers[sURL].q.(*transmitQueue).pq.Pop().(*Transmission).Req.Payload, report) + require.Equal(t, mt.servers[sURL2].q.(*transmitQueue).pq.Len(), 1) + assert.Subset(t, mt.servers[sURL2].q.(*transmitQueue).pq.Pop().(*Transmission).Req.Payload, report) + require.Equal(t, mt.servers[sURL3].q.(*transmitQueue).pq.Len(), 1) + assert.Subset(t, mt.servers[sURL3].q.(*transmitQueue).pq.Pop().(*Transmission).Req.Payload, report) }) } @@ -395,3 +397,166 @@ func Test_sortReportsLatestFirst(t *testing.T) { assert.Nil(t, reports[6]) assert.Nil(t, reports[7]) } + +type mockQ struct { + ch chan *Transmission +} + +func newMockQ() *mockQ { + return &mockQ{make(chan *Transmission, 100)} +} + +func (m *mockQ) Start(context.Context) error { return nil } +func (m *mockQ) Close() error { + m.ch <- nil + return nil +} +func (m *mockQ) Ready() error { return nil } +func (m *mockQ) HealthReport() map[string]error { return nil } +func (m *mockQ) Name() string { return "" } +func (m *mockQ) BlockingPop() (t *Transmission) { + val := <-m.ch + return val +} +func (m *mockQ) Push(req *pb.TransmitRequest, reportCtx ocrtypes.ReportContext) (ok bool) { + m.ch <- &Transmission{Req: req, ReportCtx: reportCtx} + return true +} +func (m *mockQ) Init(transmissions []*Transmission) {} +func (m *mockQ) IsEmpty() bool { return false } + +func Test_MercuryTransmitter_runQueueLoop(t *testing.T) { + feedIDHex := utils.NewHash().Hex() + lggr := logger.TestLogger(t) + c := &mocks.MockWSRPCClient{} + db := pgtest.NewSqlxDB(t) + orm := NewORM(db) + pm := NewPersistenceManager(lggr, sURL, orm, 0, 0, 0, 0) + cfg := mockCfg{} + + s := newServer(lggr, cfg, c, pm, sURL, feedIDHex) + + req := &pb.TransmitRequest{ + Payload: []byte{1, 2, 3}, + ReportFormat: 32, + } + + t.Run("pulls from queue and transmits successfully", func(t *testing.T) { + transmit := make(chan *pb.TransmitRequest, 1) + c.TransmitF = func(ctx 
context.Context, in *pb.TransmitRequest) (*pb.TransmitResponse, error) { + transmit <- in + return &pb.TransmitResponse{Code: 0, Error: ""}, nil + } + q := newMockQ() + s.q = q + wg := &sync.WaitGroup{} + wg.Add(1) + + go s.runQueueLoop(nil, wg, feedIDHex) + + q.Push(req, sampleReportContext) + + select { + case tr := <-transmit: + assert.Equal(t, []byte{1, 2, 3}, tr.Payload) + assert.Equal(t, 32, int(tr.ReportFormat)) + // case <-time.After(testutils.WaitTimeout(t)): + case <-time.After(1 * time.Second): + t.Fatal("expected a transmit request to be sent") + } + + q.Close() + wg.Wait() + }) + + t.Run("on duplicate, success", func(t *testing.T) { + transmit := make(chan *pb.TransmitRequest, 1) + c.TransmitF = func(ctx context.Context, in *pb.TransmitRequest) (*pb.TransmitResponse, error) { + transmit <- in + return &pb.TransmitResponse{Code: DuplicateReport, Error: ""}, nil + } + q := newMockQ() + s.q = q + wg := &sync.WaitGroup{} + wg.Add(1) + + go s.runQueueLoop(nil, wg, feedIDHex) + + q.Push(req, sampleReportContext) + + select { + case tr := <-transmit: + assert.Equal(t, []byte{1, 2, 3}, tr.Payload) + assert.Equal(t, 32, int(tr.ReportFormat)) + // case <-time.After(testutils.WaitTimeout(t)): + case <-time.After(1 * time.Second): + t.Fatal("expected a transmit request to be sent") + } + + q.Close() + wg.Wait() + }) + t.Run("on server-side error, does not retry", func(t *testing.T) { + transmit := make(chan *pb.TransmitRequest, 1) + c.TransmitF = func(ctx context.Context, in *pb.TransmitRequest) (*pb.TransmitResponse, error) { + transmit <- in + return &pb.TransmitResponse{Code: 1, Error: "mercury server error"}, nil // simulate a non-duplicate server-side error (code and message are illustrative) + } + q := newMockQ() + s.q = q + wg := &sync.WaitGroup{} + wg.Add(1) + + go s.runQueueLoop(nil, wg, feedIDHex) + + q.Push(req, sampleReportContext) + + select { + case tr := <-transmit: + assert.Equal(t, []byte{1, 2, 3}, tr.Payload) + assert.Equal(t, 32, int(tr.ReportFormat)) + // case <-time.After(testutils.WaitTimeout(t)): + case <-time.After(1 * time.Second): + t.Fatal("expected a transmit request to be sent") + } + + q.Close() + wg.Wait() + }) + t.Run("on transmit error, retries", func(t *testing.T) { + transmit := make(chan *pb.TransmitRequest, 1) + c.TransmitF = func(ctx context.Context, in *pb.TransmitRequest) (*pb.TransmitResponse, error) { + transmit <- in + return &pb.TransmitResponse{}, errors.New("transmission error") + } + q := newMockQ() + s.q = q + wg := &sync.WaitGroup{} + wg.Add(1) + stopCh := make(chan struct{}, 1) + + go s.runQueueLoop(stopCh, wg, feedIDHex) + + q.Push(req, sampleReportContext) + + cnt := 0 + Loop: + for { + select { + case tr := <-transmit: + assert.Equal(t, []byte{1, 2, 3}, tr.Payload) + assert.Equal(t, 32, int(tr.ReportFormat)) + if cnt > 2 { + break Loop + } + cnt++ + // case <-time.After(testutils.WaitTimeout(t)): + case <-time.After(1 * time.Second): + t.Fatal("expected 3 transmit requests to be sent") + } + } + + close(stopCh) + wg.Wait() + }) +} diff --git a/core/store/migrate/migrations/0233_log_poller_word_topic_indexes.sql b/core/store/migrate/migrations/0233_log_poller_word_topic_indexes.sql index e155e20799..31f222dd7f 100644 --- a/core/store/migrate/migrations/0233_log_poller_word_topic_indexes.sql +++ b/core/store/migrate/migrations/0233_log_poller_word_topic_indexes.sql @@ -1,65 +1,8 @@ -- +goose Up -drop index if exists evm.evm_logs_idx_data_word_one; -drop index if exists evm.evm_logs_idx_data_word_two; -drop index if exists evm.evm_logs_idx_data_word_three; -drop index if exists evm.evm_logs_idx_data_word_four; -drop index if
exists evm.evm_logs_idx_topic_two; -drop index if exists evm.evm_logs_idx_topic_three; -drop index if exists evm.evm_logs_idx_topic_four; - -create index evm_logs_idx_data_word_one - on evm.logs (address, event_sig, evm_chain_id, "substring"(data, 1, 32)); - -create index evm_logs_idx_data_word_two - on evm.logs (address, event_sig, evm_chain_id, "substring"(data, 33, 32)); - -create index evm_logs_idx_data_word_three - on evm.logs (address, event_sig, evm_chain_id, "substring"(data, 65, 32)); - -create index evm_logs_idx_data_word_four - on evm.logs (address, event_sig, evm_chain_id, "substring"(data, 97, 32)); - create index evm_logs_idx_data_word_five on evm.logs (address, event_sig, evm_chain_id, "substring"(data, 129, 32)); -create index evm_logs_idx_topic_two - on evm.logs (address, event_sig, evm_chain_id, (topics[2])); - -create index evm_logs_idx_topic_three - on evm.logs (address, event_sig, evm_chain_id, (topics[3])); - -create index evm_logs_idx_topic_four - on evm.logs (address, event_sig, evm_chain_id, (topics[4])); - -- +goose Down -drop index if exists evm.evm_logs_idx_data_word_one; -drop index if exists evm.evm_logs_idx_data_word_two; -drop index if exists evm.evm_logs_idx_data_word_three; -drop index if exists evm.evm_logs_idx_data_word_four; drop index if exists evm.evm_logs_idx_data_word_five; -drop index if exists evm.evm_logs_idx_topic_two; -drop index if exists evm.evm_logs_idx_topic_three; -drop index if exists evm.evm_logs_idx_topic_four; - -create index evm_logs_idx_data_word_one - on evm.logs ("substring"(data, 1, 32)); - -create index evm_logs_idx_data_word_two - on evm.logs ("substring"(data, 33, 32)); - -create index evm_logs_idx_data_word_three - on evm.logs ("substring"(data, 65, 32)); - -create index evm_logs_idx_data_word_four - on evm.logs ("substring"(data, 97, 32)); - -create index evm_logs_idx_topic_two - on evm.logs ((topics[2])); - -create index evm_logs_idx_topic_three - on evm.logs ((topics[3])); - -create index evm_logs_idx_topic_four - on evm.logs ((topics[4])); \ No newline at end of file diff --git a/core/web/resolver/testdata/config-full.toml b/core/web/resolver/testdata/config-full.toml index 5c27817556..067b0d0b09 100644 --- a/core/web/resolver/testdata/config-full.toml +++ b/core/web/resolver/testdata/config-full.toml @@ -332,6 +332,8 @@ TransactionPercentile = 15 HistoryDepth = 15 MaxBufferSize = 17 SamplingInterval = '1h0m0s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [[EVM.KeySpecific]] Key = '0x2a3e23c6f242F5345320814aC8a1b4E58707D292' diff --git a/core/web/resolver/testdata/config-multi-chain-effective.toml b/core/web/resolver/testdata/config-multi-chain-effective.toml index 01beacb0c8..73ea1d075f 100644 --- a/core/web/resolver/testdata/config-multi-chain-effective.toml +++ b/core/web/resolver/testdata/config-multi-chain-effective.toml @@ -310,6 +310,8 @@ TransactionPercentile = 50 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [EVM.NodePool] PollFailureThreshold = 5 @@ -401,6 +403,8 @@ TransactionPercentile = 50 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [EVM.NodePool] PollFailureThreshold = 5 @@ -486,6 +490,8 @@ TransactionPercentile = 60 HistoryDepth = 2000 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [EVM.NodePool] PollFailureThreshold = 5 diff --git a/docs/CONFIG.md b/docs/CONFIG.md index c886d54db4..56990e284a 
100644 --- a/docs/CONFIG.md +++ b/docs/CONFIG.md @@ -1796,6 +1796,8 @@ TransactionPercentile = 50 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -1881,6 +1883,8 @@ TransactionPercentile = 50 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -1966,6 +1970,8 @@ TransactionPercentile = 50 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -2051,6 +2057,8 @@ TransactionPercentile = 50 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -2137,6 +2145,8 @@ TransactionPercentile = 60 HistoryDepth = 300 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -2222,6 +2232,8 @@ TransactionPercentile = 60 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -2307,6 +2319,8 @@ TransactionPercentile = 60 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -2393,6 +2407,8 @@ TransactionPercentile = 50 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -2478,6 +2494,8 @@ TransactionPercentile = 60 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -2562,6 +2580,8 @@ TransactionPercentile = 60 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -2646,6 +2666,8 @@ TransactionPercentile = 60 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -2731,6 +2753,8 @@ TransactionPercentile = 60 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = false [NodePool] PollFailureThreshold = 5 @@ -2817,6 +2841,8 @@ TransactionPercentile = 60 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -2902,6 +2928,8 @@ TransactionPercentile = 60 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -2987,6 +3015,8 @@ TransactionPercentile = 60 HistoryDepth = 2000 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -3072,6 +3102,8 @@ TransactionPercentile = 60 HistoryDepth = 2000 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -3157,6 +3189,8 @@ TransactionPercentile = 60 HistoryDepth = 2000 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -3242,6 
+3276,8 @@ TransactionPercentile = 60 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -3327,6 +3363,8 @@ TransactionPercentile = 60 HistoryDepth = 400 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -3412,6 +3450,8 @@ TransactionPercentile = 60 HistoryDepth = 5 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -3497,6 +3537,8 @@ TransactionPercentile = 60 HistoryDepth = 5 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -3582,6 +3624,8 @@ TransactionPercentile = 60 HistoryDepth = 5 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -3668,6 +3712,8 @@ TransactionPercentile = 60 HistoryDepth = 300 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -3753,6 +3799,8 @@ TransactionPercentile = 60 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -3839,6 +3887,8 @@ TransactionPercentile = 60 HistoryDepth = 300 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 2 @@ -3923,6 +3973,8 @@ TransactionPercentile = 60 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -4008,6 +4060,8 @@ TransactionPercentile = 60 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -4092,6 +4146,8 @@ TransactionPercentile = 60 HistoryDepth = 2000 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -4177,6 +4233,8 @@ TransactionPercentile = 60 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -4262,6 +4320,8 @@ TransactionPercentile = 60 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = false [NodePool] PollFailureThreshold = 5 @@ -4346,6 +4406,8 @@ TransactionPercentile = 60 HistoryDepth = 10 MaxBufferSize = 100 SamplingInterval = '0s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -4430,6 +4492,8 @@ TransactionPercentile = 60 HistoryDepth = 2000 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -4515,6 +4579,8 @@ TransactionPercentile = 60 HistoryDepth = 400 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -4599,6 +4665,8 @@ TransactionPercentile = 60 HistoryDepth = 2000 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -4684,6 +4752,8 @@ TransactionPercentile = 60 HistoryDepth = 100 MaxBufferSize 
= 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -4770,6 +4840,8 @@ TransactionPercentile = 60 HistoryDepth = 600 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -4854,6 +4926,8 @@ TransactionPercentile = 60 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -4939,6 +5013,8 @@ TransactionPercentile = 60 HistoryDepth = 300 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -5024,6 +5100,8 @@ TransactionPercentile = 60 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -5110,6 +5188,8 @@ TransactionPercentile = 60 HistoryDepth = 300 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 2 @@ -5196,6 +5276,8 @@ TransactionPercentile = 60 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -5281,6 +5363,8 @@ TransactionPercentile = 60 HistoryDepth = 50 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -5366,6 +5450,8 @@ TransactionPercentile = 60 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = false [NodePool] PollFailureThreshold = 5 @@ -5451,6 +5537,8 @@ TransactionPercentile = 60 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -5536,6 +5624,8 @@ TransactionPercentile = 60 HistoryDepth = 50 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -5620,6 +5710,8 @@ TransactionPercentile = 60 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -5704,6 +5796,8 @@ TransactionPercentile = 60 HistoryDepth = 1000 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -5788,6 +5882,8 @@ TransactionPercentile = 60 HistoryDepth = 350 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -5873,6 +5969,8 @@ TransactionPercentile = 60 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -5958,6 +6056,8 @@ TransactionPercentile = 60 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -6043,6 +6143,8 @@ TransactionPercentile = 60 HistoryDepth = 2000 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -6127,6 +6229,8 @@ TransactionPercentile = 60 HistoryDepth = 2000 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 
+FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -6213,6 +6317,8 @@ TransactionPercentile = 60 HistoryDepth = 300 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 4 @@ -6298,6 +6404,8 @@ TransactionPercentile = 60 HistoryDepth = 300 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -6384,6 +6492,8 @@ TransactionPercentile = 60 HistoryDepth = 300 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -6470,6 +6580,8 @@ TransactionPercentile = 60 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -6556,6 +6668,8 @@ TransactionPercentile = 60 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -6642,6 +6756,8 @@ TransactionPercentile = 60 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -6727,6 +6843,8 @@ TransactionPercentile = 60 HistoryDepth = 50 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -6812,6 +6930,8 @@ TransactionPercentile = 60 HistoryDepth = 50 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -6897,6 +7017,8 @@ TransactionPercentile = 50 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = false [NodePool] PollFailureThreshold = 5 @@ -6982,6 +7104,8 @@ TransactionPercentile = 60 HistoryDepth = 300 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -7068,6 +7192,8 @@ TransactionPercentile = 60 HistoryDepth = 300 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 4 @@ -7153,6 +7279,8 @@ TransactionPercentile = 60 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -7238,6 +7366,8 @@ TransactionPercentile = 60 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [NodePool] PollFailureThreshold = 5 @@ -7833,6 +7963,8 @@ Setting it lower will tend to set lower gas prices. HistoryDepth = 100 # Default MaxBufferSize = 3 # Default SamplingInterval = '1s' # Default +FinalityTagBypass = true # Default +MaxAllowedFinalityDepth = 10000 # Default ``` The head tracker continually listens for new heads from the chain. @@ -7863,6 +7995,22 @@ SamplingInterval = '1s' # Default ``` SamplingInterval means that head tracker callbacks will at maximum be made once in every window of this duration. This is a performance optimisation for fast chains. Set to 0 to disable sampling entirely. +### FinalityTagBypass +```toml +FinalityTagBypass = true # Default +``` +FinalityTagBypass disables FinalityTag support in HeadTracker and makes it track blocks up to FinalityDepth from the most recent head. 
+It should only be used on chains with an extremely large actual finality depth (the number of blocks between the most recent head and the latest finalized block). +Has no effect if `FinalityTagEnabled` = false. + +### MaxAllowedFinalityDepth +```toml +MaxAllowedFinalityDepth = 10000 # Default +``` +MaxAllowedFinalityDepth defines the maximum number of blocks allowed between the most recent head and the latest finalized block. +If the actual finality depth exceeds this number, HeadTracker aborts the backfill and returns an error. +Has no effect if `FinalityTagEnabled` = false. + ## EVM.KeySpecific ```toml [[EVM.KeySpecific]] diff --git a/go.mod b/go.mod index 6099f85299..e0d06a943e 100644 --- a/go.mod +++ b/go.mod @@ -72,7 +72,7 @@ require ( github.com/scylladb/go-reflectx v1.0.1 github.com/shirou/gopsutil/v3 v3.24.3 github.com/shopspring/decimal v1.3.1 - github.com/smartcontractkit/chain-selectors v1.0.16 + github.com/smartcontractkit/chain-selectors v1.0.17 github.com/smartcontractkit/chainlink-automation v1.0.3 github.com/smartcontractkit/chainlink-common v0.1.7-0.20240607202129-4cef984f109f github.com/smartcontractkit/chainlink-cosmos v0.4.1-0.20240419213354-ea34a948e2ee diff --git a/go.sum b/go.sum index 06758320cd..d068680f97 100644 --- a/go.sum +++ b/go.sum @@ -1171,8 +1171,8 @@ github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMB github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE= github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ= github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ= -github.com/smartcontractkit/chain-selectors v1.0.16 h1:uVoitoL5KVqGbU89b6W9gECwIvcdZh/w8MI/9JfEoy8= -github.com/smartcontractkit/chain-selectors v1.0.16/go.mod h1:d4Hi+E1zqjy9HqMkjBE5q1vcG9VGgxf5VxiRHfzi2kE= +github.com/smartcontractkit/chain-selectors v1.0.17 h1:otOlYUnutS8oQBEAi9RLQICqZP0Nxy0k8vOZuSMJa4w= +github.com/smartcontractkit/chain-selectors v1.0.17/go.mod h1:d4Hi+E1zqjy9HqMkjBE5q1vcG9VGgxf5VxiRHfzi2kE= github.com/smartcontractkit/chainlink-automation v1.0.3 h1:h/ijT0NiyV06VxYVgcNfsE3+8OEzT3Q0Z9au0z1BPWs= github.com/smartcontractkit/chainlink-automation v1.0.3/go.mod h1:RjboV0Qd7YP+To+OrzHGXaxUxoSONveCoAK2TQ1INLU= github.com/smartcontractkit/chainlink-common v0.1.7-0.20240607202129-4cef984f109f h1:S79gMmLymYWPZC/zAOIY3QhgyD2cqPO+FdPernwJq/M= diff --git a/integration-tests/ccip-tests/testsetups/lm_setup.go b/integration-tests/ccip-tests/testsetups/lm_setup.go index 9f8141a240..5dbc8e068f 100644 --- a/integration-tests/ccip-tests/testsetups/lm_setup.go +++ b/integration-tests/ccip-tests/testsetups/lm_setup.go @@ -35,16 +35,18 @@ import ( "github.com/pkg/errors" "github.com/rs/zerolog" "github.com/rs/zerolog/log" - integrationactions "github.com/smartcontractkit/ccip/integration-tests/actions" chainselectors "github.com/smartcontractkit/chain-selectors" "github.com/stretchr/testify/require" "go.uber.org/zap/zapcore" "golang.org/x/sync/errgroup" + integrationactions "github.com/smartcontractkit/ccip/integration-tests/actions" + + "github.com/smartcontractkit/chainlink-testing-framework/blockchain" ctfClient "github.com/smartcontractkit/chainlink-testing-framework/client" "github.com/smartcontractkit/chainlink-testing-framework/k8s/config" "github.com/smartcontractkit/chainlink-testing-framework/k8s/environment" + + "github.com/smartcontractkit/chainlink/integration-tests/ccip-tests/actions" "github.com/smartcontractkit/chainlink/integration-tests/ccip-tests/contracts"
"github.com/smartcontractkit/chainlink/integration-tests/docker/test_env" @@ -348,7 +350,7 @@ func (o *LMTestSetupOutputs) DeployLMChainContracts( } lggr.Info().Str("Address", bridgeAdapter.EthAddress.String()).Msg("Deployed Mock L1 Bridge Adapter contract") lmCommon.BridgeAdapterAddr = bridgeAdapter.EthAddress - case chainselectors.TEST_2337.Selector: + case chainselectors.GETH_DEVNET_2.Selector: lggr.Info().Msg("Deploying Mock L2 Bridge Adapter contract") bridgeAdapter, err := cd.DeployMockL2BridgeAdapter() if err != nil { diff --git a/integration-tests/go.mod b/integration-tests/go.mod index 42089d7ffa..db5f6103d0 100644 --- a/integration-tests/go.mod +++ b/integration-tests/go.mod @@ -28,7 +28,7 @@ require ( github.com/segmentio/ksuid v1.0.4 github.com/shopspring/decimal v1.3.1 github.com/slack-go/slack v0.12.2 - github.com/smartcontractkit/chain-selectors v1.0.16 + github.com/smartcontractkit/chain-selectors v1.0.17 github.com/smartcontractkit/chainlink-automation v1.0.3 github.com/smartcontractkit/chainlink-common v0.1.7-0.20240607202129-4cef984f109f github.com/smartcontractkit/chainlink-testing-framework v1.29.1 diff --git a/integration-tests/go.sum b/integration-tests/go.sum index c162462e8c..d1f4c2b4d2 100644 --- a/integration-tests/go.sum +++ b/integration-tests/go.sum @@ -1509,8 +1509,8 @@ github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ= github.com/slack-go/slack v0.12.2 h1:x3OppyMyGIbbiyFhsBmpf9pwkUzMhthJMRNmNlA4LaQ= github.com/slack-go/slack v0.12.2/go.mod h1:hlGi5oXA+Gt+yWTPP0plCdRKmjsDxecdHxYQdlMQKOw= -github.com/smartcontractkit/chain-selectors v1.0.16 h1:uVoitoL5KVqGbU89b6W9gECwIvcdZh/w8MI/9JfEoy8= -github.com/smartcontractkit/chain-selectors v1.0.16/go.mod h1:d4Hi+E1zqjy9HqMkjBE5q1vcG9VGgxf5VxiRHfzi2kE= +github.com/smartcontractkit/chain-selectors v1.0.17 h1:otOlYUnutS8oQBEAi9RLQICqZP0Nxy0k8vOZuSMJa4w= +github.com/smartcontractkit/chain-selectors v1.0.17/go.mod h1:d4Hi+E1zqjy9HqMkjBE5q1vcG9VGgxf5VxiRHfzi2kE= github.com/smartcontractkit/chainlink-automation v1.0.3 h1:h/ijT0NiyV06VxYVgcNfsE3+8OEzT3Q0Z9au0z1BPWs= github.com/smartcontractkit/chainlink-automation v1.0.3/go.mod h1:RjboV0Qd7YP+To+OrzHGXaxUxoSONveCoAK2TQ1INLU= github.com/smartcontractkit/chainlink-common v0.1.7-0.20240607202129-4cef984f109f h1:S79gMmLymYWPZC/zAOIY3QhgyD2cqPO+FdPernwJq/M= diff --git a/integration-tests/load/go.mod b/integration-tests/load/go.mod index 2642b0eaba..fb138669ec 100644 --- a/integration-tests/load/go.mod +++ b/integration-tests/load/go.mod @@ -366,7 +366,7 @@ require ( github.com/shoenig/go-m1cpu v0.1.6 // indirect github.com/shopspring/decimal v1.3.1 // indirect github.com/sirupsen/logrus v1.9.3 // indirect - github.com/smartcontractkit/chain-selectors v1.0.16 // indirect + github.com/smartcontractkit/chain-selectors v1.0.17 // indirect github.com/smartcontractkit/chainlink-cosmos v0.4.1-0.20240419213354-ea34a948e2ee // indirect github.com/smartcontractkit/chainlink-data-streams v0.0.0-20240220203239-09be0ea34540 // indirect github.com/smartcontractkit/chainlink-feeds v0.0.0-20240422130241-13c17a91b2ab // indirect diff --git a/integration-tests/load/go.sum b/integration-tests/load/go.sum index fe814de0d4..545b215770 100644 --- a/integration-tests/load/go.sum +++ b/integration-tests/load/go.sum @@ -1492,8 +1492,8 @@ github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ github.com/sirupsen/logrus v1.9.3/go.mod 
h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ= github.com/slack-go/slack v0.12.2 h1:x3OppyMyGIbbiyFhsBmpf9pwkUzMhthJMRNmNlA4LaQ= github.com/slack-go/slack v0.12.2/go.mod h1:hlGi5oXA+Gt+yWTPP0plCdRKmjsDxecdHxYQdlMQKOw= -github.com/smartcontractkit/chain-selectors v1.0.16 h1:uVoitoL5KVqGbU89b6W9gECwIvcdZh/w8MI/9JfEoy8= -github.com/smartcontractkit/chain-selectors v1.0.16/go.mod h1:d4Hi+E1zqjy9HqMkjBE5q1vcG9VGgxf5VxiRHfzi2kE= +github.com/smartcontractkit/chain-selectors v1.0.17 h1:otOlYUnutS8oQBEAi9RLQICqZP0Nxy0k8vOZuSMJa4w= +github.com/smartcontractkit/chain-selectors v1.0.17/go.mod h1:d4Hi+E1zqjy9HqMkjBE5q1vcG9VGgxf5VxiRHfzi2kE= github.com/smartcontractkit/chainlink-automation v1.0.3 h1:h/ijT0NiyV06VxYVgcNfsE3+8OEzT3Q0Z9au0z1BPWs= github.com/smartcontractkit/chainlink-automation v1.0.3/go.mod h1:RjboV0Qd7YP+To+OrzHGXaxUxoSONveCoAK2TQ1INLU= github.com/smartcontractkit/chainlink-common v0.1.7-0.20240607202129-4cef984f109f h1:S79gMmLymYWPZC/zAOIY3QhgyD2cqPO+FdPernwJq/M= diff --git a/integration-tests/smoke/forwarders_ocr2_test.go b/integration-tests/smoke/forwarders_ocr2_test.go index d1d7d15a38..b90cde7f1f 100644 --- a/integration-tests/smoke/forwarders_ocr2_test.go +++ b/integration-tests/smoke/forwarders_ocr2_test.go @@ -92,9 +92,6 @@ func TestForwarderOCR2Basic(t *testing.T) { ocrInstances, err := actions_seth.DeployOCRv2Contracts(l, sethClient, 1, common.HexToAddress(lt.Address()), transmitters, ocrOffchainOptions) require.NoError(t, err, "Error deploying OCRv2 contracts with forwarders") - err = actions.CreateOCRv2JobsLocal(ocrInstances, bootstrapNode, workerNodes, env.MockAdapter, "ocr2", 5, uint64(sethClient.ChainID), true, false) - require.NoError(t, err, "Error creating OCRv2 jobs with forwarders") - ocrv2Config, err := actions.BuildMedianOCR2ConfigLocal(workerNodes, ocrOffchainOptions) require.NoError(t, err, "Error building OCRv2 config") ocrv2Config.Transmitters = authorizedForwarders @@ -102,6 +99,9 @@ func TestForwarderOCR2Basic(t *testing.T) { err = actions_seth.ConfigureOCRv2AggregatorContracts(ocrv2Config, ocrInstances) require.NoError(t, err, "Error configuring OCRv2 aggregator contracts") + err = actions.CreateOCRv2JobsLocal(ocrInstances, bootstrapNode, workerNodes, env.MockAdapter, "ocr2", 5, uint64(sethClient.ChainID), true, false) + require.NoError(t, err, "Error creating OCRv2 jobs with forwarders") + err = actions_seth.WatchNewOCRRound(l, sethClient, 1, contracts.V2OffChainAgrregatorToOffChainAggregatorWithRounds(ocrInstances), time.Duration(10*time.Minute)) require.NoError(t, err, "error watching for new OCRv2 round") diff --git a/package.json b/package.json index dcfc53950b..b9a2411583 100644 --- a/package.json +++ b/package.json @@ -1,6 +1,6 @@ { "name": "ccip", - "version": "2.11.0-ccip1.4.13", + "version": "2.12.0-ccip1.4.19", "description": "node of the decentralized oracle network, bridging on and off-chain computation", "main": "index.js", "scripts": { diff --git a/testdata/scripts/node/validate/disk-based-logging-disabled.txtar b/testdata/scripts/node/validate/disk-based-logging-disabled.txtar index 49805a75da..718dc8acc4 100644 --- a/testdata/scripts/node/validate/disk-based-logging-disabled.txtar +++ b/testdata/scripts/node/validate/disk-based-logging-disabled.txtar @@ -366,6 +366,8 @@ TransactionPercentile = 50 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [EVM.NodePool] PollFailureThreshold = 5 diff --git a/testdata/scripts/node/validate/disk-based-logging-no-dir.txtar 
b/testdata/scripts/node/validate/disk-based-logging-no-dir.txtar index 0ce375997f..4401b3bbc8 100644 --- a/testdata/scripts/node/validate/disk-based-logging-no-dir.txtar +++ b/testdata/scripts/node/validate/disk-based-logging-no-dir.txtar @@ -366,6 +366,8 @@ TransactionPercentile = 50 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [EVM.NodePool] PollFailureThreshold = 5 diff --git a/testdata/scripts/node/validate/disk-based-logging.txtar b/testdata/scripts/node/validate/disk-based-logging.txtar index f2195ed7a5..6a0d1cab3a 100644 --- a/testdata/scripts/node/validate/disk-based-logging.txtar +++ b/testdata/scripts/node/validate/disk-based-logging.txtar @@ -366,6 +366,8 @@ TransactionPercentile = 50 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [EVM.NodePool] PollFailureThreshold = 5 diff --git a/testdata/scripts/node/validate/invalid.txtar b/testdata/scripts/node/validate/invalid.txtar index b655c9b3b6..efd5392237 100644 --- a/testdata/scripts/node/validate/invalid.txtar +++ b/testdata/scripts/node/validate/invalid.txtar @@ -356,6 +356,8 @@ TransactionPercentile = 50 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [EVM.NodePool] PollFailureThreshold = 5 diff --git a/testdata/scripts/node/validate/valid.txtar b/testdata/scripts/node/validate/valid.txtar index d4b9a038a8..a8c53aa405 100644 --- a/testdata/scripts/node/validate/valid.txtar +++ b/testdata/scripts/node/validate/valid.txtar @@ -363,6 +363,8 @@ TransactionPercentile = 50 HistoryDepth = 100 MaxBufferSize = 3 SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true [EVM.NodePool] PollFailureThreshold = 5 diff --git a/testdata/scripts/node/validate/warnings.txtar b/testdata/scripts/node/validate/warnings.txtar index a1fa45b2e7..5fe319f935 100644 --- a/testdata/scripts/node/validate/warnings.txtar +++ b/testdata/scripts/node/validate/warnings.txtar @@ -9,6 +9,15 @@ CollectorTarget = 'otel-collector:4317' TLSCertPath = 'something' Mode = 'unencrypted' +[[EVM]] +ChainID = '10200' +ChainType = 'xdai' + +[[EVM.Nodes]] +Name = 'fake' +WSURL = 'wss://foo.bar/ws' +HTTPURL = 'https://foo.bar' + -- secrets.toml -- [Database] URL = 'postgresql://user:pass1234567890abcd@localhost:5432/dbname?sslmode=disable' @@ -32,6 +41,15 @@ CollectorTarget = 'otel-collector:4317' Mode = 'unencrypted' TLSCertPath = 'something' +[[EVM]] +ChainID = '10200' +ChainType = 'xdai' + +[[EVM.Nodes]] +Name = 'fake' +WSURL = 'wss://foo.bar/ws' +HTTPURL = 'https://foo.bar' + # Effective Configuration, with defaults applied: InsecureFastScrypt = false RootDir = '~/.chainlink' @@ -285,6 +303,96 @@ DeltaDial = '15s' DeltaReconcile = '1m0s' ListenAddresses = [] +[[EVM]] +ChainID = '10200' +AutoCreateKey = true +BlockBackfillDepth = 10 +BlockBackfillSkip = false +ChainType = 'xdai' +FinalityDepth = 100 +FinalityTagEnabled = false +LogBackfillBatchSize = 1000 +LogPollInterval = '5s' +LogKeepBlocksDepth = 100000 +LogPrunePageSize = 10000 +BackupLogPollerBlockDelay = 100 +MinIncomingConfirmations = 3 +MinContractPayment = '0.00001 link' +NonceAutoSync = true +NoNewHeadsThreshold = '3m0s' +RPCDefaultBatchSize = 250 +RPCBlockQueryDelay = 1 + +[EVM.Transactions] +ForwardersEnabled = false +MaxInFlight = 16 +MaxQueued = 250 +ReaperInterval = '1h0m0s' +ReaperThreshold = '168h0m0s' +ResendAfterThreshold = '1m0s' + +[EVM.BalanceMonitor] +Enabled = 
true + +[EVM.GasEstimator] +Mode = 'BlockHistory' +PriceDefault = '20 gwei' +PriceMax = '500 gwei' +PriceMin = '1 gwei' +LimitDefault = 500000 +LimitMax = 500000 +LimitMultiplier = '1' +LimitTransfer = 21000 +BumpMin = '5 gwei' +BumpPercent = 20 +BumpThreshold = 3 +EIP1559DynamicFees = true +FeeCapDefault = '100 gwei' +TipCapDefault = '1 wei' +TipCapMin = '1 wei' + +[EVM.GasEstimator.BlockHistory] +BatchSize = 25 +BlockHistorySize = 8 +CheckInclusionBlocks = 12 +CheckInclusionPercentile = 90 +TransactionPercentile = 60 + +[EVM.HeadTracker] +HistoryDepth = 100 +MaxBufferSize = 3 +SamplingInterval = '1s' +MaxAllowedFinalityDepth = 10000 +FinalityTagBypass = true + +[EVM.NodePool] +PollFailureThreshold = 5 +PollInterval = '10s' +SelectionMode = 'HighestHead' +SyncThreshold = 5 +LeaseDuration = '0s' +NodeIsSyncingEnabled = false +FinalizedBlockPollInterval = '5s' + +[EVM.OCR] +ContractConfirmations = 4 +ContractTransmitterTransmitTimeout = '10s' +DatabaseTimeout = '10s' +DeltaCOverride = '168h0m0s' +DeltaCJitterOverride = '1h0m0s' +ObservationGracePeriod = '1s' + +[EVM.OCR2] +[EVM.OCR2.Automation] +GasLimit = 5400000 + +[[EVM.Nodes]] +Name = 'fake' +WSURL = 'wss://foo.bar/ws' +HTTPURL = 'https://foo.bar' + # Configuration warning: -Tracing.TLSCertPath: invalid value (something): must be empty when Tracing.Mode is 'unencrypted' -Valid configuration. +2 errors: + - EVM.ChainType: invalid value (xdai): deprecated and will be removed in v2.13.0, use 'gnosis' instead + - Tracing.TLSCertPath: invalid value (something): must be empty when Tracing.Mode is 'unencrypted' +Valid configuration. \ No newline at end of file
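For quick reference, the new `HeadTracker` keys documented in the docs/CONFIG.md hunk above are set per chain under `[EVM.HeadTracker]`. The snippet below is a minimal illustrative sketch, not part of this change: the chain ID is hypothetical, while the key names and defaults (`MaxAllowedFinalityDepth = 10000`, `FinalityTagBypass = true`) are taken from the config testdata and docs in this diff.

```toml
# Illustrative per-chain override only; the chain ID and comments are not from this diff.
[[EVM]]
ChainID = '1'
FinalityTagEnabled = true        # the two new keys below have no effect when this is false

[EVM.HeadTracker]
HistoryDepth = 100               # pre-existing option (default)
MaxBufferSize = 3                # pre-existing option (default)
SamplingInterval = '1s'          # pre-existing option (default)
MaxAllowedFinalityDepth = 10000  # new: backfill aborts with an error if the head-to-finalized gap exceeds this
FinalityTagBypass = true         # new: track blocks up to FinalityDepth even when FinalityTagEnabled = true
```

Both new keys default to the values shown, so existing configs pick them up without edits; per the docs text above, `FinalityTagBypass` is intended only for chains whose actual finality depth (the number of blocks between the most recent head and the latest finalized block) is extremely large.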