Feed LatestPrice query cache #11326
@@ -1,7 +1,18 @@
 package config
 
-import ocr2models "github.com/smartcontractkit/chainlink/v2/core/services/ocr2/models"
+import (
+	"time"
+
+	ocr2models "github.com/smartcontractkit/chainlink/v2/core/services/ocr2/models"
+)
+
+type MercuryCache interface {
+	LatestReportTTL() time.Duration
+	MaxStaleAge() time.Duration
+	LatestReportDeadline() time.Duration
+}
 
 type Mercury interface {
 	Credentials(credName string) *ocr2models.MercuryCredentials
+	Cache() MercuryCache
 }
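For reference, a minimal sketch of what a concrete implementation of the new MercuryCache interface could look like. The struct name, fields, and placement in package config are assumptions for illustration, not taken from this PR:

```go
package config // illustrative placement only

import "time"

// mercuryCacheConfig is a hypothetical holder for the three cache settings
// exposed by the MercuryCache interface added above.
type mercuryCacheConfig struct {
	latestReportTTL      time.Duration
	maxStaleAge          time.Duration
	latestReportDeadline time.Duration
}

func (c *mercuryCacheConfig) LatestReportTTL() time.Duration      { return c.latestReportTTL }
func (c *mercuryCacheConfig) MaxStaleAge() time.Duration          { return c.maxStaleAge }
func (c *mercuryCacheConfig) LatestReportDeadline() time.Duration { return c.latestReportDeadline }

// Compile-time check that the sketch satisfies MercuryCache.
var _ MercuryCache = (*mercuryCacheConfig)(nil)
```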
@@ -66,6 +66,7 @@ type RelayerOpts struct {
 	pg.QConfig
 	CSAETHKeystore
 	pg.EventBroadcaster
+	MercuryPool wsrpc.Pool
 }
 
 func (c RelayerOpts) Validate() error {
@@ -100,7 +101,7 @@ func NewRelayer(lggr logger.Logger, chain legacyevm.Chain, opts RelayerOpts) (*R
 		chain:            chain,
 		lggr:             lggr,
 		ks:               opts.CSAETHKeystore,
-		mercuryPool:      wsrpc.NewPool(lggr),
+		mercuryPool:      opts.MercuryPool,
 		eventBroadcaster: opts.EventBroadcaster,
 		pgCfg:            opts.QConfig,
 	}, nil

Inline review discussion on the shared MercuryPool:

- Why is this a top-level pool now? Is the intention to share it across chains? That means it will cross the relay API, and I'm not sure how that is meant to work.
- Yes, it is shared across chains. I asked @cedric-cordenier and this is what he suggested.
- So the relay API has to proxy the shared websocket connections? Why is that better than each plugin process having its own local pool?
- @jmank88 The reason I suggested adding this here is that I didn't really consider it part of the relayer LOOPP API -- see, for example, how we're passing in the database. In a LOOP world we wouldn't do that: we'd either pass in the config values and create the pool in the relayer proper, or pass in a proxy service. I'm trying to understand your objection: I guess you're saying this would break the isolation LOOPPs are supposed to provide?
- I'm observing that this at least adds to the technical debt in the way of LOOPPifying Mercury.
- Yes, good point. One interesting thought that occurred to me is that you could look at this as a symptom of there not being a single "Mercury relayer", i.e. a symptom of there being a Mercury provider in all relayers. A more natural way to look at this might be that Mercury is a "chain" and should get its own relayer. If we did that, this problem would go away: calls to Mercury would go via the same relayer and would therefore benefit from the wsrpc pool, which would then be a totally internal but shared resource. This Mercury relayer would only implement the write APIs (the transmitter, the codec, etc.) and would leave the read side unimplemented or stubbed out. The benefit is that it cleanly maps the relayer as a 1:1 abstraction over the underlying resource, in the same way relayers map onto the chains they target. It does mean we'd need to refine the relayer abstraction somewhat, to allow products to use a different relayer for reading than for writing, but that doesn't feel wrong to me and might even be a better underlying fit (CCIP might benefit from this, for example).
- Yeah, I think separate reading and writing relayers make sense 👍
- @samsondav Do you think that is a good long-term direction for Mercury?
- I agree, separating the read/write relayers makes sense to me. Note, however, that in this case the "write" relayer would also have to support reading, since we send reports to Mercury but also query reports from it. Treating the Mercury server as a "chain" is definitely the right way to go.
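To make the intent of this change concrete, here is a hedged wiring sketch: the wsrpc pool is constructed once by the caller and shared by every per-chain relayer through RelayerOpts. The helper name, loop structure, and package qualifiers are assumptions for illustration; only the types and constructors visible in the diff above come from the PR.

```go
// buildRelayers is a hypothetical helper: the pool is created once by the
// caller and handed to each relayer, instead of each NewRelayer call creating
// its own pool via wsrpc.NewPool(lggr) as before this change.
func buildRelayers(
	lggr logger.Logger,
	chains []legacyevm.Chain,
	pool wsrpc.Pool, // e.g. wsrpc.NewPool(lggr), owned by the caller
	qcfg pg.QConfig,
	ks evm.CSAETHKeystore,
	eb pg.EventBroadcaster,
) ([]*evm.Relayer, error) {
	relayers := make([]*evm.Relayer, 0, len(chains))
	for _, chain := range chains {
		r, err := evm.NewRelayer(lggr, chain, evm.RelayerOpts{
			QConfig:          qcfg,
			CSAETHKeystore:   ks,
			EventBroadcaster: eb,
			MercuryPool:      pool, // same pool shared across all chains
		})
		if err != nil {
			return nil, err
		}
		relayers = append(relayers, r)
	}
	return relayers, nil
}
```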
@@ -116,17 +117,16 @@ func (r *Relayer) Start(context.Context) error {
 }
 
 func (r *Relayer) Close() error {
-	return r.mercuryPool.Close()
+	return nil
 }
 
 // Ready does noop: always ready
 func (r *Relayer) Ready() error {
-	return r.mercuryPool.Ready()
+	return nil
 }
 
 func (r *Relayer) HealthReport() (report map[string]error) {
 	report = make(map[string]error)
-	services.CopyHealth(report, r.mercuryPool.HealthReport())
 
 	return
 }
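Since the relayer no longer closes or health-checks the pool it is handed, responsibility for the pool's lifecycle plausibly moves to whoever constructs it. A brief sketch of that ownership shift; the surrounding wiring is an assumption, and only Close, Ready, HealthReport, and services.CopyHealth come from the code removed above:

```go
// Hypothetical owner-side lifecycle for the shared pool (e.g. in the
// application code that constructs the relayers). Relayers only borrow it.
pool := wsrpc.NewPool(lggr)

// ... build all relayers with RelayerOpts{MercuryPool: pool, ...} ...

// Health of the shared pool is now reported once by its owner,
// not once per relayer.
report := make(map[string]error)
services.CopyHealth(report, pool.HealthReport())

// And it is closed once, after every relayer that uses it has shut down.
if err := pool.Close(); err != nil {
	lggr.Errorw("error closing mercury pool", "err", err)
}
```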
Review discussion on the naming of the new Mercury config:

- Would it make sense to name this LLO? Or will those settings be separate?
- I asked about this and there is still ambiguity about naming. I thought we were renaming it to LLO; now it may be renamed to Streams. We already have config named Mercury for keepers related to this, so I think the path of least resistance is to leave it called Mercury for now, which is at least unambiguous and easy to grep for, and we can change it later if necessary.
- That's fine, I guess, but we'll have to figure out how to navigate those breaking config changes carefully.