Support load per-iteration replacement of NamedSPI #14275

Open · wants to merge 2 commits into main
Conversation

ChrisHegarty (Contributor)

This commit adds support for per-iteration replacement of NamedSPI. The primary motivation for this change is to support deterministic SPI loading when deploying Lucene as a module.

When deploying on the class path, service loading follows the order of the jars on the class path. When deploying as modules, services in named modules are found before those in unnamed modules, but the order of services within a module layer is undefined.

The proposed API change, NamedSPI::replace, allows a service provider found later in an iteration to optionally replace one found earlier in the same iteration. This is sufficient when deploying just a couple of service provider implementations, or to effectively replace one provided by Lucene itself, e.g. elastic/elasticsearch#123011
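
Roughly, the shape of the change might look like the following sketch. Only the NamedSPI::replace hook is taken from this proposal; the interface body, the loader method, and all other names here are illustrative assumptions rather than the actual patch.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.ServiceLoader;

// Sketch only: NamedSPI::replace is the hook proposed in this PR; the rest of this
// file (names, loader method) is an illustrative assumption, not the actual change.
interface NamedSPI {
  String getName();

  /** Whether this provider, found later in an iteration, should replace {@code previous}. */
  default boolean replace(NamedSPI previous) {
    return false; // default: first one wins, preserving existing behavior
  }
}

final class SpiLoading {
  // A provider found later in the same iteration only wins if it explicitly opts in.
  static Map<String, NamedSPI> load(ClassLoader classLoader) {
    Map<String, NamedSPI> byName = new LinkedHashMap<>();
    for (NamedSPI service : ServiceLoader.load(NamedSPI.class, classLoader)) {
      NamedSPI existing = byName.get(service.getName());
      if (existing == null || service.replace(existing)) {
        byName.put(service.getName(), service);
      }
    }
    return byName;
  }
}
```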

In fact, we think that this may be a sufficient model to provide customisation of certain codecs, just like in elastic/elasticsearch#123011 (where we changed CompletionPostingsFormat to use the off-heap FST load mode).

@msokolov (Contributor)

Is it that this switches from "first one wins" to "last one wins"?

@ChrisHegarty (Contributor, Author)

> Is it that this switches from "first one wins" to "last one wins"?

Good question. Out of the box, the default remains unchanged: first one wins. But if overridden, it allows a later provider to conditionally replace an earlier one (in the same iteration).
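
Concretely, and building on the sketch in the description above (hypothetical names, not part of the patch), a later-discovered provider would opt in along these lines:

```java
// Hypothetical example: a provider registered under the same SPI name as an existing
// one, opting in to replace it if the earlier one came from Lucene itself.
public final class OffHeapSuggestProvider implements NamedSPI {
  @Override
  public String getName() {
    return "mySuggestFormat"; // same name as the provider it intends to shadow
  }

  @Override
  public boolean replace(NamedSPI previous) {
    // Conditionally replace: only take over from a stock Lucene-provided implementation.
    return previous.getClass().getName().startsWith("org.apache.lucene.");
  }
}
```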

Does this help? Or have I missed your point?
