Run tests with Elasticsearch #357

Merged: 2 commits, Nov 8, 2024
16 changes: 8 additions & 8 deletions .github/workflows/deploy.yml
@@ -52,16 +52,16 @@ jobs:
- name: Create ImageStreams
run: |
oc create imagestream search-quarkus-io || true
oc create imagestream opensearch-custom || true
oc create imagestream elasticsearch-custom || true
# https://docs.openshift.com/container-platform/4.14/openshift_images/using-imagestreams-with-kube-resources.html
oc set image-lookup search-quarkus-io
oc set image-lookup opensearch-custom
oc set image-lookup elasticsearch-custom

- name: Retrieve OpenShift Container Registry URL
id: oc-registry
run: |
echo -n "OC_REGISTRY_URL=" >> "$GITHUB_OUTPUT"
oc registry info >> "$GITHUB_OUTPUT"
oc get imagestream -o json | jq -r '.items[0].status.publicDockerImageRepository' | awk -F"[/]" '{print $1}' >> "$GITHUB_OUTPUT"
- name: Log in to OpenShift Container Registry
uses: docker/login-action@v3
with:
@@ -85,17 +85,17 @@ jobs:
-Drevision="${{ steps.app-version.outputs.value }}" \
-Dquarkus.container-image.build=true \
-Dquarkus.container-image.push=true \
-Dquarkus.container-image.registry="$(oc registry info)" \
-Dquarkus.container-image.registry="${{ steps.oc-registry.outputs.OC_REGISTRY_URL }}" \
-Dquarkus.container-image.group="$(oc project --short)"

- name: Push OpenSearch container image
- name: Push Elasticsearch container image
run: |
REMOTE_IMAGE_REF="$(oc registry info)/$(oc project --short)/opensearch-custom:${{ steps.app-version.outputs.value }}"
REMOTE_IMAGE_REF="${{ steps.oc-registry.outputs.OC_REGISTRY_URL }}/$(oc project --short)/elasticsearch-custom:${{ steps.app-version.outputs.value }}"
# docker doesn't allow the `push source target` syntax, so we have to do this in two steps.
docker image tag "opensearch-custom:latest" "$REMOTE_IMAGE_REF"
docker image tag "elasticsearch-custom:latest" "$REMOTE_IMAGE_REF"
docker push "$REMOTE_IMAGE_REF"

- name: Deploy Helm charts
run: |
helm upgrade --install search-quarkus-io ./target/helm/openshift/search-quarkus-io \
-f ./src/main/helm/values.$QUARKUS_PROFILE.yaml
-f ./src/main/helm/values.$QUARKUS_PROFILE.yaml
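For readers skimming the workflow change: the registry URL is now taken from an image stream's published repository instead of `oc registry info`. A minimal sketch of what the `jq`/`awk` pipeline extracts, using a made-up repository value (the real value depends on the cluster):

[source,shell]
----
# Hypothetical .status.publicDockerImageRepository value of the first image stream:
REPO='default-route-openshift-image-registry.apps.example.com/my-project/search-quarkus-io'
# The awk filter keeps only the registry host, i.e. everything before the first "/":
echo "$REPO" | awk -F"[/]" '{print $1}'
# prints: default-route-openshift-image-registry.apps.example.com
----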
26 changes: 13 additions & 13 deletions README.adoc
@@ -5,7 +5,7 @@
[[architecture]]
== Architecture

The application is deployed on an OpenShift cluster, next to an OpenSearch instance.
The application is deployed on an OpenShift cluster, next to an Elasticsearch instance.

It fetches the sources from quarkus.io and localized variants (pt.quarkus.io, ...) to index them,
and exposes the search feature through a REST API.
@@ -142,34 +142,34 @@ Then start it this way:
[source,shell]
----
podman pod create -p 8080:8080 -p 9000:9000 -p 9200:9200 --name search.quarkus.io
# Start multiple OpenSearch containers
# Start multiple Elasticsearch containers
podman container run -d --name search-backend-0 --pod search.quarkus.io \
--cpus=2 --memory=2g \
-e "node.name=search-backend-0" \
-e "discovery.seed_hosts=localhost" \
-e "cluster.initial_cluster_manager_nodes=search-backend-0,search-backend-1,search-backend-2" \
-e "OPENSEARCH_JAVA_OPTS=-Xms1g -Xmx1g" \
-e "DISABLE_SECURITY_PLUGIN=true" \
-e "ES_JAVA_OPTS=-Xms1g -Xmx1g" \
-e "xpack.security.enabled=false" \
-e "cluster.routing.allocation.disk.threshold_enabled=false" \
opensearch-custom:latest
elasticsearch-custom:latest
podman container run -d --name search-backend-1 --pod search.quarkus.io \
--cpus=2 --memory=2g \
-e "node.name=search-backend-1" \
-e "discovery.seed_hosts=localhost" \
-e "cluster.initial_cluster_manager_nodes=search-backend-0,search-backend-1,search-backend-2" \
-e "OPENSEARCH_JAVA_OPTS=-Xms1g -Xmx1g" \
-e "DISABLE_SECURITY_PLUGIN=true" \
-e "ES_JAVA_OPTS=-Xms1g -Xmx1g" \
-e "xpack.security.enabled=false" \
-e "cluster.routing.allocation.disk.threshold_enabled=false" \
opensearch-custom:latest
elasticsearch-custom:latest
podman container run -d --name search-backend-2 --pod search.quarkus.io \
--cpus=2 --memory=2g \
-e "node.name=search-backend-2" \
-e "discovery.seed_hosts=localhost" \
-e "cluster.initial_cluster_manager_nodes=search-backend-0,search-backend-1,search-backend-2" \
-e "OPENSEARCH_JAVA_OPTS=-Xms1g -Xmx1g" \
-e "DISABLE_SECURITY_PLUGIN=true" \
-e "ES_JAVA_OPTS=-Xms1g -Xmx1g" \
-e "xpack.security.enabled=false" \
-e "cluster.routing.allocation.disk.threshold_enabled=false" \
opensearch-custom:latest
elasticsearch-custom:latest
# Then the app; this will fetch the actual data on startup (might take a while):
podman container run -it --rm --name search.quarkus.io --pod search.quarkus.io search-quarkus-io:999-SNAPSHOT
# OR, if you already have locals clones of *.quarkus.io:
@@ -251,11 +251,11 @@ you will need to set up a few things manually:
on quarkusio/search.quarkus.io.
See `indexing.reporting.github` configuration properties for more details.
`search-backend-config`::
Environment variables for the OpenSearch instances.
Environment variables for the Elasticsearch instances.
+
Put in there whatever configuration you need for your specific cluster.
`search-backend-secret`::
Secret environment variables for the OpenSearch instances.
Secret environment variables for the Elasticsearch instances.
+
Put in there whatever secret configuration you need for your specific cluster.

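As a quick sanity check after starting the three backend containers from the README snippet above, the cluster state can be queried over the pod's published port. This is a generic Elasticsearch health check, not something added by this PR:

[source,shell]
----
# Port 9200 is published by the pod created above:
curl -s 'http://localhost:9200/_cluster/health?pretty'
# "number_of_nodes" should report 3 once all containers have joined.
----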
33 changes: 28 additions & 5 deletions pom.xml
@@ -37,7 +37,14 @@
<version.impsort-maven-plugin>1.12.0</version.impsort-maven-plugin>
<!-- This version needs to match the version in src/main/docker/opensearch-custom.Dockerfile -->
<version.opensearch>2.18</version.opensearch>
<!-- This version needs to match the version in src/main/docker/elasticsearch-custom.Dockerfile -->
<version.elasticsearch>8.15</version.elasticsearch>
<version.quarkus-web-bundler>1.7.3</version.quarkus-web-bundler>
<!-- Configuration for the search backend used in tests by default: -->
<search.backend.dockerfile>${project.basedir}/src/main/docker/elasticsearch-custom.Dockerfile</search.backend.dockerfile>
<search.backend.dockerversion>${version.elasticsearch}</search.backend.dockerversion>
<search.backend.dockername>elasticsearch-custom</search.backend.dockername>
<search.backend.distribution>elastic</search.backend.distribution>
</properties>
<dependencyManagement>
<dependencies>
@@ -269,7 +276,8 @@
<systemProperties>
<maven.revision>${revision}</maven.revision>
<maven.project.testResourceDirectory>${project.basedir}/src/test/resources</maven.project.testResourceDirectory>
<maven.version.opensearch>${version.opensearch}</maven.version.opensearch>
<maven.version.search.backend>${search.backend.dockerversion}</maven.version.search.backend>
<maven.name.search.backend>${search.backend.dockername}</maven.name.search.backend>
</systemProperties>
</configuration>
</plugin>
@@ -295,7 +303,9 @@
<maven.home>${maven.home}</maven.home>
<maven.revision>${revision}</maven.revision>
<maven.project.testResourceDirectory>${project.basedir}/src/test/resources</maven.project.testResourceDirectory>
<maven.version.opensearch>${version.opensearch}</maven.version.opensearch>
<maven.version.search.backend>${search.backend.dockerversion}</maven.version.search.backend>
<maven.name.search.backend>${search.backend.dockername}</maven.name.search.backend>
<maven.distribution.search.backend>${search.backend.distribution}</maven.distribution.search.backend>
</systemPropertyVariables>
<statelessTestsetReporter implementation="org.apache.maven.plugin.surefire.extensions.junit5.JUnit5Xml30StatelessReporter">
<usePhrasedFileName>true</usePhrasedFileName>
@@ -375,14 +385,14 @@
<configuration>
<images>
<image>
<name>opensearch-custom</name>
<name>${search.backend.dockername}</name>
<build>
<dockerFile>${project.basedir}/src/main/docker/opensearch-custom.Dockerfile</dockerFile>
<dockerFile>${search.backend.dockerfile}</dockerFile>
<tags>
<!-- Used in deployment workflow, so we don't have to know about the exact version -->
<tag>latest</tag>
<!-- Workaround until we get https://github.com/quarkusio/quarkus/pull/38896 -->
<tag>${version.opensearch}</tag>
<tag>${search.backend.dockerversion}</tag>
</tags>
</build>
</image>
@@ -392,6 +402,19 @@
</plugins>
</build>
<profiles>
<profile>
<!--
A simple profile to run the tests against an OpenSearch distribution of the search backend.
Keep in mind that this will only work for the tests, not for deploying the app to OpenShift.
-->
<id>opensearch</id>
<properties>
<search.backend.dockerfile>${project.basedir}/src/main/docker/opensearch-custom.Dockerfile</search.backend.dockerfile>
<search.backend.dockerversion>${version.opensearch}</search.backend.dockerversion>
<search.backend.dockername>opensearch-custom</search.backend.dockername>
<search.backend.distribution>opensearch</search.backend.distribution>
</properties>
</profile>
<profile>
<!-- Locked dependencies for mvnpm (Update with 'mvn io.mvnpm:locker-maven-plugin:LATEST:lock -Dunlocked') -->
<id>locker</id>
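Using the new profile is plain Maven profile activation; a short sketch with the usual goals, nothing project-specific assumed:

[source,shell]
----
# Default: tests run against the Elasticsearch-based backend image.
mvn verify

# Opt back into the OpenSearch distribution; per the profile comment,
# this only affects tests, not the OpenShift deployment.
mvn verify -Popensearch
----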
5 changes: 5 additions & 0 deletions src/main/docker/elasticsearch-custom.Dockerfile
@@ -0,0 +1,5 @@
FROM elastic/elasticsearch:8.15.3

RUN /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch analysis-kuromoji
RUN /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch analysis-smartcn
RUN /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch analysis-icu
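To try the new image outside the Maven build, a hedged sketch with plain `docker` (the Maven build normally produces this image through the docker-maven-plugin configuration in `pom.xml`; the build context is assumed to be the repository root):

[source,shell]
----
# Build the custom image from the new Dockerfile:
docker build -f src/main/docker/elasticsearch-custom.Dockerfile -t elasticsearch-custom:latest .

# List the plugins baked into it; expect analysis-icu, analysis-kuromoji and analysis-smartcn:
docker run --rm elasticsearch-custom:latest bin/elasticsearch-plugin list
----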
8 changes: 4 additions & 4 deletions src/main/helm/values.staging.yaml
@@ -1,7 +1,7 @@
app:
envs:
QUARKUS_PROFILE: 'staging'
# Avoid overloading the rather resource-constrained OpenSearch instance
# Avoid overloading the rather resource-constrained search backend instance
INDEXING_QUEUE_COUNT: '4'
INDEXING_BULK_SIZE: '10'
resources:
@@ -11,13 +11,13 @@ app:
requests:
cpu: 250m
memory: 500Mi
opensearch:
elasticsearch:
envs:
OPENSEARCH_JAVA_OPTS: ' -Xms500m -Xmx500m '
ES_JAVA_OPTS: ' -Xms500m -Xmx500m '
resources:
limits:
cpu: 500m
memory: 1.0Gi
requests:
cpu: 250m
memory: 750Mi
memory: 750Mi
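A hedged way to see where these values end up: render the chart locally and grep for the JVM options. Chart and values paths are taken from the deploy workflow above, and this assumes the chart has already been generated under `target/` by the Maven build:

[source,shell]
----
helm template search-quarkus-io ./target/helm/openshift/search-quarkus-io \
  -f ./src/main/helm/values.staging.yaml | grep -A 1 'ES_JAVA_OPTS'
# The StatefulSet env should show the staging heap settings: -Xms500m -Xmx500m
----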
60 changes: 37 additions & 23 deletions src/main/kubernetes/openshift.yml
@@ -53,18 +53,18 @@ spec:
alpha.image.policy.openshift.io/resolve-names: '*'
spec:
containers:
- name: opensearch
- name: elasticsearch
# The image gets pushed manually as part of the "deploy" workflow.
# This gets replaced with the correct image ref (exact tag).
image: opensearch-custom:latest
image: elasticsearch-custom:latest
imagePullPolicy: Always
resources:
limits:
cpu: '{{ .Values.opensearch.resources.limits.cpu }}'
memory: '{{ .Values.opensearch.resources.limits.memory }}'
cpu: '{{ .Values.elasticsearch.resources.limits.cpu }}'
memory: '{{ .Values.elasticsearch.resources.limits.memory }}'
requests:
cpu: '{{ .Values.opensearch.resources.requests.cpu }}'
memory: '{{ .Values.opensearch.resources.requests.memory }}'
cpu: '{{ .Values.elasticsearch.resources.requests.cpu }}'
memory: '{{ .Values.elasticsearch.resources.requests.memory }}'
readinessProbe:
httpGet:
scheme: HTTP
@@ -80,7 +80,7 @@ spec:
protocol: TCP
volumeMounts:
- name: data
mountPath: /usr/share/opensearch/data
mountPath: /usr/share/elasticsearch/data
env:
- name: cluster.name
value: search-quarkus-io
@@ -96,26 +96,40 @@ spec:
# but this shouldn't be too bad as we don't expect swapping to be enabled.
- name: bootstrap.memory_lock
value: "false"
# OpenSearch doesn't seem to automatically adapt -Xmx to available memory, for some reason
- name: OPENSEARCH_JAVA_OPTS
value: '{{ .Values.opensearch.envs.OPENSEARCH_JAVA_OPTS }}'
# This is necessary to avoid OpenSearch trying to install various things on startup,
# which leads to filesystem operations (chmod/chown) that won't work
# because only user 1000 has the relevant permissions,
# and we can't run with user 1000 on OpenShift.
# See also:
# - https://github.com/opensearch-project/opensearch-devops/issues/97
# - src/main/docker/opensearch-custom.Dockerfile
- name: DISABLE_PERFORMANCE_ANALYZER_AGENT_CLI
value: 'true'
- name: DISABLE_INSTALL_DEMO_CONFIG
value: 'true'
# Set the -Xmx explicitly and don't rely on the search backend to figure out memory limits on its own.
- name: ES_JAVA_OPTS
value: '{{ .Values.elasticsearch.envs.ES_JAVA_OPTS }}'
# Not exposed to the internet, no sensitive data
# => We don't bother with HTTPS and pesky self-signed certificates
# Setting this env variable is better than setting plugins.security.disabled
# because this skips installing the plugin altogether (see above)
- name: DISABLE_SECURITY_PLUGIN
# With Elasticsearch, xpack.security.enabled=false disables both authentication and TLS.
- name: xpack.security.enabled
value: 'false'
# Disable disk-based shard allocation thresholds: on large, relatively full disks (>90% used),
# it will lead to index creation to get stuck waiting for other nodes to join the cluster,
# which will never happen since we only have one node.
# See https://www.elastic.co/guide/en/elasticsearch/reference/7.17/modules-cluster.html#disk-based-shard-allocation
- name: cluster.routing.allocation.disk.threshold_enabled
value: 'false'
# Disable more plugins/features that we do not use:
- name: 'cluster.deprecation_indexing'
value: 'false'
- name: 'xpack.profiling.enabled'
value: 'false'
- name: 'xpack.ent_search.enabled'
value: 'false'
- name: 'indices.lifecycle.history_index_enabled'
value: 'false'
- name: 'slm.history_index_enabled'
value: 'false'
- name: 'stack.templates.enabled'
value: 'false'
- name: 'xpack.ml.enabled'
value: 'false'
- name: 'xpack.monitoring.templates.enabled'
value: 'false'
- name: 'xpack.watcher.enabled'
value: 'false'
envFrom:
- configMapRef:
name: search-backend-config
@@ -134,4 +148,4 @@ spec:
storageClassName: "gp2"
resources:
requests:
storage: 5Gi
storage: 5Gi
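After a deployment it can be worth confirming that these env-provided settings were actually picked up by the node. A hedged sketch; the pod name is an assumption, use whatever name the backend StatefulSet gives its pods:

[source,shell]
----
# Forward the backend HTTP port locally (pod name is hypothetical):
oc port-forward pod/search-backend-0 9200:9200 &

# The node settings API echoes back the effective configuration:
curl -s 'http://localhost:9200/_nodes/_local/settings?pretty' | grep -E 'security|threshold_enabled'
----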
34 changes: 17 additions & 17 deletions src/main/resources/application.properties
@@ -66,22 +66,22 @@ quarkus.rest.path=/api
########################
# Hibernate Search
########################
# This version needs to match the version in src/main/docker/opensearch-custom.Dockerfile
quarkus.hibernate-search-standalone.elasticsearch.version=opensearch:2.18
# This version needs to match the version in src/main/docker/elasticsearch-custom.Dockerfile
quarkus.hibernate-search-standalone.elasticsearch.version=${maven.distribution.search.backend:elastic}:${maven.version.search.backend:8.15}
# Not using :latest here as a workaround until we get https://github.com/quarkusio/quarkus/pull/38896
quarkus.elasticsearch.devservices.image-name=opensearch-custom:${maven.version.opensearch}
quarkus.elasticsearch.devservices.java-opts=${PROD_OPENSEARCH_JAVA_OPTS}
# Limit parallelism of indexing, because OpenSearch can only handle so many documents in its buffers.
quarkus.elasticsearch.devservices.image-name=${maven.name.search.backend}:${maven.version.search.backend}
quarkus.elasticsearch.devservices.java-opts=${PROD_ES_JAVA_OPTS}
# Limit parallelism of indexing, because the search backend can only handle so many documents in its buffers.
# This leads to at most 8*12=96 documents being indexed in parallel, which should be plenty
# given how large our documents can be.
INDEXING_QUEUE_COUNT=8
INDEXING_BULK_SIZE=12
quarkus.hibernate-search-standalone.elasticsearch.indexing.queue-count=${INDEXING_QUEUE_COUNT}
quarkus.hibernate-search-standalone.elasticsearch.indexing.max-bulk-size=${INDEXING_BULK_SIZE}
# We need to apply a custom OpenSearch mapping to exclude very large fields from the _source
# We need to apply a custom search backend mapping to exclude very large fields from the _source
quarkus.hibernate-search-standalone.elasticsearch.schema-management.mapping-file=indexes/mapping-template.json
quarkus.hibernate-search-standalone.elasticsearch.schema-management.settings-file=indexes/settings-template.json
# In production, we don't expect OpenSearch to be reachable when the application starts
# In production, we don't expect the search backend to be reachable when the application starts
%prod.quarkus.hibernate-search-standalone.elasticsearch.version-check.enabled=false
# ... and the application automatically creates indexes upon first indexing anyway.
%prod.quarkus.hibernate-search-standalone.schema-management.strategy=none
@@ -166,7 +166,7 @@ quarkus.swagger-ui.title=Quarkus Search API
# We don't need it but more importantly it doesn't work (leads to marshalling errors)
# for strings that look like numbers (e.g. 2.11)
quarkus.helm.map-system-properties=false
# Set common k8s labels everywhere, even on OpenSearch resources
# Set common k8s labels everywhere, even on the search backend resources
quarkus.helm.values."version".paths=metadata.labels.'app.kubernetes.io/version',spec.template.metadata.labels.'app.kubernetes.io/version'
quarkus.helm.values."version"[email protected]
quarkus.helm.values."version".value=${maven.revision}
@@ -233,16 +233,16 @@ quarkus.openshift.add-version-to-label-selectors=false
# so that changes to the image can be rolled back in sync with the app.
# It happens that the revision passed to maven is a convenient unique version,
# but in theory we could use another unique string.
quarkus.helm.values."opensearch-image".paths=(kind == StatefulSet).spec.template.spec.containers.image
quarkus.helm.values."opensearch-image".value=opensearch-custom:${maven.revision}
quarkus.helm.values."opensearch-image".property=@.opensearch.image
quarkus.helm.values."elasticsearch-image".paths=(kind == StatefulSet).spec.template.spec.containers.image
quarkus.helm.values."elasticsearch-image".value=${maven.name.search.backend}:${maven.revision}
quarkus.helm.values."elasticsearch-image".property=@.elasticsearch.image
# Resource requirements (overridden for staging, see src/main/helm)
PROD_OPENSEARCH_JAVA_OPTS=-Xms1g -Xmx1g
quarkus.helm.values."@.opensearch.envs.OPENSEARCH_JAVA_OPTS".value=\ ${PROD_OPENSEARCH_JAVA_OPTS}
quarkus.helm.values."@.opensearch.resources.limits.cpu".value=2000m
quarkus.helm.values."@.opensearch.resources.requests.cpu".value=500m
quarkus.helm.values."@.opensearch.resources.limits.memory".value=2Gi
quarkus.helm.values."@.opensearch.resources.requests.memory".value=1.9Gi
PROD_ES_JAVA_OPTS=-Xms1g -Xmx1g
quarkus.helm.values."@.elasticsearch.envs.ES_JAVA_OPTS".value=\ ${PROD_ES_JAVA_OPTS}
quarkus.helm.values."@.elasticsearch.resources.limits.cpu".value=2000m
quarkus.helm.values."@.elasticsearch.resources.requests.cpu".value=500m
quarkus.helm.values."@.elasticsearch.resources.limits.memory".value=2Gi
quarkus.helm.values."@.elasticsearch.resources.requests.memory".value=1.9Gi

########################
# Web Bundler config
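Since `INDEXING_QUEUE_COUNT` and `INDEXING_BULK_SIZE` are ordinary config properties with environment-variable defaults, they can be tuned per environment without rebuilding, which is exactly what `values.staging.yaml` does. A hedged local equivalent, reusing the container invocation from the README (the override values are arbitrary examples):

[source,shell]
----
podman container run -it --rm --name search.quarkus.io --pod search.quarkus.io \
  -e INDEXING_QUEUE_COUNT=4 \
  -e INDEXING_BULK_SIZE=10 \
  search-quarkus-io:999-SNAPSHOT
----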