Commit 33b0695

Merge pull request #209 from GlobalDataverseCommunityConsortium/develop
Release v4.20
2 parents: 4d00ec2 + 1ea178a

File tree

101 files changed: +2077 -394 lines

README.rst  (+15 -12)

@@ -1,7 +1,7 @@
 Deploying, Running and Using Dataverse on Kubernetes
 ====================================================
 
-.. image:: https://raw.githubusercontent.com/IQSS/dataverse-kubernetes/master/docs/img/title-composition.png
+.. image:: docs/img/title-composition.png
 
 |Dataverse badge|
 |Validation badge|
@@ -11,25 +11,28 @@ Deploying, Running and Using Dataverse on Kubernetes
 |Docs badge|
 |IRC badge|
 
-This community-supported project aims to provide simple to re-use Kubernetes
-objects on how to run Dataverse on a Kubernetes cluster.
+This community-supported project aims at offering a new way to deploy, run and
+maintain a Dataverse installation for any purpose on any kind of Kubernetes-based
+cloud infrastructure.
 
-It aims at day-1 deployments and day-2 operations.
+You can use this on your laptop, in your on-prem datacentre or public cloud.
+With the power of `Kubernetes <http://kubernetes.io>`_, many scenarios are possible.
 
 * Documentation: https://dataverse-k8s.rtfd.io
-* Support: https://github.com/IQSS/dataverse-kubernetes/issues
-* Roadmap: https://dataverse-k8s.rtfd.io/en/latest/roadmap.html
+* Support and new ideas: https://github.com/IQSS/dataverse-kubernetes/issues
 
-If you would like to contribute, you are most welcome. Head over to the
-`contribution guide <https://dataverse-k8s.rtfd.io/en/latest/contribute.html>`_
-for details.
+If you would like to contribute, you are most welcome.
 
+This project follows the same branching strategy as the upstream Dataverse
+project, using a ``release`` branch for stable releases plus a ``develop``
+branch. In this branch unexpected or breaking changes may happen.
 
 
-.. |Dataverse badge| image:: https://img.shields.io/badge/Dataverse-v4.19-important.svg
+
+.. |Dataverse badge| image:: https://img.shields.io/badge/Dataverse-v4.20-important.svg
    :target: https://dataverse.org
-.. |Validation badge| image:: https://jenkins.dataverse.org/job/dataverse-k8s/job/Kubeval%20Linting/job/master/badge/icon?subject=kubeval&status=valid&color=purple
-   :target: https://jenkins.dataverse.org/blue/organizations/jenkins/dataverse-k8s%2FKubeval%20Linting/activity?branch=master
+.. |Validation badge| image:: https://jenkins.dataverse.org/job/dataverse-k8s/job/Kubeval%20Linting/job/release/badge/icon?subject=kubeval&status=valid&color=purple
+   :target: https://jenkins.dataverse.org/blue/organizations/jenkins/dataverse-k8s%2FKubeval%20Linting/activity?branch=release
 .. |DockerHub dataverse-k8s badge| image:: https://img.shields.io/static/v1.svg?label=image&message=dataverse-k8s&logo=docker
    :target: https://hub.docker.com/r/iqss/dataverse-k8s
 .. |DockerHub solr-k8s badge| image:: https://img.shields.io/static/v1.svg?label=image&message=solr-k8s&logo=docker

dataverse  (submodule updated: 189 files)

docker-compose.yaml  (+32)

@@ -0,0 +1,32 @@
+---
+version: '3.5'
+services:
+
+  postgresql:
+    image: postgres:9.6
+    expose:
+      - 5432
+    environment:
+      - POSTGRES_USER=dataverse
+      - POSTGRES_PASSWORD=changeme
+
+  solr:
+    image: iqss/solr-k8s
+    expose:
+      - 8983
+
+  dataverse:
+    build:
+      context: .
+      dockerfile: ./docker/dataverse-k8s/glassfish-dev/Dockerfile
+    image: iqss/dataverse-k8s:dev
+    depends_on:
+      - postgresql
+      - solr
+    ports:
+      - 8080:8080
+    volumes:
+      - type: bind
+        source: ./personas/docker-compose/secrets
+        target: /secrets
+        read_only: true
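
Note: the compose file above targets local development rather than cluster use. A rough sketch of driving it, assuming Docker and docker-compose are installed and the repository root is the working directory; the service names come straight from the file:

    docker-compose up --build -d       # build the dev image, start postgresql, solr and dataverse
    docker-compose logs -f dataverse   # follow the Dataverse container log
    docker-compose down                # stop and remove the stack again

The dataverse service reads its secrets from ./personas/docker-compose/secrets, mounted read-only at /secrets.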

docker/dataverse-k8s/Jenkinsfile  (+2 -2)

@@ -74,7 +74,7 @@ pipeline {
         }
         stage('latest') {
             when {
-                branch 'master'
+                branch 'release'
             }
             environment {
                 // credentials() will magically add DOCKER_HUB_USR and DOCKER_HUB_PSW
@@ -83,7 +83,7 @@ pipeline {
             }
             steps {
                 script {
-                    // Push master image to latest tag
+                    // Push release image to latest tag
                     docker.withRegistry("${env.DOCKER_REGISTRY}", "${env.DOCKER_HUB_CRED}") {
                         gf_docker_image.push("latest")
                         pyr_docker_image.push("payara")

docker/dataverse-k8s/bin/bootstrap-job.sh  (+3 -2)

@@ -17,7 +17,8 @@ DATAVERSE_URL=${DATAVERSE_URL:-"http://${DATAVERSE_SERVICE_HOST}:${DATAVERSE_SER
 # The Solr Service IP is always available under its name within the same namespace.
 # If people want to use a different Solr than we normally deploy, they have the
 # option to override.
-SOLR_K8S_HOST=${SOLR_K8S_HOST:-"solr"}
+SOLR_SERVICE_HOST=${SOLR_SERVICE_HOST:-"solr"}
+SOLR_SERVICE_PORT=${SOLR_SERVICE_PORT:-"8983"}
 
 # Check postgres and API key secrets are available
 if [ ! -s "${SECRETS_DIR}/db/password" ]; then
@@ -53,7 +54,7 @@ sed -i -e "s#[email protected]#${CONTACT_MAIL}#" data/user-admin.json
 ./setup-all.sh --insecure -p="${ADMIN_PASSWORD:-admin}"
 
 # 4.) Configure Solr location
-curl -sS -X PUT -d "${SOLR_K8S_HOST}:8983" "${DATAVERSE_URL}/api/admin/settings/:SolrHostColonPort"
+curl -sS -X PUT -d "${SOLR_SERVICE_HOST}:${SOLR_SERVICE_PORT}" "${DATAVERSE_URL}/api/admin/settings/:SolrHostColonPort"
 
 # 5.) Provision builtin users key to enable creation of more builtin users
 if [ -s "${SECRETS_DIR}/api/userskey" ]; then

docker/dataverse-k8s/bin/default.config  (+8)

@@ -16,6 +16,14 @@ JMX_EXPORTER_CONFIG=${JMX_EXPORTER_CONFIG:-"${HOME}/jmx_exporter_config.yaml"}
 # (Exporting needed as they cannot be seen by `env` otherwise)
 
 export dataverse_files_directory=${dataverse_files_directory:-/data}
+export dataverse_files_storage__driver__id=${dataverse_files_storage__driver__id:-local}
+
+if [ "${dataverse_files_storage__driver__id}" = "local" ]; then
+    export dataverse_files_local_type=${dataverse_files_local_type:-file}
+    export dataverse_files_local_label=${dataverse_files_local_label:-Local}
+    export dataverse_files_local_directory=${dataverse_files_local_directory:-/data}
+fi
+
 export dataverse_rserve_host=${dataverse_rserve_host:-rserve}
 export dataverse_rserve_port=${dataverse_rserve_port:-6311}
 export dataverse_rserve_user=${dataverse_rserve_user:-rserve}
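
With these defaults, a container falls back to local file storage under /data. A sketch of switching to the S3 driver instead; the driver id check matches the init scripts further down, while the exact dataverse_files_s3_* option names follow the dataverse_files_<id>_<option> pattern above and are an assumption to verify against the upstream storage documentation:

    export dataverse_files_storage__driver__id=s3
    export dataverse_files_s3_type=s3     # assumed option name
    export dataverse_files_s3_label=S3    # assumed option name
    # the matching access-key/secret-key files must be mounted below /secrets/s3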

docker/dataverse-k8s/glassfish/Dockerfile  (+18 -11)

@@ -8,9 +8,9 @@ FROM centos:7
 
 LABEL maintainer="FDM FZJ <[email protected]>"
 
-ARG TINI_VERSION=v0.18.0
+ARG TINI_VERSION=v0.19.0
 ARG JMX_EXPORTER_VERSION=0.12.0
-ARG VERSION=4.19
+ARG VERSION=4.20
 ARG DOMAIN=domain1
 
 ENV HOME_DIR=/opt/dataverse\
@@ -21,11 +21,12 @@ ENV HOME_DIR=/opt/dataverse\
     DOCROOT_DIR=/docroot\
     METADATA_DIR=/metadata\
     SECRETS_DIR=/secrets\
+    DUMPS_DIR=/dumps\
     GLASSFISH_PKG=http://download.java.net/glassfish/4.1/release/glassfish-4.1.zip\
     GLASSFISH_SHA1=704a90899ec5e3b5007d310b13a6001575827293\
     WELD_PKG=https://repo1.maven.org/maven2/org/jboss/weld/weld-osgi-bundle/2.2.10.SP1/weld-osgi-bundle-2.2.10.SP1-glassfish4.jar\
-    GRIZZLY_PKG=http://guides.dataverse.org/en/latest/_downloads/glassfish-grizzly-extra-all.jar\
-    PGDRIVER_PKG=https://jdbc.postgresql.org/download/postgresql-42.2.10.jar\
+    GRIZZLY_PKG=http://guides.dataverse.org/en/${VERSION}/_downloads/glassfish-grizzly-extra-all.jar\
+    PGDRIVER_PKG=https://jdbc.postgresql.org/download/postgresql-42.2.12.jar\
     DATAVERSE_VERSION=${VERSION}\
     DATAVERSE_PKG=https://github.com/IQSS/dataverse/releases/download/v${VERSION}/dvinstall.zip\
     JMX_EXPORTER_PKG=https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/${JMX_EXPORTER_VERSION}/jmx_prometheus_javaagent-${JMX_EXPORTER_VERSION}.jar\
@@ -43,15 +44,13 @@ RUN groupadd -g 1000 glassfish && \
     useradd -u 1000 -M -s /bin/bash -d ${HOME_DIR} glassfish -g glassfish && \
     echo glassfish:glassfish | chpasswd && \
     mkdir -p ${HOME_DIR} ${SCRIPT_DIR} ${SECRETS_DIR} && \
-    mkdir -p ${DATA_DIR} ${METADATA_DIR} ${DOCROOT_DIR} && \
-    chown -R glassfish: ${HOME_DIR} ${DATA_DIR} ${METADATA_DIR} ${DOCROOT_DIR}
+    mkdir -p ${DATA_DIR} ${METADATA_DIR} ${DOCROOT_DIR} ${DUMPS_DIR} && \
+    chown -R glassfish: ${HOME_DIR} ${DATA_DIR} ${METADATA_DIR} ${DOCROOT_DIR} ${DUMPS_DIR}
 
 # Install tini as minimized init system
-RUN wget --no-verbose -O /tini https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini && \
-    wget --no-verbose -O /tini.asc https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini.asc && \
-    gpg --batch --keyserver "hkp://p80.pool.sks-keyservers.net:80" --recv-keys 595E85A6B1B4779EA4DAAEC70B588DFF0527A9B7 && \
-    gpg --batch --verify /tini.asc /tini && \
-    chmod +x /tini
+RUN wget --no-verbose -O tini-amd64 https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini-amd64 && \
+    echo '93dcc18adc78c65a028a84799ecf8ad40c936fdfc5f2a57b1acda5a8117fa82c tini-amd64' | sha256sum -c - && \
+    mv tini-amd64 /tini && chmod +x /tini
 
 # Install esh template engine from Github
 RUN wget --no-verbose -O esh https://raw.githubusercontent.com/jirutka/esh/v0.3.0/esh && \
@@ -94,6 +93,14 @@ RUN ${GLASSFISH_DIR}/bin/asadmin start-domain && \
     for MEMORY_JVM_OPTION in $(${GLASSFISH_DIR}/bin/asadmin list-jvm-options | grep "Xm[sx]"); do\
         ${GLASSFISH_DIR}/bin/asadmin delete-jvm-options $MEMORY_JVM_OPTION;\
     done && \
+    ${GLASSFISH_DIR}/bin/asadmin create-jvm-options -- "-XX\:+HeapDumpOnOutOfMemoryError" && \
+    ${GLASSFISH_DIR}/bin/asadmin create-jvm-options -- "-XX\:HeapDumpPath=${DUMPS_DIR}" && \
+    ${GLASSFISH_DIR}/bin/asadmin create-jvm-options -- "-XX\:+UseG1GC" && \
+    ${GLASSFISH_DIR}/bin/asadmin create-jvm-options -- "-XX\:+UseStringDeduplication" && \
+    ${GLASSFISH_DIR}/bin/asadmin create-jvm-options -- "-XX\:MaxGCPauseMillis=500" && \
+    ${GLASSFISH_DIR}/bin/asadmin create-jvm-options -- "-XX\:MetaspaceSize=256m" && \
+    ${GLASSFISH_DIR}/bin/asadmin create-jvm-options -- "-XX\:MaxMetaspaceSize=2g" && \
+    ${GLASSFISH_DIR}/bin/asadmin create-jvm-options -- "-XX\:+IgnoreUnrecognizedVMOptions" && \
     ${GLASSFISH_DIR}/bin/asadmin create-jvm-options -- "-server" && \
     ${GLASSFISH_DIR}/bin/asadmin stop-domain && \
     mkdir -p ${DOMAIN_DIR}/autodeploy && \
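
The added options bake heap dumps on OOM, the G1 collector and metaspace limits into the Glassfish domain at build time. A quick hedged check against a running container (the container name is a placeholder; GLASSFISH_DIR is an environment variable of the image and the domain must be up):

    docker exec dataverse sh -c '${GLASSFISH_DIR}/bin/asadmin list-jvm-options' \
        | grep -E 'HeapDump|UseG1GC|Metaspace'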

docker/dataverse-k8s/glassfish/bin/init_1_conf_glassfish.sh  (+23 -10)

@@ -30,16 +30,29 @@ do
 done
 
 # 1b. Create AWS access credentials when storage driver is set to s3
-# See IQSS/dataverse-kubernetes#28 for details of this workaround.
-if [ "s3" = "${dataverse_files_storage__driver__id}" ]; then
-  if [ -f ${SECRETS_DIR}/s3/access-key ] && [ -f ${SECRETS_DIR}/s3/secret-key ]; then
-    mkdir -p ${HOME_DIR}/.aws
-    echo "[default]" > ${HOME_DIR}/.aws/credentials
-    cat ${SECRETS_DIR}/s3/access-key | sed -e "s#^#aws_access_key_id = #" -e "s#\$#\n#" >> ${HOME_DIR}/.aws/credentials
-    cat ${SECRETS_DIR}/s3/secret-key | sed -e "s#^#aws_secret_access_key = #" -e "s#\$#\n#" >> ${HOME_DIR}/.aws/credentials
-  else
-    echo "WARNING: Could not find all S3 access secrets in ${SECRETS_DIR}/s3/(access-key|secret-key). Check your Kubernetes Secrets and their mounting!"
-  fi
+# Find all access keys
+if [ -d "${SECRETS_DIR}/s3" ]; then
+  S3_KEYS=`find "${SECRETS_DIR}/s3" -readable -type f -iname '*access-key'`
+  S3_CRED_FILE=${HOME_DIR}/.aws/credentials
+  mkdir -p `dirname "${S3_CRED_FILE}"`
+  rm -f ${S3_CRED_FILE}
+  # Iterate keys
+  while IFS= read -r S3_ACCESS_KEY; do
+    echo "Loading S3 key ${S3_ACCESS_KEY}"
+    # Try to find the secret key, parse for profile and add to the credentials file.
+    S3_PROFILE=`echo "${S3_ACCESS_KEY}" | sed -ne "s#.*/\(.*\)-access-key#\1#p"`
+    S3_SECRET_KEY=`echo "${S3_ACCESS_KEY}" | sed -ne "s#\(.*/\|.*/.*-\)access-key#\1secret-key#p"`
+
+    if [ -r ${S3_SECRET_KEY} ]; then
+      [ -z "${S3_PROFILE}" ] && echo "[default]" >> "${S3_CRED_FILE}" || echo "[${S3_PROFILE}]" >> "${S3_CRED_FILE}"
+      cat "${S3_ACCESS_KEY}" | sed -e "s#^#aws_access_key_id = #" -e "s#\$#\n#" >> "${S3_CRED_FILE}"
+      cat "${S3_SECRET_KEY}" | sed -e "s#^#aws_secret_access_key = #" -e "s#\$#\n#" >> "${S3_CRED_FILE}"
+      echo "" >> "${S3_CRED_FILE}"
+    else
+      echo "ERROR: Could not find or read matching \"$S3_SECRET_KEY\"."
+      exit 1
+    fi
+  done <<< "${S3_KEYS}"
 fi
 
 # 2. Domain-spaced resources (JDBC, JMS, ...)
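
The rewritten block turns every readable *access-key file below /secrets/s3 into a profile in the AWS credentials file, with the unprefixed pair becoming the default profile. A sketch of the result for a mount containing access-key, secret-key, backup-access-key and backup-secret-key (file names and key values are made up):

    cat ${HOME_DIR}/.aws/credentials
    [default]
    aws_access_key_id = AKIAEXAMPLE
    aws_secret_access_key = exampleSecret

    [backup]
    aws_access_key_id = AKIABACKUP
    aws_secret_access_key = exampleBackupSecret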

docker/dataverse-k8s/payara/Dockerfile  (+8 -19)

@@ -4,44 +4,33 @@
 # You may obtain a copy of the License at
 # http://www.apache.org/licenses/LICENSE-2.0
 
-FROM payara/server-full:5.201
+FROM payara/server-full:5.2020.3
 LABEL maintainer="FDM FZJ <[email protected]>"
 
-ARG VERSION=4.19
+ARG VERSION=4.20
 ARG DOMAIN=domain1
 
 ENV DATA_DIR=/data\
     DOCROOT_DIR=/docroot\
     METADATA_DIR=/metadata\
     SECRETS_DIR=/secrets\
+    DUMPS_DIR=/dumps\
     DOMAIN_DIR=${PAYARA_DIR}/glassfish/domains/${DOMAIN_NAME}\
     DATAVERSE_VERSION=${VERSION}\
     DATAVERSE_PKG=https://github.com/IQSS/dataverse/releases/download/v${VERSION}/dvinstall.zip\
     PGDRIVER_PKG=https://jdbc.postgresql.org/download/postgresql-42.2.12.jar\
-    MEM_MAX_RAM_PERCENTAGE=70.0\
-    MEM_XSS=512k
+    # Make heap dumps on OOM appear in DUMPS_DIR
+    JVM_ARGS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=\${ENV=DUMPS_DIR}"
 
 # Create basic pathes
 USER root
 RUN mkdir -p ${HOME_DIR} ${SCRIPT_DIR} ${SECRETS_DIR} && \
-    mkdir -p ${DATA_DIR} ${METADATA_DIR} ${DOCROOT_DIR} && \
-    chown -R payara: ${DATA_DIR} ${METADATA_DIR} ${DOCROOT_DIR} ${SECRETS_DIR}
-
-# WORKAROUND MEMORY ISSUES UNTIL UPSTREAM FIXES THEM IN NEW RELEASE
-RUN ${PAYARA_DIR}/bin/asadmin --user=${ADMIN_USER} --passwordfile=${PASSWORD_FILE} start-domain ${DOMAIN_NAME} && \
-    ${PAYARA_DIR}/bin/asadmin --user=${ADMIN_USER} --passwordfile=${PASSWORD_FILE} delete-jvm-options \
-        '-XX\:+UnlockExperimentalVMOptions:-XX\:+UseCGroupMemoryLimitForHeap:-XX\:MaxRAMFraction=1' && \
-    ${PAYARA_DIR}/bin/asadmin --user=${ADMIN_USER} --passwordfile=${PASSWORD_FILE} create-jvm-options \
-        '-XX\:+UseContainerSupport:-XX\:MaxRAMPercentage=${ENV=MEM_MAX_RAM_PERCENTAGE}:-Xss${ENV=MEM_XSS}' && \
-    ${PAYARA_DIR}/bin/asadmin --user=${ADMIN_USER} --passwordfile=${PASSWORD_FILE} stop-domain ${DOMAIN_NAME} && \
-    # Cleanup after initialization
-    rm -rf \
-        ${PAYARA_DIR}/glassfish/domains/${DOMAIN_NAME}/osgi-cache \
-        ${PAYARA_DIR}/glassfish/domains/${DOMAIN_NAME}/logs
+    mkdir -p ${DATA_DIR} ${METADATA_DIR} ${DOCROOT_DIR} ${DUMPS_DIR} && \
+    chown -R payara: ${DATA_DIR} ${METADATA_DIR} ${DOCROOT_DIR} ${SECRETS_DIR} ${DUMPS_DIR}
 
 # Install prerequisites
 RUN apt-get -qq update && \
-    apt-get -qqy install postgresql-client jq imagemagick curl
+    apt-get -qqy install postgresql-client jq imagemagick curl wget unzip
 
 # Install esh template engine from Github
 RUN wget --no-verbose -O esh https://raw.githubusercontent.com/jirutka/esh/v0.3.0/esh && \
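
The Payara image drops the build-time asadmin memory workaround (presumably handled by the newer base image) and only passes heap-dump flags through JVM_ARGS, so dumps written on OOM should end up in /dumps. A hedged way to pull such a dump out of a Kubernetes pod for offline analysis (pod and file names are placeholders):

    kubectl cp dataverse-0:/dumps/java_pid1.hprof ./java_pid1.hprof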

docker/dataverse-k8s/payara/bin/init_2_conf_payara.sh  (+23 -11)

@@ -34,17 +34,29 @@ do
 done
 
 # 1b. Create AWS access credentials when storage driver is set to s3
-# See IQSS/dataverse-kubernetes#28 for details of this workaround.
-if [ "s3" = "${dataverse_files_storage__driver__id}" ]; then
-  if [ -f ${SECRETS_DIR}/s3/access-key ] && [ -f ${SECRETS_DIR}/s3/secret-key ]; then
-    echo "INFO: Deploying AWS credentials."
-    mkdir -p ${HOME_DIR}/.aws
-    echo "[default]" > ${HOME_DIR}/.aws/credentials
-    cat ${SECRETS_DIR}/s3/access-key | sed -e "s#^#aws_access_key_id = #" -e "s#\$#\n#" >> ${HOME_DIR}/.aws/credentials
-    cat ${SECRETS_DIR}/s3/secret-key | sed -e "s#^#aws_secret_access_key = #" -e "s#\$#\n#" >> ${HOME_DIR}/.aws/credentials
-  else
-    echo "WARNING: Could not find all S3 access secrets in ${SECRETS_DIR}/s3/(access-key|secret-key). Check your Kubernetes Secrets and their mounting!"
-  fi
+# Find all access keys
+if [ -d "${SECRETS_DIR}/s3" ]; then
+  S3_KEYS=`find "${SECRETS_DIR}/s3" -readable -type f -iname '*access-key'`
+  S3_CRED_FILE=${HOME_DIR}/.aws/credentials
+  mkdir -p `dirname "${S3_CRED_FILE}"`
+  rm -f ${S3_CRED_FILE}
+  # Iterate keys
+  while IFS= read -r S3_ACCESS_KEY; do
+    echo "Loading S3 key ${S3_ACCESS_KEY}"
+    # Try to find the secret key, parse for profile and add to the credentials file.
+    S3_PROFILE=`echo "${S3_ACCESS_KEY}" | sed -ne "s#.*/\(.*\)-access-key#\1#p"`
+    S3_SECRET_KEY=`echo "${S3_ACCESS_KEY}" | sed -ne "s#\(.*/\|.*/.*-\)access-key#\1secret-key#p"`
+
+    if [ -r ${S3_SECRET_KEY} ]; then
+      [ -z "${S3_PROFILE}" ] && echo "[default]" >> "${S3_CRED_FILE}" || echo "[${S3_PROFILE}]" >> "${S3_CRED_FILE}"
+      cat "${S3_ACCESS_KEY}" | sed -e "s#^#aws_access_key_id = #" -e "s#\$#\n#" >> "${S3_CRED_FILE}"
+      cat "${S3_SECRET_KEY}" | sed -e "s#^#aws_secret_access_key = #" -e "s#\$#\n#" >> "${S3_CRED_FILE}"
+      echo "" >> "${S3_CRED_FILE}"
+    else
+      echo "ERROR: Could not find or read matching \"$S3_SECRET_KEY\"."
+      exit 1
+    fi
+  done <<< "${S3_KEYS}"
 fi
 
 # 2. Domain-spaced resources (JDBC, JMS, ...)
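
Both init scripts expect the S3 key material as plain files below /secrets/s3, which in a cluster would normally come from a mounted Kubernetes Secret. A hedged sketch of creating one with a default pair plus a named "backup" profile (secret name, file names and paths are placeholders; the deployment still has to mount it at /secrets/s3):

    kubectl create secret generic dataverse-s3 \
        --from-file=access-key=./default-access-key \
        --from-file=secret-key=./default-secret-key \
        --from-file=backup-access-key=./backup-access-key \
        --from-file=backup-secret-key=./backup-secret-key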

docker/solr-k8s/Dockerfile  (+1 -1)

@@ -10,7 +10,7 @@ LABEL maintainer="FDM FZJ <[email protected]>"
 
 ARG WEBHOOK_VERSION=2.6.11
 ARG TINI_VERSION=v0.18.0
-ARG VERSION=4.19
+ARG VERSION=4.20
 ARG COLLECTION=collection1
 ENV SOLR_OPTS="-Dsolr.jetty.request.header.size=102400"\
     COLLECTION_DIR=/opt/solr/server/solr/${COLLECTION}\

docker/solr-k8s/Jenkinsfile  (+2 -2)

@@ -68,7 +68,7 @@ pipeline {
         }
         stage('latest') {
             when {
-                branch 'master'
+                branch 'release'
             }
             environment {
                 // credentials() will magically add DOCKER_HUB_USR and DOCKER_HUB_PSW
@@ -77,7 +77,7 @@ pipeline {
             }
             steps {
                 script {
-                    // Push master image to latest tag
+                    // Push release image to latest tag
                     docker.withRegistry("${env.DOCKER_REGISTRY}", "${env.DOCKER_HUB_CRED}") {
                         docker_image.push("latest")
                     }

docs/.gitignore  (+1)

@@ -1 +1,2 @@
 .build
+_build

docs/conf.py  (+2 -1)

@@ -25,7 +25,7 @@
 author = u'Oliver Bertuch'
 
 # The short X.Y version
-version = u'4.19'
+version = u'4.20'
 # The full version, including alpha/beta/rc tags
 release = version
 
@@ -87,6 +87,7 @@
 autosectionlabel_prefix_document = True
 
 extlinks = {
+    'tree': ('https://github.com/IQSS/dataverse-kubernetes/tree/master/%s', 'folder of master branch '),
     'issue': ('https://github.com/IQSS/dataverse-kubernetes/issues/%s', 'issue '),
     'issue_dv': ('https://github.com/IQSS/dataverse/issues/%s', 'issue '),
     'guide_dv': ('http://guides.dataverse.org/en/'+version+'/%s', 'upstream docs ')
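
The new 'tree' entry is a Sphinx extlinks shortcut: writing :tree:`k8s` in the documentation sources should render as a link to https://github.com/IQSS/dataverse-kubernetes/tree/master/k8s with the caption "folder of master branch k8s" (the folder name here is only an illustration).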
