Commit

Merge pull request #245 from tapis-project/dev

Dev

mpackard authored Jul 17, 2023
2 parents d565b18 + 974156c commit da19522
Showing 70 changed files with 631 additions and 477 deletions.
29 changes: 29 additions & 0 deletions CHANGELOG.md
@@ -2,6 +2,35 @@

Notable changes between versions.

## 1.4.0

### Services updated:

- Nginx locations for individual components have been split into their own location files. This should not cause a breaking change or interrupt routing.
- [Systems: 1.3.3 to 1.4.0 (tapis/systems)](https://github.com/tapis-project/tapis-systems/blob/1.4.0/CHANGELOG.md)
- [Apps: 1.3.3 to 1.4.0 (tapis/apps)](https://github.com/tapis-project/tapis-apps/blob/1.4.0/CHANGELOG.md)
- [Notifications: 1.3.4 to 1.4.0 (tapis/notifications, notifications-dispatcher)](https://github.com/tapis-project/tapis-notifications/blob/1.4.0/CHANGELOG.md)
- [Files: 1.3.6 to 1.4.0 (tapis/tapis-files, tapis/tapis-files-workers)](https://github.com/tapis-project/tapis-files/blob/dev/CHANGELOG.md)
- [Jobs: 1.3.5 to 1.4.0 (tapis/jobsworker, jobsmigrate, jobsapi)](https://github.com/tapis-project/tapis-jobs/blob/dev/tapis-jobsapi/CHANGELOG.md)
- [Security: 1.3.2 to 1.4.0 (tapis/securitymigrate, securityadmin, securityapi, securityexport)](https://github.com/tapis-project/tapis-security/blob/dev/tapis-securityapi/CHANGELOG.md)
- [Authenticator: 1.3.4 to 1.3.5 (authenticator_api_image, authenticator_migrations_image)](https://github.com/tapis-project/authenticator/blob/staging/CHANGELOG.md)
- [Pods: 1.3.2 to 1.4.0 (tapis/pods-api)](https://github.com/tapis-project/pods_service/blob/prod/CHANGELOG.md#140---2023-07-06)
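
The nginx location split noted above uses one file per component, each resolving its upstream through a variable. A minimal sketch of the pattern, with a hypothetical component name and port (the real files below use the same directives):

```nginx
# locations/example.conf: hypothetical component
location /v3/example
{
    auth_request /_auth;
    error_page 500 /token-revoked.json;

    # Referencing the upstream through a variable defers the DNS lookup
    # to request time (via the resolver), so nginx can start even if the
    # upstream container is not yet resolvable.
    resolver 127.0.0.11;
    set $upstream "http://example-api:8080";
    proxy_pass $upstream;

    proxy_redirect off;
    proxy_set_header Host $host;
}
```

127.0.0.11 is Docker's embedded DNS resolver, which is why it appears in each of the compose location files changed in this commit.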

### Breaking Changes for Services / Tapis Users

- For Systems and Apps: Environment variables beginning with *_tapis* are no longer valid in the envVariables attribute. (This matches existing Jobs service behavior.) If you are a Tapis user who creates or maintains Systems or Apps resources, creating a resource that specifies an environment variable starting with *_tapis* will now be rejected. If such a resource already exists, future jobs that use it will fail.
- Authenticator:
- The DELETE /v3/oauth2/clients endpoint now returns the standard 5-stanza Tapis response. Previously, it returned an empty HTTP response. Applications that use this endpoint should be updated to handle a non-empty response.
  - The POST /v3/oauth2/tokens endpoint has changed for the device_code grant: only the client_id is now required, as a POST parameter. Previously, the client_id and client_key were erroneously both required to be passed in an HTTP Basic Auth header. Client applications that use the device_code grant type and pass the client credentials in the HTTP Basic Auth header must be updated to pass only the client_id in the POST payload. The OA3 spec has been updated to reflect this new requirement. See issue #32.
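
The new device_code token request above reduces to a single form parameter for client identification. An illustrative request sketch (the host, the Content-Type, and the parameter names other than client_id are assumptions based on standard OAuth2 device-flow conventions, not taken from the Tapis docs):

```
POST /v3/oauth2/tokens HTTP/1.1
Host: tenant.tapis.example
Content-Type: application/x-www-form-urlencoded

grant_type=device_code&device_code=<device_code>&client_id=<client_id>
```

Note there is no Authorization: Basic header and no client_key; as of 1.3.5 only the client_id accompanies the POST.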

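To illustrate the Systems/Apps envVariables restriction above, a hypothetical fragment of a resource definition (the surrounding attribute structure and the specific key names are illustrative assumptions; only the envVariables attribute and the *_tapis* prefix rule come from the changelog):

```json
{
  "envVariables": [
    { "key": "MY_INPUT_DIR",  "value": "/data/in" },
    { "key": "_tapisExample", "value": "rejected" }
  ]
}
```

As of 1.4.0, the second entry causes the Systems or Apps service to reject the resource creation, because its key begins with _tapis.
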
### Deployer updates:

- None

### Breaking Changes for Tapis Deployer Admins

- None

## 1.3.8

- Added Java heap max and min options for apps, systems, and notifications when using Docker Compose.
2 changes: 1 addition & 1 deletion playbooks/roles/apps/defaults/main/images.yml
@@ -1,3 +1,3 @@
apps_api_image: tapis/apps:1.3.3
apps_api_image: tapis/apps:1.4.0
apps_postgres_image: postgres:12.4
apps_pgadmin_image: dpage/pgadmin4:6.20
4 changes: 2 additions & 2 deletions playbooks/roles/authenticator/defaults/main/images.yml
@@ -1,4 +1,4 @@
authenticator_api_image: tapis/authenticator:1.3.4
authenticator_migrations_image: tapis/authenticator-migrations:1.3.4
authenticator_api_image: tapis/authenticator:1.3.5
authenticator_migrations_image: tapis/authenticator-migrations:1.3.5
authenticator_postgres_image: postgres:11.4
authenticator_ldap_image: tacc/slapd:1
2 changes: 1 addition & 1 deletion playbooks/roles/baseburnup/defaults/main/vars.yml
@@ -1,4 +1,4 @@
baseburnup_tapis_deployer_version: 1.3.8
baseburnup_tapis_deployer_version: 1.3.9
baseburnup_service_url: "{{ global_service_url }}"
baseburnup_vault_url: "{{ global_vault_url }}"

Empty file.
2 changes: 1 addition & 1 deletion playbooks/roles/baseburnup/templates/kube/burnup
@@ -298,7 +298,7 @@ secondary_services() {
files
{% endif %}

{% if "gloubs-proxy" in components_to_deploy %}
{% if "globus-proxy" in components_to_deploy %}
globus-proxy
{% endif %}

4 changes: 2 additions & 2 deletions playbooks/roles/files/defaults/main/images.yml
@@ -1,5 +1,5 @@
files_api_image: tapis/tapis-files:1.3.6
files_workers_image: tapis/tapis-files-workers:1.3.6
files_api_image: tapis/tapis-files:1.4.0
files_workers_image: tapis/tapis-files-workers:1.4.0
files_postgres_image: postgres:11
files_migrations_image: postgres:11
files_minio_image: minio/minio
6 changes: 3 additions & 3 deletions playbooks/roles/jobs/defaults/main/images.yml
@@ -1,6 +1,6 @@
jobs_api_image: tapis/jobsapi:1.3.5
jobs_migrations_image: tapis/jobsmigrate:1.3.5
jobs_worker_image: tapis/jobsworker:1.3.5
jobs_api_image: tapis/jobsapi:1.4.0
jobs_migrations_image: tapis/jobsmigrate:1.4.0
jobs_worker_image: tapis/jobsworker:1.4.0
jobs_postgres_image: postgres:12.4
jobs_pgadmin_image: dpage/pgadmin4:6.20
jobs_rabbitmq_management_image: rabbitmq:3.8.11-management
@@ -446,7 +446,7 @@ http {
{%- endif %}
proxy_redirect off;
proxy_set_header Host $host;
}
}

# workflows
location /v3/workflows
4 changes: 2 additions & 2 deletions playbooks/roles/notifications/defaults/main/images.yml
@@ -1,5 +1,5 @@
notifications_postgres_image: postgres:12.4
notifications_pgadmin_image: dpage/pgadmin4:6.20
notifications_rabbitmq_image: rabbitmq:3.8.11-management
notifications_api_image: tapis/notifications:1.3.4
notifications_dispatcher_image: tapis/notifications-dispatcher:1.3.4
notifications_api_image: tapis/notifications:1.4.0
notifications_dispatcher_image: tapis/notifications-dispatcher:1.4.0
2 changes: 1 addition & 1 deletion playbooks/roles/pods/defaults/main/images.yml
@@ -2,4 +2,4 @@ pods_alpine_image: alpine:3.17
pods_postgres_image: postgres:14-alpine
pods_rabbitmq_image: rabbitmq:3.6.12-management
pods_api_image: tapis/pods-api:{{ pods_image_version }}
pods_traefik_image: notchristiangarcia/traefik-testing:1.3.0
pods_traefik_image: traefik:3.0
6 changes: 2 additions & 4 deletions playbooks/roles/pods/templates/kube/burndown
@@ -4,9 +4,6 @@
# Security/Roles
#kubectl delete -f security.yml

# Certs
#kubectl delete secret pods-certs

# Configs
kubectl delete configmap pods-config
kubectl delete configmap pods-traefik-conf
@@ -24,4 +21,5 @@ kubectl delete -f nfs.yml

# PVC - Don't burn these down silly.
#kubectl delete -f postgres-pvc.yml
#kubectl delete -f nfs-pvc.yml
#kubectl delete -f nfs-pvc.yml
#kubectl delete -f traefik-pvc.yml
13 changes: 5 additions & 8 deletions playbooks/roles/pods/templates/kube/burnup
@@ -8,22 +8,19 @@ kubectl apply -f services.yml
# Security/Roles
kubectl apply -f security.yml

# Certs
{% if pods_traefik_certs_file and pods_traefik_cert_key %}
kubectl create secret tls pods-certs --cert={{ pods_traefik_certs_file }} --key={{ pods_traefik_cert_key }}
{% else %}
kubectl create secret generic pods-certs
{% endif %}

# Configs
kubectl create configmap pods-config --from-file=config.json
kubectl create configmap pods-traefik-conf --from-file=traefik.yml

# PVC
kubectl apply -f postgres-pvc.yml
kubectl wait --for=condition=complete job/chown-pods-postgres-pvc
kubectl apply -f nfs-pvc.yml
kubectl apply -f traefik-pvc.yml

# Wait for PVC jobs to complete
kubectl wait --for=condition=complete job/chown-pods-postgres-pvc
kubectl wait --for=condition=complete job/pods-nfs-mkdirs
kubectl wait --for=condition=complete job/chown-pods-traefik-pvc

# Storage
kubectl apply -f postgres.yml
26 changes: 17 additions & 9 deletions playbooks/roles/pods/templates/kube/traefik-proxy.yml
@@ -32,22 +32,30 @@ spec:
- --api.dashboard=true
- --api.insecure=true
- --accesslog=true
- --log.level=DEBUG
- --tracing.instana.loglevel=DEBUG
- --tracing.instana=false
#- --providers.file.directory=/etc/traefik/
- --log.level=INFO # INFO | DEBUG
- --certificatesresolvers.tlsletsencrypt.acme.tlschallenge=true
- --certificatesresolvers.tlsletsencrypt.acme.storage=/etc/traefik3/acme.json
# To use letsencrypts staging server, to avoid rate limiting when testing
#- --certificatesresolvers.tlsletsencrypt.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory
# To use zerossl rather than letsencrypt, create account, fill in eab.kid and eab.hmacencoded.
#- --certificatesresolvers.tlsletsencrypt.acme.caserver=https://acme.zerossl.com/v2/DV90
#- --certificatesresolvers.tlsletsencrypt.acme.email=email@domain
#- --certificatesresolvers.tlsletsencrypt.acme.eab.kid=fromwebsite
#- --certificatesresolvers.tlsletsencrypt.acme.eab.hmacencoded=fromwebsite

- --providers.http=true
- --providers.http.endpoint=http://pods-api:8000/traefik-config
volumeMounts:
- name: pods-traefik-conf
mountPath: /etc/traefik2/
- name: certs
mountPath: /tmp/ssl
- name: acme-storage
mountPath: /etc/traefik3/acme.json
subPath: acme.json

volumes:
- name: pods-traefik-conf
configMap:
name: pods-traefik-conf
- name: certs
secret:
secretName: pods-certs
- name: acme-storage
persistentVolumeClaim:
claimName: pods-traefik-vol01
34 changes: 34 additions & 0 deletions playbooks/roles/pods/templates/kube/traefik-pvc.yml
@@ -0,0 +1,34 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: pods-traefik-vol01
spec:
accessModes:
- ReadWriteOnce
storageClassName: rbd-new
resources:
requests:
storage: 10Gi

---
apiVersion: batch/v1
kind: Job
metadata:
name: chown-pods-traefik-pvc
spec:
ttlSecondsAfterFinished: 60
template:
spec:
restartPolicy: Never
containers:
- name: touch-acme-json
image: busybox
command: ["bin/sh", "-c"]
args: ["touch /mnt/acme.json && chmod 0600 /mnt/acme.json"]
volumeMounts:
- name: pods-traefik-data
mountPath: /mnt
volumes:
- name: pods-traefik-data
persistentVolumeClaim:
claimName: pods-traefik-vol01
15 changes: 1 addition & 14 deletions playbooks/roles/pods/templates/kube/traefik.yml
@@ -39,17 +39,4 @@ http:
pods-api:
loadBalancer:
servers:
- url: http://pods-api:8000

tls:
options:
default:
minVersion: VersionTLS12
certificates:
- certFile: /tmp/ssl/tls.crt
keyFile: /tmp/ssl/tls.key
stores:
default:
defaultCertificate:
certFile: /tmp/ssl/tls.crt
keyFile: /tmp/ssl/tls.key
- url: http://pods-api:8000
5 changes: 3 additions & 2 deletions playbooks/roles/proxy/templates/docker/locations/actors.conf
@@ -4,9 +4,10 @@ location /v3/actors
auth_request /_auth;
error_page 500 /token-revoked.json;

# use var to allow nginx to start even if $upstream is down
set $upstream http://actors-nginx:80;
resolver 127.0.0.11;
set $upstream "http://actors-nginx:80";
proxy_pass $upstream;


proxy_redirect off;
proxy_set_header Host $host;
6 changes: 3 additions & 3 deletions playbooks/roles/proxy/templates/docker/locations/apps.conf
@@ -1,11 +1,11 @@
# apps
location /v3/apps
{
{
auth_request /_auth;
error_page 500 /token-revoked.json;

# use var to allow nginx to start even if $upstream is down
set $upstream http://apps-api:8080;
resolver 127.0.0.11;
set $upstream "http://apps-api:8080;";
proxy_pass $upstream;

proxy_redirect off;
@@ -4,8 +4,8 @@ location /v3/oauth2
auth_request /_auth;
error_page 500 /token-revoked.json;

# use var to allow nginx to start even if $upstream is down
set $upstream http://authenticator-api:5000;
resolver 127.0.0.11;
set $upstream "http://authenticator-api:5000";
proxy_pass $upstream;

proxy_redirect off;
6 changes: 3 additions & 3 deletions playbooks/roles/proxy/templates/docker/locations/files.conf
@@ -1,12 +1,12 @@

# files
location /v3/files
{
{
auth_request /_auth;
error_page 500 /token-revoked.json;

# use var to allow nginx to start even if $upstream is down
set $upstream http://files-api:8080;
resolver 127.0.0.11;
set $upstream "http://files-api:8080";
proxy_pass $upstream;

proxy_redirect off;
@@ -1,11 +1,10 @@
# globus-proxy
location /v3/globus-proxy
{

# use var to allow nginx to start even if $upstream is down
set $upstream http://globus-proxy:5000;
resolver 127.0.0.11;
set $upstream "http://globus-proxy:5000";
proxy_pass $upstream;

proxy_redirect off;
proxy_set_header Host $host;
}
5 changes: 3 additions & 2 deletions playbooks/roles/proxy/templates/docker/locations/jobs.conf
@@ -4,9 +4,10 @@ location /v3/jobs
auth_request /_auth;
error_page 500 /token-revoked.json;

# use var to allow nginx to start even if $upstream is down
set $upstream http://jobs-api:8080;
resolver 127.0.0.11;
set $upstream "http://jobs-api:8080";
proxy_pass $upstream;


proxy_redirect off;
proxy_set_header Host $host;
5 changes: 3 additions & 2 deletions playbooks/roles/proxy/templates/docker/locations/meta.conf
@@ -5,10 +5,11 @@ location /v3/meta
auth_request /_auth;
error_page 500 /token-revoked.json;

# use var to allow nginx to start even if $upstream is down
set $upstream http://meta:8080;
resolver 127.0.0.11;
set $upstream "http://meta:8080";
proxy_pass $upstream;


proxy_redirect off;
proxy_set_header Host $host;
}
Original file line number Diff line number Diff line change
Expand Up @@ -4,9 +4,10 @@ location /v3/notifications
auth_request /_auth;
error_page 500 /token-revoked.json;

# use var to allow nginx to start even if $upstream is down
set $upstream http://notifications-api:8080;
resolver 127.0.0.11;
set $upstream "http://notifications-api:8080";
proxy_pass $upstream;


proxy_redirect off;
proxy_set_header Host $host;
5 changes: 3 additions & 2 deletions playbooks/roles/proxy/templates/docker/locations/pgrest.conf
@@ -5,9 +5,10 @@ location /v3/pgrest
auth_request /_auth;
error_page 500 /token-revoked.json;

# use var to allow nginx to start even if $upstream is down
set $upstream http://pgrest-api:5000;
resolver 127.0.0.11;
set $upstream "http://pgrest-api:5000";
proxy_pass $upstream;


proxy_redirect off;
proxy_set_header Host $host;
5 changes: 3 additions & 2 deletions playbooks/roles/proxy/templates/docker/locations/pods.conf
@@ -2,9 +2,10 @@
location /v3/pods
{

# use var to allow nginx to start even if $upstream is down
set $upstream {{ proxy_service_url }};
resolver 127.0.0.11;
set $upstream "{{ proxy_service_url }}";
proxy_pass $upstream;


proxy_redirect off;
proxy_set_header Host $host;
@@ -4,9 +4,10 @@ location /v3/security
auth_request /_auth;
error_page 500 /token-revoked.json;

# use var to allow nginx to start even if $upstream is down
set $upstream http://security-api:8080;
resolver 127.0.0.11;
set $upstream "http://security-api:8080";
proxy_pass $upstream;


proxy_redirect off;
proxy_set_header Host $host;