Acceptance Release checklist
This wiki page provides a checklist for deploying a new release to the Acceptance environment. The complexity of the release depends on its nature: when data models have changed the impact is bigger than when we only need to deploy a new latest version, and breaking data model changes have an even bigger impact because we need to clear existing data. For the Acceptance environment we assume that we can remove all data and start fresh with a new breaking release. We re-ingest the data after the release has finished.
Clean up relevant data from the data storage layer. Depending on the object, the data can live in three places: the database, the Elasticsearch index and/or MongoDB. Not all objects are stored in all three storage solutions. For example, DigitalSpecimen data is stored in all three, but MAS information is stored only in the database and MongoDB. Breaking changes in the data model of the DigitalSpecimen therefore need to be applied in all three solutions, for MAS only in two. Breaking changes do not include additions, as we can always add additional attributes, but structural changes to the existing model do require a purge of storage.
To purge data from the database we can truncate the table. Only in very few instances do we need to fully drop the table; usually we truncate because the data model in the JSONB column has changed and is no longer compatible with the new model. Truncating can be done through your favourite DB IDE with a TRUNCATE command.
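From the command line this could look as follows, as a sketch; the host, credentials and the digital_specimen table name are placeholders and should be replaced with the actual values for Acceptance:
# Truncate the table whose JSONB documents follow the old data model (placeholder names)
psql -h localhost -p 5432 -U dissco -d dissco -c "TRUNCATE TABLE digital_specimen;"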
A purge of Elasticsearch is only necessary for the objects DigitalSpecimen, DigitalMediaObject and Annotation.
To purge data from Elasticsearch we can create a port-forward to Kibana:
k port-forward -n elastic service/kibana-kb-http 5601
We then go to https://localhost:5601/login?next=%2Fapp%2Fhome (ignore the certificate warning) and log in with the credentials.
We can now use Kibana (Stack Management -> Index Management) to delete the relevant indices.
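Alternatively the index can be deleted through the Elasticsearch API. The sketch below assumes the Elasticsearch HTTP service in the elastic namespace is called elasticsearch-es-http and that the index is named digital-specimen; both names are placeholders and may differ in the cluster:
k port-forward -n elastic service/elasticsearch-es-http 9200
# Delete the index that holds documents in the old data model (placeholder index name)
curl -k -u elastic:<password> -X DELETE "https://localhost:9200/digital-specimen"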
In MongoDB we store versions for our objects.
For almost all objects we publish new versions on changes.
To purge all old versions (in the old data model) we can drop the collection.
This is the fastest way to remove all the data; however, it does require us to recreate the collection.
So recreate the collection and add the indices (see Setup indices for MongoDB); a sketch follows after the connection steps below.
Logging into MongoDB can be done by opening up a port-forward (make sure the MongoDB tunnel is running):
k port-forward service/mongodb-tunnel 27017
After the port is opened, you can log in through MongoDB Compass.
Or through the CLI via a pod on the network, as explained in the Installation guide.
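As a sketch, the drop and recreate described above can also be done with mongosh once the port-forward is open; the database name, collection name and index definition below are placeholders, the actual ones are listed on the Setup indices for MongoDB page:
# Drop the outdated collection, recreate it and restore its index (placeholder names)
mongosh "mongodb://localhost:27017/dissco" --eval 'db.digital_specimen_provenance.drop()'
mongosh "mongodb://localhost:27017/dissco" --eval 'db.createCollection("digital_specimen_provenance")'
mongosh "mongodb://localhost:27017/dissco" --eval 'db.digital_specimen_provenance.createIndex({ id: 1, version: 1 })'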
(Database changed) Database update We can now deploy any database changes. We are not yet using a database migration tool like Flyway or Liquibase, so this needs to be done by hand. Deploy all changes to the database, which should be available as SQL statements in the deployment repository.
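Applying such a change by hand could look as follows, as a sketch; the host, credentials and the file name add_new_column.sql are placeholders for the actual SQL statements in the deployment repository:
# Run the release's SQL statements against the database (placeholder names)
psql -h localhost -p 5432 -U dissco -d dissco -f add_new_column.sql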
(Database changed) Elastic update Deploy the latest version or update the existing mapping. The statements should be available as commands in the deployment repository.
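As a sketch, adding a new field to an existing mapping could be done with a request like the one below against the port-forwarded Elasticsearch service; the index name digital-specimen and the field newAttribute are placeholders:
# Add a new (placeholder) field to an existing index mapping
curl -k -u elastic:<password> -X PUT "https://localhost:9200/digital-specimen/_mapping" -H "Content-Type: application/json" -d '{ "properties": { "newAttribute": { "type": "keyword" } } }'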
- Update deployment files Update all the deployment files which are needed for the release. Keep an eye out for new or changed environment variables, and for new Secrets which need to be added. A PR can be made so that others can review the changes. Be aware that when the PR is merged, this will kick off ArgoCD and start the actual deployment of these files on k8s.
(Breaking release) Ingest data When there were breaking changes in the release, we might need to re-ingest the data. This ensures that all the data adheres to the new data model.