
Containerized Deployment on Amazon Web Services


AWS Services

We currently use AWS as Infrastructure as a Service (IaaS). This means that AWS manages virtualization (segmenting physical machines into smaller virtual ones), servers, storage, and networking, while DiSSCo manages the applications, data, runtime, middleware, and so on.

The following are the AWS services we use.

Elastic Compute Cloud (EC2): Secure virtual machines (instances). The Core DiSSCo Service is deployed on an EC2 instance, as is our Handle Server. Our EC2 instances mostly run in London (eu-west-2) or Frankfurt (eu-central-1).

Elastic Container Registry (ECR): A registry for container images. When GitHub Actions are properly set up, the most up-to-date image of a GitHub project is automatically pushed to ECR, from where it can be deployed through Kubernetes (a sketch of such a workflow is shown below).
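As an illustration, a GitHub Actions workflow that builds an image and pushes it to ECR might look roughly like the following sketch. The workflow file name, the example-service repository, the eu-west-2 region, and the access-key secrets are assumed placeholders, not the actual DiSSCo configuration.

```yaml
# Hypothetical workflow: .github/workflows/build-and-push.yml
name: Build and push to ECR

on:
  push:
    branches: [main]

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-west-2          # assumed region (London)

      - name: Log in to Amazon ECR
        id: ecr-login
        uses: aws-actions/amazon-ecr-login@v1

      - name: Build and push image
        run: |
          # "example-service" is a placeholder ECR repository that must already exist
          IMAGE=${{ steps.ecr-login.outputs.registry }}/example-service:${{ github.sha }}
          docker build -t "$IMAGE" .
          docker push "$IMAGE"
```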

Containerized Deployment

All deployed projects must first be containerized as a Docker image. A Docker image is a read-only template that packages an application's code together with its runtime and dependencies; Docker uses it as the set of instructions, and the starting point, for building a Docker container.

Docker images are automatically pushed to the Elastic Container Registry on AWS (when GitHub Actions are enabled). DiSSCo uses Kubernetes to deploy applications: it acts as an abstraction layer above the Docker containers and manages their deployment, deciding which virtual machine a container runs on and dynamically scheduling workloads.
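For illustration, a minimal Deployment manifest of the kind Kubernetes uses to pull an image from ECR is sketched below. The service name, account ID, region, image tag, and port are placeholders rather than the real DiSSCo configuration.

```yaml
# Hypothetical kubernetes.yaml: a minimal Deployment for one containerized service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
        - name: example-service
          # Image pulled from ECR; account ID, region, and tag are placeholders
          image: 123456789012.dkr.ecr.eu-west-2.amazonaws.com/example-service:1.0.0
          ports:
            - containerPort: 8080
```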

A kubernetes.yaml file identifies which container(s), and which version, to deploy, while the kubernetes-route.yaml file describes the network routing.
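A rough, hypothetical sketch of what such a routing file could contain is shown below: a Service exposing the Deployment from the previous sketch, plus an Ingress mapping a hostname to it. The hostname and ports are assumed placeholders.

```yaml
# Hypothetical kubernetes-route.yaml: exposes the example Deployment via a Service and an Ingress
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example-service
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-service
spec:
  rules:
    - host: example.dissco.eu        # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80
```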
