cdk_framework -> New schema for validation and new config format #9

Open · wants to merge 14 commits into base: terraform
8 changes: 8 additions & 0 deletions cdk_framework/.gitignore
Original file line number Diff line number Diff line change
@@ -0,0 +1,8 @@
*.js
!jest.config.js
*.d.ts
node_modules

# CDK asset staging directory
.cdk.staging
cdk.out
6 changes: 6 additions & 0 deletions cdk_framework/.npmignore
@@ -0,0 +1,6 @@
*.ts
!*.d.ts

# CDK asset staging directory
.cdk.staging
cdk.out
3 changes: 3 additions & 0 deletions cdk_framework/EKS/.eslintignore
@@ -0,0 +1,3 @@
node_modules
dist
jest.config.js
17 changes: 17 additions & 0 deletions cdk_framework/EKS/.eslintrc
@@ -0,0 +1,17 @@
{
"root": true,
"parser": "@typescript-eslint/parser",
"plugins": [
"@typescript-eslint"
],
"extends": [
"eslint:recommended",
"plugin:@typescript-eslint/eslint-recommended",
"plugin:@typescript-eslint/recommended"
],
"rules": {
"@typescript-eslint/no-var-requires": 0,
"eslint-disable no-undef": 0
}
}

143 changes: 143 additions & 0 deletions cdk_framework/EKS/DESIGN.md
@@ -0,0 +1,143 @@
# Purpose

The purpose of this directory is to deploy different EKS clusters using AWS CDK.

# Architecture
![Deployment design](https://user-images.githubusercontent.com/54683946/183471629-59479f8c-db49-4c53-bbe5-48b5f18d6b14.png)


Steps in how the cluster deployment occurs:
1. The root construct, App, is created and calls the cluster-deployment method.
2. The cluster configuration file is read and validated, and the VPC stack is created. In it, the VPC is made and prepared to be passed in to each cluster.
3. Cluster stacks are configured from the configuration file (by being cast to interfaces and validated), and the VPC is passed in as one of the props.
4. A single cluster is made in each stack with the configurations provided. We need multiple stacks instead of putting all the clusters in one stack because a stack can't hold more than one EKS cluster.

## Directory Structure

```
/lib
/config
/cluster-config
config.yml
/interfaces
cluster-interface
ec2cluster-interface
/stacks
ec2-cluster-stack
fargate-cluster-stack
vpc-stack
/utils
validate-config-schema
validate-interface-schema
apps
cluster-deployment
/test
```
`apps.ts` calls `cluster-deployment` to create all the stacks. The user either supplies an environment variable with the path to the preferred configuration file or falls back to the default config file found in the `/config/cluster-config` folder. `cluster-deployment.ts` first validates the config file, then calls `vpc-stack.ts` to create a VPC, which is passed in to each of the clusters specified in the configuration file. Then `resource-deployment` casts each cluster defined in the config file to either `ec2-cluster-stack` or `fargate-cluster-stack`. That stack then creates a cluster to be deployed.
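The configuration-path lookup described above can be sketched as follows. This is an illustrative sketch, not the repository's actual code; the function name and default path are assumptions based on the layout described in this document.

```typescript
// Sketch: use CDK_CONFIG_PATH when it is set, otherwise fall back to the
// default config file in the cluster-config folder (path is illustrative).
const DEFAULT_CONFIG_PATH = 'lib/config/cluster-config/clusters.yml';

function resolveConfigPath(env: Record<string, string | undefined>): string {
  // An empty or unset variable means the caller wants the default file.
  const override = env['CDK_CONFIG_PATH'];
  return override && override.length > 0 ? override : DEFAULT_CONFIG_PATH;
}
```

In the real app this would be called with `process.env` before the config file is read and validated.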

## Data Validation

There are 2 different validation steps:
1. Validate the configuration file. This is done by calling `validate-config-schema`, which validates that there are no extra fields in the config file and that all the values for the defined fields are strings.
2. Once interfaces are created from the config file, each interface is passed into `validate-interface-schema`, which checks:
* `launch_type` and `version` must be specified.
* `launch_type` must be either `ec2` or `fargate`, specifying what type of cluster will be made.
* If `launch_type` is `ec2`, then `instance_type` is verified.
* `instance_type` is verified to be compatible. Listings can be found at [Compatible Node Sizes](https://www.amazonaws.cn/en/ec2/instance-types/).
* `version` needs to be between 1.18 and 1.21. Patch version releases are not supported by the CDK.
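The interface-level checks above can be sketched in plain TypeScript. This is a minimal, illustrative sketch; the actual `validate-interface-schema` in the repo (which may use ajv) can differ in names and structure.

```typescript
// Illustrative version of the interface-level checks; returns a list of
// validation errors (empty list means the entry is valid).
const SUPPORTED_VERSIONS = ['1.18', '1.19', '1.20', '1.21'];
const EC2_INSTANCE_FAMILIES = ['m5', 'm6g', 't4g'];

interface RawClusterEntry {
  launch_type?: string;
  version?: string;
  instance_type?: string;
}

function validateClusterEntry(entry: RawClusterEntry): string[] {
  const errors: string[] = [];
  if (!entry.launch_type) errors.push('launch_type must be specified');
  if (!entry.version) errors.push('version must be specified');
  if (entry.launch_type && !['ec2', 'fargate'].includes(entry.launch_type.toLowerCase())) {
    errors.push('launch_type must be either ec2 or fargate');
  }
  if (entry.version && !SUPPORTED_VERSIONS.includes(entry.version)) {
    errors.push('version must be a minor version between 1.18 and 1.21');
  }
  if (entry.launch_type && entry.launch_type.toLowerCase() === 'ec2') {
    // The family prefix ("m5" in "m5.large") must be a supported ec2 family.
    const family = (entry.instance_type ?? '').split('.')[0].toLowerCase();
    if (!EC2_INSTANCE_FAMILIES.includes(family)) {
      errors.push('ec2 clusters need a compatible instance_type, e.g. m5.large');
    }
  }
  return errors;
}
```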

## VPC

A default VPC is created by implementing:
```
const vpc = new ec2.Vpc(this, 'EKSVpc', {
  cidr: "10.0.0.0/16",
  natGateways: 1,
  vpnGateway: true,
  availabilityZones: ['us-west-2a', 'us-west-2b', 'us-west-2c'],
  subnetConfiguration: [
    {
      cidrMask: 24,
      subnetType: ec2.SubnetType.PRIVATE_WITH_NAT,
      name: 'private_subnet'
    },
    {
      cidrMask: 24,
      subnetType: ec2.SubnetType.PUBLIC,
      name: "public_subnet"
    }
  ]
});
```
The VPC was configured based on what was done in the [terraform framework](https://github.com/aws-observability/aws-otel-test-framework/blob/6cd6478ce2c32223494460b390f33aeb5e61c48e/terraform/eks_fargate_setup/main.tf#:~:text=%23%20%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D%2D-,module%20%22vpc%22%20%7B,-source%20%3D%20%22). The same configuration is used for every EKS cluster, so it was fine making this the default. The only difference between the terraform framework and the CDK is that the CIDR blocks for the subnets aren't the same; this is fine as long as they both remain within the range of the VPC CIDR block.

## Interfaces

There are 2 different interfaces that can be made - `ClusterInterface` and `ec2ClusterInterface`. Every cluster in the configuration file is first cast to `ClusterInterface`. `ClusterInterface` has 3 fields:
* `name` - the name of the cluster
* `launch_type` - the launch type of the cluster. Either `ec2` or `fargate`
* `version` - the Kubernetes version of the cluster

Once the `ClusterInterface` is created, its `launch_type` is checked. If it is `fargate`, nothing more happens; if it is `ec2`, the cluster is cast to `ec2ClusterInterface`. The reason is that `ec2ClusterInterface` has an additional field, `instance_type`, which is the instance type for the ec2 cluster node groups.
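The two interfaces and the `launch_type`-based narrowing can be sketched like this. The declarations are hypothetical; the exact shapes in the repo may differ, and the type guard is an illustrative way to express the cast.

```typescript
// Hypothetical shapes for the two interfaces described above; field names
// follow the config keys.
interface ClusterInterface {
  name: string;
  launch_type: string;
  version: string;
}

interface Ec2ClusterInterface extends ClusterInterface {
  instance_type: string;
}

// Narrow a ClusterInterface to Ec2ClusterInterface when launch_type is ec2.
function isEc2Cluster(cluster: ClusterInterface): cluster is Ec2ClusterInterface {
  return cluster.launch_type.toLowerCase() === 'ec2'
    && typeof (cluster as Ec2ClusterInterface).instance_type === 'string';
}
```

With this guard, a `fargate` entry stays a plain `ClusterInterface`, while an `ec2` entry gains typed access to `instance_type`.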

## Cluster Stacks

There are 2 different stacks that could potentially be created - `ec2ClusterStack` and `fargateClusterStack`. To determine which stack needs to be deployed, the `ClusterInterface`'s `launch_type` is checked. If it is `fargate`, it will deploy to `fargateClusterStack`; if it is `ec2`, it will cast to `ec2ClusterInterface` and deploy to `ec2ClusterStack`.

There are a few props that need to be passed into the `fargate-cluster-stack.ts` file for the cluster to be created and deployed:
* `name`. The name of the cluster.
* `vpc`. The VPC that was made in the `vpc-stack` is passed in.
* `version`. The Kubernetes Version needs to be specified.

Then, the stack will call `eks.FargateCluster` to create the cluster. It will look like this:

```
this.cluster = new eks.FargateCluster(this, props.name, {
  clusterName: props.name,
  vpc: props.vpc,
  version: props.version,
  clusterLogging: logging
});
```
The vpc is the VPC passed in as a prop, and the Kubernetes version is the version passed in as a prop. The cluster logging types specify what types of logs we want the deployment to broadcast.

If `ec2-cluster-stack.ts` is called, there are some additional props that need to be passed:
* `name`. The name of the cluster.
* `vpc`. The VPC that was made in the `vpc-stack` is passed in.
* `version`. The Kubernetes Version needs to be specified.
* `instance_type`. The instance type of the ec2 Cluster node groups.

Then, the stack will call `eks.Cluster` to create the cluster. It will look like this:

```
this.cluster = new eks.Cluster(this, props.name, {
  clusterName: props.name,
  vpc: props.vpc,
  vpcSubnets: [{ subnetType: ec2.SubnetType.PUBLIC }],
  defaultCapacity: 0, // we want to manage capacity ourselves
  version: props.version,
  clusterLogging: logging
});
```

The default capacity is 0, so the cluster starts with 0 node groups. This way we can add a node group of whatever launch type we want, instead of having the default `amd_64` node group.

Because we specified a default capacity of 0, we need to add a node group. This is where we call `cluster.addNodegroupCapacity`, passing the instance type for the node group. Adding the node group will look like this:

```
this.cluster.addNodegroupCapacity('ng-' + instanceType, {
  instanceTypes: [new ec2.InstanceType(instanceType)],
  minSize: 2,
  nodeRole: workerRole
});
```
The `instanceType` needs to be `m5`, `m6g`, or `t4g` plus a compatible node size. The `minSize` is 2, which is the recommended minimum. `nodeRole` refers to the IAM role that we want to assign to the node group. It is critical to give the node group proper authorization so that the clusters can be properly managed, such as adding resources to them.

## Testing

### Fine-Grained Assertion Tests

These tests make sure the CloudFormation template created for deployment is correct, and will therefore provide the correct information. This is done for both the VPC stack and the cluster stacks.

For testing the cluster stack, the environment variable `CDK_CONFIG_PATH` is pointed at the `/test/test_config/test_clusters.yml` file.
18 changes: 18 additions & 0 deletions cdk_framework/EKS/Makefile
@@ -0,0 +1,18 @@
CLUSTERS := $(shell cdk ls)
EKSCLUSTERS := $(shell printf '%s' '$(CLUSTERS)' | grep -oh "\w*EKSCluster\w*")
len := $(shell printf '%s' '$(EKSCLUSTERS)' | wc -w)

EKS-infra:
make deploy-VPC
make -j $(len) deploy-clusters
.PHONY: EKS-infra

deploy-clusters: $(EKSCLUSTERS:%=deploy-clusters/%)
deploy-clusters/%: CLUSTER=$*
deploy-clusters/%:
cdk deploy $(CLUSTER) --require-approval never

deploy-VPC:
cdk synth
cdk deploy EKSVpc
.PHONY: deploy-VPC
118 changes: 118 additions & 0 deletions cdk_framework/EKS/README.md
@@ -0,0 +1,118 @@
# Welcome to your CDK Test Framework

This is the repository for running tests using AWS CDK.

## Repo Structure

There are a number of important files that are used when running the CDK.

The `lib` directory is where all the cluster deployment implementation is done. `app.ts` is where the root construct, app, is created and calls cluster deployment and resource deployment.

The `test` directory is where testing is done on the cluster deployment.

The `cdk.json` file tells the CDK Toolkit how to execute your app.

The `Makefile` is used to deploy the clusters in parallel. All that is required is to call `make EKS-infra`, and all the clusters configured in the configuration file will be deployed.

The `package.json` file contains the important libraries that are needed to run the deployment.

The `tsconfig.json` file is used to tell TypeScript how to configure the project.

## Getting Started

### Environment Setup

Since the code base is written in TypeScript, the CDK library has to be installed using Node.

1. Make sure you have Node installed, so that you can use `npm`.
2. From the EKS directory, install the AWS CDK library: `npm install aws-cdk-lib`.
3. Install `yaml-schema-validator`: `npm install yaml-schema-validator`.
4. Install the ajv library: `npm install ajv`.
5. Install ajv errors: `npm install ajv-errors`.
6. In order to use the linter, the eslint dependencies need to be installed: `npm install --save-dev eslint @typescript-eslint/parser @typescript-eslint/eslint-plugin`.

### Environment Variables

There are a number of environment variables that should be defined before deploying the clusters:

* `CDK_CONFIG_PATH` - The path to the cluster configuration file - default is `clusters.yml` in the `lib/config/cluster-config` folder.
* `REGION` - The region in which the clusters should be deployed - default is `us-west-2`.

### Setting Config File

A sample template of what the config file looks like can be seen in the YAML files found in the `lib/config` folder. Create a category called `clusters`, under which each desired cluster is configured. The name given to the cluster should be the key name for each cluster. Then, there are a couple of fields that need to be addressed:

* `clusters`:
* `name` - The name of the cluster. It needs to be a string, and no two clusters can have the same name.
* `launch_type` - Either `ec2` or `fargate`. Determines the launch type of the cluster to be deployed. Case insensitive.
* `version` - The Kubernetes version. Supported Kubernetes versions are any versions between 1.18-1.21, as listed in the [KubernetesVersion API](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_eks.KubernetesVersion.html). Specifying patch releases isn't an option, as the CDK doesn't support it; every input value must be a minor version.
* `instance_type` - A string which is only required for ec2 clusters. There are 2 parts to the `instance_type`: the ec2 instance family, which is the CPU architecture, and the node size. The options for the instance family are `m6g`, `t4g`, and `m5` - each represents a different CPU architecture. There is a vast variety of node sizes; the list of compatible sizes can be found at [Compatible Node Sizes](https://www.amazonaws.cn/en/ec2/instance-types/). The string should follow the template "ec2_instance" + "." + "node_size" - for example, `m5.large`. It is case insensitive.
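The `instance_type` template above can be expressed as a small, case-insensitive check. This is a hypothetical sketch for illustration (the node size is not validated against the full AWS list here, only the family prefix and the `family.size` shape):

```typescript
// Illustrative check of the "ec2_instance.node_size" template; only the
// supported families (m5, m6g, t4g) and the dotted shape are verified.
const INSTANCE_TYPE_PATTERN = /^(m5|m6g|t4g)\.[a-z0-9]+$/i;

function isValidInstanceType(instanceType: string): boolean {
  return INSTANCE_TYPE_PATTERN.test(instanceType);
}
```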

Here is a sample configuration file example:
```
---
clusters:
- name: ec2Cluster
version: "1.21"
launch_type: ec2
instance_type: m5.large
- name: fargateCluster
version: "1.20"
launch_type: fargate
```

There are two clusters being deployed in this example - `ec2Cluster` and `fargateCluster`. There are 4 possible fields for each cluster - `name`, `launch_type`, `version`, and `instance_type`. `instance_type` is only required for the ec2 cluster.


### Deploying clusters

1. Call `cdk synth` to synthesize the clusters to be deployed. This also makes sure everything was configured properly.
2. Call `cdk deploy --all` to deploy all the clusters. You can deploy a single cluster by calling `cdk deploy CLUSTERNAME`, where CLUSTERNAME is the name of the cluster stack. It is important to note that the stack name is the name provided for the cluster in the configuration file, plus "EKSCluster" at the end. So, if you named a cluster "cluster_example", the stack name is "cluster_exampleEKSCluster".
3. Call `cdk destroy --all` to destroy all the clusters. You can destroy a single cluster by calling `cdk destroy CLUSTERNAME`, where CLUSTERNAME is the name of the cluster stack. The same stack-naming rules as above apply.
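The stack-naming convention above is simple enough to capture in one helper; this is an illustrative sketch, not a function from the repo:

```typescript
// The configured cluster name plus the "EKSCluster" suffix gives the stack
// name used with `cdk deploy` / `cdk destroy`.
function stackNameFor(clusterName: string): string {
  return `${clusterName}EKSCluster`;
}
```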

#### Updating Clusters

In order to update clusters, just change the config file and then call `make -j number_of_clusters deploy-clusters`, where number_of_clusters is the number of clusters in the configuration file. There are some limitations on what can be updated:
1. You can't downgrade a version.
2. You can't change launch_types without changing the cluster name.

#### Makefile

The Makefile is used to deploy the clusters in parallel. It determines how many clusters are configured, and then uses that number as the number of processes to run, so that all the clusters deploy simultaneously. After saving the configuration file and setting all appropriate environment variables, call `make EKS-infra` and all the clusters will be deployed. This call first runs `deploy-VPC`, which calls `cdk synth` and then creates the VPC by calling `cdk deploy EKSVpc`. After the VPC is deployed, it calls `deploy-clusters`, which deploys all the configured EKS clusters by running `make -j processes deploy-clusters`, where processes is the number of clusters being deployed.

### Example Deployment

Here is an example of how to run a deployment. Say there are two clusters to deploy: an amd_64 cluster (ec2 instance family `m5`) with node size `large`, and a fargate cluster. The first thing to do is set up the configuration file. Looking at the example provided above, all we have to do is copy the template and give the clusters names. For this example, we will use `ec2Cluster` for the amd_64 cluster and `fargateCluster` for the fargate cluster.

```
---
clusters:
- name: ec2Cluster
version: "1.21"
launch_type: ec2
instance_type: m5.large
- name: fargateCluster
version: "1.20"
launch_type: fargate
```
Now that the configuration file is set up, make sure the `CDK_CONFIG_PATH` environment variable points to this configuration file. This only needs to be done if the `clusters.yml` file in the `/lib/config/cluster-config` folder was not overridden. Once the variable is set, all that needs to be done is call `make EKS-infra`, and all the clusters will be deployed.

## Testing

1. Fine-Grained Assertion Tests
* These tests are used to test the template of the CloudFormation stacks that are being created.

In order to run these tests, use command `npm test`.

### Linter

ESLint is used to make sure formatting is done well. To run the linter, call `npm run lint`.

## Useful commands

* `npm run build` compile typescript to js
* `npm run watch` watch for changes and compile
* `npm run test` perform the jest unit tests
* `cdk deploy` deploy this stack to your default AWS account/region
* `cdk diff` compare deployed stack with current state
* `cdk synth` emits the synthesized CloudFormation template
37 changes: 37 additions & 0 deletions cdk_framework/EKS/cdk.json
@@ -0,0 +1,37 @@
{
"app": "npx ts-node --prefer-ts-exts lib/app.ts",
"watch": {
"include": [
"**"
],
"exclude": [
"README.md",
"cdk*.json",
"**/*.d.ts",
"**/*.js",
"tsconfig.json",
"package*.json",
"yarn.lock",
"node_modules",
"test"
]
},
"context": {
"@aws-cdk/aws-apigateway:usagePlanKeyOrderInsensitiveId": true,
"@aws-cdk/core:stackRelativeExports": true,
"@aws-cdk/aws-rds:lowercaseDbIdentifier": true,
"@aws-cdk/aws-lambda:recognizeVersionProps": true,
"@aws-cdk/aws-lambda:recognizeLayerVersion": true,
"@aws-cdk/aws-cloudfront:defaultSecurityPolicyTLSv1.2_2021": true,
"@aws-cdk-containers/ecs-service-extensions:enableDefaultLogDriver": true,
"@aws-cdk/aws-ec2:uniqueImdsv2TemplateName": true,
"@aws-cdk/core:checkSecretUsage": true,
"@aws-cdk/aws-iam:minimizePolicies": true,
"@aws-cdk/core:validateSnapshotRemovalPolicy": true,
"@aws-cdk/aws-codepipeline:crossAccountKeyAliasStackSafeResourceName": true,
"@aws-cdk/core:target-partitions": [
"aws",
"aws-cn"
]
}
}
9 changes: 9 additions & 0 deletions cdk_framework/EKS/jest.config.js
@@ -0,0 +1,9 @@
/* eslint-disable no-undef */
module.exports = {
testEnvironment: 'node',
roots: ['<rootDir>/test'],
testMatch: ['**/*.test.ts'],
transform: {
'^.+\\.tsx?$': 'ts-jest'
}
};
8 changes: 8 additions & 0 deletions cdk_framework/EKS/lib/app.ts
@@ -0,0 +1,8 @@
#!/usr/bin/env node
import 'source-map-support/register';
import * as cdk from 'aws-cdk-lib';
import { deployClusters } from './cluster-deployment';

const app = new cdk.App();

const clusterMap = deployClusters(app);