codeready-toolchain/toolchain-infra
Prerequisites

Installing AWS CLI

You can find more information about how to install the AWS CLI [here](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv1.html), or simply install it using the following bash commands:

curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "/tmp/awscli-bundle.zip"
unzip /tmp/awscli-bundle.zip -d /tmp
sudo /tmp/awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws

Verify that the AWS CLI is installed correctly using aws --version

Configure AWS

Configuring Profiles

Toolchain Permanent Cluster

The Toolchain AWS account already has a robot account crt-robot with the minimum permissions required to create an OpenShift cluster. With the available access key and secret key, configure an AWS profile named crt-robot using the following:

aws configure --profile crt-robot
AWS Access Key ID [None]: AKIAI44QH8DHBEXAMPLE
AWS Secret Access Key [None]: je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
Default region name [None]: us-east-2
Default output format [None]: text
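
To verify that the profile credentials work, you can query the caller identity:

aws sts get-caller-identity --profile crt-robot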

48 Hrs Temporary Cluster

If you want to set up the toolchain on a 48 hrs temporary cluster, configure AWS with the profile openshift-dev using the following:

aws configure --profile openshift-dev
AWS Access Key ID [None]: AKIAI44QH8DHBEXAMPLE
AWS Secret Access Key [None]: je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
Default region name [None]: us-east-2
Default output format [None]: text

Installing openshift-installer

We need to set up the openshift-install binary to create an OpenShift cluster:

wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-install-linux-4.5.5.tar.gz -P /tmp/
tar -xvf /tmp/openshift-install-linux-4.5.5.tar.gz --directory /tmp
sudo mv /tmp/openshift-install /usr/local/bin/

You can download the latest openshift-installer from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/
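
After moving the binary into place, you can verify it is available on your PATH:

openshift-install version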

Setting Required Environment Variables

CLIENT_SECRET

To set up the RHD Identity Provider, you need to register a client with RHD and, using the client's secret, create a secret in the openshift-config namespace to be used by the OAuth cluster config. To create the required secret, set the CLIENT_SECRET environment variable to its base64-encoded value.

export CLIENT_SECRET=base64_encoded_client_secret
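
If you have the raw (un-encoded) client secret handy, one way to produce the base64 value is sketched below; RAW_CLIENT_SECRET is a hypothetical variable holding the secret from your RHD client registration:

# encode the raw secret; tr strips any newline wrapping added by base64
export CLIENT_SECRET=$(echo -n "$RAW_CLIENT_SECRET" | base64 | tr -d '\n')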

ISSUER

If you want to set up a different Identity Provider, e.g. your own hosted Keycloak, you need to use the issuer URL, which is the publicly exposed route for your Keycloak:

export ISSUER=your_issuer_url

If you do not set this variable, the script will use the default issuer, i.e. "https://sso.redhat.com/auth/realms/redhat-external", which points to RH SSO.
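
For example, for a self-hosted Keycloak the issuer URL typically has the form below (host and realm are placeholders):

export ISSUER=https://keycloak.example.com/auth/realms/my-realm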

PULL_SECRET

The configuration files for the host, member, and 48 hrs temporary clusters are stored under the /config directory; they require a pull secret, which is set via the PULL_SECRET environment variable:

export PULL_SECRET='{"auths":{"cloud.openshift.com":{"auth":"HSADJDFJJLDFbhf345==","email":"[email protected]"},"quay.io":{"auth":"jkfdsjfTH78==","email":"[email protected]"},"registry.connect.redhat.com":{"auth":"jhfkjdjfjdADSDS398njdnfj==","email":"[email protected]"},"registry.redhat.io":{"auth":"jdfjfdhfADSDSFDSF67dsgh==","email":"[email protected]"}}}'

You can download/copy the required pull_secret from https://cloud.redhat.com/openshift/install/aws/installer-provisioned
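
If you saved that pull secret to a file instead, you can load it from there (the path below is a placeholder):

export PULL_SECRET="$(cat ~/Downloads/pull-secret.txt)"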

SSH_PUBLIC_KEY

We need to add an SSH key to the authorized keys on all the nodes created by the installer; the public key is passed in by setting the SSH_PUBLIC_KEY environment variable:

export SSH_PUBLIC_KEY="ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAklOUpkDHrfHY17SbrmTIpNLTGK9Tjom/BWDSUGPl+nafzlHDTYW7hdI4yZ5ew18JH4JW9jbhUFrviQzM7xlELEVf4h9lFX5QVkbPppSwg0cda3Pbv7kOdJ/MTyBlWXFCR+HAo3FXRitBqxiX1nKhXpHAZsMciLq8V6RjsNAQwdsdMFvSlVK/7XAt3FaoJoAsncM1Q9x5+3V0Ww68/eIFmb1zuUFljQJKprrX88XypNDvjYNby6vw/Pb0rwert/EnmZ+AW4OZPnTPI89ZPmVMLuayrD2cE86Z/il8b+gw3r3+1nKatmIkjn2so1d01QraTlMqVSsbxNrRFi9wrf+M7Q== [email protected]"
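
If you do not have a key pair yet, you can generate one and export its public half (the key path is a placeholder):

ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/crt_rsa
export SSH_PUBLIC_KEY="$(cat ~/.ssh/crt_rsa.pub)"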

MAILGUN_DOMAIN

A domain is required for sending messages using the Mailgun API. The value of this environment variable must be base64 encoded.

export MAILGUN_DOMAIN=your-mailgun-domain

MAILGUN_API_KEY

Authentication is required in order to interact with the Mailgun API. The value of this environment variable must be base64 encoded.

export MAILGUN_API_KEY=your-mailgun-api-key

MAILGUN_SENDER_EMAIL

The sender’s email address is required in order to interact with the Mailgun API. The value of this environment variable must be base64 encoded.

export MAILGUN_SENDER_EMAIL=your-mailgun-senders-email-address
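
As with CLIENT_SECRET, the three Mailgun values must be base64 encoded before export; a sketch with placeholder values:

export MAILGUN_DOMAIN=$(echo -n "mg.example.com" | base64)
export MAILGUN_API_KEY=$(echo -n "key-0123456789abcdef" | base64)
export MAILGUN_SENDER_EMAIL=$(echo -n "noreply@example.com" | base64)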

REGISTRATION_SERVICE_URL

The URL where the registration-service is running.

export REGISTRATION_SERVICE_URL=registration-service-url

CONSOLE_NAMESPACE

The console namespace. If the CONSOLE_NAMESPACE env var is not provided, it will default to "openshift-console".

export CONSOLE_NAMESPACE=console-namespace

CONSOLE_ROUTE_NAME

The console route name. If the CONSOLE_ROUTE_NAME env var is not provided, it will default to "console".

export CONSOLE_ROUTE_NAME=console-route-name

IDENTITY_PROVIDER

The identity provider. If the IDENTITY_PROVIDER env var is not provided, it will default to "rhd".

export IDENTITY_PROVIDER=identity-provider
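
Before running the setup scripts, you can sanity-check that the variables without defaults are set. This is just a convenience sketch, not part of the repo's scripts; the variable list is inferred from the sections above:

# bash: warn about any unset required variable
for v in CLIENT_SECRET PULL_SECRET SSH_PUBLIC_KEY MAILGUN_DOMAIN MAILGUN_API_KEY MAILGUN_SENDER_EMAIL REGISTRATION_SERVICE_URL; do
  [ -z "${!v}" ] && echo "WARNING: $v is not set"
done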

Setting up Toolchain on Host and Member cluster

To set up a hosted toolchain on multiple clusters (currently we use 2 clusters, i.e. host and member), we need to do the following:

  1. Create host and member cluster

  2. Setup RHD Identity Provider

  3. Create admin users with cluster-admin roles

  4. Remove self-provisioner cluster role from authenticated users

  5. Deploy registration service and host-operator on host cluster

  6. Deploy member-operator on member cluster

  7. Create/setup ToolchainCluster

With Single Step

Permanent Clusters

In order to achieve all of the above on permanent clusters, use the following:

./setup_toolchain.sh

Overwriting Operator’s Namespace

If you want to overwrite the operators' namespaces, you can use the respective flags, such as the following:

./setup_toolchain.sh -hs my_host_ns -mn my_member_ns

48 Hrs Dev Clusters

In order to achieve all of the above on temporary clusters for 48 hrs, use the following:

./setup_toolchain.sh -d

Installing the Cluster Logging

NOTE: this installation is now part of the default cluster setup.

If you want to collect logs for all pods on an existing cluster, you can run the following script, which will take care of installing the Cluster Logging and Elasticsearch operators and configuring all the pieces (Fluentd, Collector, Elasticsearch and Kibana).

./setup_logging.sh

Once all pods in the openshift-logging namespace are in the Running state, you can access the Kibana dashboard available at:

oc get routes/kibana -o jsonpath='https://{.spec.host}'
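
To watch the pods until they are all Running, you can use:

oc get pods -n openshift-logging -w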

With Multiple Steps

With Default Namespace for Operators

If you want to try this setup one step at a time, follow these steps:

./setup_cluster.sh -t host
./setup_cluster.sh -t member
./setup_toolchainclusters.sh

With Overriding an Operator’s Namespace

If you want to overwrite an operator’s namespace, you can use the respective flags for setup_cluster.sh, or environment variables for setup_toolchainclusters.sh, as in the following steps:

./setup_cluster.sh -t host -hs my_host_ns -mn my_member_ns
./setup_cluster.sh -t member -hs my_host_ns -mn my_member_ns
MEMBER_OPERATOR_NS=my_member_ns HOST_OPERATOR_NS=my_host_ns ./setup_toolchainclusters.sh

Installing and Renewing Let’s Encrypt Certificates

Make sure you have acme.sh installed and AWS access credentials set (see the steps below for details):

  1. Clone the acme.sh GitHub repository:

cd $HOME
git clone https://github.com/neilpang/acme.sh
cd acme.sh

  2. Update the file $HOME/acme.sh/dnsapi/dns_aws.sh with your AWS access credentials (uncomment the two variables at the top and set their values):

#!/usr/bin/env sh
#AWS_ACCESS_KEY_ID="YOUR ACCESS KEY"
#AWS_SECRET_ACCESS_KEY="YOUR SECRET ACCESS KEY"
#This is the Amazon Route53 api wrapper for acme.sh
[...]
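
Alternatively, acme.sh's dns_aws hook can also pick the credentials up from the environment, so exporting them should work instead of editing the file:

export AWS_ACCESS_KEY_ID="YOUR ACCESS KEY"
export AWS_SECRET_ACCESS_KEY="YOUR SECRET ACCESS KEY"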

To issue new certificates and install them to both host and member clusters, use the following command:

./install_certs.sh

For renewing the existing certificates:

./install_certs.sh --renew
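
If you want renewals to run automatically, a weekly cron entry along these lines should work (the repo path is a placeholder; the repo itself does not ship such a job):

0 0 * * 0 cd /path/to/toolchain-infra && ./install_certs.sh --renew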

Cleaning Up Default Kubeadmin

Once the host and member clusters are set up with everything required, and you have confirmed that crt-admin can log in and has the required access to cluster-scoped resources, you can remove the default kubeadmin user using the following step:

oc delete secret kubeadmin -n kube-system
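
If you want to double-check that the secret exists before removing it:

oc get secret kubeadmin -n kube-system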

Destroying Cluster

Make sure to export the required AWS profile:

  • If your cluster was created for 48 hrs, export AWS_PROFILE=openshift-dev

  • If your cluster is a permanent cluster, export AWS_PROFILE=crt-robot

From the Directory That Stores the Metadata for the OpenShift 4 Cluster

openshift-install destroy cluster
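
If you are not running the command from that directory, the installer's --dir flag lets you point at it explicitly (the path is a placeholder):

openshift-install destroy cluster --dir /path/to/cluster-dir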

If You Lost the Metadata Required to Destroy the OpenShift 4 Cluster

If the OpenShift 4 cluster was deployed by the installer and you have lost the metadata, the installer cannot delete the cluster without it. In order to destroy the cluster using the installer, you should regenerate the metadata.json file as follows.

Set Required Variables Using the Following

CLUSTER_NAME=NAME
AWS_REGION=REGION
CLUSTER_UUID=$(oc get clusterversions.config.openshift.io version -o jsonpath='{.spec.clusterID}{"\n"}')
INFRA_ID=$(oc get infrastructures.config.openshift.io cluster -o jsonpath='{.status.infrastructureName}{"\n"}')

Generate metadata.json

echo "{\"clusterName\":\"${CLUSTER_NAME}\",\"clusterID\":\"${CLUSTER_UUID}\",\"infraID\":\"${INFRA_ID}\",\"aws\":{\"region\":\"${AWS_REGION}\",\"identifier\":[{\"kubernetes.io/cluster/${INFRA_ID}\":\"owned\"},{\"openshiftClusterID\":\"${CLUSTER_UUID}\"}]}}" > metadata.json

Destroy the Cluster With the Generated metadata.json File

openshift-install destroy cluster
