You can find more information about how to install the AWS CLI [here](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv1.html), or simply install it using the following bash commands:
curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "tmp/awscli-bundle.zip"
unzip /tmp/awscli-bundle.zip
sudo ./tmp/awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
Verify that the AWS CLI is installed correctly:

```
aws --version
```
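The output should look roughly like the following (illustrative only; the exact versions will differ on your machine):

```
aws-cli/1.18.69 Python/3.8.5 Linux/5.8.0 botocore/1.16.19
```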
Toolchain AWS already has a robot account crt-robot with the required minimum permissions to create an OpenShift cluster. With its access key and secret key available, configure an AWS profile named crt-robot using the following:
```
aws configure --profile crt-robot
AWS Access Key ID [None]: AKIAI44QH8DHBEXAMPLE
AWS Secret Access Key [None]: je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
Default region name [None]: us-east-2
Default output format [None]: text
```
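To confirm the profile is picked up and the credentials are valid, you can query the caller identity with a standard AWS CLI call:

```
aws sts get-caller-identity --profile crt-robot
```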
If you want to set up the toolchain on a 48-hour temporary cluster, configure AWS with the profile openshift-dev using the following:
```
aws configure --profile openshift-dev
AWS Access Key ID [None]: AKIAI44QH8DHBEXAMPLE
AWS Secret Access Key [None]: je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
Default region name [None]: us-east-2
Default output format [None]: text
```
We need to set up the openshift-install binary to create an OpenShift cluster:
```
wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-install-linux-4.5.5.tar.gz -P /tmp/
tar -xvf /tmp/openshift-install-linux-4.5.5.tar.gz --directory /tmp
sudo mv /tmp/openshift-install /usr/local/bin/
```
You can download the latest openshift-installer from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/
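You can verify that the binary is on your PATH and check which version you got with:

```
openshift-install version
```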
To set up the RHD Identity Provider, you need to register a client with RHD and, using the client's secret, create a secret in the openshift-config namespace to be used by the OAuth cluster config. To create the required secret, set the CLIENT_SECRET environment variable to the base64-encoded value of the client secret:

```
export CLIENT_SECRET=base64_encoded_client_secret
```
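For example, assuming the raw client secret is in a shell variable RAW_CLIENT_SECRET (a placeholder), a minimal way to produce the encoded value (the -w 0 flag, which disables line wrapping, is GNU base64):

```
export CLIENT_SECRET=$(echo -n "$RAW_CLIENT_SECRET" | base64 -w 0)
```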
If you want to set up a different Identity Provider, e.g. your own hosted Keycloak, you need to use the issuer URL, which is a publicly exposed route for your Keycloak:

```
export ISSUER=your_issuer_url
```
If this variable is not set, the script will use the default issuer, "https://sso.redhat.com/auth/realms/redhat-external", which points to RH SSO.
We store the host, member, and 48-hour temporary cluster configuration files under the /config directory, for which the pull secret must be set via the PULL_SECRET environment variable:

```
export PULL_SECRET='{"auths":{"cloud.openshift.com":{"auth":"HSADJDFJJLDFbhf345==","email":"[email protected]"},"quay.io":{"auth":"jkfdsjfTH78==","email":"[email protected]"},"registry.connect.redhat.com":{"auth":"jhfkjdjfjdADSDS398njdnfj==","email":"[email protected]"},"registry.redhat.io":{"auth":"jdfjfdhfADSDSFDSF67dsgh==","email":"[email protected]"}}}'
```

You can download/copy the required pull secret from https://cloud.redhat.com/openshift/install/aws/installer-provisioned.
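For example, if you saved the downloaded pull secret to a file such as ~/pull-secret.json (a hypothetical path), you can export it with:

```
export PULL_SECRET="$(cat ~/pull-secret.json)"
```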
We need to add SSH keys under the authorized keys on all the nodes created by the installer; pass your SSH public key by setting the SSH_PUBLIC_KEY environment variable:

```
export SSH_PUBLIC_KEY="ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAklOUpkDHrfHY17SbrmTIpNLTGK9Tjom/BWDSUGPl+nafzlHDTYW7hdI4yZ5ew18JH4JW9jbhUFrviQzM7xlELEVf4h9lFX5QVkbPppSwg0cda3Pbv7kOdJ/MTyBlWXFCR+HAo3FXRitBqxiX1nKhXpHAZsMciLq8V6RjsNAQwdsdMFvSlVK/7XAt3FaoJoAsncM1Q9x5+3V0Ww68/eIFmb1zuUFljQJKprrX88XypNDvjYNby6vw/Pb0rwert/EnmZ+AW4OZPnTPI89ZPmVMLuayrD2cE86Z/il8b+gw3r3+1nKatmIkjn2so1d01QraTlMqVSsbxNrRFi9wrf+M7Q== [email protected]"
```
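If you don't have a key pair yet, a minimal sketch using the default ssh-keygen paths:

```
ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa
export SSH_PUBLIC_KEY="$(cat ~/.ssh/id_rsa.pub)"
```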
A domain is required for sending messages using the Mailgun API. The value of this environment variable must be base64 encoded:

```
export MAILGUN_DOMAIN=your-mailgun-domain
```

Authentication is required in order to interact with the Mailgun API. The value of this environment variable must be base64 encoded:

```
export MAILGUN_API_KEY=your-mailgun-api-key
```

The sender's email address is required in order to interact with the Mailgun API. The value of this environment variable must be base64 encoded:

```
export MAILGUN_SENDER_EMAIL=your-mailgun-senders-email-address
```
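As with CLIENT_SECRET, each of these values must be base64 encoded; a sketch assuming the raw values are at hand (GNU base64):

```
export MAILGUN_DOMAIN=$(echo -n "your-mailgun-domain" | base64 -w 0)
export MAILGUN_API_KEY=$(echo -n "your-mailgun-api-key" | base64 -w 0)
export MAILGUN_SENDER_EMAIL=$(echo -n "your-mailgun-senders-email-address" | base64 -w 0)
```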
The URL where the registration-service is running:

```
export REGISTRATION_SERVICE_URL=registration-service-url
```

The console namespace. If the CONSOLE_NAMESPACE env var is not provided, it defaults to "openshift-console":

```
export CONSOLE_NAMESPACE=console-namespace
```

The console route name. If the CONSOLE_ROUTE_NAME env var is not provided, it defaults to "console":

```
export CONSOLE_ROUTE_NAME=console-route-name
```
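To verify that the namespace and route name match an actual route on the cluster, you can check with oc (assuming you are logged in):

```
oc get route "$CONSOLE_ROUTE_NAME" -n "$CONSOLE_NAMESPACE"
```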
To set up a hosted toolchain on multiple clusters (currently we are using 2 clusters, i.e. host and member), we need to do the following:

- Create the host and member clusters
- Set up the RHD Identity Provider
- Create admin users with the `cluster-admin` role
- Remove the `self-provisioner` cluster role from authenticated users
- Deploy the registration service and host-operator on the host cluster
- Deploy the member-operator on the member cluster
- Create/set up the ToolchainCluster resources
In order to achieve all of the above on permanent clusters, use the following:

```
./setup_toolchain.sh
```

In order to achieve all of the above on temporary 48-hour clusters, use the following:

```
./setup_toolchain.sh -d
```
NOTE: this installation is now part of the default cluster setup.
If you want to collect logs for all pods on an existing cluster, you can run the following script, which will take care of installing the Cluster Logging and Elasticsearch operators and configuring all the pieces (Fluentd, Collector, Elasticsearch, and Kibana):

```
./setup_logging.sh
```
Once all pods in the openshift-logging namespace are in the Running state, you can access the Kibana dashboard available at:

```
oc get routes/kibana -o jsonpath='https://{.spec.host}'
```
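If some pods are not yet ready, you can watch them until they all reach the Running state:

```
oc get pods -n openshift-logging -w
```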
If you want to try this setup one step at a time, you can follow these steps:

```
./setup_cluster.sh -t host
./setup_cluster.sh -t member
./setup_toolchainclusters.sh
```
If you want to override an operator's namespace, you can use the respective flags or environment variables, as in the following steps:

```
./setup_cluster.sh -t host -hs my_host_ns -mn my_member_ns
./setup_cluster.sh -t member -hs my_host_ns -mn my_member_ns
./setup_toolchainclusters.sh
MEMBER_OPERATOR_NS=my_member_ns HOST_OPERATOR_NS=my_host_ns ./setup_toolchainclusters.sh
```
Make sure you have acme.sh installed and AWS access credentials set (see this for details):
- Clone the acme.sh GitHub repository:

```
cd $HOME
git clone https://github.com/neilpang/acme.sh
cd acme.sh
```
- Update the file $HOME/acme.sh/dnsapi/dns_aws.sh with your AWS access credentials:

```
#!/usr/bin/env sh

#AWS_ACCESS_KEY_ID="YOUR ACCESS KEY"
#AWS_SECRET_ACCESS_KEY="YOUR SECRET ACCESS KEY"

#This is the Amazon Route53 api wrapper for acme.sh
[...]
```
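Alternatively, the dns_aws hook also picks up the standard AWS environment variables, so as a sketch you can export them instead of editing the file (the values below are placeholders):

```
export AWS_ACCESS_KEY_ID="YOUR ACCESS KEY"
export AWS_SECRET_ACCESS_KEY="YOUR SECRET ACCESS KEY"
```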
To issue new certificates and install them on both the host and member clusters, use the following command:

```
./install_certs.sh
```
For renewing the existing certificates:

```
./install_certs.sh --renew
```
Once the host and member clusters are set up with everything required and you have confirmed that crt-admin can log in and has the required access to cluster-scoped resources, you can remove the default kubeadmin user using the following step:

```
oc delete secret kubeadmin -n kube-system
```
Make sure to export the required AWS profile:

- If your cluster was created for 48 hrs:

```
export AWS_PROFILE=openshift-dev
```

- If your cluster is a permanent cluster:

```
export AWS_PROFILE=crt-robot
```
If the OpenShift 4 cluster was deployed by the installer and you have lost the metadata, there is no way to delete the cluster using the OpenShift installer without it. In order to destroy the cluster using the installer, you should first regenerate a metadata.json file:
```
CLUSTER_NAME=NAME
AWS_REGION=REGION
CLUSTER_UUID=$(oc get clusterversions.config.openshift.io version -o jsonpath='{.spec.clusterID}{"\n"}')
INFRA_ID=$(oc get infrastructures.config.openshift.io cluster -o jsonpath='{.status.infrastructureName}{"\n"}')
echo "{\"clusterName\":\"${CLUSTER_NAME}\",\"clusterID\":\"${CLUSTER_UUID}\",\"infraID\":\"${INFRA_ID}\",\"aws\":{\"region\":\"${AWS_REGION}\",\"identifier\":[{\"kubernetes.io/cluster/${INFRA_ID}\":\"owned\"},{\"openshiftClusterID\":\"${CLUSTER_UUID}\"}]}}" > metadata.json
```