diff --git a/.gitignore b/.gitignore index dbeea486..0c423ea7 100644 --- a/.gitignore +++ b/.gitignore @@ -5,3 +5,6 @@ aro-content/app/.DS_Store virtualenv/* site/* .npm +# prevent inadvertent publishing of credential pages +aro-content/credentials/* +!aro-content/credentials/.gitkeep \ No newline at end of file diff --git a/aro-content/100-setup/0-credentials.md b/aro-content/100-setup/0-credentials.md new file mode 100644 index 00000000..b316dd13 --- /dev/null +++ b/aro-content/100-setup/0-credentials.md @@ -0,0 +1,18 @@ +# Welcome to the MOBB ARO Workshop! +

To access your lab resources, including credentials, please enter the username provided by the facilitator.

+ + + + \ No newline at end of file diff --git a/aro-content/100-setup/1-environment.md b/aro-content/100-setup/1-environment.md index f067232d..618dc9f4 100644 --- a/aro-content/100-setup/1-environment.md +++ b/aro-content/100-setup/1-environment.md @@ -7,6 +7,10 @@ Your workshop environment consists of several components which have been pre-con Click through to the [Credentials document]({{ credentials_doc }}){:target="_blank"} and claim a username/password by putting your name next to it in the **Participant** column. {% endif %} +### Objective + +To start, let's learn about the environment you will be using for this workshop. This environment includes the resources that we have created in advance for your convenience. In addition, you will learn about the requirements needed to create an ARO cluster so you can spin up your own after completing the workshop. + To access your working environment, you'll need to log into the [Microsoft Azure portal](https://portal.azure.com){:target="_blank"}. Use your provided username with `@rhcsbb.onmicrosoft.com`. If you use the Azure portal regularly and it already knows a set of credentials for you, it's probably a good idea to use an Incognito window for this task, it should obviate the need to log out of an existing session. When prompted, you'll log in with the credentials provided by the workshop team. @@ -138,3 +142,14 @@ When your shell is ready and you are at the bash prompt, run the following comma ```bash . ~/.workshoprc ``` + +### Summary and Next Steps + +Here you learned: + +* What the Azure and Red Hat requirements are for an ARO cluster. +* How to access your environment via Azure Cloud Shell. + +Next, you will learn how to: + +* Access your ARO cluster. \ No newline at end of file diff --git a/aro-content/100-setup/3-access-cluster.md b/aro-content/100-setup/3-access-cluster.md index b03f353d..a7e0dfed 100644 --- a/aro-content/100-setup/3-access-cluster.md +++ b/aro-content/100-setup/3-access-cluster.md @@ -1,8 +1,12 @@ +## Introduction + +{% if precreated_clusters %}Now that you understand the environment setup and the resources we created in advance, let's log in to your pre-created cluster. Here you will learn how to access the cluster via the OpenShift Web Console and CLI.{% else %}Now that you've created your cluster, we need to log in to it. Here you will learn how to access the cluster via the OpenShift Web Console and CLI.{% endif %} + ## Access the OpenShift Console and CLI ### Login to the OpenShift Web Console -1. Make sure your cluster is ready +1. First, let's ensure your cluster is ready by running the following command: ```bash az aro show \ @@ -17,7 +21,7 @@ Succeeded ``` -1. First, let's configure your workshop environment with our helper variables. To do so, let's run the following command: +1. Next, let's configure your workshop environment with our helper variables. To do so, run the following command: ```bash cat << EOF >> ~/.workshoprc @@ -86,3 +90,10 @@ Now that you're logged into the cluster's console, return to your Azure Cloud Sh ``` Congratulations, you're now logged into the cluster and ready to move on to the workshop content. + +### Summary and Next Steps + +Here you learned how to: + +* Access your {% if precreated_clusters %}pre-created{% else %}newly created{% endif %} cluster. +* Log in to your cluster via the OpenShift CLI.
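Before moving on, it can be worth quickly confirming that the CLI session works. This is an optional sanity check rather than part of the official workshop steps, and it assumes you have just logged in with `oc login` as described on this page:

```bash
# Confirm which user the CLI is currently authenticated as
oc whoami

# List the cluster nodes to verify the cluster is reachable with your credentials
oc get nodes
```

If both commands return without an authentication error, you are ready to continue.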
\ No newline at end of file diff --git a/aro-content/200-ops/day2/autoscaling.md b/aro-content/200-ops/day2/autoscaling.md index 48747deb..3d6bbe02 100644 --- a/aro-content/200-ops/day2/autoscaling.md +++ b/aro-content/200-ops/day2/autoscaling.md @@ -1,6 +1,10 @@ ## Introduction -The cluster autoscaler adjusts the size of an Azure Red Hat OpenShift (ARO) cluster to meet the resource needs of the cluster. The cluster autoscaler increases the size of the cluster when there are pods that fail to schedule on any of the current worker nodes due to insufficient resources or when another node is necessary to meet deployment needs. The cluster autoscaler does not increase the cluster resources beyond the limits that you specify. To learn more about cluster autoscaling, visit the [Red Hat documentation for cluster autoscaling](https://docs.openshift.com/container-platform/latest/machine_management/applying-autoscaling.html){:target="_blank"}. +The ARO Cluster Autoscaler automatically adjusts the size of an ARO cluster based on the current workload and resource demands. This automatic, intelligent scaling leads to efficient resource utilization, improved application performance, high availability, and simplified cluster management. By dynamically adjusting the cluster size based on workload demands, it helps organizations optimize their infrastructure costs while ensuring application performance and scalability. The cluster autoscaler does not increase the cluster resources beyond the limits that you specify. + +![Diagram illustrating the cluster autoscaler process](../../assets/images/diagram-cluster-autoscaler.png){ align=center } + +To learn more about cluster autoscaling, visit the [Red Hat documentation for cluster autoscaling](https://docs.openshift.com/container-platform/latest/machine_management/applying-autoscaling.html){:target="_blank"}. ## Create a Machine Autoscaler @@ -196,3 +200,10 @@ Now let's test the cluster autoscaler and see it in action. To do so, we'll depl Congratulations! You've successfully demonstrated cluster autoscaling. + +### Summary and Next Steps + +Here you learned how to: + +* Create and configure a machine autoscaler. +* Deploy an application on the cluster and watch the cluster autoscaler scale your cluster to support the increased workload. \ No newline at end of file diff --git a/aro-content/200-ops/day2/labels.md b/aro-content/200-ops/day2/labels.md index 81168fe1..9353de3e 100644 --- a/aro-content/200-ops/day2/labels.md +++ b/aro-content/200-ops/day2/labels.md @@ -143,3 +143,10 @@ Now that we've successfully labeled our nodes, let's deploy a workload to demons The application is exposed over the default ingress using a predetermined URL and trusted TLS certificate. This is done using the OpenShift `Route` resource which is an extension to the Kubernetes `Ingress` resource. Congratulations! You've successfully demonstrated the ability to label nodes and target those nodes using `nodeSelector`. + +### Summary and Next Steps + +Here you learned how to: + +* Set labels on your MachineSets.
+* Deploy an application on nodes with certain labels. \ No newline at end of file diff --git a/aro-content/200-ops/day2/observability.md b/aro-content/200-ops/day2/observability.md index 1a8b4cc8..e337df88 100644 --- a/aro-content/200-ops/day2/observability.md +++ b/aro-content/200-ops/day2/observability.md @@ -1,101 +1,50 @@ -# Configure Metrics and Log Forwarding to Azure Files +## Introduction -OpenShift stores logs and metrics inside the cluster by default, however it also provides tooling to forward both to various locations. Here we will configure ARO -to forward logs and metrics to Azure Files and use Grafana to view them. +Azure Red Hat OpenShift (ARO) clusters store log data inside the cluster by default. Understanding metrics and logs is critical to successfully running your cluster. Included with ARO is the OpenShift Cluster Logging Operator, which is intended to simplify log management and analysis within an ARO cluster, offering centralized log collection, powerful search capabilities, visualization tools, and integration with other Azure systems like [Azure Files](https://azure.microsoft.com/en-us/products/storage/files). - - -1. Create a Storage Account +1. First, let's create our Azure Files storage account. To do so, run the following command: ```bash AZR_STORAGE_ACCOUNT_NAME="${AZ_USER}${UNIQUE}" az storage account create --name "${AZR_STORAGE_ACCOUNT_NAME}" -g "${AZ_RG}" --location "${AZ_LOCATION}" --sku Standard_LRS ``` -1. Fetch your storage account key +1. Next, let's grab our storage account key. To do so, run the following command: ```bash AZR_STORAGE_KEY=$(az storage account keys list -g "${AZ_RG}" \ -n "${AZR_STORAGE_ACCOUNT_NAME}" --query "[0].value" -o tsv) ``` -1. Create a storage bucket for logs +1. Now, let's create separate storage buckets for logs and metrics. To do so, run the following command: ```bash az storage container create --name "aro-logs" \ --account-name "${AZR_STORAGE_ACCOUNT_NAME}" \ --account-key "${AZR_STORAGE_KEY}" - ``` - -1. Create a storage bucket for metrics - - ```bash az storage container create --name "aro-metrics" \ --account-name "${AZR_STORAGE_ACCOUNT_NAME}" \ --account-key "${AZR_STORAGE_KEY}" ``` -1. Deploy ElasticSearch CRDs (not used, but needed for a [bug workaround](https://access.redhat.com/solutions/6990588)) - - ```bash - oc create -f https://raw.githubusercontent.com/openshift/elasticsearch-operator/release-5.5/bundle/manifests/logging.openshift.io_elasticsearches.yaml - ``` - -1. Set up the MOBB Helm Chart Repository +1. Next, let's add the MOBB Helm Chart repository. To do so, run the following command: ```bash helm repo add mobb https://rh-mobb.github.io/helm-charts/ helm repo update ``` -1. Create a project to deploy the Helm charts into +1. Now, we need to create a project (namespace) to deploy our logging resources to. To create that, run the following command: ```bash oc new-project custom-logging ``` -1. Create list of Operators to install +1. Next, we need to install a few operators to run our logging setup. These operators include the Red Hat Cluster Logging Operator, the Loki operator, the Grafana operator, and more. First, we'll create a list of all the operators we'll need to install by running the following command: ```yaml cat < clf-operators.yaml @@ -138,7 +87,7 @@ User Workload Metrics is a Prometheus stack that runs in the cluster that can co EOF ``` -1. Deploy the Grafana, Cluster Logging, and Loki Operator from the file just created above using Helm +1. Next, let's deploy the Grafana, Cluster Logging, and Loki operators from the file we just created above. To do so, run the following command: ```bash oc create ns openshift-logging @@ -149,7 +98,7 @@ User Workload Metrics is a Prometheus stack that runs in the cluster that can co --values ./clf-operators.yaml ``` -1. Wait for the Operators to be installed +1. Now, let's wait for the operators to be installed. !!! info "These will loop through each type of resource until the CRDs for the Operators have been deployed. Eventually you'll see the message `No resources found in custom-logging namespace.` and be returned to a prompt." ```bash @@ -160,7 +109,7 @@ User Workload Metrics is a Prometheus stack that runs in the cluster that can co while ! oc get resourcelocker; do sleep 5; echo -n .; done ``` -1. Deploy Helm Chart to deploy Grafana and forward metrics to Azure +1. Now that the operators have been successfully installed, let's use a helm chart to deploy Grafana and forward metrics to Azure Files. To do so, run the following command: ```bash helm upgrade -n "custom-logging" aro-thanos-af \ @@ -171,7 +120,7 @@ User Workload Metrics is a Prometheus stack that runs in the cluster that can co --set "enableUserWorkloadMetrics=true" ``` -1. Validate Grafana is accessible, by fetching it's Route and browsing to it with your web browser. +1. Next, let's ensure that we can access Grafana. To do so, we should fetch its route and try browsing to it with your web browser. To grab the route, run the following command: ```bash oc -n custom-logging get route grafana-route \ -o jsonpath='{"https://"}{.spec.host}{"\n"}' ``` @@ -190,7 +139,7 @@ User Workload Metrics is a Prometheus stack that runs in the cluster that can co oc -n custom-logging rollout restart deployment grafana-deployment ``` -1. Deploy Helm Chart to enable Cluster Log forwarding to Azure +1. Next, let's use another helm chart to forward logs to Azure Files. To do so, run the following command: ```bash helm upgrade -n custom-logging aro-clf-blob \ @@ -200,16 +149,13 @@ User Workload Metrics is a Prometheus stack that runs in the cluster that can co --set "azure.storageContainer=aro-logs" ``` -1. Wait for the Log Collector agent to be started +1. Once the Helm Chart deploys its resources, we need to wait for the Log Collector agent to be started. To watch its status, run the following command: ```bash oc -n openshift-logging rollout status daemonset collector ``` -1. Restart Log Collector - - !!! info - Sometimes the log collector agent starts before the operator has finished configuring Loki, restarting it here will resolve. +1. Occasionally, the log collector agent starts before the operator has finished configuring Loki. To proactively address this, we need to restart the agent. To do so, run the following command: ```bash oc -n openshift-logging rollout restart daemonset collector @@ -225,23 +171,28 @@ User Workload Metrics is a Prometheus stack that runs in the cluster that can co ## View the Metrics and Logs -Now that the Metrics and Log forwarding is set up we can view them in Grafana. +Now that metrics and logs are forwarding to Azure Files, let's view them in Grafana. -1. Fetch the Route for Grafana +1. First, we'll need to fetch the route for Grafana and visit it in our web browser. To get the route, run the following command: ```bash oc -n custom-logging get route grafana-route \ -o jsonpath='{"https://"}{.spec.host}{"\n"}' ``` -1. Browse to the provided route address in the same browser window as your OCP console and login using your OpenShift credentials (either AAD or kubeadmin). - -1. View an existing dashboard such as **custom-logging -> Node Exporter -> USE Method -> Cluster**. +1. Once you get to the Grafana interface, you'll be redirected to log in using the ARO credentials that you were given by the workshop team. Once you log in, proceed to view an existing dashboard such as **custom-logging -> Node Exporter -> USE Method -> Cluster**. - !!! info "These dashboards are copies of the dashboards that are available directly on the OpenShift web console under **Observability**" + !!! info "These dashboards are copies of the dashboards that are available directly on the OpenShift Web Console under **Observability**" ![](../Images/grafana-metrics.png) 1. Click the Explore (compass) Icon in the left hand menu, select “Loki (Application)” in the dropdown and search for `{kubernetes_namespace_name="custom-logging"}` ![](../Images/grafana-logs.png) + +### Summary and Next Steps + +Here you learned how to: + +* Configure metrics and log forwarding to Azure Files. +* View the metrics and logs in a Grafana dashboard. \ No newline at end of file diff --git a/aro-content/200-ops/day2/scaling-nodes.md b/aro-content/200-ops/day2/scaling-nodes.md index 147a6670..2fbbd113 100644 --- a/aro-content/200-ops/day2/scaling-nodes.md +++ b/aro-content/200-ops/day2/scaling-nodes.md @@ -1,9 +1,17 @@ ## Introduction -When deploying your Azure Red Hat OpenShift (ARO) cluster, you can configure many aspects of your worker nodes, but what happens when you need to change your worker nodes after they've already been created? These activities include scaling the number of nodes, changing the instance type, adding labels or taints, just to name a few. +When deploying your ARO cluster, you can configure many aspects of your worker nodes, but what happens when you need to change your worker nodes after they've already been created? These activities include scaling the number of nodes, changing the instance type, adding labels or taints, just to name a few. Many of these changes are done using MachineSets. MachineSets ensure that a specified number of Machine replicas are running at any given time. Think of a MachineSet as a "template" for the kinds of Machines that make up the worker nodes of your cluster. These are similar to other Kubernetes resources, like a ReplicaSet is to Pods. One important caveat, is that MachineSets allow users to manage many Machines as a single entity, but are contained to a specific availability zone. If you'd like to learn more, see the [Red Hat documentation on machine management](https://docs.openshift.com/container-platform/latest/machine_management/index.html){:target="_blank"}. +Here are some of the advantages of using ARO MachineSets to manage the size of your cluster: + +* Scalability - MachineSets enable horizontal scaling of your cluster. They can easily add or remove worker nodes to handle changes in workload. This flexibility ensures that your cluster can dynamically scale to meet the needs of your applications. +* Infrastructure Diversity - MachineSets allow you to provision worker nodes of different instance types. This enables you to leverage the best kind of instance family for different workloads. +* Integration with Cluster Autoscaler - MachineSets seamlessly integrate with the Cluster Autoscaler feature, which automatically adjusts the number of worker nodes based on the current demand. This integration ensures efficient resource utilization by scaling the cluster up or down as needed, optimizing costs and performance. + +![scale_machinesets](../../assets/images/scale_machinesets.png){ align=center } + ## Scaling worker nodes ### Via the CLI @@ -121,3 +129,10 @@ Now let's scale the cluster back down to a total of 3 worker nodes, but this tim ![Web Console - MachineSets Edit Count](/assets/images/web-console-machinesets-edit-count.png){ align=center } Congratulations! You've successfully scaled your cluster up and back down to three nodes. + +### Summary and Next Steps + +Here you learned how to: + +* Scale an existing MachineSet up to add more nodes to the cluster. +* Scale your MachineSet down to remove worker nodes from the cluster. diff --git a/aro-content/200-ops/day2/upgrades.md b/aro-content/200-ops/day2/upgrades.md index db4a2440..87f6d402 100644 --- a/aro-content/200-ops/day2/upgrades.md +++ b/aro-content/200-ops/day2/upgrades.md @@ -1,8 +1,8 @@ ## Introduction -Azure Red Hat OpenShift (ARO) provides fully-managed cluster updates. These updates can be triggered from inside the OpenShift Console, or scheduled in advance by utilizing the Managed Upgrade Operator. All updates are monitored and managed by the Red Hat and Microsoft ARO SRE team. +Azure Red Hat OpenShift (ARO) provides fully-managed cluster upgrades. These updates can be triggered from inside the OpenShift Web Console, or scheduled in advance by utilizing the Managed Upgrade Operator. The ARO Site Reliability Engineering (SRE) Team will monitor and manage all ARO cluster upgrades. -For more information on how OpenShift's Upgrade Service works, please see the [Red Hat documentation](https://docs.openshift.com/container-platform/4.10/updating/index.html){:target="_blank"}. +During ARO upgrades, one node is upgraded at a time. This is done to ensure that customer applications deployed in a highly available, fault-tolerant manner are not impacted during the update. ## Upgrade using the OpenShift Web Console @@ -18,15 +18,15 @@ For more information on how OpenShift's Upgrade Service works, please see the [R !!! warning "Upgrade channel is not configured by default" - By default, the [upgrade channel](https://docs.openshift.com/container-platform/4.10/updating/understanding-upgrade-channels-release.html){:target="_blank"} (which is used to recommend the appropriate release versions for cluster updates), is not set in ARO. + By default, the [upgrade channel](https://docs.openshift.com/container-platform/{{ aro_version }}/updating/understanding-upgrade-channels-release.html){:target="_blank"} (which is used to recommend the appropriate release versions for cluster updates), is not set in ARO. -1. In the *Channel* field, enter `stable-4.10` to set the upgrade channel to the stable releases of OpenShift 4.10 and click *Save*. +1. In the *Channel* field, enter `stable-{{ aro_version }}` to set the upgrade channel to the stable releases of OpenShift {{ aro_version }} and click *Save*. ![Web Console - Input Channel](/assets/images/web-console-input-channel.png){ align=center } 1. In a moment, you'll begin to see what upgrades are available for your cluster. From here, you could click the *Select a version* button and upgrade the cluster, or you could follow the instructions below to use the Managed Upgrade Operator. - !!! warning "Do not perform an upgrade at this stage" + !!! warning "Do not perform an upgrade at this stage."
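If you prefer the terminal, the same version and channel information can be inspected from the CLI. This is an optional check, not part of the original workshop flow, and it assumes you are still logged in as a cluster-admin from the earlier steps:

```bash
# Show the cluster's current version, upgrade channel, and any available updates
oc adm upgrade

# Inspect the ClusterVersion resource directly for the same information
oc get clusterversion version
```

Remember the warning above: look at the available versions, but do not trigger an upgrade yet.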
![Web Console - Available Upgrades](../../Images/aro-console-upgrade.png) @@ -58,8 +58,6 @@ Configuring the Managed Upgrade Operator for ARO ensures that your cluster funct 1. Next, let's use that information to populate a manifest for the Managed Upgrade Operator to use. To do so, run the following command: ```yaml - OCP_UPGRADE_TO=$(oc get clusterversion version -o \ - jsonpath='{.status.availableUpdates[0].version}') cat < +## Introduction -!!! warning "In order to complete these steps you need permission to create Azure AD Applications, and other Administrative level permissions. If you do not have Admin access in your Azure tenant, you may want to skip this section. If you're not sure you can proceed, if you get any errors, again, just move on to the next section." +[Azure Active Directory](https://azure.microsoft.com/en-us/products/active-directory){:target="_blank"} is a fully managed authentication, authorization, and user management service provided by Microsoft Azure. It simplifies the process of adding user sign-up, sign-in, and access control to your ARO Cluster. Integrating your ARO cluster with Azure Active Directory simplifies user authentication, provides secure access control, supports federated identity and SSO, and enables centralized user management and audit trails. -Your Azure Red Hat OpenShift (ARO) cluster has a built-in OAuth server. Developers and administrators do not really directly interact with the OAuth server itself, but instead interact with an external identity provider (such as Azure AD) which is brokered by the OAuth server in the cluster. To learn more about cluster authentication, visit the [Red Hat documentation for identity provider configuration](https://docs.openshift.com/container-platform/latest/authentication/understanding-identity-provider.html){:target="_blank"} and the [Microsoft documentation for configuring Azure Active Directory authentication for ARO](https://learn.microsoft.com/en-us/azure/openshift/configure-azure-ad-cli){:target="_blank"}. +As part of the Access Your Cluster page, we utilized a temporary cluster-admin user using the `az aro list-credentials` command. This uses a local identity provider to allow you to access the cluster. In this section of the workshop, we'll configure Azure Active Directory as the cluster identity provider in your ARO cluster. -In this section of the workshop, we'll configure Azure AD as the cluster identity provider in Azure Red Hat OpenShift. +Your ARO cluster has a built-in OAuth server. Developers and administrators do not really directly interact with the OAuth server itself, but instead interact with an external identity provider (such as Azure Active Directory) which is brokered by the OAuth server in the cluster. To learn more about cluster authentication, visit the [Red Hat documentation for identity provider configuration](https://docs.openshift.com/container-platform/latest/authentication/understanding-identity-provider.html){:target="_blank"} and the [Microsoft documentation for configuring Azure Active Directory authentication for ARO](https://learn.microsoft.com/en-us/azure/openshift/configure-azure-ad-cli){:target="_blank"}. + +The following diagram illustrates the ARO authentication process for a cluster configured with Azure Active Directory. 
![Flow chart illustrating the ARO authentication process for a cluster configured with Azure Active Directory](../../assets/images/aro_idp_aad.png){ align=center } + ## Configure our Azure AD application @@ -140,3 +145,10 @@ In this section of the workshop, we'll configure Azure AD as the cluster identit 1. Logout from your OCP Console and browse back to the Console URL (`echo $OCP_CONSOLE` if you have forgotten it) and you should see a new option to login called `AAD`. Select that, and log in using your workshop Azure credentials. !!! warning "If you do not see a new **AAD** login option, wait a few more minutes as this process can take a few minutes to deploy across the cluster and revisit the Console URL." + +### Summary and Next Steps + +Here you learned how to: + +* Configure your Azure AD application. +* Configure your cluster to use Azure AD for authentication. \ No newline at end of file diff --git a/aro-content/300-app/deploy.md b/aro-content/300-app/deploy.md index 21a69ea1..5ac9f2db 100644 --- a/aro-content/300-app/deploy.md +++ b/aro-content/300-app/deploy.md @@ -1,6 +1,6 @@ -# Introduction +## Introduction -It's time for us to put our cluster to work and deploy a workload. We're going to build an example Java application, [microsweeper](https://github.com/redhat-mw-demos/microsweeper-quarkus/tree/ARO){:target="_blank"}, using [Quarkus](https://quarkus.io/){:target="_blank"} (a Kubernetes Native Java stack) and [Azure Database for PostgreSQL](https://azure.microsoft.com/en-us/products/postgresql/){:target="_blank"}. We'll then deploy the application to our Azure Red Hat OpenShift cluster, connect to the database using Azure Private Link, and automate the deployment with OpenShift Pipelines. +It's time for us to put our cluster to work and deploy a workload! We're going to build an example Java application, [microsweeper](https://github.com/redhat-mw-demos/microsweeper-quarkus/tree/ARO){:target="_blank"}, using [Quarkus](https://quarkus.io/){:target="_blank"} (a Kubernetes-native Java stack) and [Azure Database for PostgreSQL](https://azure.microsoft.com/en-us/products/postgresql/){:target="_blank"}. After configuring the database, we will use both Quarkus, a Kubernetes-native Java framework optimized for containers, and Source-to-Image (S2I), a toolkit for building container images from source code, to deploy the microsweeper application. ## Create Azure Database for PostgreSQL instance @@ -10,8 +10,7 @@ It's time for us to put our cluster to work and deploy a workload. We're going t oc new-project microsweeper-ex ``` -1. Create the Azure Postgres Server resource. To do so, run the following command (this command will take ~ 5mins) - +1. Create the Azure Postgres Server resource. To do so, run the following command (this command will take ~5 minutes): !!! warning "For the sake of the workshop we are creating a public database that any host in Azure can connect to. In a real world scenario you would create a private database and connect to it over a private link service" @@ -261,3 +260,14 @@ Next, click on the pool that ends in 443. Notice the *Backend pool*.
This is the subnet that contains all the worker nodes. And the best part is all of this came with Azure Red Hat OpenShift out of the box! ![Azure Portal - Load Balancer Backend Pool](../assets/images/azure-portal-lb-backend-pool.png) + +### Summary and Next Steps + +Here you learned how to: + +* Create an Azure Database for PostgreSQL instance. +* Build and deploy the Microsweeper app. + +Next, you will learn how to: + +* Make your app resilient to node failure. \ No newline at end of file diff --git a/aro-content/300-app/networkpolicy.md b/aro-content/300-app/networkpolicy.md index 6a1bc779..dbc42d46 100644 --- a/aro-content/300-app/networkpolicy.md +++ b/aro-content/300-app/networkpolicy.md @@ -1,8 +1,39 @@ -# Applying Network Policies to lock down networking +## Introduction -You should have two application deployed in your cluster, the `microsweeper` application deployed in the `Deploy an App` portion of this workshop and the `bgd` app deployed in the `gitops` portion of this workshop. Each live in their own named Projects (or namespace in Kubernetes-speak). +NetworkPolicy objects are used to control communication between pods within a cluster. They provide a declarative approach to define and enforce network traffic rules, allowing you to specify the desired network behavior. By using NetworkPolicy objects, you can enhance the overall security of your applications by isolating and segmenting different components within the cluster. These policies enable fine-grained control over network access, allowing you to define ingress and egress rules based on criteria such as IP addresses, ports, and pod selectors. -1. Fetch the IP address of the `microsweeper` Pod +For this module, we will apply a NetworkPolicy to the previously created `microsweeper-ex` namespace and use the `microsweeper` app to test these policies. In addition, we will deploy two new applications to test against the `microsweeper` app. + +## Applying Network Policies + +1. First, let's create a new project for us to build a new application in. To do so, run the following command: + + ```bash + oc new-project networkpolicy-test + ``` + +1. Next, we will create a new application that will allow us to test connectivity to the microsweeper application. To do so, run the following command: + + ```bash + cat << EOF | oc apply -f - + apiVersion: v1 + kind: Pod + metadata: + name: networkpolicy-pod + namespace: networkpolicy-test + labels: + app: networkpolicy + spec: + containers: + - name: networkpolicy-pod + image: registry.access.redhat.com/ubi9/ubi-minimal + command: ["sleep", "infinity"] + securityContext: + allowPrivilegeEscalation: false + EOF + ``` + +1. Now, let's grab the IP address of the `microsweeper` pod. To do so, run the following command: ```bash MS_IP=$(oc -n microsweeper-ex get pod -l \ @@ -11,13 +42,13 @@ You should have two application deployed in your cluster, the `microsweeper` app echo $MS_IP ``` -1. Check to see if the `bgd` app can access the Pod. +1. Now, let's validate that the `networkpolicy-pod` can access the `microsweeper` pod. By default, ARO does not implement any NetworkPolicy, so we should be able to easily verify that we can access the microsweeper application. To test this, let's run the following command: ```bash - oc -n bgd exec -ti deployment/bgd -- curl $MS_IP:8080 | head + oc -n networkpolicy-test exec -ti pod/networkpolicy-pod -- curl $MS_IP:8080 | head ``` - The output should show a successful connection + The output should show a successful connection to the microsweeper application. ```{.html .no-copy} src="https://code.jquery.com/jquery-3.2.1.min.js" ``` -1. It's common to want to not allow Pods from another Project. This can be done by a fairly simple Network Policy. - - !!! info "This Network Policy will restrict Ingress to the Pods in the project `microsweeper-ex` to just the OpenShift Ingress Pods and only on port 8080." +1. While ARO doesn't block inter-project pod traffic by default, it is a common use case to prevent pods from communicating across projects (namespaces). This can be done with a fairly simple NetworkPolicy. ```yaml cat << EOF | oc apply -f - @@ -44,35 +73,40 @@ You should have two application deployed in your cluster, the `microsweeper` app name: allow-from-openshift-ingress namespace: microsweeper-ex spec: - ingress: - - from: - - namespaceSelector: - matchLabels: - network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - - Ingress + - Ingress + ingress: + - from: + - namespaceSelector: + matchLabels: + network.openshift.io/policy-group: ingress + ports: + - protocol: TCP + port: 8080 EOF ``` -1. Try to access microsweeper from the bgd pod again + !!! info "This Network Policy will restrict ingress to the pods in the project `microsweeper-ex` to only allow traffic from the OpenShift IngressController and only on port 8080." + +1. Now that we've implemented that NetworkPolicy, let's try to access the `microsweeper` pod from the `networkpolicy-pod` pod again. To do so, run the following command: ```bash - oc -n bgd exec -ti deployment/bgd -- curl $MS_IP:8080 | head + oc -n networkpolicy-test exec -ti pod/networkpolicy-pod -- curl $MS_IP:8080 | head ``` - This time it should fail to connect. Hit Ctrl-C to avoid having to wait until a timeout. + This time it should fail to connect. You can hit Ctrl + C to avoid having to wait until a timeout. - !!! info "If you have your browser still open to the microsweeper app, you can refresh and see that you can still access it." + !!! info "If you still have your browser open to the microsweeper app, you can refresh and see that you can still access it." -1. Sometimes you want your application to be accessible to other namespaces. You can allow access to just your microsweeper frontend from the `bgd` pods in the `bgd` namespace like so +1. In other cases, you may want to allow your application to be accessible to only _certain_ namespaces. In this example, let's allow access to just your `microsweeper` application from only the `networkpolicy-pod` in the `networkpolicy-test` namespace using a label selector. To do so, run the following command: ```yaml cat < - - - - - - Microsweeper -