
[Draft] PR to show diff from Summit23 #85

Open · wants to merge 23 commits into base: main
23 commits
ab55466
Updating intro content for summit
michaelryanmcneill May 17, 2023
d67f3d9
Merge pull request #74 from rh-mobb/mrmc
michaelryanmcneill May 17, 2023
2a9a0cc
Cleaning up header image
michaelryanmcneill May 17, 2023
f6024f3
Cleaning up index sidebar
michaelryanmcneill May 17, 2023
9fc41e9
Merge pull request #75 from rh-mobb/mrmc
michaelryanmcneill May 17, 2023
77e6508
One more change to the homepage
michaelryanmcneill May 17, 2023
0091658
Merge pull request #76 from rh-mobb/mrmc
michaelryanmcneill May 17, 2023
fc7acb3
Update resilience.md
dirkporter May 17, 2023
316d0bd
Merge pull request #78 from rh-mobb/dirk
dirkporter May 17, 2023
1591502
adding summary and next steps
diana-sari May 18, 2023
bb12cf4
Merge pull request #79 from rh-mobb/dsari
mfdii May 18, 2023
1eb5fa0
Add files via upload
kumuduh May 18, 2023
071bc45
Added machine sets image
kumuduh May 18, 2023
83c752b
Added IDP image
kumuduh May 18, 2023
0fff496
adding objectives in each section
diana-sari May 18, 2023
50d574b
Merge pull request #81 from rh-mobb/kumuduh-update-day2-imgs
diana-sari May 19, 2023
5fff1d9
Merge pull request #82 from rh-mobb/dsari0
diana-sari May 19, 2023
9b2521f
Merge pull request #84 from rh-mobb/post-apac
mfdii May 19, 2023
6590674
Adding credentials to .gitignore
michaelryanmcneill May 22, 2023
89d5979
Adding various changes
michaelryanmcneill May 22, 2023
953a7e8
Fixing requirements
michaelryanmcneill May 22, 2023
62c04a2
Adding an if else for precreated cluster content
michaelryanmcneill May 24, 2023
32e0238
enabling create cluster, frontdoor and gitops section
sohaibazed Jul 17, 2023
3 changes: 3 additions & 0 deletions .gitignore
@@ -5,3 +5,6 @@ aro-content/app/.DS_Store
virtualenv/*
site/*
.npm
# prevent inadvertent publishing of credential pages
aro-content/credentials/*
!aro-content/credentials/.gitkeep
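This pattern pair ignores everything under `aro-content/credentials/` while keeping the placeholder `.gitkeep` tracked. A quick scratch-repo sketch (the `user1.md` file is hypothetical) shows the effect:

```shell
# Hypothetical scratch-repo demo of the two .gitignore patterns above.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
mkdir -p aro-content/credentials
printf 'aro-content/credentials/*\n!aro-content/credentials/.gitkeep\n' > .gitignore
touch aro-content/credentials/.gitkeep aro-content/credentials/user1.md

git check-ignore aro-content/credentials/user1.md                # ignored: prints the path
git check-ignore aro-content/credentials/.gitkeep || echo kept   # not ignored: prints "kept"
```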
18 changes: 18 additions & 0 deletions aro-content/100-setup/0-credentials.md
@@ -0,0 +1,18 @@
# Welcome to the MOBB ARO Workshop!
<p>To access your lab resources, including credentials, please enter the username provided by the facilitator.</p>
<input class="md-input" type="text" id="mkdocs-content-username" placeholder="Enter your username as provided by the facilitator" size="46">
<button class="md-button md-button--primary" id="mkdocs-redirect-button">Submit</button>

<script>
// register an event listener for the button. This can also be done using element.onclick
document.getElementById("mkdocs-redirect-button").addEventListener("click", () => {
// get the username from the input field
let user = document.getElementById("mkdocs-content-username").value;

// construct the url to redirect to using a template string
let url = location.protocol + '//' + location.host + '/credentials/' + user;

// redirect the user to the new location
window.open(url);
});
</script>
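One hedged refinement (an assumption on my part, not part of the page's script): percent-encode the user-supplied value before building the redirect URL, so characters like spaces or slashes cannot alter the path. A minimal sketch:

```javascript
// Sketch: build the redirect URL with the username percent-encoded.
// The function name is hypothetical; the page's script inlines this logic.
function credentialsUrl(base, user) {
  return base + '/credentials/' + encodeURIComponent(user);
}

console.log(credentialsUrl('https://example.com', 'user 1'));
// → https://example.com/credentials/user%201
```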
15 changes: 15 additions & 0 deletions aro-content/100-setup/1-environment.md
@@ -7,6 +7,10 @@ Your workshop environment consists of several components which have been pre-con
Click through to the [Credentials document]({{ credentials_doc }}){:target="_blank"} and claim a username/password by putting your name next to it in the **Participant** column.
{% endif %}

### Objective

To start, let's learn about the environment you'll be using for this workshop. This environment includes resources that we have created in advance for your convenience. In addition, you will learn about the requirements for creating an ARO cluster so you can spin up your own after completing the workshop.

To access your working environment, you'll need to log in to the [Microsoft Azure portal](https://portal.azure.com){:target="_blank"}. Use your provided username with `@rhcsbb.onmicrosoft.com`. If you use the Azure portal regularly and it already knows a set of credentials for you, it's a good idea to use an Incognito window for this task; that way you won't need to log out of your existing session.

When prompted, you'll log in with the credentials provided by the workshop team.
@@ -138,3 +142,14 @@ When your shell is ready and you are at the bash prompt, run the following comma
```bash
. ~/.workshoprc
```
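For context, the `.` (dot) command runs the file in the current shell rather than a subshell, which is why the variables it exports persist in your session. A minimal, self-contained illustration (file and variable names are hypothetical):

```shell
# Hypothetical demo: sourcing a file with "." runs it in the *current*
# shell, so the variables it exports persist afterwards.
cat > /tmp/demo-workshoprc <<'EOF'
export DEMO_VAR="hello"
EOF

. /tmp/demo-workshoprc
echo "$DEMO_VAR"   # prints: hello
```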

### Summary and Next Steps

Here you learned:

* What the Azure and Red Hat requirements are for an ARO cluster.
* How to access your environment via Azure Cloud Shell.

Next, you will learn how to:

* Access your ARO cluster.
15 changes: 13 additions & 2 deletions aro-content/100-setup/3-access-cluster.md
@@ -1,8 +1,12 @@
## Introduction

{% if precreated_clusters %}Now that you understand the environment setup and the resources we created in advance, let's log in to your pre-created cluster. Here you will learn how to access the cluster via the OpenShift Web Console and CLI.{% else %}Now that you've created your cluster, we need to log in to it. Here you will learn how to access the cluster via the OpenShift Web Console and CLI.{% endif %}

## Access the OpenShift Console and CLI

### Login to the OpenShift Web Console

1. Make sure your cluster is ready
1. First, let's ensure your cluster is ready by running the following command:

```bash
az aro show \
@@ -17,7 +21,7 @@
Succeeded
```

1. First, let's configure your workshop environment with our helper variables. To do so, let's run the following command:
1. Next, let's configure your workshop environment with our helper variables. To do so, let's run the following command:

```bash
cat << EOF >> ~/.workshoprc
@@ -86,3 +90,10 @@ Now that you're logged into the cluster's console, return to your Azure Cloud Sh
```

Congratulations, you're now logged into the cluster and ready to move on to the workshop content.

### Summary and Next Steps

Here you learned how to:

* Access your {% if precreated_clusters %}pre-created{% else %}newly created{% endif %} cluster.
* Log in to your cluster via the OpenShift CLI.
13 changes: 12 additions & 1 deletion aro-content/200-ops/day2/autoscaling.md
@@ -1,6 +1,10 @@
## Introduction

The cluster autoscaler adjusts the size of an Azure Red Hat OpenShift (ARO) cluster to meet the resource needs of the cluster. The cluster autoscaler increases the size of the cluster when there are pods that fail to schedule on any of the current worker nodes due to insufficient resources or when another node is necessary to meet deployment needs. The cluster autoscaler does not increase the cluster resources beyond the limits that you specify. To learn more about cluster autoscaling, visit the [Red Hat documentation for cluster autoscaling](https://docs.openshift.com/container-platform/latest/machine_management/applying-autoscaling.html){:target="_blank"}.
ARO Cluster Autoscaler is a feature that helps automatically adjust the size of an ARO cluster based on the current workload and resource demands. Cluster Autoscaler offers automatic and intelligent scaling of ARO clusters, leading to efficient resource utilization, improved application performance, high availability, and simplified cluster management. By dynamically adjusting the cluster size based on workload demands, it helps organizations optimize their infrastructure costs while ensuring optimal application performance and scalability. The cluster autoscaler does not increase the cluster resources beyond the limits that you specify.

![Diagram illustrating the cluster autoscaler process](../../assets/images/diagram-cluster-autoscaler.png){ align=center }

To learn more about cluster autoscaling, visit the [Red Hat documentation for cluster autoscaling](https://docs.openshift.com/container-platform/latest/machine_management/applying-autoscaling.html){:target="_blank"}.
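For reference, cluster-wide scaling limits live in the `ClusterAutoscaler` resource. A minimal sketch (the values shown are illustrative, not this workshop's configuration):

```yaml
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: "default"
spec:
  resourceLimits:
    maxNodesTotal: 10   # illustrative cap; the autoscaler never grows past this
  scaleDown:
    enabled: true
```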

## Create a Machine Autoscaler

@@ -196,3 +200,10 @@ Now let's test the cluster autoscaler and see it in action. To do so, we'll depl


Congratulations! You've successfully demonstrated cluster autoscaling.

### Summary and Next Steps

Here you learned how to:

* Create and configure a machine autoscaler.
* Deploy an application on the cluster and watch the cluster autoscaler scale your cluster to support the increased workload.
7 changes: 7 additions & 0 deletions aro-content/200-ops/day2/labels.md
@@ -143,3 +143,10 @@ Now that we've successfully labeled our nodes, let's deploy a workload to demons
The application is exposed over the default ingress using a predetermined URL and trusted TLS certificate. This is done using the OpenShift `Route` resource which is an extension to the Kubernetes `Ingress` resource.

Congratulations! You've successfully demonstrated the ability to label nodes and target those nodes using `nodeSelector`.

### Summary and Next Steps

Here you learned how to:

* Set labels on your MachineSets.
* Deploy an application on nodes with specific labels.
105 changes: 28 additions & 77 deletions aro-content/200-ops/day2/observability.md
@@ -1,101 +1,50 @@
# Configure Metrics and Log Forwarding to Azure Files
## Introduction

OpenShift stores logs and metrics inside the cluster by default, however it also provides tooling to forward both to various locations. Here we will configure ARO
to forward logs and metrics to Azure Files and use Grafana to view them.
Azure Red Hat OpenShift (ARO) clusters store log data inside the cluster by default. Understanding metrics and logs is critical to successfully running your cluster. Included with ARO is the OpenShift Cluster Logging Operator, which is intended to simplify log management and analysis within an ARO cluster, offering centralized log collection, powerful search capabilities, visualization tools, and integration with other Azure systems like [Azure Files](https://azure.microsoft.com/en-us/products/storage/files).

<!--
## Configure User Workload Metrics
In this section of the workshop, we'll configure ARO to forward logs and metrics to Azure Files and view them using Grafana.

User Workload Metrics is a Prometheus stack that runs in the cluster that can collect metrics from your applications.
## Configure Metrics and Log Forwarding to Azure Files

1. Enable user workload metrics

```bash
cat << EOF | oc apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
name: cluster-monitoring-config
namespace: openshift-monitoring
data:
config.yaml: |
enableUserWorkload: true
alertmanagerMain: {}
prometheusK8s: {}
EOF
```

1. Watch as the user workload prometheus is created

```bash
oc -n openshift-user-workload-monitoring get pods --watch
```

Once the output looks like this you can run `CTRL-C` and move on.

```{.text .no-copy}
NAME READY STATUS RESTARTS AGE
prometheus-operator-58768d7cc-hp796 2/2 Running 0 47s
prometheus-user-workload-0 6/6 Running 0 45s
prometheus-user-workload-1 6/6 Running 0 45s
thanos-ruler-user-workload-0 3/3 Running 0 40s
thanos-ruler-user-workload-1 3/3 Running 0 40s
```


## Configure Cluster Log Forwarding to Azure Files
-->

1. Create a Storage Account
1. First, let's create our Azure Files storage account. To do so, run the following command:

```bash
AZR_STORAGE_ACCOUNT_NAME="${AZ_USER}${UNIQUE}"
az storage account create --name "${AZR_STORAGE_ACCOUNT_NAME}" -g "${AZ_RG}" --location "${AZ_LOCATION}" --sku Standard_LRS
```

1. Fetch your storage account key
1. Next, let's grab our storage account key. To do so, run the following command:

```bash
AZR_STORAGE_KEY=$(az storage account keys list -g "${AZ_RG}" \
-n "${AZR_STORAGE_ACCOUNT_NAME}" --query "[0].value" -o tsv)
```

1. Create a storage bucket for logs
1. Now, let's create a separate storage bucket for logs and metrics. To do so, run the following command:

```bash
az storage container create --name "aro-logs" \
--account-name "${AZR_STORAGE_ACCOUNT_NAME}" \
--account-key "${AZR_STORAGE_KEY}"
```

1. Create a storage bucket for metrics

```bash
az storage container create --name "aro-metrics" \
--account-name "${AZR_STORAGE_ACCOUNT_NAME}" \
--account-key "${AZR_STORAGE_KEY}"
```

1. Deploy ElasticSearch CRDs (not used, but needed for a [bug workaround](https://access.redhat.com/solutions/6990588))

```bash
oc create -f https://raw.githubusercontent.com/openshift/elasticsearch-operator/release-5.5/bundle/manifests/logging.openshift.io_elasticsearches.yaml
```

1. Set up the MOBB Helm Chart Repository
1. Next, let's add the MOBB Helm Chart repository. To do so, run the following command:

```bash
helm repo add mobb https://rh-mobb.github.io/helm-charts/
helm repo update
```

1. Create a project to deploy the Helm charts into
1. Now, we need to create a project (namespace) to deploy our logging resources to. To create that, run the following command:

```bash
oc new-project custom-logging
```

1. Create list of Operators to install
1. Next, we need to install a few operators to run our logging setup. These operators include the Red Hat Cluster Logging Operator, the Loki operator, the Grafana operator, and more. First, we'll create a list of all the operators we'll need to install by running the following command:

```yaml
cat <<EOF > clf-operators.yaml
@@ -138,7 +87,7 @@ User Workload Metrics is a Prometheus stack that runs in the cluster that can co
EOF
```

1. Deploy the Grafana, Cluster Logging, and Loki Operator from the file just created above using Helm
1. Next, let's deploy the Grafana, Cluster Logging, and Loki operators from the file we just created above. To do so, run the following command:

```bash
oc create ns openshift-logging
@@ -149,7 +98,7 @@ User Workload Metrics is a Prometheus stack that runs in the cluster that can co
--values ./clf-operators.yaml
```

1. Wait for the Operators to be installed
1. Now, let's wait for the operators to be installed.

!!! info "These will loop through each type of resource until the CRDs for the Operators have been deployed. Eventually you'll see the message `No resources found in custom-logging namespace.` and be returned to a prompt."

@@ -160,7 +109,7 @@ User Workload Metrics is a Prometheus stack that runs in the cluster that can co
while ! oc get resourcelocker; do sleep 5; echo -n .; done
```

1. Deploy Helm Chart to deploy Grafana and forward metrics to Azure
1. Now that the operators have been successfully installed, let's use a helm chart to deploy Grafana and forward metrics to Azure Files. To do so, run the following command:

```bash
helm upgrade -n "custom-logging" aro-thanos-af \
Expand All @@ -171,7 +120,7 @@ User Workload Metrics is a Prometheus stack that runs in the cluster that can co
--set "enableUserWorkloadMetrics=true"
```

1. Validate Grafana is accessible, by fetching it's Route and browsing to it with your web browser.
1. Next, let's ensure that we can access Grafana. To do so, fetch its route and browse to it in your web browser. To grab the route, run the following command:

```bash
oc -n custom-logging get route grafana-route \
@@ -190,7 +139,7 @@ User Workload Metrics is a Prometheus stack that runs in the cluster that can co
oc -n custom-logging rollout restart deployment grafana-deployment
```

1. Deploy Helm Chart to enable Cluster Log forwarding to Azure
1. Next, let's use another helm chart to forward logs to Azure Files. To do so, run the following command:

```bash
helm upgrade -n custom-logging aro-clf-blob \
@@ -200,16 +149,13 @@ User Workload Metrics is a Prometheus stack that runs in the cluster that can co
--set "azure.storageContainer=aro-logs"
```

1. Wait for the Log Collector agent to be started
1. Once the Helm Chart deploys its resource, we need to wait for the Log Collector agent to be started. To watch its status, run the following command:

```bash
oc -n openshift-logging rollout status daemonset collector
```

1. Restart Log Collector

!!! info
Sometimes the log collector agent starts before the operator has finished configuring Loki, restarting it here will resolve.
1. Occasionally, the log collector agent starts before the operator has finished configuring Loki. To proactively address this, we need to restart the agent. To do so, run the following command:

```bash
oc -n openshift-logging rollout restart daemonset collector
@@ -225,23 +171,28 @@ User Workload Metrics is a Prometheus stack that runs in the cluster that can co

## View the Metrics and Logs

Now that the Metrics and Log forwarding is set up we can view them in Grafana.
Now that metrics and logs are forwarding to Azure Files, let's view them in Grafana.

1. Fetch the Route for Grafana
1. First, we'll need to fetch the route for Grafana and visit it in our web browser. To get the route, run the following command:

```bash
oc -n custom-logging get route grafana-route \
-o jsonpath='{"https://"}{.spec.host}{"\n"}'
```

1. Browse to the provided route address in the same browser window as your OCP console and login using your OpenShift credentials (either AAD or kubeadmin).

1. View an existing dashboard such as **custom-logging -> Node Exporter -> USE Method -> Cluster**.
1. Once you get to the Grafana interface, you'll be redirected to log in using the ARO credentials that you were given by the workshop team. Once you log in, proceed to view an existing dashboard such as **custom-logging -> Node Exporter -> USE Method -> Cluster**.

!!! info "These dashboards are copies of the dashboards that are available directly on the OpenShift web console under **Observability**"
!!! info "These dashboards are copies of the dashboards that are available directly on the OpenShift Web Console under **Observability**"

![](../Images/grafana-metrics.png)

1. Click the Explore (compass) Icon in the left hand menu, select “Loki (Application)” in the dropdown and search for `{kubernetes_namespace_name="custom-logging"}`

![](../Images/grafana-logs.png)

### Summary and Next Steps

Here you learned how to:

* Configure metrics and log forwarding to Azure Files.
* View the metrics and logs in a Grafana dashboard.
17 changes: 16 additions & 1 deletion aro-content/200-ops/day2/scaling-nodes.md
@@ -1,9 +1,17 @@
## Introduction

When deploying your Azure Red Hat OpenShift (ARO) cluster, you can configure many aspects of your worker nodes, but what happens when you need to change your worker nodes after they've already been created? These activities include scaling the number of nodes, changing the instance type, adding labels or taints, just to name a few.
When deploying your ARO cluster, you can configure many aspects of your worker nodes, but what happens when you need to change your worker nodes after they've already been created? These activities include scaling the number of nodes, changing the instance type, adding labels or taints, just to name a few.

Many of these changes are done using MachineSets. MachineSets ensure that a specified number of Machine replicas are running at any given time. Think of a MachineSet as a "template" for the kinds of Machines that make up the worker nodes of your cluster. These are similar to other Kubernetes resources, like a ReplicaSet is to Pods. One important caveat, is that MachineSets allow users to manage many Machines as a single entity, but are contained to a specific availability zone. If you'd like to learn more, see the [Red Hat documentation on machine management](https://docs.openshift.com/container-platform/latest/machine_management/index.html){:target="_blank"}.

Here are some of the advantages of using ARO MachineSets to manage the size of your cluster:

* Scalability - MachineSets enable horizontal scaling of your cluster. They can easily add or remove worker nodes to handle changes in workload. This flexibility ensures that your cluster can dynamically scale to meet the needs of your applications.
* Infrastructure Diversity - MachineSets allow you to provision worker nodes of different instance types. This enables you to leverage the best instance family for different workloads.
* Integration with Cluster Autoscaler - MachineSets seamlessly integrate with the Cluster Autoscaler feature, which automatically adjusts the number of worker nodes based on the current demand. This integration ensures efficient resource utilization by scaling the cluster up or down as needed, optimizing costs and performance.

![scale_machinesets](../../assets/images/scale_machinesets.png){ align=center }
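As a hedged sketch of the resource involved (the name below is hypothetical, and a real MachineSet also carries a selector and a machine template), scaling comes down to changing `spec.replicas`:

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: mycluster-worker-eastus1   # hypothetical name
  namespace: openshift-machine-api
spec:
  replicas: 3   # scaling the MachineSet means changing this number
```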

## Scaling worker nodes
### Via the CLI

@@ -121,3 +129,10 @@ Now let's scale the cluster back down to a total of 3 worker nodes, but this tim
![Web Console - MachineSets Edit Count](/assets/images/web-console-machinesets-edit-count.png){ align=center }

Congratulations! You've successfully scaled your cluster up and back down to three nodes.

### Summary and Next Steps

Here you learned how to:

* Scale an existing MachineSet up to add more nodes to the cluster.
* Scale your MachineSet down to remove worker nodes from the cluster.