From c298ad4588d4bf470174f401c9479eccb30eb64a Mon Sep 17 00:00:00 2001 From: "openshift-merge-bot[bot]" Date: Fri, 17 May 2024 16:53:03 +0000 Subject: [PATCH] deploy: 7e9362aef8d4df33a98b45f009d12f1e1a9888a4 --- getting-started/installation/register-a-cluster/index.html | 2 +- getting-started/integration/cluster-proxy/index.html | 4 ++-- getting-started/integration/index.xml | 2 +- index.xml | 2 +- scenarios/deploy-kubernetes-resources/index.html | 2 +- scenarios/distribute-workload-with-placement/index.html | 5 +++-- scenarios/extending-managed-clusters/index.html | 2 +- scenarios/integration-with-argocd/index.html | 2 +- scenarios/manage-cluster-with-multiple-hubs/index.html | 4 ++-- scenarios/migrate-workload-with-placement/index.html | 3 ++- 10 files changed, 15 insertions(+), 13 deletions(-) diff --git a/getting-started/installation/register-a-cluster/index.html b/getting-started/installation/register-a-cluster/index.html index 1096ddf4..0c381b9d 100644 --- a/getting-started/installation/register-a-cluster/index.html +++ b/getting-started/installation/register-a-cluster/index.html @@ -453,7 +453,7 @@

Bootstrap a klusterlet in sing

Accept the join request and verify

After the OCM agent is running on your managed cluster, it will be sending a “handshake” to your hub cluster and waiting for an approval from the hub cluster admin. In this section, we will walk -through accepting the registration requests from the prespective of an OCM’s hub admin.

+through accepting the registration requests from the perspective of an OCM hub admin.

  1. Wait for the creation of the CSR object which will be created by your managed diff --git a/getting-started/integration/cluster-proxy/index.html b/getting-started/integration/cluster-proxy/index.html index d633f456..df7f0856 100644 --- a/getting-started/integration/cluster-proxy/index.html +++ b/getting-started/integration/cluster-proxy/index.html @@ -289,7 +289,7 @@

    Cluster proxy

  2. Example code
  3. -
  4. More insight +
  5. More insights
    • Troubleshooting
    • Related materials
    • @@ -435,7 +435,7 @@

      Command-line tools

      is running. With that being said, to mount the secret to the systems in the other namespaces, the users are expected to copy the secret on their own manually.

      -

      More insight

      +

      More insights

      Troubleshooting

      The installation of proxy servers and agents are prescribed by the custom resource called “managedproxyconfiguration”. We can check it out by the diff --git a/getting-started/integration/index.xml b/getting-started/integration/index.xml index 1e4f8f5d..ae298d95 100644 --- a/getting-started/integration/index.xml +++ b/getting-started/integration/index.xml @@ -28,7 +28,7 @@ Prerequisite You must meet the following prerequisites to install the applicatio https://open-cluster-management.io/getting-started/integration/cluster-proxy/ Cluster proxy is an OCM addon providing L4 network connectivity from hub cluster to the managed clusters without any additional requirement to the managed cluster’s network infrastructure by leveraging the Kubernetes official SIG sub-project apiserver-network-proxy. - Background About apiserver-network-proxy Architecture Prerequisite Installation Usage Command-line tools Example code More insight Troubleshooting Related materials Background The original architecture of OCM allows a cluster from anywhere to be registered and managed by OCM’s control plane (i. + Background About apiserver-network-proxy Architecture Prerequisite Installation Usage Command-line tools Example code More insights Troubleshooting Related materials Background The original architecture of OCM allows a cluster from anywhere to be registered and managed by OCM’s control plane (i. diff --git a/index.xml b/index.xml index 69d8cca5..e9170e95 100644 --- a/index.xml +++ b/index.xml @@ -226,7 +226,7 @@ Prerequisite You must meet the following prerequisites to install the applicatio https://open-cluster-management.io/getting-started/integration/cluster-proxy/ Cluster proxy is an OCM addon providing L4 network connectivity from hub cluster to the managed clusters without any additional requirement to the managed cluster’s network infrastructure by leveraging the Kubernetes official SIG sub-project apiserver-network-proxy. 
- Background About apiserver-network-proxy Architecture Prerequisite Installation Usage Command-line tools Example code More insight Troubleshooting Related materials Background The original architecture of OCM allows a cluster from anywhere to be registered and managed by OCM’s control plane (i. + Background About apiserver-network-proxy Architecture Prerequisite Installation Usage Command-line tools Example code More insights Troubleshooting Related materials Background The original architecture of OCM allows a cluster from anywhere to be registered and managed by OCM’s control plane (i. diff --git a/scenarios/deploy-kubernetes-resources/index.html b/scenarios/deploy-kubernetes-resources/index.html index cc67401a..dc5eb104 100644 --- a/scenarios/deploy-kubernetes-resources/index.html +++ b/scenarios/deploy-kubernetes-resources/index.html @@ -287,7 +287,7 @@

      Prerequisites

      cluster namespace, see details in this page.

    -

    Deploy the resource to a targetd cluster

    +

    Deploy the resource to a target cluster

    Now you can deploy a set of kubernetes resources defined in files to any clusters managed by the hub cluster.

    Connect to your hub cluster and you have 2 options to create a ManifestWork:

      diff --git a/scenarios/distribute-workload-with-placement/index.html b/scenarios/distribute-workload-with-placement/index.html index 4f41de26..e6217142 100644 --- a/scenarios/distribute-workload-with-placement/index.html +++ b/scenarios/distribute-workload-with-placement/index.html @@ -275,7 +275,8 @@

      Distribute workload with placement selected managed clusters

      with the PlacementDecision to leverage its scheduling capabilities.

      For example, with OCM addon policy installed, a Policy that includes a Placement mapping can distribute the -Policy to the managed clusters, details see this example.

      +Policy to the managed clusters. +For details see this example.

      Some popular open source projects also integrate with the Placement API. For example Argo CD, it can leverage the generated PlacementDecision to drive the assignment of Argo CD Applications to a @@ -449,7 +450,7 @@

      What happens behind the scene

      selfLink: ""
    1. -

      clusteradm get the PlacementDecision generated by Placment +

clusteradm gets the PlacementDecision generated by Placement

      Labeling managed cluster

      <domain name>/<label name>: <label string value> ...

However, there’re some restrictions to the label value such as the content -length, and legal charset etc, so it’s not convenient to put some structurelized +length, and legal charset etc, so it’s not convenient to put some structured or comprehensive data in the label value.

      Additionally, due to the fact that the finest granularity of authorization mechanism in Kubernetes is “resource”, so it’s also not convenient for us diff --git a/scenarios/integration-with-argocd/index.html b/scenarios/integration-with-argocd/index.html index dfcee751..d10bc0ce 100644 --- a/scenarios/integration-with-argocd/index.html +++ b/scenarios/integration-with-argocd/index.html @@ -295,7 +295,7 @@

      How it works

      statusListKey: decisions matchKey: clusterName EOF -

      With reference to this generator, an ApplicationSet can target the application to the clusters listed in the status of a set of PlacementDecision, which belong to a certian Placement.

      +

      With reference to this generator, an ApplicationSet can target the application to the clusters listed in the status of a set of PlacementDecision, which belong to a certain Placement.

      4. Grant Argo CD permissions to access OCM resources.

      cat << EOF | kubectl apply -f -
       apiVersion: rbac.authorization.k8s.io/v1
      diff --git a/scenarios/manage-cluster-with-multiple-hubs/index.html b/scenarios/manage-cluster-with-multiple-hubs/index.html
      index 5719e147..fc7fb62f 100644
      --- a/scenarios/manage-cluster-with-multiple-hubs/index.html
      +++ b/scenarios/manage-cluster-with-multiple-hubs/index.html
      @@ -277,14 +277,14 @@ 

      Manage a cluster with multiple hubs

    2. Run the agents in the hosted mode on the hosting clusters;
    3. Run all the agents on the managed cluster

      -

      Since there are multiple OCM agents are running on the managed cluster, each of them must have an uniqe agent namespace. So only one agent can be deployed in the default agent namespace open-cluster-management-agent.

      +

Since multiple OCM agents are running on the managed cluster, each of them must have a unique agent namespace. So only one agent can be deployed in the default agent namespace open-cluster-management-agent.

      multiple hubs

      With this architecture, the managed cluster needs more resources, including CPUs and memory, to run agents for multiple hubs. And it’s a challenge to handle the version skew of the OCM hubs.

      An example built with kind and clusteradm can be found in Manage a cluster with multiple hubs.

      Run the agents in the hosted mode on the hosting clusters

      -

      By leveraging the hosted deployment mode, it’s possiable to run OCM agent outside of the managed cluster on a hosing cluster. The hosting cluster could be a managed cluster of the same hub.

      +

By leveraging the hosted deployment mode, it’s possible to run the OCM agent outside of the managed cluster on a hosting cluster. The hosting cluster could be a managed cluster of the same hub.

      multiple hubs in hosted mode
      diff --git a/scenarios/migrate-workload-with-placement/index.html b/scenarios/migrate-workload-with-placement/index.html index 70f9e03b..a812c1fc 100644 --- a/scenarios/migrate-workload-with-placement/index.html +++ b/scenarios/migrate-workload-with-placement/index.html @@ -275,7 +275,8 @@

      Migrate workload with placement

      with the PlacementDecision to leverage its scheduling capabilities.

      For example, with OCM addon policy installed, a Policy that includes a Placement mapping can distribute the -Policy to the managed clusters, details see this example.

      +Policy to the managed clusters. +For details see this example.

      Some popular open source projects also integrate with the Placement API. For example Argo CD, it can leverage the generated PlacementDecision to drive the assignment of Argo CD Applications to a