diff --git a/docs/README.md b/docs/README.md
index 4121249c..f6c9c508 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -12,6 +12,9 @@ The **Orchestrator** project facilitates the automation of complex workflows acr
 - **postgresql/**
   - Documentation for installing PostgreSQL Server
 
+- **main/**
+  - Installation instructions and configuration details specific to the main branch of the Orchestrator project.
+
 - **release-x.y/**
   - Installation instructions, and configuration details specific to version x.y of the Orchestrator project.
diff --git a/docs/main/README.md b/docs/main/README.md
new file mode 100644
index 00000000..d37b60bf
--- /dev/null
+++ b/docs/main/README.md
@@ -0,0 +1,272 @@
+# Orchestrator Documentation
+
+For comprehensive documentation on the Orchestrator, please visit [https://www.parodos.dev](https://www.parodos.dev).
+
+## Installing the Orchestrator Helm Operator
+
+Deploy the Orchestrator solution suite in an OCP cluster using the Orchestrator operator.\
+The chart installs the following components onto the target OpenShift cluster:
+
+- RHDH (Red Hat Developer Hub) Backstage
+- OpenShift Serverless Logic Operator (with Data-Index and Job Service)
+- OpenShift Serverless Operator
+  - Knative Eventing
+  - Knative Serving
+- (Optional) An ArgoCD project named `orchestrator`. Requires a pre-installed ArgoCD/OpenShift GitOps instance in the cluster. Disabled by default.
+- (Optional) Tekton tasks and a build pipeline. Requires a pre-installed Tekton/OpenShift Pipelines instance in the cluster. Disabled by default.
+
+## Important Note for ARM64 Architecture Users
+
+Note that as of November 6, 2023, the OpenShift Serverless Operator is based on RHEL 8 images, which are not supported on the ARM64 architecture. Consequently, deployment of this helm chart on an [OpenShift Local](https://www.redhat.com/sysadmin/install-openshift-local) cluster on MacBook laptops with M1/M2 chips is not supported.
+
+## Prerequisites
+
+- Logged in to a Red Hat OpenShift Container Platform (version 4.13+) cluster as a cluster administrator.
+- [OpenShift CLI (oc)](https://docs.openshift.com/container-platform/4.13/cli_reference/openshift_cli/getting-started-cli.html) is installed.
+- [Operator Lifecycle Manager (OLM)](https://olm.operatorframework.io/docs/getting-started/) has been installed in your cluster.
+- Your cluster has a [default storage class](https://docs.openshift.com/container-platform/4.13/storage/container_storage_interface/persistent-storage-csi-sc-manage.html) provisioned.
+- [Helm](https://helm.sh/docs/intro/install/) v3.9+ is installed.
+- A GitHub API token - to import items into the catalog, ensure you have a `GITHUB_TOKEN` with the necessary permissions as detailed [here](https://backstage.io/docs/integrations/github/locations/).
+  - For a classic token, include the following permissions:
+    - repo (all)
+    - admin:org (read:org)
+    - user (read:user, user:email)
+    - workflow (all) - required for using the software templates for creating workflows in GitHub
+  - For a fine-grained token:
+    - Repository permissions: **Read** access to metadata, **Read** and **Write** access to actions, actions variables, administration, code, codespaces, commit statuses, environments, issues, pull requests, repository hooks, secrets, security events, and workflows.
+    - Organization permissions: **Read** access to members, **Read** and **Write** access to organization administration, organization hooks, organization projects, and organization secrets.
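+
+Before proceeding, it can help to spot-check these prerequisites from a terminal. The commands below are a minimal sketch using standard `oc` and `helm` calls:
+
+```console
+# Confirm you are logged in as a cluster administrator on a 4.13+ cluster
+oc whoami
+oc version
+
+# Confirm a default storage class exists (look for "(default)" in the output)
+oc get storageclass
+
+# Confirm the Helm client is v3.9+
+helm version --short
+```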
+
+### Deployment with GitOps
+
+  If you plan to deploy in a GitOps environment, make sure you have installed the `ArgoCD/Red Hat OpenShift GitOps` and the `Tekton/Red Hat OpenShift Pipelines` operators following these [instructions](https://github.com/parodos-dev/orchestrator-helm-operator/blob/main/docs/gitops/README.md).
+  The Orchestrator installs RHDH and imports software templates designed for bootstrapping workflow development. These templates are crafted to ease the development lifecycle, including a Tekton pipeline to build workflow images and to generate workflow K8s custom resources. Furthermore, ArgoCD is utilized to monitor any changes made to the workflow repository and to automatically trigger the Tekton pipelines as needed.
+
+- `ArgoCD/OpenShift GitOps` operator
+  - Ensure at least one instance of `ArgoCD` exists in the designated namespace (referenced by the `ARGOCD_NAMESPACE` environment variable). Example [here](https://raw.githubusercontent.com/parodos-dev/orchestrator-helm-operator/main/docs/gitops/resources/argocd-example.yaml)
+  - Validated API is `argoproj.io/v1alpha1/AppProject`
+- `Tekton/OpenShift Pipelines` operator
+  - Validated APIs are `tekton.dev/v1beta1/Task` and `tekton.dev/v1/Pipeline`
+  - Requires ArgoCD to be installed, since the manifests are deployed in the same namespace as the ArgoCD instance.
+
+  Remember to enable [argocd](https://github.com/parodos-dev/orchestrator-helm-operator/blob/c577e95e063e2bf8119b2b23890df04792f9424c/config/crd/bases/rhdh.redhat.com_orchestrators.yaml#L451) and [tekton](https://github.com/parodos-dev/orchestrator-helm-operator/blob/c577e95e063e2bf8119b2b23890df04792f9424c/config/crd/bases/rhdh.redhat.com_orchestrators.yaml#L443) in your CR instance.
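+
+For illustration, enabling both features in the Orchestrator CR could look like the sketch below. The field layout follows the chart values referenced above, but verify it against the CRD for your operator version:
+
+```yaml
+apiVersion: rhdh.redhat.com/v1alpha1
+kind: Orchestrator
+metadata:
+  name: orchestrator-sample
+spec:
+  argocd:
+    enabled: true
+    namespace: orchestrator-gitops  # assumption: the namespace hosting your ArgoCD instance ($ARGOCD_NAMESPACE)
+  tekton:
+    enabled: true
+```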
+
+## Installation
+
+1. Deploy the PostgreSQL reference implementation for persistence support in SonataFlow following these [instructions](https://github.com/parodos-dev/orchestrator-helm-operator/blob/main/docs/postgresql/README.md)
+
+1. Create a namespace for the Orchestrator solution:
+
+   ```console
+   oc new-project orchestrator
+   ```
+
+1. Create a namespace for the Red Hat Developer Hub Operator (RHDH Operator):
+
+   ```console
+   oc new-project rhdh-operator
+   ```
+
+1. Download the setup script from the GitHub repository and run it to create the RHDH secret and label the GitOps namespaces:
+
+   ```console
+   wget https://raw.githubusercontent.com/parodos-dev/orchestrator-helm-operator/main/hack/setup.sh -O /tmp/setup.sh && chmod u+x /tmp/setup.sh
+   ```
+
+   Run the script:
+   ```console
+   /tmp/setup.sh --use-default
+   ```
+   **NOTE:** If you don't want to use the default values, omit `--use-default` and the script will prompt you for input.
+
+   The contents will vary depending on the configuration of the cluster. The following list details all the keys that can appear in the secret:
+
+   > - `BACKEND_SECRET`: Value is randomly generated at script execution. This is the only mandatory key required to be in the secret for the RHDH Operator to start.
+   > - `K8S_CLUSTER_URL`: The URL of the Kubernetes cluster, obtained dynamically using `oc whoami --show-server`.
+   > - `K8S_CLUSTER_TOKEN`: The value is obtained dynamically based on the provided namespace and service account.
+   > - `GITHUB_TOKEN`: This value is prompted from the user during script execution and is not predefined.
+   > - `GITHUB_CLIENT_ID` and `GITHUB_CLIENT_SECRET`: The values of both fields are used to authenticate against GitHub. For more information, see this [link](https://backstage.io/docs/auth/github/provider/).
+   > - `ARGOCD_URL`: This value is dynamically obtained based on the first ArgoCD instance available.
+   > - `ARGOCD_USERNAME`: Default value is set to `admin`.
+   > - `ARGOCD_PASSWORD`: This value is dynamically obtained based on the first ArgoCD instance available.
+
+   Keys will not be added to the secret if they have no associated values. For instance, when deploying in a cluster without the GitOps operators, the `ARGOCD_URL`, `ARGOCD_USERNAME` and `ARGOCD_PASSWORD` keys will be omitted from the secret.
+
+   Sample of a secret created in a GitOps environment:
+
+   ```console
+   $> oc get secret -n rhdh-operator -o yaml backstage-backend-auth-secret
+   apiVersion: v1
+   data:
+     ARGOCD_PASSWORD: ...
+     ARGOCD_URL: ...
+     ARGOCD_USERNAME: ...
+     BACKEND_SECRET: ...
+     GITHUB_TOKEN: ...
+     K8S_CLUSTER_TOKEN: ...
+     K8S_CLUSTER_URL: ...
+   kind: Secret
+   metadata:
+     creationTimestamp: "2024-05-07T22:22:59Z"
+     name: backstage-backend-auth-secret
+     namespace: rhdh-operator
+     resourceVersion: "4402773"
+     uid: 2042e741-346e-4f0e-9d15-1b5492bb9916
+   type: Opaque
+   ```
+
+1. Use the following manifest to install the operator in an OCP cluster:
+
+   ```yaml
+   apiVersion: operators.coreos.com/v1alpha1
+   kind: Subscription
+   metadata:
+     name: orchestrator-operator
+     namespace: openshift-operators
+   spec:
+     channel: alpha
+     installPlanApproval: Automatic
+     name: orchestrator-operator
+     source: redhat-operators
+     sourceNamespace: openshift-marketplace
+   ```
+
+1. Run the following commands to determine when the installation is complete:
+
+   ```console
+   wget https://raw.githubusercontent.com/parodos-dev/orchestrator-helm-operator/main/hack/wait_for_operator_installed.sh -O /tmp/wait_for_operator_installed.sh && chmod u+x /tmp/wait_for_operator_installed.sh
+   /tmp/wait_for_operator_installed.sh
+   ```
+
+   During the installation process, Kubernetes cronjobs are created by the chart to monitor the lifecycle of the CRs managed by the chart: the RHDH operator, the Serverless operator, and the SonataFlow operator. When deleting one of those CRs, a job is triggered that ensures the CR is removed before the operator is.
+   In case of any failure at this stage, these jobs remain active, facilitating administrators in retrieving detailed diagnostic information to identify and address the cause of the failure.
+
+   > **Note:** Every minute, a job is triggered to reconcile the CRs with the chart values. These cronjobs are deleted when their respective features (e.g. `rhdhOperator.enabled=false`) are disabled or when the chart is removed. This is required because the CRs are not managed by Helm: the CRDs must be available before the CRs can be deployed.
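+
+Once the script reports success, one quick way to double-check the result is to inspect the operator's CSV phase (it should reach `Succeeded`):
+
+```console
+oc get csv -n openshift-operators | grep orchestrator-operator
+```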
+
+## Additional information
+
+### Additional Workflow Namespaces
+
+When deploying a workflow in a namespace different from the one where SonataFlow services are running (e.g., sonataflow-infra), several essential steps must be followed (an illustrative network policy is sketched after this list):
+
+1. **Label the Workflow Namespace:**
+   To allow SonataFlow services to accept traffic from workflows, apply the following label to the desired workflow namespace:
+   ```console
+   oc label ns $ADDITIONAL_NAMESPACE rhdh.redhat.com/workflow-namespace=""
+   ```
+2. **Identify the RHDH Namespace:**
+   Retrieve the namespace where RHDH is running by executing:
+   ```console
+   oc get backstage -A
+   ```
+   Store the namespace value in `RHDH_NAMESPACE`.
+3. **Identify the SonataFlow Services Namespace:**
+   Check the namespace where SonataFlow services are deployed:
+   ```console
+   oc get sonataflowclusterplatform -A
+   ```
+   If there is no cluster platform, check for a namespace-specific platform:
+   ```console
+   oc get sonataflowplatform -A
+   ```
+   Store the namespace value in `SONATAFLOW_PLATFORM_NAMESPACE`.
+4. **Set Up Network Policy:**
+   Configure a network policy to allow traffic only between RHDH, SonataFlow services, and the workflows. The policy can be derived from the `charts/orchestrator/templates/network-policy.yaml` file:
+   ```console
+   oc create -n $ADDITIONAL_NAMESPACE -f <your network-policy.yaml>
+   ```
+5. **Ensure the Workflow Can Access PostgreSQL:**
+   * **Copy the PostgreSQL Secret:**
+     Make the database credentials available in the workflow namespace:
+     ```console
+     oc get secret -n sonataflow-infra sonataflow-psql-postgresql -o yaml > secret.yaml
+     sed -i '/namespace: sonataflow-infra/d' secret.yaml
+     oc apply -f secret.yaml -n $ADDITIONAL_NAMESPACE
+     ```
+   * **Configure the Namespace Attribute:**
+     Add the namespace attribute under the `serviceRef` property to point to the namespace where the PostgreSQL server is deployed.
+     ```yaml
+     apiVersion: sonataflow.org/v1alpha08
+     kind: SonataFlow
+     ...
+     spec:
+       ...
+       persistence:
+         postgresql:
+           secretRef:
+             name: sonataflow-psql-postgresql
+             passwordKey: postgres-password
+             userKey: postgres-username
+           serviceRef:
+             databaseName: sonataflow
+             databaseSchema: greeting
+             name: sonataflow-psql-postgresql
+             namespace: $POSTGRESQL_NAMESPACE
+             port: 5432
+     ```
+     Replace `$POSTGRESQL_NAMESPACE` with the namespace where the PostgreSQL server is deployed.
+
+By following these steps, the workflow will have the necessary credentials to access PostgreSQL and will correctly reference the service in a different namespace.
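+
+As an illustration of the network policy in step 4, a minimal sketch is shown below; treat the chart's `network-policy.yaml` template as the authoritative version, and substitute the namespace values you collected above:
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: allow-rhdh-and-sonataflow  # illustrative name
+  namespace: $ADDITIONAL_NAMESPACE
+spec:
+  podSelector: {}
+  ingress:
+    - from:
+        # allow traffic from the RHDH namespace
+        - namespaceSelector:
+            matchLabels:
+              kubernetes.io/metadata.name: $RHDH_NAMESPACE
+        # allow traffic from the SonataFlow services namespace
+        - namespaceSelector:
+            matchLabels:
+              kubernetes.io/metadata.name: $SONATAFLOW_PLATFORM_NAMESPACE
+```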
+
+### GitOps environment
+
+See the dedicated [document](https://github.com/parodos-dev/orchestrator-helm-operator/blob/main/docs/gitops/README.md)
+
+### Deploying PostgreSQL reference implementation
+
+See [here](https://github.com/parodos-dev/orchestrator-helm-operator/blob/main/docs/postgresql/README.md)
+
+### ArgoCD and workflow namespace
+
+If you manually created the workflow namespaces (e.g., `$WORKFLOW_NAMESPACE`), run this command to add the required label that allows ArgoCD to deploy instances there:
+
+```console
+oc label ns $WORKFLOW_NAMESPACE argocd.argoproj.io/managed-by=$ARGOCD_NAMESPACE
+```
+
+### Workflow installation
+
+Follow [Workflows Installation](https://www.parodos.dev/serverless-workflows-config/)
+
+## Cleanup
+
+**/!\ Before removing the orchestrator, make sure you have first removed any installed workflows. Otherwise, the deletion may hang in a terminating state.**
+
+To remove the operator from the cluster, delete the subscription:
+
+```console
+oc delete subscriptions.operators.coreos.com orchestrator-operator -n openshift-operators
+```
+
+Note that the CRDs created during the installation process will remain in the cluster.
+
+To clean the rest of the resources, run:
+```console
+oc get crd -o name | grep -e sonataflow -e rhdh | xargs oc delete
+oc delete namespace orchestrator sonataflow-infra rhdh-operator
+```
+
+If you want to remove *knative* related resources, you may also run:
+```console
+oc get crd -o name | grep -e knative | xargs oc delete
+```
diff --git a/docs/main/existing-rhdh.md b/docs/main/existing-rhdh.md
new file mode 100644
index 00000000..bdc01faf
--- /dev/null
+++ b/docs/main/existing-rhdh.md
@@ -0,0 +1,269 @@
+# Prerequisites
+- RHDH instance deployed with an IdP configured (GitHub, GitLab, ...)
+- For using the Orchestrator's [software templates](https://github.com/parodos-dev/workflow-software-templates/tree/v1.2.x), OpenShift GitOps (ArgoCD) and OpenShift Pipelines (Tekton) should be installed and configured in RHDH (to enhance the CI/CD plugins)
+- A secret in RHDH's namespace named `dynamic-plugins-npmrc` that points to the plugins npm registry (details are provided below)
+
+# Installation steps
+
+## Install the Orchestrator Operator
+In 1.2, the Orchestrator infrastructure is installed using the orchestrator-operator.
+- Install the orchestrator-operator from the OperatorHub.
+- Create an orchestrator resource (operand) instance and ensure `rhdhOperator.enabled: false` is set, e.g.
+  ```yaml
+  spec:
+    orchestrator:
+      namespace: sonataflow-infra
+      sonataflowPlatform:
+        resources:
+          limits:
+            cpu: 500m
+            memory: 1Gi
+          requests:
+            cpu: 250m
+            memory: 64Mi
+    postgres:
+      authSecret:
+        name: sonataflow-psql-postgresql
+        passwordKey: postgres-password
+        userKey: postgres-username
+      database: sonataflow
+      serviceName: sonataflow-psql-postgresql
+      serviceNamespace: sonataflow-infra
+    rhdhOperator:
+      enabled: false
+  ```
+
+## Edit RHDH configuration
+As part of the RHDH deployed resources, there are two primary ConfigMaps that require modification, typically found under the *rhdh-operator* namespace, or located in the same namespace as the Backstage CR.
+Before enabling the Orchestrator and Notifications plugins, please ensure a secret that points to the target npm registry exists in the same RHDH namespace, e.g.:
+```console
+cat <<EOF | oc apply -n $RHDH_NAMESPACE -f -
+apiVersion: v1
+kind: Secret
+metadata:
+  name: dynamic-plugins-npmrc
+type: Opaque
+stringData:
+  .npmrc: |
+    @redhat:registry=https://npm.registry.redhat.com
+EOF
+```
+
+### dynamic-plugins ConfigMap
+This ConfigMap houses the configuration for enabling and configuring dynamic plugins. To incorporate the orchestrator plugins, append the following configuration to the **dynamic-plugins** ConfigMap:
+
+```yaml
+  - disabled: false
+    package: "@redhat/backstage-plugin-orchestrator-backend-dynamic@1.2.0"
+    integrity: sha512-lyw7IHuXsakTa5Pok8S2GK0imqrmXe3z+TcL7eB2sJYFqQPkCP5la1vqteL9/1EaI5eI6nKZ60WVRkPEldKBTg==
+    pluginConfig:
+      orchestrator:
+        dataIndexService:
+          url: http://sonataflow-platform-data-index-service.sonataflow-infra
+  - disabled: false
+    package: "@redhat/backstage-plugin-orchestrator@1.2.0"
+    integrity: sha512-FhM13wVXjjF39syowc4RnMC/gKm4TRlmh8lBrMwPXAw1VzgIADI8H6WVEs837poVX/tYSqj2WhehwzFqU6PuhA==
+    pluginConfig:
+      dynamicPlugins:
+        frontend:
+          janus-idp.backstage-plugin-orchestrator:
+            appIcons:
+              - importName: OrchestratorIcon
+                module: OrchestratorPlugin
+                name: orchestratorIcon
+            dynamicRoutes:
+              - importName: OrchestratorPage
+                menuItem:
+                  icon: orchestratorIcon
+                  text: Orchestrator
+                module: OrchestratorPlugin
+                path: /orchestrator
+```
+
+The versions of the plugins may undergo updates, leading to changes in their integrity values. To ensure you are utilizing the latest versions, please consult the values available [here](https://github.com/parodos-dev/orchestrator-helm-operator/blob/main/helm-charts/orchestrator/values.yaml). It's imperative to set both the version and integrity values accordingly.
+
+Additionally, ensure that the `dataIndexService.url` points to the service of the Data Index installed by the Chart/Operator.
+When installed by the Helm chart, it should point to `http://sonataflow-platform-data-index-service.sonataflow-infra`:
+```bash
+oc get svc -n sonataflow-infra sonataflow-platform-data-index-service -o jsonpath='http://{.metadata.name}.{.metadata.namespace}'
+```
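+
+After the ConfigMap change rolls out, one way to confirm the dynamic plugins were actually loaded is to check the plugin-installer logs on the RHDH pod; the deployment placeholder and the `install-dynamic-plugins` container name below are assumptions based on default RHDH operator deployments:
+
+```console
+oc logs -n $RHDH_NAMESPACE deployment/<rhdh-deployment> -c install-dynamic-plugins | grep orchestrator
+```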
+
+### app-config ConfigMap
+This ConfigMap is used for configuring Backstage. Please add/modify it to include the following:
+- A static access token (or a different method based on this [doc](https://backstage.io/docs/auth/service-to-service-auth/)) to enable the workflows to send notifications to RHDH or to invoke scaffolder actions.
+- CSP and CORS definitions.
+
+```yaml
+backend:
+  auth:
+    externalAccess:
+      - type: static
+        options:
+          token: ${BACKEND_SECRET}
+          subject: orchestrator
+  csp:
+    script-src: ["'self'", "'unsafe-inline'", "'unsafe-eval'"]
+    script-src-elem: ["'self'", "'unsafe-inline'", "'unsafe-eval'"]
+    connect-src: ["'self'", 'http:', 'https:', 'data:']
+  cors:
+    origin: {{ URL to RHDH service or route }}
+```
+
+To enable the Notifications plugin, edit the same ConfigMaps.
+For the `dynamic-plugins` ConfigMap add:
+```yaml
+  - disabled: false
+    package: "@redhat/plugin-notifications-dynamic@1.2.0"
+    integrity: sha512-1mhUl14v+x0Ta1o8Sp4KBa02izGXHd+wsiCVsDP/th6yWDFJsfSMf/DyMIn1Uhat1rQgVFRUMg8QgrvbgZCR/w==
+    pluginConfig:
+      dynamicPlugins:
+        frontend:
+          redhat.plugin-notifications:
+            dynamicRoutes:
+              - importName: NotificationsPage
+                menuItem:
+                  config:
+                    props:
+                      titleCounterEnabled: true
+                      webNotificationsEnabled: false
+                  importName: NotificationsSidebarItem
+                path: /notifications
+  - disabled: false
+    package: "@redhat/plugin-signals-dynamic@1.2.0"
+    integrity: sha512-5tbZyRob0JDdrI97HXb7JqFIzNho1l7JuIkob66J+ZMAPCit+pjN1CUuPbpcglKyyIzULxq63jMBWONxcqNSXw==
+    pluginConfig:
+      dynamicPlugins:
+        frontend:
+          redhat.plugin-signals: {}
+  - disabled: false
+    package: "@redhat/plugin-notifications-backend-dynamic@1.2.0"
+    integrity: sha512-pCFB/jZIG/Ip1wp67G0ZDJPp63E+aw66TX1rPiuSAbGSn+Mcnl8g+XlHLOMMTz+NPloHwj2/Tp4fSf59w/IOSw==
+  - disabled: false
+    package: "@redhat/plugin-signals-backend-dynamic@1.2.0"
+    integrity: sha512-DIISzxtjeJ4a9mX3TLcuGcavRHbCtQ5b52wHn+9+uENUL2IDbFoqmB4/9BQASaKIUSFkRKLYpc5doIkrnTVyrA==
+```
+
+For the `*-app-config` ConfigMap add the database configuration if it isn't already provided. It is required by the notifications plugin:
+```yaml
+app:
+  title: Red Hat Developer Hub
+  baseUrl: {{ URL to RHDH service or route }}
+backend:
+  database:
+    client: pg
+    connection:
+      password: ${POSTGRESQL_ADMIN_PASSWORD}
+      user: ${POSTGRES_USER}
+      host: ${POSTGRES_HOST}
+      port: ${POSTGRES_PORT}
+```
+If persistence is enabled (which should be the default setting), ensure that the PostgreSQL environment variables are accessible.
+The RHDH instance will be restarted automatically on ConfigMap changes.
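+
+With the static access token and the notifications plugins in place, a workflow or any external service can post a test notification. The sketch below assumes the upstream Backstage notifications API shape and a reachable RHDH route; adjust both to your environment:
+
+```console
+curl -X POST "https://<rhdh-route>/api/notifications" \
+  -H "Authorization: Bearer ${BACKEND_SECRET}" \
+  -H "Content-Type: application/json" \
+  -d '{"recipients":{"type":"broadcast"},"payload":{"title":"Test notification from a workflow"}}'
+```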
+
+Optionally, include the plugin-notifications-backend-module-email-dynamic to fan out notifications as emails.
+The environment variables below need to be provided to the RHDH instance.
+See more configuration options for the plugin [here](https://github.com/backstage/backstage/blob/master/plugins/notifications-backend-module-email/config.d.ts).
+```yaml
+  - disabled: false
+    package: "@redhat/plugin-notifications-backend-module-email-dynamic@1.2.0"
+    integrity: sha512-dtmliahV5+xtqvwdxP2jvyzd5oXTbv6lvS3c9nR8suqxTullxxj0GFg1uU2SQ2uKBQWhOz8YhSmrRwxxLa9Zqg==
+    pluginConfig:
+      notifications:
+        processors:
+          email:
+            transportConfig: # these values need to be updated
+              transport: smtp
+              hostname: ${NOTIFICATIONS_EMAIL_HOSTNAME}
+              port: 587
+              secure: false
+              username: ${NOTIFICATIONS_EMAIL_USERNAME}
+              password: ${NOTIFICATIONS_EMAIL_PASSWORD}
+            sender: sender@mycompany.com
+            replyTo: no-reply@mycompany.com
+            broadcastConfig:
+              receiver: "none"
+            concurrencyLimit: 10
+            cache:
+              ttl:
+                days: 1
+```
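+
+One way to supply these variables is through the Backstage CR, which can inject them from a secret; a minimal sketch, assuming a hypothetical secret named `notifications-email-secret` that holds the SMTP hostname, username, and password keys:
+
+```yaml
+apiVersion: rhdh.redhat.com/v1alpha1
+kind: Backstage
+metadata:
+  name: backstage
+spec:
+  application:
+    extraEnvs:
+      secrets:
+        # hypothetical secret exposing NOTIFICATIONS_EMAIL_* keys as environment variables
+        - name: notifications-email-secret
+```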
+
+### Import Orchestrator's software templates
+To import the Orchestrator software templates into the catalog via the Backstage UI, follow the instructions outlined in this [document](https://backstage.io/docs/features/software-templates/adding-templates).
+Register new templates into the catalog from the following:
+- [Workflow resources (group and system)](https://github.com/parodos-dev/workflow-software-templates/blob/v1.2.x/entities/workflow-resources.yaml) (optional)
+- [Basic template](https://github.com/parodos-dev/workflow-software-templates/blob/v1.2.x/scaffolder-templates/basic-workflow/template.yaml)
+- [Complex template - workflow with custom Java code](https://github.com/parodos-dev/workflow-software-templates/blob/v1.2.x/scaffolder-templates/complex-assessment-workflow/template.yaml)
+
+## Upgrade plugin versions - WIP
+To perform an upgrade of the plugin versions, start by acquiring the new plugin version along with its associated integrity value.
+The following script is useful for obtaining the required information; however, make sure to select a plugin version compatible with the Orchestrator operator version (e.g. 1.2.x for both operator and plugins).
+
+```bash
+#!/bin/bash
+
+PLUGINS=(
+  "@redhat/backstage-plugin-orchestrator"
+  "@redhat/backstage-plugin-orchestrator-backend-dynamic"
+  "@redhat/plugin-notifications-dynamic"
+  "@redhat/plugin-notifications-backend-dynamic"
+  "@redhat/plugin-signals-dynamic"
+  "@redhat/plugin-signals-backend-dynamic"
+  "@redhat/plugin-notifications-backend-module-email-dynamic"
+)
+
+for PLUGIN_NAME in "${PLUGINS[@]}"
+do
+  echo "Retrieving latest version for plugin: $PLUGIN_NAME"
+  curl -s -q "https://npm.registry.redhat.com/${PLUGIN_NAME}/" | jq -r '.versions | keys_unsorted[-1] as $latest_version | .[$latest_version] | "package: \"\(.name)@\(.version)\"\nintegrity: \(.dist.integrity)"'
+  echo "---"
+done
+```
+
+A sample output should look like:
+
+```console
+Retrieving latest version for plugin: @redhat/backstage-plugin-orchestrator
+package: "@redhat/backstage-plugin-orchestrator@1.2.0"
+integrity: sha512-FhM13wVXjjF39syowc4RnMC/gKm4TRlmh8lBrMwPXAw1VzgIADI8H6WVEs837poVX/tYSqj2WhehwzFqU6PuhA==
+---
+Retrieving latest version for plugin: @redhat/backstage-plugin-orchestrator-backend-dynamic
+package: "@redhat/backstage-plugin-orchestrator-backend-dynamic@1.2.0"
+integrity: sha512-lyw7IHuXsakTa5Pok8S2GK0imqrmXe3z+TcL7eB2sJYFqQPkCP5la1vqteL9/1EaI5eI6nKZ60WVRkPEldKBTg==
+---
+Retrieving latest version for plugin: @redhat/plugin-notifications-dynamic
+package: "@redhat/plugin-notifications-dynamic@1.2.0"
+integrity: sha512-1mhUl14v+x0Ta1o8Sp4KBa02izGXHd+wsiCVsDP/th6yWDFJsfSMf/DyMIn1Uhat1rQgVFRUMg8QgrvbgZCR/w==
+---
+Retrieving latest version for plugin: @redhat/plugin-notifications-backend-dynamic
+package: "@redhat/plugin-notifications-backend-dynamic@1.2.0"
+integrity: sha512-pCFB/jZIG/Ip1wp67G0ZDJPp63E+aw66TX1rPiuSAbGSn+Mcnl8g+XlHLOMMTz+NPloHwj2/Tp4fSf59w/IOSw==
+---
+Retrieving latest version for plugin: @redhat/plugin-notifications-backend-module-email-dynamic
+package: "@redhat/plugin-notifications-backend-module-email-dynamic@1.2.0"
+integrity: sha512-dtmliahV5+xtqvwdxP2jvyzd5oXTbv6lvS3c9nR8suqxTullxxj0GFg1uU2SQ2uKBQWhOz8YhSmrRwxxLa9Zqg==
+---
+Retrieving latest version for plugin: @redhat/plugin-signals-backend-dynamic
+package: "@redhat/plugin-signals-backend-dynamic@1.2.0"
+integrity: sha512-DIISzxtjeJ4a9mX3TLcuGcavRHbCtQ5b52wHn+9+uENUL2IDbFoqmB4/9BQASaKIUSFkRKLYpc5doIkrnTVyrA==
+---
+Retrieving latest version for plugin: @redhat/plugin-signals-dynamic
+package: "@redhat/plugin-signals-dynamic@1.2.0"
+integrity: sha512-5tbZyRob0JDdrI97HXb7JqFIzNho1l7JuIkob66J+ZMAPCit+pjN1CUuPbpcglKyyIzULxq63jMBWONxcqNSXw==
+---
+```
+
+After editing the version and integrity values in the *dynamic-plugins* ConfigMap, the RHDH instance will be restarted automatically.