
Configuration examples for OIDC/security and metrics #1384

Merged · 7 commits · Jan 22, 2025
Changes from 4 commits
4 changes: 2 additions & 2 deletions .github/workflows/playwright-tests.yml
@@ -133,10 +133,10 @@ jobs:
# Display the resource
export KAFKA_NAMESPACE="${TARGET_NAMESPACE}"

cat examples/console/* | envsubst && echo
cat examples/console/010-Console-example.yaml | envsubst && echo

# Apply the resource
cat examples/console/* | envsubst | kubectl apply -n ${TARGET_NAMESPACE} -f -
cat examples/console/010-Console-example.yaml | envsubst | kubectl apply -n ${TARGET_NAMESPACE} -f -

kubectl wait console/example --for=condition=Ready --timeout=300s -n $TARGET_NAMESPACE

7 changes: 5 additions & 2 deletions README.md
@@ -69,8 +69,9 @@ export NAMESPACE=default
cat install/operator-olm/*.yaml | envsubst | kubectl apply -n ${NAMESPACE} -f -
```

#### Console Custom Resource Example
#### Console Custom Resource Examples
Once the operator is ready, you may then create a `Console` resource in the namespace where the console should be deployed. This example `Console` is based on the example Apache Kafka<sup>®</sup> cluster deployed above in the [prerequisites section](#prerequisites). Also see [examples/console/010-Console-example.yaml](examples/console/010-Console-example.yaml).

```yaml
apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
@@ -100,6 +101,8 @@ spec:
# This is optional if properties are used to configure the user
```

Additional Console resource examples can be found at [examples/console/](examples/console/), including OpenShift monitoring metrics configuration and OIDC security configuration.

### Deploy the operator directly
Deploying the operator without the use of OLM requires applying the component Kubernetes resources for the operator directly. These resources are bundled and attached to each StreamsHub Console release. The latest release can be found [here](https://github.com/streamshub/console/releases/latest). The resource file is named `streamshub-console-operator.yaml`.

@@ -113,7 +116,7 @@ curl -sL https://github.com/streamshub/console/releases/download/${VERSION}/stre
```
Note: if you are not using the Prometheus operator you may see an error about a missing `ServiceMonitor` custom resource type. This error may be ignored.

With the operator resources created, you may create a `Console` resource like the one shown in [Console Custom Resource Example](#console-custom-resource-example).
With the operator resources created, you may create a `Console` resource like the one shown in [Console Custom Resource Examples](#console-custom-resource-examples).

## Running locally

9 changes: 6 additions & 3 deletions examples/console-config.yaml
@@ -52,9 +52,8 @@ security:
roleNames:
- administrators

# Roles and associate rules for global resources (currently only Kafka clusters) are given here in the `security.roles`
# section. Rules for Kafka-scoped resources are specified within the cluster configuration section below. That is,
# at paths `kafka.clusters[].security.rules[].
# Roles and associated rules for accessing global resources (currently limited to Kafka clusters) are defined in `security.roles`.
# Rules for individual Kafka clusters are specified under `kafka.clusters[].security.rules[]`.
roles:
# developers may perform any operation with clusters 'a' and 'b'.
- name: developers
@@ -123,6 +122,10 @@ kafka:
- privileges:
- get
- list
- resources:
- consumerGroups
- rebalances
- privileges:
- update

- name: my-kafka2
37 changes: 37 additions & 0 deletions examples/console/console-openshift-metrics.yaml
@@ -0,0 +1,37 @@
#
# This example demonstrates the use of OpenShift user-workload monitoring as
# a source of Kafka metrics for the console. This configuration makes use of the
# `openshift-monitoring` type of metrics source. The linkage between the entry
# under `metricsSources` and a `kafkaClusters` entry is the `metricsSource` given
# for the Kafka cluster.
#
# See https://docs.openshift.com/container-platform/4.17/observability/monitoring/enabling-monitoring-for-user-defined-projects.html
# for details on how to enable monitoring of user-defined projects in OpenShift.
#
---
apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
name: example
spec:
hostname: example-console.${CLUSTER_DOMAIN}

metricsSources:
# Example metrics source using OpenShift's built-in monitoring.
# For `type: openshift-monitoring`, no additional attributes are required,
# but you can configure a truststore if needed.
- name: my-ocp-prometheus
type: openshift-monitoring
Contributor: Will this always be `openshift-monitoring`?

Member (author): For this example, yes. But the `type` value is an enumeration: it can be one of `openshift-monitoring`, `standalone`, or `embedded`.

Contributor: Would they require additional attributes? Just thinking of what would be needed in the docs.

Member (author): `openshift-monitoring` and `embedded` do not take any additional attributes. However, `standalone` requires a `url` and optionally takes `authentication` and a `trustStore`. The `examples/console/console-standalone-prometheus.yaml` example includes them as well.
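To summarize the thread above, a sketch of the three `type` variants accepted under `metricsSources` — the source names and the URL below are illustrative placeholders, not values from this PR:

```yaml
metricsSources:
  # Uses OpenShift user-workload monitoring; no additional attributes required.
  - name: ocp-example
    type: openshift-monitoring
  # Console-managed Prometheus instance; no additional attributes required.
  - name: embedded-example
    type: embedded
  # User-supplied Prometheus; `url` is required, `authentication` and
  # `trustStore` are optional.
  - name: standalone-example
    type: standalone
    url: http://my-prometheus.example.com
```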


kafkaClusters:
# Kafka cluster configuration.
# The example uses the Kafka cluster configuration from `examples/kafka`.
# Adjust the values to match your environment.
- name: console-kafka # Name of the `Kafka` CR representing the cluster
namespace: ${KAFKA_NAMESPACE} # Namespace where the `Kafka` CR is deployed
listener: secure # Listener name from the `Kafka` CR to connect the console
metricsSource: my-ocp-prometheus # Name of the configured metrics source defined in `metricsSources`
credentials:
kafkaUser:
name: console-kafka-user1 # Name of the `KafkaUser` CR used by the console to connect to the Kafka cluster
# This is optional if properties are used to configure the user
116 changes: 116 additions & 0 deletions examples/console/console-security-oidc.yaml
@@ -0,0 +1,116 @@
#
# This example demonstrates the use of an OIDC provider at `.spec.security.oidc`
# for user authentication in the console. Any OIDC provider should work - such as
# Keycloak or dex with a suitable backend identity provider.
#
# In addition to the OIDC configuration, this example shows how one may configure
# subjects and roles for user authorization. Note that global resources (Kafka clusters)
# are configured within `.spec.security` whereas resources within a specific Kafka
# cluster are configured within `.spec.kafkaClusters[].security`.
#
---
apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
name: example
spec:
hostname: example-console.${CLUSTER_DOMAIN}

security:
oidc:
authServerUrl: <OIDC discovery URL> # URL for OIDC provider discovery
clientId: <client-id> # Client ID for OIDC authentication
clientSecret:
# For development use only: provide a secret directly (not recommended for production).
# value: <literal secret - development only!>
valueFrom:
secretKeyRef:
name: my-oidc-secret
key: client-secret

subjects:
# Subjects and their roles may be specified in terms of JWT claims or their subject name (user1, user200 below).
# Using claims is only supported when OIDC security is enabled.
- claim: groups
include:
- team-a
Contributor: Should subjects show placeholder values instead of specific examples (`<team_name>`)?

Member (author): We could do that - do you think in that case it's fine for several instances like `<team_name_1>`, `<team_name_2>`?

Contributor: Yep. Makes sense to me.

Member (author): Updated the subjects with placeholder values 👍

- team-b
roleNames:
- developers
- claim: groups
include:
- team-c
roleNames:
- administrators
- include:
# Match subjects by their name when no claim is specified.
# For JWT, this is typically `preferred_username`, `upn`, or `sub` claims.
# For per-Kafka authentication credentials, this is the user name used to authenticate.
- user1
- user200
roleNames:
- administrators

# Roles and associated rules for accessing global resources (currently limited to Kafka clusters) are defined in `security.roles`.
# Rules for individual Kafka clusters are specified under `kafka.clusters[].security.rules[]`.
roles:
# developers may perform any operation with clusters 'a' and 'b'.
- name: developers
rules:
- resources:
- kafkas
- resourceNames:
- dev-cluster-a
- dev-cluster-b
- privileges:
- '*'
# administrators may operate on any (unspecified) Kafka clusters
- name: administrators
rules:
- resources:
- kafkas
- privileges:
- '*'

kafkaClusters:
# Kafka cluster configuration.
# The example uses the Kafka cluster configuration from `examples/kafka`.
# Adjust the values to match your environment.
- name: console-kafka # Name of the `Kafka` CR representing the cluster
namespace: ${KAFKA_NAMESPACE} # Namespace where the `Kafka` CR is deployed
listener: secure # Listener name from the `Kafka` CR to connect the console
credentials:
kafkaUser:
name: console-kafka-user1 # Name of the `KafkaUser` CR used by the console to connect to the Kafka cluster
# This is optional if properties are used to configure the user
security:
roles:
# developers may only list and view some resources
- name: developers
rules:
- resources:
- topics
- topics/records
- consumerGroups
- rebalances
- privileges:
- get
- list

# administrators may list, view, and update an expanded set of resources
- name: administrators
rules:
- resources:
- topics
- topics/records
- consumerGroups
- rebalances
- nodes/configs
- privileges:
- get
- list
- resources:
- consumerGroups
- rebalances
- privileges:
- update
51 changes: 51 additions & 0 deletions examples/console/console-standalone-prometheus.yaml
@@ -0,0 +1,51 @@
#
# This example demonstrates the use of a user-supplied Prometheus instance as
# a source of Kafka metrics for the console. This configuration makes use of the
# `standalone` type of metrics source. The linkage between the entry
# under `metricsSources` and a `kafkaClusters` entry is the `metricsSource` given
# for the Kafka cluster.
#
# See examples/prometheus for sample Prometheus instance resources configured for
# use with a Strimzi Kafka cluster.
#
---
apiVersion: console.streamshub.github.com/v1alpha1
kind: Console
metadata:
name: example
spec:
hostname: example-console.${CLUSTER_DOMAIN}

metricsSources:
# type=standalone when providing your own existing prometheus instance
- name: my-custom-prometheus
type: standalone
url: http://my-custom-prometheus.cloud2.example.com
authentication: # optional
# Either username + password or token
username: my-user
password: my-password
#token: my-token
trustStore: # optional
type: JKS
content:
valueFrom:
configMapKeyRef: # or secretKeyRef
name: my-prometheus-configmap
key: ca.jks
password:
# For development use only: if not provided through `valueFrom` properties, provide a password directly (not recommended for production).
value: changeit

kafkaClusters:
# Kafka cluster configuration.
# The example uses the Kafka cluster configuration from `examples/kafka`.
# Adjust the values to match your environment.
- name: console-kafka # Name of the `Kafka` CR representing the cluster
namespace: ${KAFKA_NAMESPACE} # Namespace where the `Kafka` CR is deployed
listener: secure # Listener name from the `Kafka` CR to connect the console
metricsSource: my-custom-prometheus # Name of the configured metrics source defined in `metricsSources`
credentials:
kafkaUser:
name: console-kafka-user1 # Name of the `KafkaUser` CR used by the console to connect to the Kafka cluster
# This is optional if properties are used to configure the user