Feature/add sso roles #480

Open
wants to merge 3 commits into base: main

Conversation

klinch0
Contributor

@klinch0 klinch0 commented Nov 19, 2024

Summary by CodeRabbit

  • New Features

    • Updated application version from 1.5.0 to 1.6.0.
    • Introduced new role-based access control (RBAC) roles: view, use, admin, and super-admin, enhancing security and permissions management.
    • Added new Keycloak realm groups for view, use, admin, and super-admin roles, streamlining user management within the application.
    • Integrated keycloak-configure release into the deployment structure, establishing dependencies for improved configuration management.
  • Bug Fixes

    • Resolved versioning discrepancies in the tenant package.

@klinch0 klinch0 requested a review from kvaps as a code owner November 19, 2024 08:47
@dosubot dosubot bot added the size:L This PR changes 100-499 lines, ignoring generated files. label Nov 19, 2024
Contributor

coderabbitai bot commented Nov 19, 2024

Walkthrough

The pull request includes updates to the tenant application's Helm chart and role-based access control (RBAC) configuration. The version of the application in Chart.yaml has been changed from 1.5.0 to 1.6.0. Additionally, new roles and role bindings have been added in tenant.yaml, enhancing access control with roles for viewing, using, administering, and super-administering resources. The versions_map has also been updated to reflect these version changes. A new Keycloak configuration has been introduced alongside updates to the keycloak-operator.

Changes

File Path | Change Summary
packages/apps/tenant/Chart.yaml | Version updated from 1.5.0 to 1.6.0.
packages/apps/tenant/templates/tenant.yaml | Added four new roles (view, use, admin, super-admin) and corresponding RoleBindings for RBAC configuration.
packages/apps/versions_map | Updated the tenant entry from 1.5.0 HEAD to 1.5.0 48128743 and added a new entry, 1.6.0 HEAD.
packages/apps/tenant/templates/keycloakgroups.yaml | Introduced new KeycloakRealmGroup resources for view, use, admin, and super-admin.
packages/core/platform/bundles/distro-full.yaml | Added a new release configuration for keycloak-configure with dependencies.
packages/core/platform/bundles/distro-hosted.yaml | Added a new release configuration for keycloak-configure with dependencies.
packages/core/platform/bundles/paas-full.yaml | Added a new release configuration for keycloak-configure with dependencies.
packages/core/platform/bundles/paas-hosted.yaml | Added a new release configuration for keycloak-configure with dependencies.
packages/system/keycloak-configure/Chart.yaml | Introduced a new Helm chart configuration for cozy-keycloak-configure.
packages/system/keycloak-configure/templates/configure-kk.yaml | Updated Keycloak resources to use ClusterKeycloak and ClusterKeycloakRealm.
packages/system/keycloak-operator/values.yaml | Added the new configuration option clusterReconciliationEnabled: true.

Possibly related PRs

  • Prepare release v0.14.1 #338: The changes in this PR involve updates to the versioning information for the tenant package, which is directly related to the version update from 1.5.0 to 1.6.0 in the main PR.
  • Prepare release v0.16.1 #390: This PR also includes an update to the version of the cozystack application, which aligns with the versioning changes in the main PR.
  • add keycloak #475: The addition of the keycloak release in this PR may relate to the overall application structure and versioning, although it does not directly impact the version update in the main PR.
  • add services to dashboard #482: This PR introduces a new Role resource for Redis services, which may be relevant in the context of the overall application updates, but it does not directly relate to the version changes in the main PR.

Suggested labels

size:M, lgtm

Suggested reviewers

  • kvaps

🐇 In the meadow, changes bloom,
A version rise, dispelling gloom.
Roles now guard with watchful eyes,
Access granted, oh how time flies!
With every hop, we celebrate,
A brighter path, we cultivate! 🌼



@dosubot dosubot bot added the enhancement New feature or request label Nov 19, 2024
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 12

🧹 Outside diff range and nitpick comments (14)
packages/system/keycloak/templates/service.yaml (1)

11-12: Consider adding service annotations for health checks

Since the selector targets 'keycloak-ha' (high availability), consider adding service annotations for health checks to ensure proper load balancing and failover.

 metadata:
   name: keycloak-http
+  annotations:
+    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
packages/system/keycloak/templates/db.yaml (1)

6-6: Consider documenting the HA setup rationale.

While having 2 instances provides high availability, it would be helpful to document the rationale for this choice in terms of expected load and failover requirements.
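For instance, a short comment next to the instance count in db.yaml would capture this; the wording below is illustrative and not taken from the chart:

spec:
  # Two instances: one primary plus one streaming replica, enough for
  # Keycloak failover at the expected login volume; revisit if load grows.
  instances: 2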

scripts/migrations/7 (1)

1-9: Consider enhancing migration framework

To improve the reliability and maintainability of migrations, consider implementing:

  1. Migration state tracking to handle partial completions
  2. Automated testing for migrations
  3. Explicit rollback procedures
  4. Progress logging and monitoring
  5. Pre-flight checks for prerequisites (a small sketch of items 4 and 5 follows below)

This will help ensure safer upgrades and easier troubleshooting.

Would you like assistance in designing a more robust migration framework?
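
As a rough illustration of items 4 and 5, each migration script could begin with a small logging and pre-flight preamble; the log path and the specific checks below are assumptions for illustration, not part of this PR:

#!/bin/sh
set -eu

# Illustrative log helper; the log file location is an assumed example.
log() { echo "[migration $(basename "$0") $(date -u +%FT%TZ)] $*" | tee -a /tmp/cozystack-migration.log; }

# Pre-flight checks before touching cluster state.
preflight() {
  command -v kubectl >/dev/null 2>&1 || { log "kubectl not found"; exit 1; }
  kubectl get namespace cozy-system >/dev/null || { log "namespace cozy-system missing"; exit 1; }
}

preflight
log "starting"
# ...actual migration steps go here...
log "finished"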

packages/system/keycloak/values.yaml (1)

9-14: Add CPU limit for complete resource constraints

While memory limits and requests are well-configured, consider adding a CPU limit to prevent potential resource contention:

 resources:
   limits:
     memory: 1500Mi
+    cpu: 1000m
   requests:
     memory: 500Mi
     cpu: 100m
packages/system/keycloak/templates/ingress.yaml (2)

11-19: Enhance ingress security with additional annotations

Consider adding security-related annotations to harden the ingress configuration.

Add these recommended security annotations:

   annotations:
     {{- if ne $issuerType "cloudflare" }}
     acme.cert-manager.io/http01-ingress-class: {{ $ingress }}
     {{- end }}
     cert-manager.io/cluster-issuer: letsencrypt-prod
+    nginx.ingress.kubernetes.io/ssl-redirect: "true"
+    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
+    nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"
+    nginx.ingress.kubernetes.io/proxy-buffers-number: "4"
     {{- toYaml . | nindent 4 }}

Also, consider making the cluster-issuer configurable:

-    cert-manager.io/cluster-issuer: letsencrypt-prod
+    cert-manager.io/cluster-issuer: {{ .Values.certManager.issuer | default "letsencrypt-prod" }}

21-36: Improve production readiness of the ingress specification

Consider the following enhancements for a more robust production setup:

  1. Make the TLS secret name dynamic to support multi-tenant deployments:
   tls:
   - hosts:
       - keycloak.{{ $host }}
-    secretName: web-tls
+    secretName: {{ .Release.Name }}-keycloak-tls
  2. Add rate limiting annotations for sensitive endpoints:
    nginx.ingress.kubernetes.io/limit-rps: "10"
    nginx.ingress.kubernetes.io/limit-rpm: "100"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      limit_req_zone $binary_remote_addr zone=auth_limit:10m rate=1r/s;
      location /auth/realms/*/protocol/openid-connect/token {
        limit_req zone=auth_limit burst=5 nodelay;
        proxy_pass http://upstream_balancer;
      }
packages/core/platform/bundles/distro-hosted.yaml (1)

128-132: Consider adding the 'optional' flag for consistency

Most other operator deployments in this bundle are marked as optional. Consider adding the optional: true flag to maintain consistency with other services and allow for flexible deployment configurations.

 - name: keycloak
   releaseName: keycloak
   chart: cozy-keycloak
   namespace: cozy-keycloak
+  optional: true
   dependsOn: [postgres-operator]
scripts/installer.sh (1)

Line range hint 9-20: Consider enhancing migration error handling

The run_migrations() function handles version upgrades but could benefit from additional safeguards:

  1. No validation of migration script existence before execution
  2. No rollback mechanism if migration fails
  3. No logging of migration steps

Consider implementing these improvements:

 run_migrations() {
   if ! kubectl get configmap -n cozy-system cozystack-version; then
+    echo "Initializing version configmap to $VERSION"
     kubectl create configmap -n cozy-system cozystack-version --from-literal=version="$VERSION" --dry-run=client -o yaml | kubectl create -f-
     return
   fi
   current_version=$(kubectl get configmap -n cozy-system cozystack-version -o jsonpath='{.data.version}') || true
   until [ "$current_version" = "$VERSION" ]; do
     echo "run migration: $current_version --> $VERSION"
+    migration_script="scripts/migrations/$current_version"
+    if [ ! -f "$migration_script" ]; then
+      echo "Error: Missing migration script: $migration_script"
+      exit 1
+    fi
+    chmod +x "$migration_script"
+    echo "Running migration script: $migration_script"
+    if ! "$migration_script"; then
+      echo "Error: Migration failed"
+      exit 1
+    fi
-    chmod +x scripts/migrations/$current_version
-    scripts/migrations/$current_version
     current_version=$(kubectl get configmap -n cozy-system cozystack-version -o jsonpath='{.data.version}')
   done
 }
packages/core/platform/bundles/paas-hosted.yaml (1)

149-153: Consider additional dependencies for security and DNS management.

Since Keycloak is a critical SSO component, consider:

  1. Adding cert-manager as a dependency to ensure proper TLS certificate management
  2. Adding external-dns as a dependency if DNS records need to be managed
  3. Setting optional: true if this is a feature that should be optionally deployable
 - name: keycloak
   releaseName: keycloak
   chart: cozy-keycloak
   namespace: cozy-keycloak
-  dependsOn: [postgres-operator]
+  optional: true
+  dependsOn: [postgres-operator, cert-manager, external-dns]
packages/system/keycloak/templates/sts.yaml (3)

5-11: Enhance Secret security configuration

The Secret configuration could be improved with additional security metadata and labels.

Consider adding these enhancements:

apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-credentials
+ type: Opaque
+ labels:
+   app.kubernetes.io/name: keycloak
+   app.kubernetes.io/instance: {{ .Release.Name }}
+ annotations:
+   password.rotation.schedule: "90d"
stringData:
  admin: {{ $password }}

14-33: Add High Availability configurations

The StatefulSet configuration needs additional HA safeguards for production readiness.

Consider adding:

  1. Pod Disruption Budget to ensure minimum availability during updates (see the sketch after the diff below)
  2. Pod anti-affinity rules to spread replicas across nodes
  3. Comprehensive labels following Kubernetes recommendations
metadata:
  name: keycloak
  labels:
-   app: keycloak-ha
+   app.kubernetes.io/name: keycloak
+   app.kubernetes.io/instance: {{ .Release.Name }}
+   app.kubernetes.io/version: {{ .Chart.AppVersion }}
spec:
  selector:
    matchLabels:
-     app: keycloak-ha
+     app.kubernetes.io/name: keycloak
+     app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    spec:
+     affinity:
+       podAntiAffinity:
+         requiredDuringSchedulingIgnoredDuringExecution:
+           - labelSelector:
+               matchLabels:
+                 app.kubernetes.io/name: keycloak
+                 app.kubernetes.io/instance: {{ .Release.Name }}
+             topologyKey: "kubernetes.io/hostname"
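
A minimal sketch of the Pod Disruption Budget mentioned in item 1, assuming the app.kubernetes.io/* labels proposed above (the object name is illustrative):

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: {{ .Release.Name }}-keycloak
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: keycloak
      app.kubernetes.io/instance: {{ .Release.Name }}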

52-61: Enhance container security context

While the current security context is good, it could be strengthened further.

Consider adding these security enhancements:

securityContext:
  runAsNonRoot: true
  runAsUser: 1000
+ runAsGroup: 1000
  capabilities:
    drop:
      - ALL
-     - CAP_NET_RAW  # Redundant as ALL is already dropped
  readOnlyRootFilesystem: false
  allowPrivilegeEscalation: false
+ seccompProfile:
+   type: RuntimeDefault
packages/core/platform/bundles/paas-full.yaml (1)

253-257: Consider making Keycloak optional

Since not all environments might require SSO capabilities, consider making this release optional similar to other optional components like external-dns and telepresence.

  name: keycloak
  releaseName: keycloak
  chart: cozy-keycloak
  namespace: cozy-keycloak
+ optional: true
  dependsOn: [postgres-operator]
packages/apps/tenant/templates/tenant.yaml (1)

91-223: Add documentation for role hierarchy and permissions.

While the role hierarchy is well-structured (view → use → admin → super-admin), adding comments to document the purpose and scope of each role would improve maintainability.

Consider adding YAML comments before each role definition explaining:

  • Role purpose and intended use cases
  • List of key permissions granted
  • Relationship to other roles
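
For example, a header of this shape above each Role definition (the wording is illustrative, not taken from the chart):

# Role: <tenant>-view
# Purpose: read-only visibility into tenant workloads (dashboards, auditing).
# Key permissions: get/list/watch on tenant-scoped resources.
# Relationship: weakest role; "use" adds interaction with workloads,
#   "admin" adds lifecycle management, "super-admin" adds full namespace control.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ include "tenant.name" . }}-view
  namespace: {{ include "tenant.name" . }}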
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 47bd46c and 06bcdb3.

📒 Files selected for processing (16)
  • packages/apps/tenant/Chart.yaml (1 hunks)
  • packages/apps/tenant/templates/tenant.yaml (1 hunks)
  • packages/apps/versions_map (1 hunks)
  • packages/core/platform/bundles/distro-full.yaml (1 hunks)
  • packages/core/platform/bundles/distro-hosted.yaml (1 hunks)
  • packages/core/platform/bundles/paas-full.yaml (1 hunks)
  • packages/core/platform/bundles/paas-hosted.yaml (2 hunks)
  • packages/system/keycloak/Chart.yaml (1 hunks)
  • packages/system/keycloak/templates/db.yaml (1 hunks)
  • packages/system/keycloak/templates/ingress.yaml (1 hunks)
  • packages/system/keycloak/templates/service-headless.yaml (1 hunks)
  • packages/system/keycloak/templates/service.yaml (1 hunks)
  • packages/system/keycloak/templates/sts.yaml (1 hunks)
  • packages/system/keycloak/values.yaml (1 hunks)
  • scripts/installer.sh (1 hunks)
  • scripts/migrations/7 (1 hunks)
✅ Files skipped from review due to trivial changes (3)
  • packages/apps/tenant/Chart.yaml
  • packages/system/keycloak/Chart.yaml
  • packages/system/keycloak/templates/service-headless.yaml
🧰 Additional context used
🪛 yamllint
packages/system/keycloak/templates/ingress.yaml

[error] 1-1: syntax error: expected the node content, but found '-'

(syntax)

packages/system/keycloak/templates/sts.yaml

[error] 1-1: syntax error: expected the node content, but found '-'

(syntax)


[warning] 44-44: wrong indentation: expected 10 but found 12

(indentation)

🔇 Additional comments (16)
packages/system/keycloak/templates/service.yaml (1)

8-10: Verify the security configuration for the HTTP port

The service is exposing port 8080 for HTTP traffic. For SSO implementations, it's recommended to use HTTPS to secure authentication traffic.

✅ Verification successful

Port 8080 is correctly configured with TLS termination at ingress

The service configuration is secure as TLS termination is properly handled at the ingress layer with:

  • HTTPS enforced through cert-manager with letsencrypt-prod issuer
  • TLS certificate configured for keycloak.<root-host>
  • Internal HTTP traffic (8080) is acceptable as it's behind the ingress TLS termination
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check if TLS termination is handled by ingress
# Expect: Find TLS configuration in ingress or other security measures

# Check for TLS configuration in ingress
rg -l 'tls:' packages/system/keycloak/templates/
# Check for any SSL/TLS related configurations
rg -l 'ssl|tls|https' packages/system/keycloak/templates/

Length of output: 248


Script:

#!/bin/bash
# Let's check the ingress configuration to verify TLS termination
cat packages/system/keycloak/templates/ingress.yaml

# Also check if there are any proxy configurations that might handle HTTPS
rg -A 5 'proxy|https' packages/system/keycloak/templates/

Length of output: 1661

packages/system/keycloak/templates/db.yaml (3)

1-4: LGTM! Resource definition looks correct.

The CloudNativePG operator configuration is properly defined with appropriate API version and resource type.


7-8: Verify storage requirements and add backup configuration.

The 20Gi storage allocation needs verification based on:

  • Expected user count
  • Session storage requirements
  • Audit log retention policy

Additionally, consider adding backup configuration for disaster recovery.

Example backup configuration to consider adding:

  backup:
    barmanObjectStore:
      destinationPath: "s3://backup-bucket/keycloak"
      endpointURL: "https://s3.example.com"
      s3Credentials:
        inheritFromIAMRole: true
    retentionPolicy: "30d"

10-12: Review API server access policy.

The label policy.cozystack.io/allow-to-apiserver: "true" grants API server access. Ensure this aligns with your security requirements and is necessary for the database pods.

✅ Verification successful

Let me gather more specific information about the network policies and their rules to better understand the security context.


Let me check one more aspect to understand if this is a common pattern for database resources and if there's any documentation about this policy.


API server access policy is appropriately applied

The policy.cozystack.io/allow-to-apiserver: "true" label is consistently used across the platform for database resources and critical system components that require API server access. This includes:

  • Database resources (Postgres, Keycloak DB, Grafana DB, etc.)
  • System controllers (cluster-autoscaler, CSI controllers)
  • Infrastructure components (ingress-nginx, monitoring components)

This appears to be a standard platform policy for components that need to interact with the Kubernetes API server, and its usage in the Keycloak database configuration aligns with the platform's security patterns.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check for other resources using the same API server access policy
rg -l 'policy\.cozystack\.io/allow-to-apiserver.*true' --type yaml

# Check for any network policies that might affect this access
fd -e yaml -e yml | xargs grep -l "kind: NetworkPolicy"

Length of output: 2293


Script:

#!/bin/bash
# Check the content of network policies in the system
rg -A 10 "kind: NetworkPolicy" packages/system/keycloak --type yaml

# Check if there are any specific network policies for database access
rg "policy\.cozystack\.io/allow-to-apiserver.*true" -A 5 -B 5 packages/system/keycloak/templates/db.yaml

# Look for documentation or comments about this policy
rg "allow-to-apiserver" packages/system/keycloak --type yaml --with-filename

Length of output: 462


Script:

#!/bin/bash
# Check other database resources using this policy
rg -C 5 "policy\.cozystack\.io/allow-to-apiserver.*true" "packages/*/templates/db.yaml" "packages/*/templates/database.yaml"

# Look for any documentation about this policy in READMEs or docs
fd README.md | xargs rg "allow-to-apiserver"
fd -e md -p docs/ | xargs rg "allow-to-apiserver"

Length of output: 2372

scripts/migrations/7 (1)

8-9: ⚠️ Potential issue

Improve version management safety

The version update lacks proper validation and safety checks:

Suggested improvements:

-# Write version to cozystack-version config
-kubectl create configmap -n cozy-system cozystack-version --from-literal=version=8 --dry-run=client -o yaml | kubectl apply -f-
+# Verify current version
+current_version=$(kubectl get configmap -n cozy-system cozystack-version -o jsonpath='{.data.version}' 2>/dev/null || echo "0")
+if [[ "$current_version" != "7" ]]; then
+    echo "Error: Expected current version to be 7, found: $current_version"
+    exit 1
+fi
+
+# Verify all version 7 prerequisites
+if ! ./verify_v7_migrations.sh; then
+    echo "Error: Version 7 prerequisites not met"
+    exit 1
+fi
+
+# Update version
+version_yaml=$(kubectl create configmap -n cozy-system cozystack-version \
+    --from-literal=version=8 --dry-run=client -o yaml)
+if [[ -z "$version_yaml" ]]; then
+    echo "Error: Failed to generate version ConfigMap"
+    exit 1
+fi
+echo "$version_yaml" | kubectl apply -f - || {
+    echo "Error: Failed to update version"
+    exit 1
+}

Let's verify the migration prerequisites:

packages/system/keycloak/values.yaml (1)

1-1: Verify Keycloak image version security status

The specified Keycloak version should be verified for known vulnerabilities.
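
For example, the pinned image could be checked with a scanner such as Trivy; the image reference below is a placeholder, not the value from values.yaml:

# Scan the Keycloak image referenced in packages/system/keycloak/values.yaml
trivy image <registry>/<keycloak-image>:<tag-from-values.yaml>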

packages/system/keycloak/templates/ingress.yaml (1)

34-36: Verify Keycloak service configuration

Ensure that the service name and port configuration align with the Keycloak deployment.

✅ Verification successful

Service and port configuration are correctly aligned

The service name keycloak-http and port name http are properly configured and aligned between the ingress and service definitions:

  • Service exposes port 8080 named http
  • Ingress correctly references the service name keycloak-http and port name http
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check if the service name and port are correctly defined
rg -l 'name: keycloak-http' packages/system/keycloak/templates/
rg -l 'ports:.*name: http' packages/system/keycloak/templates/

Length of output: 226


Script:

#!/bin/bash
# Let's examine the service.yaml and ingress.yaml to verify the port configuration alignment
cat packages/system/keycloak/templates/service.yaml
echo "---"
cat packages/system/keycloak/templates/ingress.yaml

Length of output: 1339

packages/apps/versions_map (1)

84-85: LGTM! Version mapping follows best practices.

The version mapping changes properly handle the transition from 1.5.0 to 1.6.0 by:

  • Fixing version 1.5.0 to a specific commit
  • Creating version 1.6.0 as HEAD for the new SSO roles feature
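
Based on that summary, the resulting versions_map entries for the tenant package would look roughly like this (line format assumed from the change summary above):

tenant 1.5.0 48128743
tenant 1.6.0 HEAD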
packages/core/platform/bundles/distro-hosted.yaml (1)

128-132: Verify postgres-operator dependency availability

The Keycloak release depends on postgres-operator which is marked as optional. This could lead to deployment failures if postgres-operator is not installed.

✅ Verification successful

Dependency chain is properly configured

The verification shows that:

  • The postgres-operator is marked as optional: true in the distro-hosted bundle
  • All Keycloak configurations consistently depend on postgres-operator across all bundles
  • The dependency chain follows the same pattern in all bundle configurations

This confirms that the postgres-operator dependency is correctly configured and the deployment will work as expected, with the optional flag properly handling cases where postgres-operator is not installed.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check if postgres-operator is consistently required across related configurations

# Check for postgres-operator references in other bundle configurations
echo "Checking postgres-operator configuration in other bundles..."
rg -A 5 "postgres-operator" "packages/core/platform/bundles/"

# Check if Keycloak configurations consistently depend on postgres-operator
echo "Checking Keycloak dependencies in other bundles..."
rg -A 5 "name: keycloak" "packages/core/platform/bundles/"

Length of output: 4848

scripts/installer.sh (2)

6-6: Verify migration path for version increment

The VERSION increment from 7 to 8 will trigger migrations for existing installations. This is a critical change that requires careful verification.

Let's verify the migration path and required scripts:

#!/bin/bash
# Description: Verify migration scripts and dependencies
# Expected: Find migration script for version 7->8 and verify its contents

# Check if migration script exists
if ! fd "7$" "scripts/migrations"; then
  echo "Error: Missing migration script for version 7->8"
  exit 1
fi

# Check migration script contents
rg -A 10 "cozystack|root-host" "scripts/migrations/7"

# Verify no direct references to version 7 remain
rg "VERSION=7|version: 7" --type sh

Line range hint 1-93: Review SSO-related security implications

Given that this PR introduces SSO roles (based on PR title), and this is the main installer script:

  1. Verify that SSO configurations are properly secured during installation
  2. Ensure sensitive SSO credentials are handled securely

Let's check for potential security concerns:

✅ Verification successful

Based on the analysis of the provided code and configurations, I can now generate the final response:

SSO implementation follows security best practices

The Keycloak SSO implementation demonstrates secure configuration:

  • Runs as non-root user with restricted capabilities
  • Uses secure database credentials via Kubernetes secrets
  • Enforces TLS with proper certificate management
  • Implements secure session handling with cookies
  • Proper pod security context and resource limits defined

The installer script and platform configurations maintain security through:

  • Proper namespace isolation with pod security enforcement
  • RBAC controls for API access
  • Secure service account configurations
  • No sensitive credentials exposed in plain text
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify SSO-related security configurations
# Expected: Find secure handling of SSO credentials and proper RBAC setup

# Check for sensitive credential handling
rg -i "password|secret|key|token" --type yaml

# Check for proper RBAC configurations
rg -i "role|binding|sso|keycloak" --type yaml

# Verify proper namespace isolation
rg -i "namespace.*cozy|namespace.*sso|namespace.*keycloak" --type yaml

Length of output: 418


Script:

#!/bin/bash
# Let's check for SSO-related configurations and security measures

# Search for SSO-related files and configurations
fd -e yaml -e yml . packages/ | xargs rg -l -i "sso|auth|keycloak|oidc"

# Check for any secret management in platform setup
fd -e yaml -e yml . packages/core/platform/ | xargs cat

# Look for any auth-related configurations in the migrations
fd . scripts/migrations/ -x cat {}

# Check for any security-related make targets
fd Makefile . | xargs rg "auth|sso|security"

Length of output: 55099


Script:

#!/bin/bash
# Let's specifically check Keycloak and SSO-related configurations

# Check Keycloak chart configuration
fd -e yaml -e yml . packages/system/keycloak/ | xargs cat

# Check for any SSO/auth environment variables or secrets in the installer
rg -i "sso|auth|keycloak|oidc" scripts/installer.sh

# Look for any security-related configurations in the platform chart
rg -i "security|auth|sso|keycloak" packages/core/platform/templates/

# Check for any auth-related configurations in the cozystack API
fd -e yaml -e yml . packages/system/cozystack-api/ | xargs cat

Length of output: 16907

packages/core/platform/bundles/paas-hosted.yaml (1)

149-153: LGTM! The Keycloak release entry follows the established pattern.

The structure, indentation, and basic configuration align well with other platform components. The dependency on postgres-operator is correctly specified.

packages/core/platform/bundles/distro-full.yaml (1)

178-182: Verify if values configuration is needed

The Keycloak release might need default values configuration. Other releases in this bundle use valuesFiles or inline values for essential configurations.

Let's check if the chart has any required values:

✅ Verification successful

Let me gather more specific information about the Keycloak configuration in these bundles.


Let me check if there are any values defined in the chart itself or any documentation about required values.


Let me check the content of the Keycloak values file to understand if there are any required configurations.


No values configuration needed for Keycloak release

The chart has default values defined in packages/system/keycloak/values.yaml which include essential configurations for:

  • Container image and version
  • Ingress settings with session affinity
  • Resource limits and requests

The current configuration in the bundle is sufficient as the defaults are appropriate for most deployments. Other bundles also use the same minimal configuration without additional values.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check if the cozy-keycloak chart has values.yaml or similar files

# Look for values files in the chart
fd -t f "values.*\.yaml$" -d 3 | grep -i "keycloak"

# Check if other bundles configure values for keycloak
rg -l "chart:\s*cozy-keycloak" | xargs rg "values(Files)?:"

Length of output: 465


Script:

#!/bin/bash
# Check the actual values configuration for keycloak in other bundles
rg -A 10 "chart:\s*cozy-keycloak" packages/core/platform/bundles/

Length of output: 984


Script:

#!/bin/bash
# Look for chart documentation and values
fd -t f "README\.md|values\.yaml" | grep -i "keycloak"

# Also check for any values.yaml content mentioning required fields
rg -l "keycloak" | xargs rg -l "required:|mandatory"

Length of output: 229


Script:

#!/bin/bash
# Check the content of keycloak values file
cat packages/system/keycloak/values.yaml

Length of output: 441

packages/core/platform/bundles/paas-full.yaml (1)

253-257: Review dependencies for Keycloak deployment

While the postgres-operator dependency is correct for the database backend, Keycloak typically requires additional dependencies for proper SSO integration:

  • cert-manager: Required for TLS certificate management
  • external-dns (if present): Helpful for DNS management

Consider adding these dependencies:

  name: keycloak
  releaseName: keycloak
  chart: cozy-keycloak
  namespace: cozy-keycloak
- dependsOn: [postgres-operator]
+ dependsOn: [postgres-operator, cert-manager, external-dns]
packages/apps/tenant/templates/tenant.yaml (2)

91-120: LGTM! Well-structured view role with appropriate read-only permissions.

The role follows the principle of least privilege by limiting access to read-only operations.


192-223: Verify namespace isolation requirements for super-admin role.

The super-admin role grants unrestricted access to all resources within the namespace. Please verify if this aligns with your multi-tenancy security requirements.

✅ Verification successful

Let me analyze the namespace isolation for the super-admin role by checking for any potential cross-namespace access. Let me run one more verification script.


Let me try a different approach to verify the namespace isolation.


Let me try one final verification to check the resource permissions.


Namespace isolation is properly maintained in super-admin role configuration

The super-admin role configuration maintains proper namespace isolation:

  • All permissions are scoped to the tenant's namespace via namespace: {{ include "tenant.name" . }}
  • No cluster-wide resources or cross-namespace access is granted
  • The wildcard permissions (resources: ["*"]) are limited to specific API groups within the tenant's namespace
  • The role follows the principle of least privilege while providing necessary elevated access for tenant management
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for potential namespace isolation concerns

# Look for other RoleBindings that might grant cross-namespace access
rg -l "kind:\s+RoleBinding" | xargs rg "namespace:\s+{{ include \"tenant.name\""

# Check for ClusterRoles/ClusterRoleBindings that might affect namespace isolation
rg "kind:\s+(ClusterRole|ClusterRoleBinding)"

Length of output: 29616


Script:

#!/bin/bash
# Check for namespace-related configurations in tenant.yaml and related files

# Search for namespace-related configurations in tenant.yaml
rg -A 5 -B 5 "namespace:\s+{{" "packages/apps/tenant/templates/tenant.yaml"

# Look for any potential cross-namespace references
rg "namespace:\s+\$|namespace:\s+{{" "packages/apps/tenant/templates/"

Length of output: 399


Script:

#!/bin/bash
# Check for namespace-related configurations and potential cross-namespace access

# Search for namespace configurations in tenant.yaml
rg "namespace:" "packages/apps/tenant/templates/tenant.yaml"

# Look for any cluster-wide permissions or cross-namespace access
rg -B 3 -A 3 "resources.*[\"\'].*\*.*[\"\']" "packages/apps/tenant/templates/tenant.yaml"

Length of output: 992


Script:

#!/bin/bash
# Check for wildcard resources and permissions in the role

# Look for resources and verbs in the role definition
rg -B 2 -A 2 "resources:|verbs:" "packages/apps/tenant/templates/tenant.yaml"

# Look for any apiGroups definitions
rg -B 2 -A 2 "apiGroups:" "packages/apps/tenant/templates/tenant.yaml"

Length of output: 5608

Comment on lines 1 to 12
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: keycloak-db
spec:
  instances: 2
  storage:
    size: 20Gi

  inheritedMetadata:
    labels:
      policy.cozystack.io/allow-to-apiserver: "true"

🛠️ Refactor suggestion

Add essential configurations for production readiness.

Several important configurations are missing:

  1. Resource limits and requests
  2. Security context
  3. Monitoring configuration
  4. Network policies

Consider adding these configurations:

  monitoring:
    enablePodMonitor: true
  
  resources:
    requests:
      memory: "2Gi"
      cpu: "500m"
    limits:
      memory: "4Gi"
      cpu: "2000m"

  securityContext:
    runAsUser: 999
    fsGroup: 999

  postgresql:
    parameters:
      max_connections: "100"
      shared_buffers: "256MB"

Comment on lines 1 to 4
#!/bin/sh
# Migration 7 --> 8



🛠️ Refactor suggestion

Enhance script robustness and documentation

Consider improving the script header with better error handling and documentation:

-#!/bin/sh
+#!/usr/bin/env bash
+
+set -euo pipefail
+
 # Migration 7 --> 8
+#
+# Purpose: Updates cozystack ConfigMap with root host and sets version to 8
+# Prerequisites: 
+#   - kubectl access with permissions to modify ConfigMaps
+#   - tenant-root HelmRelease must exist
📝 Committable suggestion


Suggested change
#!/usr/bin/env bash

set -euo pipefail

# Migration 7 --> 8
#
# Purpose: Updates cozystack ConfigMap with root host and sets version to 8
# Prerequisites:
#   - kubectl access with permissions to modify ConfigMaps
#   - tenant-root HelmRelease must exist

Comment on lines 5 to 6
host=$(kubectl get hr tenant-root -n tenant-root -o yaml | grep 'host:' | awk '{print $2}')
kubectl patch configmap -n cozy-system cozystack --type merge -p "{\"data\":{\"root-host\":\"$host\"}}"

⚠️ Potential issue

Add error handling and value validation

The current implementation has several reliability and security concerns:

  1. No validation of the retrieved host value
  2. Missing error handling for kubectl commands
  3. Unsafe variable interpolation in JSON patch
  4. No backup before modification

Suggested improvements:

-host=$(kubectl get hr tenant-root -n tenant-root -o yaml | grep 'host:' | awk '{print $2}')
-kubectl patch configmap -n cozy-system cozystack --type merge -p "{\"data\":{\"root-host\":\"$host\"}}"
+# Backup existing ConfigMap
+kubectl get configmap -n cozy-system cozystack -o yaml > cozystack-backup.yaml || {
+    echo "Failed to backup ConfigMap"
+    exit 1
+}
+
+# Get and validate host
+host=$(kubectl get hr tenant-root -n tenant-root -o jsonpath='{.spec.values.host}')
+if [[ -z "$host" ]]; then
+    echo "Error: Failed to retrieve host from tenant-root"
+    exit 1
+fi
+
+# Validate host format
+if ! echo "$host" | grep -qE '^[a-zA-Z0-9][a-zA-Z0-9.-]*[a-zA-Z0-9]$'; then
+    echo "Error: Invalid host format: $host"
+    exit 1
+fi
+
+# Update ConfigMap with proper JSON escaping
+kubectl patch configmap -n cozy-system cozystack --type merge \
+    --patch "$(jq -n --arg host "$host" '{"data":{"root-host": $host}}')" || {
+    echo "Failed to update ConfigMap"
+    kubectl apply -f cozystack-backup.yaml
+    exit 1
+}
📝 Committable suggestion


Suggested change
# Backup existing ConfigMap
kubectl get configmap -n cozy-system cozystack -o yaml > cozystack-backup.yaml || {
    echo "Failed to backup ConfigMap"
    exit 1
}

# Get and validate host
host=$(kubectl get hr tenant-root -n tenant-root -o jsonpath='{.spec.values.host}')
if [[ -z "$host" ]]; then
    echo "Error: Failed to retrieve host from tenant-root"
    exit 1
fi

# Validate host format
if ! echo "$host" | grep -qE '^[a-zA-Z0-9][a-zA-Z0-9.-]*[a-zA-Z0-9]$'; then
    echo "Error: Invalid host format: $host"
    exit 1
fi

# Update ConfigMap with proper JSON escaping
kubectl patch configmap -n cozy-system cozystack --type merge \
    --patch "$(jq -n --arg host "$host" '{"data":{"root-host": $host}}')" || {
    echo "Failed to update ConfigMap"
    kubectl apply -f cozystack-backup.yaml
    exit 1
}

Comment on lines 3 to 8
ingress:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-expires: "86400"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "86400"
    nginx.ingress.kubernetes.io/session-cookie-name: "keycloak-cookie"

⚠️ Potential issue

Add security-related ingress annotations for SSO

While the session management configuration is good, additional security-related annotations are recommended for SSO:

Apply these additional annotations:

 ingress:
   annotations:
     nginx.ingress.kubernetes.io/affinity: "cookie"
     nginx.ingress.kubernetes.io/session-cookie-expires: "86400"
     nginx.ingress.kubernetes.io/session-cookie-max-age: "86400"
     nginx.ingress.kubernetes.io/session-cookie-name: "keycloak-cookie"
+    nginx.ingress.kubernetes.io/ssl-redirect: "true"
+    nginx.ingress.kubernetes.io/session-cookie-secure: "true"
+    nginx.ingress.kubernetes.io/session-cookie-samesite: "Lax"
📝 Committable suggestion


Suggested change
ingress:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-expires: "86400"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "86400"
    nginx.ingress.kubernetes.io/session-cookie-name: "keycloak-cookie"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/session-cookie-secure: "true"
    nginx.ingress.kubernetes.io/session-cookie-samesite: "Lax"

Comment on lines 1 to 7
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $host := index $cozyConfig.data "root-host" }}
{{- $issuerType := (index $cozyConfig.data "clusterissuer") | default "http01" }}

{{- $rootns := lookup "v1" "Namespace" "" "tenant-root" }}
{{- $ingress := index $rootns.metadata.annotations "namespace.cozystack.io/ingress" }}


🛠️ Refactor suggestion

Add error handling for configuration lookups

The template should handle cases where the ConfigMap or Namespace lookups fail, as this could lead to deployment failures.

Consider adding validation:

 {{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
+{{- if not $cozyConfig }}
+  {{- fail "ConfigMap 'cozystack' not found in 'cozy-system' namespace" }}
+{{- end }}
 {{- $host := index $cozyConfig.data "root-host" }}
+{{- if not $host }}
+  {{- fail "root-host must be configured in cozystack ConfigMap" }}
+{{- end }}
 {{- $issuerType := (index $cozyConfig.data "clusterissuer") | default "http01" }}

 {{- $rootns := lookup "v1" "Namespace" "" "tenant-root" }}
+{{- if not $rootns }}
+  {{- fail "Namespace 'tenant-root' not found" }}
+{{- end }}
 {{- $ingress := index $rootns.metadata.annotations "namespace.cozystack.io/ingress" }}
+{{- if not $ingress }}
+  {{- fail "Ingress class annotation not found in tenant-root namespace" }}
+{{- end }}
📝 Committable suggestion


Suggested change
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- if not $cozyConfig }}
  {{- fail "ConfigMap 'cozystack' not found in 'cozy-system' namespace" }}
{{- end }}
{{- $host := index $cozyConfig.data "root-host" }}
{{- if not $host }}
  {{- fail "root-host must be configured in cozystack ConfigMap" }}
{{- end }}
{{- $issuerType := (index $cozyConfig.data "clusterissuer") | default "http01" }}

{{- $rootns := lookup "v1" "Namespace" "" "tenant-root" }}
{{- if not $rootns }}
  {{- fail "Namespace 'tenant-root' not found" }}
{{- end }}
{{- $ingress := index $rootns.metadata.annotations "namespace.cozystack.io/ingress" }}
{{- if not $ingress }}
  {{- fail "Ingress class annotation not found in tenant-root namespace" }}
{{- end }}
🧰 Tools
🪛 yamllint

[error] 1-1: syntax error: expected the node content, but found '-'

(syntax)

Comment on lines 115 to 127
livenessProbe:
  httpGet:
    path: /
    port: http
  initialDelaySeconds: 120
  timeoutSeconds: 5
readinessProbe:
  httpGet:
    path: /realms/master
    port: http
  initialDelaySeconds: 60
  timeoutSeconds: 1

🛠️ Refactor suggestion

Optimize probe configuration

The current probe configuration might not be optimal for all scenarios.

Consider these improvements:

  1. Add a startup probe to handle long startup times better
  2. Reduce liveness probe initial delay and add startup probe
+ startupProbe:
+   httpGet:
+     path: /
+     port: http
+   failureThreshold: 60
+   periodSeconds: 10
livenessProbe:
  httpGet:
    path: /
    port: http
-  initialDelaySeconds: 120
+  initialDelaySeconds: 60
  timeoutSeconds: 5
+ periodSeconds: 10
readinessProbe:
  httpGet:
    path: /realms/master
    port: http
-  initialDelaySeconds: 60
+  initialDelaySeconds: 30
  timeoutSeconds: 1
+ periodSeconds: 10
📝 Committable suggestion


Suggested change
startupProbe:
  httpGet:
    path: /
    port: http
  failureThreshold: 60
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /
    port: http
  initialDelaySeconds: 60
  timeoutSeconds: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /realms/master
    port: http
  initialDelaySeconds: 30
  timeoutSeconds: 1
  periodSeconds: 10

Comment on lines 1 to 3
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $host := index $cozyConfig.data "root-host" }}
{{- $password := randAlphaNum 16 -}}

🛠️ Refactor suggestion

Add error handling for ConfigMap lookup

The direct lookup of the ConfigMap in a specific namespace might fail if the ConfigMap doesn't exist or the namespace differs in other environments.

Consider adding error handling:

{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
+{{- if not $cozyConfig }}
+  {{- fail "ConfigMap 'cozystack' not found in namespace 'cozy-system'" }}
+{{- end }}
{{- $host := index $cozyConfig.data "root-host" }}
+{{- if not $host }}
+  {{- fail "root-host key not found in ConfigMap" }}
+{{- end }}
📝 Committable suggestion


Suggested change
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- if not $cozyConfig }}
  {{- fail "ConfigMap 'cozystack' not found in namespace 'cozy-system'" }}
{{- end }}
{{- $host := index $cozyConfig.data "root-host" }}
{{- if not $host }}
  {{- fail "root-host key not found in ConfigMap" }}
{{- end }}
{{- $password := randAlphaNum 16 -}}
🧰 Tools
🪛 yamllint

[error] 1-1: syntax error: expected the node content, but found '-'

(syntax)

Comment on lines 178 to 183
- name: keycloak
  releaseName: keycloak
  chart: cozy-keycloak
  namespace: cozy-keycloak
  dependsOn: [postgres-operator]

🛠️ Refactor suggestion

Consider adding the optional flag and additional dependencies

The Keycloak release configuration might need some adjustments:

  1. Consider adding optional: true to align with other operator deployments in the bundle
  2. Since Keycloak typically requires TLS for secure communication, consider adding cert-manager to the dependsOn list

Apply this diff to enhance the configuration:

 - name: keycloak
   releaseName: keycloak
   chart: cozy-keycloak
   namespace: cozy-keycloak
-  dependsOn: [postgres-operator]
+  optional: true
+  dependsOn: [postgres-operator, cert-manager]
📝 Committable suggestion


Suggested change
- name: keycloak
  releaseName: keycloak
  chart: cozy-keycloak
  namespace: cozy-keycloak
  optional: true
  dependsOn: [postgres-operator, cert-manager]

Comment on lines 122 to 156
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ include "tenant.name" . }}-use
  namespace: {{ include "tenant.name" . }}
rules:
- apiGroups: ["apps.cozystack.io"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["helm.toolkit.fluxcd.io"]
  resources: ["helmreleases"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["subresources.kubevirt.io"]
  resources: ["virtualmachines/console", "virtualmachines/vnc", "virtualmachines"]
  verbs: ["create", "get", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ include "tenant.name" . }}-use
  namespace: {{ include "tenant.name" . }}
subjects:
- kind: Group
  name: {{ include "tenant.name" . }}-use
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: {{ include "tenant.name" . }}-use
  apiGroup: rbac.authorization.k8s.io

🛠️ Refactor suggestion

Consider adding pod exec permissions for VM console access.

The role includes VM console and VNC access, but might need additional pod exec permissions for proper console functionality. Consider adding:

  - apiGroups: [""]
-   resources: ["pods/log"]
+   resources: ["pods/log", "pods/exec"]
    verbs: ["get", "list", "watch"]
📝 Committable suggestion


Suggested change
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ include "tenant.name" . }}-use
  namespace: {{ include "tenant.name" . }}
rules:
- apiGroups: ["apps.cozystack.io"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["helm.toolkit.fluxcd.io"]
  resources: ["helmreleases"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods/log", "pods/exec"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["subresources.kubevirt.io"]
  resources: ["virtualmachines/console", "virtualmachines/vnc", "virtualmachines"]
  verbs: ["create", "get", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ include "tenant.name" . }}-use
  namespace: {{ include "tenant.name" . }}
subjects:
- kind: Group
  name: {{ include "tenant.name" . }}-use
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: {{ include "tenant.name" . }}-use
  apiGroup: rbac.authorization.k8s.io

Comment on lines 155 to 193
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ include "tenant.name" . }}-admin
  namespace: {{ include "tenant.name" . }}
rules:
- apiGroups: ["apps.cozystack.io"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["helm.toolkit.fluxcd.io"]
  resources: ["helmreleases"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["pods/log", "pods"]
  verbs: ["get", "list", "watch", "delete"]
- apiGroups: ["subresources.kubevirt.io"]
  resources: ["virtualmachines/console", "virtualmachines/vnc"]
  verbs: ["create", "get"]
- apiGroups: ["apps.cozystack.io"]
  resources: ["*"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  resourceNames: ["tenants", "monitorings", "etcds", "ingresses"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ include "tenant.name" . }}-admin
  namespace: {{ include "tenant.name" . }}
subjects:
- kind: Group
  name: {{ include "tenant.name" . }}-admin
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: {{ include "tenant.name" . }}-admin
  apiGroup: rbac.authorization.k8s.io

⚠️ Potential issue

Consolidate duplicate rules for apps.cozystack.io resources.

There are two overlapping rules for the apps.cozystack.io API group. The first rule (lines 161-163) appears to grant broader access than intended, while the second rule (lines 173-176) properly restricts access to specific resources.

Consolidate these rules to avoid confusion and potential security issues:

  rules:
-  - apiGroups: ["apps.cozystack.io"]
-    resources: ["*"]
-    verbs: ["get", "list", "watch"]
   # ... other rules ...
   - apiGroups: ["apps.cozystack.io"]
     resources: ["*"]
     verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
     resourceNames: ["tenants", "monitorings", "etcds", "ingresses"]
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ include "tenant.name" . }}-admin
  namespace: {{ include "tenant.name" . }}
rules:
- apiGroups: ["apps.cozystack.io"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["helm.toolkit.fluxcd.io"]
  resources: ["helmreleases"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["pods/log", "pods"]
  verbs: ["get", "list", "watch", "delete"]
- apiGroups: ["subresources.kubevirt.io"]
  resources: ["virtualmachines/console", "virtualmachines/vnc"]
  verbs: ["create", "get"]
- apiGroups: ["apps.cozystack.io"]
  resources: ["*"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  resourceNames: ["tenants", "monitorings", "etcds", "ingresses"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ include "tenant.name" . }}-admin
  namespace: {{ include "tenant.name" . }}
subjects:
- kind: Group
  name: {{ include "tenant.name" . }}-admin
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: {{ include "tenant.name" . }}-admin
  apiGroup: rbac.authorization.k8s.io

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ include "tenant.name" . }}-admin
  namespace: {{ include "tenant.name" . }}
rules:
- apiGroups: ["helm.toolkit.fluxcd.io"]
  resources: ["helmreleases"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["pods/log", "pods"]
  verbs: ["get", "list", "watch", "delete"]
- apiGroups: ["subresources.kubevirt.io"]
  resources: ["virtualmachines/console", "virtualmachines/vnc"]
  verbs: ["create", "get"]
- apiGroups: ["apps.cozystack.io"]
  resources: ["*"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  resourceNames: ["tenants", "monitorings", "etcds", "ingresses"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ include "tenant.name" . }}-admin
  namespace: {{ include "tenant.name" . }}
subjects:
- kind: Group
  name: {{ include "tenant.name" . }}-admin
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: {{ include "tenant.name" . }}-admin
  apiGroup: rbac.authorization.k8s.io

  releaseName: keycloak
  chart: cozy-keycloak
  namespace: cozy-keycloak
  dependsOn: [postgres-operator]
Collaborator

Missing cert-manager

Contributor Author

postgres-operator already has dependsOn: [cilium, cert-manager], so cert-manager is included transitively.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 06bcdb3 and 8da6ab1.

📒 Files selected for processing (3)
  • packages/apps/tenant/Chart.yaml (1 hunks)
  • packages/apps/tenant/templates/tenant.yaml (1 hunks)
  • packages/apps/versions_map (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • packages/apps/tenant/Chart.yaml
  • packages/apps/versions_map
🔇 Additional comments (1)
packages/apps/tenant/templates/tenant.yaml (1)

91-230: Verify no privilege escalation paths exist

Let's verify there are no unintended permissions that could allow privilege escalation between roles.

Comment on lines +167 to +169
- apiGroups: [""]
  resources: ["pods/log", "pods"]
  verbs: ["get", "list", "watch", "delete"]

🛠️ Refactor suggestion

Standardize permissions between related resources

The verbs for pods and virtualmachines are inconsistent. If admins can delete pods, they should likely have similar control over VMs.

  - apiGroups: [""]
    resources: ["pods/log", "pods"]
    verbs: ["get", "list", "watch", "delete"]
  - apiGroups: ["kubevirt.io"]
    resources: ["virtualmachines"]
-   verbs: ["get", "list"]
+   verbs: ["get", "list", "watch", "delete"]

Also applies to: 170-172

Comment on lines +204 to +206
- apiGroups: [""]
  resources: ["pods/log", "pods"]
  verbs: ["get", "list", "watch", "delete"]

🛠️ Refactor suggestion

Standardize permissions between related resources

Same inconsistency as admin role between pods and virtualmachines permissions.

  - apiGroups: [""]
    resources: ["pods/log", "pods"]
    verbs: ["get", "list", "watch", "delete"]
  - apiGroups: ["kubevirt.io"]
    resources: ["virtualmachines"]
-   verbs: ["get", "list"]
+   verbs: ["get", "list", "watch", "delete"]

Also applies to: 207-209

Comment on lines +134 to +136
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]

⚠️ Potential issue

Add pod exec permissions for VM console access

The role includes VM console and VNC access, but needs pod exec permissions for proper console functionality.

  - apiGroups: [""]
-   resources: ["pods", "pods/log"]
+   resources: ["pods", "pods/log", "pods/exec"]
    verbs: ["get", "list", "watch"]

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +98 to +100
- apiGroups: ["apps.cozystack.io"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]

⚠️ Potential issue

Restrict wildcard access to specific resources

Using resources: ["*"] for apps.cozystack.io is overly permissive, even for read-only access. Consider explicitly listing the allowed resources as done in the admin role.

  - apiGroups: ["apps.cozystack.io"]
-   resources: ["*"]
+   resources: ["buckets", "clickhouses", "ferretdb", "foos", "httpcaches", "kafkas", "kuberneteses", "mysqls", "natses", "postgreses", "rabbitmqs", "redises", "seaweedfses", "tcpbalancers", "virtualmachines", "vmdisks", "vminstances"]
    verbs: ["get", "list", "watch"]
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
- apiGroups: ["apps.cozystack.io"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps.cozystack.io"]
  resources: ["buckets", "clickhouses", "ferretdb", "foos", "httpcaches", "kafkas", "kuberneteses", "mysqls", "natses", "postgreses", "rabbitmqs", "redises", "seaweedfses", "tcpbalancers", "virtualmachines", "vmdisks", "vminstances"]
  verbs: ["get", "list", "watch"]

Comment on lines +128 to +130
- apiGroups: ["apps.cozystack.io"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]

⚠️ Potential issue

Restrict wildcard access to specific resources

Same issue as view role - wildcard access should be replaced with explicit resource list.

  - apiGroups: ["apps.cozystack.io"]
-   resources: ["*"]
+   resources: ["buckets", "clickhouses", "ferretdb", "foos", "httpcaches", "kafkas", "kuberneteses", "mysqls", "natses", "postgreses", "rabbitmqs", "redises", "seaweedfses", "tcpbalancers", "virtualmachines", "vmdisks", "vminstances"]
    verbs: ["get", "list", "watch"]
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
- apiGroups: ["apps.cozystack.io"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps.cozystack.io"]
  resources: ["buckets", "clickhouses", "ferretdb", "foos", "httpcaches", "kafkas", "kuberneteses", "mysqls", "natses", "postgreses", "rabbitmqs", "redises", "seaweedfses", "tcpbalancers", "virtualmachines", "vmdisks", "vminstances"]
  verbs: ["get", "list", "watch"]

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (1)
packages/apps/tenant/templates/tenant.yaml (1)

91-230: Well-structured RBAC hierarchy with room for improvement

The role hierarchy follows good security practices with clear privilege separation. However, there are some areas for improvement:

  1. Standardize the approach to resource access:
    • View/Use roles: Currently using wildcards
    • Admin role: Using explicit resource lists
    • Super-admin role: Using wildcards
  2. Ensure consistent permission patterns across similar resources (pods vs VMs)

Consider creating a base role with common permissions that other roles can extend, reducing duplication and ensuring consistency.
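
A minimal sketch of that idea, assuming a new named template (the helper name tenant.commonReadRules is hypothetical and not part of the current chart): the shared read-only rules live in _helpers.tpl and each role includes them before appending its own rules.

{{/* _helpers.tpl — hypothetical helper holding rules shared by all tenant roles */}}
{{- define "tenant.commonReadRules" -}}
- apiGroups: ["helm.toolkit.fluxcd.io"]
  resources: ["helmreleases"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
{{- end }}

# tenant.yaml — each role starts from the shared rules and adds only its extras
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ include "tenant.name" . }}-view
  namespace: {{ include "tenant.name" . }}
rules:
{{ include "tenant.commonReadRules" . }}
- apiGroups: ["apps.cozystack.io"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]

This keeps the view/use/admin roles in sync whenever a shared permission changes.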

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 8da6ab1 and efa3005.

📒 Files selected for processing (3)
  • packages/apps/tenant/Chart.yaml (1 hunks)
  • packages/apps/tenant/templates/tenant.yaml (1 hunks)
  • packages/apps/versions_map (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • packages/apps/tenant/Chart.yaml
  • packages/apps/versions_map
🔇 Additional comments (6)
packages/apps/tenant/templates/tenant.yaml (6)

98-100: Restrict wildcard access to specific resources

The wildcard access to all apps.cozystack.io resources is overly permissive. Consider explicitly listing allowed resources as done in the admin role.


128-130: Restrict wildcard access to specific resources

The wildcard access to all apps.cozystack.io resources is overly permissive. Consider explicitly listing allowed resources as done in the admin role.


134-142: Add pod/exec permissions for VM console access

The role includes VM console and VNC access but requires pod/exec permissions for proper console functionality.


170-175: Standardize permissions between related resources

The permissions for VMs are more restricted than pods, which may limit administrative capabilities. Consider granting similar control levels.


207-212: Standardize permissions between related resources

The permissions for VMs are more restricted than pods, which may limit super-admin capabilities. Consider granting similar control levels.


176-178: Verify completeness of resource list

The explicit list of resources is a good practice, but we should verify it covers all necessary resources.

@kvaps kvaps marked this pull request as draft November 25, 2024 15:00
@klinch0 klinch0 marked this pull request as ready for review November 25, 2024 17:25
@dosubot dosubot bot added the bug Something isn't working label Nov 25, 2024
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (2)
packages/apps/tenant/templates/keycloakgroups.yaml (2)

1-45: Consider documenting the group hierarchy and permissions.

While the group structure is clear, it would be beneficial to document:

  1. The hierarchy between these groups (if any)
  2. The specific permissions/access levels each group grants
  3. The relationship with the corresponding RBAC roles

Consider adding these as YAML comments at the top of the file:

+# Keycloak realm groups for tenant access control:
+# - view: Read-only access to tenant resources
+# - use: Basic operational access to tenant resources
+# - admin: Full administrative access to tenant resources
+# - super-admin: Elevated privileges for tenant management
+
 apiVersion: v1.edp.epam.com/v1
🧰 Tools
🪛 yamllint (1.29.0-1)

[error] 4-4: syntax error: expected , but found ''

(syntax)


40-42: Review super-admin group security implications.

The super-admin group typically has the highest privileges. Ensure that:

  1. The group creation is restricted
  2. Group membership is tightly controlled
  3. Proper audit logging is in place for this group's activities

Consider implementing additional security controls for the super-admin group:

  1. Add group attributes for enhanced security policies
  2. Implement strict membership validation
  3. Configure detailed audit logging
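
A small sketch of what the first control could look like on the group manifest; the label and annotation keys are illustrative, not existing conventions in this chart:

metadata:
  name: {{ include "tenant.name" . }}-super-admin
  namespace: {{ include "tenant.name" . }}
  labels:
    cozystack.io/privileged-group: "true"   # illustrative selector for audit/alerting queries
  annotations:
    cozystack.io/owner: "platform-team"     # illustrative: records who approves membership in this group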
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between efa3005 and 3cd803a.

📒 Files selected for processing (1)
  • packages/apps/tenant/templates/keycloakgroups.yaml (1 hunks)
🧰 Additional context used
🪛 yamllint (1.29.0-1)
packages/apps/tenant/templates/keycloakgroups.yaml

[error] 4-4: syntax error: expected , but found ''

(syntax)

🔇 Additional comments (2)
packages/apps/tenant/templates/keycloakgroups.yaml (2)

1-45: LGTM! The Keycloak realm groups are well-structured.

The implementation follows a consistent pattern for all four groups (view, use, admin, super-admin), properly using Helm templating for dynamic naming and correctly referencing the Keycloak realm.

Let's verify the corresponding RBAC roles in tenant.yaml:

✅ Verification successful

✓ Keycloak groups align perfectly with RBAC roles

The Keycloak realm groups in keycloakgroups.yaml have corresponding RBAC roles defined in tenant.yaml. The naming convention is consistent across both files, with matching roles for all four groups: view, use, admin, and super-admin. Each group has its corresponding Role and RoleBinding definitions with the same naming pattern {{ include "tenant.name" . }}-<group>.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify that the corresponding RBAC roles exist in tenant.yaml
# and that the naming convention matches these Keycloak groups

# Search for role definitions in tenant.yaml that match our group names
rg "kind:\s+Role.*?name:.*?(view|use|admin|super-admin)" -A 5 packages/apps/tenant/templates/tenant.yaml

Length of output: 106


Script:

#!/bin/bash
# Let's try a broader search to find any role definitions in tenant.yaml
rg "kind:\s+Role" -A 5 packages/apps/tenant/templates/tenant.yaml

# Also search for any files that might contain role definitions
fd "tenant.ya?ml" --type f

# Let's also check if there are any other YAML files in the tenant templates
fd ".ya?ml$" packages/apps/tenant/templates/

Length of output: 3055

🧰 Tools
🪛 yamllint (1.29.0-1)

[error] 4-4: syntax error: expected , but found ''

(syntax)


7-9: Verify the existence and configuration of the referenced Keycloak realm.

All groups reference keycloakrealm-cozy. We should verify its existence and configuration to ensure proper integration.

Also applies to: 19-21, 31-33, 43-45

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Outside diff range and nitpick comments (4)
packages/core/platform/bundles/distro-hosted.yaml (1)

142-147: LGTM! Consider adding configuration documentation

The new keycloak-configure release is well-structured and properly integrated into the dependency chain. The optional flag and namespace alignment with other Keycloak components is correct.

Consider adding a comment block above this release to document its purpose in the SSO roles feature, similar to other configuration files in the codebase.
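
For example, a hedged sketch of the documented entry (the comment text is a suggestion; the other fields mirror the release as added in this PR):

# Creates the Cozy Keycloak realm and OIDC client that back the per-tenant
# SSO groups (view/use/admin/super-admin) and the matching RBAC roles.
- name: keycloak-configure
  releaseName: keycloak-configure
  chart: cozy-keycloak-configure
  namespace: cozy-keycloak
  optional: true
  dependsOn: [keycloak-operator]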

packages/core/platform/bundles/distro-full.yaml (1)

191-197: Consider additional security dependencies

The keycloak-configure release might need access to secure communication channels. Consider adding dependencies on security-related components:

  1. cert-manager for TLS certificate management
  2. external-secrets-operator (if using external secrets for configuration)
  name: keycloak-configure
  releaseName: keycloak-configure
  chart: cozy-keycloak-configure
  namespace: cozy-keycloak
  optional: true
- dependsOn: [keycloak-operator]
+ dependsOn: [keycloak-operator, cert-manager]
packages/apps/tenant/templates/tenant.yaml (2)

213-215: Document super-admin privileges

While using wildcard resources for super-admin is common, please consider:

  1. Adding comments explaining the broad access requirement
  2. Documenting the security implications in the role documentation
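
A sketch of such an in-file note; the verb list is assumed to mirror the admin role's write rule and should be checked against the actual super-admin definition:

# Wildcard access is intentional here: super-admin must be able to manage every
# application kind exposed under apps.cozystack.io, including kinds added in
# future chart versions, so the resource list is deliberately not enumerated.
- apiGroups: ["apps.cozystack.io"]
  resources: ["*"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]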

91-230: LGTM: Well-structured RBAC hierarchy

The role hierarchy is well-designed with:

  • Clear progression of privileges from view → use → admin → super-admin
  • Consistent naming and templating
  • Good separation of concerns

Consider documenting the role hierarchy and their intended use cases in the README.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 3cd803a and f02bbf0.

📒 Files selected for processing (11)
  • packages/apps/tenant/Chart.yaml (1 hunks)
  • packages/apps/tenant/templates/keycloakgroups.yaml (1 hunks)
  • packages/apps/tenant/templates/tenant.yaml (1 hunks)
  • packages/apps/versions_map (1 hunks)
  • packages/core/platform/bundles/distro-full.yaml (1 hunks)
  • packages/core/platform/bundles/distro-hosted.yaml (1 hunks)
  • packages/core/platform/bundles/paas-full.yaml (1 hunks)
  • packages/core/platform/bundles/paas-hosted.yaml (1 hunks)
  • packages/system/keycloak-configure/Chart.yaml (1 hunks)
  • packages/system/keycloak-configure/templates/configure-kk.yaml (3 hunks)
  • packages/system/keycloak-operator/values.yaml (1 hunks)
✅ Files skipped from review due to trivial changes (2)
  • packages/system/keycloak-configure/Chart.yaml
  • packages/system/keycloak-operator/values.yaml
🚧 Files skipped from review as they are similar to previous changes (2)
  • packages/apps/tenant/Chart.yaml
  • packages/apps/versions_map
🧰 Additional context used
🪛 yamllint (1.29.0-1)
packages/apps/tenant/templates/keycloakgroups.yaml

[error] 4-4: syntax error: expected , but found ''

(syntax)

packages/system/keycloak-configure/templates/configure-kk.yaml

[warning] 23-23: wrong indentation: expected 2 but found 1

(indentation)

🔇 Additional comments (14)
packages/apps/tenant/templates/keycloakgroups.yaml (4)

1-11: Verify ClusterKeycloakRealm existence and configuration

The configuration consistently references keycloakrealm-cozy as a ClusterKeycloakRealm. This is a critical integration point for SSO functionality.

Also applies to: 22-23, 35-36, 48-49

✅ Verification successful

ClusterKeycloakRealm reference is properly configured and integrated

The referenced keycloakrealm-cozy exists and is well-defined in packages/system/keycloak-configure/templates/configure-kk.yaml with proper configuration including:

  • Realm name set to "cozy"
  • OpenID Connect protocol configuration
  • Group membership support
  • Authorization services enabled
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the existence and configuration of the referenced ClusterKeycloakRealm

# Search for ClusterKeycloakRealm definition
rg -l --type yaml "kind:\s*ClusterKeycloakRealm" 

# Search for specific realm configuration
rg -A 5 "name:\s*keycloakrealm-cozy"

Length of output: 3403

🧰 Tools
🪛 yamllint (1.29.0-1)

[error] 4-4: syntax error: expected , but found ''

(syntax)


1-2: Verify API version compatibility

The configuration uses v1.edp.epam.com/v1 API version for KeycloakRealmGroup resources. Ensure this version is supported by the installed Keycloak operator.

Also applies to: 14-15, 27-28, 40-41

✅ Verification successful

Based on the results, I can generate the final response as we have sufficient evidence. The search results show that:

  1. The v1.edp.epam.com/v1 API version is extensively used in the Keycloak operator's CRD examples, particularly for KeycloakRealmGroup resources.
  2. The operator's Chart.yaml explicitly lists this API version in its supported versions.
  3. There are consistent examples of this API version being used across multiple Keycloak-related resources.

API version is correctly specified

The v1.edp.epam.com/v1 API version used in the KeycloakRealmGroup resources is fully supported by the Keycloak operator, as evidenced by the operator's CRD examples and Chart specifications.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify API version and CRD existence

# Search for CRD definition
fd -t f "keycloak" -x rg -l "v1\.edp\.epam\.com/v1.*KeycloakRealmGroup"

# Check for version references in operator deployment
rg "v1\.edp\.epam\.com/v1" --type yaml

Length of output: 5814


4-5: Verify tenant name templating implementation

The namespace isolation is correctly implemented using the tenant.name template helper. Ensure this helper function is properly defined and tested.

Also applies to: 17-18, 30-31, 43-44

✅ Verification successful

Let me check the helper template definition to complete the verification.


Tenant name templating is properly implemented with robust validation

The tenant.name helper is well-defined with proper validation and formatting:

  • Validates release name format (must start with "tenant-" and contain exactly one dash)
  • Validates release namespace (must start with "tenant-")
  • Handles special case for "tenant-root" namespace
  • Consistently formats tenant names across all resources
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify tenant name templating implementation

# Search for tenant.name template definition
fd -t f "_helpers.tpl" -x rg -l "tenant\.name"

# Check usage patterns
rg "include \"tenant.name\"" --type yaml

Length of output: 7775


Script:

#!/bin/bash
# Get the tenant.name helper implementation
cat ./packages/apps/tenant/templates/_helpers.tpl

Length of output: 697

🧰 Tools
🪛 yamllint (1.29.0-1)

[error] 4-4: syntax error: expected , but found ''

(syntax)


4-8: Document group purposes and verify role alignment

The groups follow a clear hierarchy (view -> use -> admin -> super-admin), but their specific permissions and use cases should be documented. Additionally, ensure these groups align with corresponding RBAC roles.

Consider adding comments to document:

  • Purpose and scope of each group
  • Expected permissions
  • Relationship with RBAC roles
 metadata:
   name: {{ include "tenant.name" . }}-view
   namespace: {{ include "tenant.name" . }}
+  annotations:
+    description: "Group for users with read-only access to tenant resources"

Also applies to: 17-21, 30-34, 43-47

✅ Verification successful

Groups are properly aligned with RBAC roles but could benefit from documentation

The verification shows that the Keycloak groups are correctly mapped to corresponding RBAC roles with clear permissions:

  • view: read-only access to tenant resources (get, list, watch)
  • use: same as view permissions
  • admin: additional permissions for managing HelmReleases and accessing pod logs
  • super-admin: highest level of access with full management capabilities

The suggestion to add documentation via annotations is valid and would improve maintainability. The original review comment's hierarchy observation and documentation suggestion are accurate.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for corresponding RBAC roles

# Search for related RBAC role definitions
rg -A 5 "kind:\s*(Role|ClusterRole).*{{ include \"tenant.name\" . }}-(view|use|admin|super-admin)"

Length of output: 296


Script:

#!/bin/bash
# Search for role definitions with escaped template syntax
rg -A 5 'kind:\s*(Role|ClusterRole).*\{\{ include "tenant\.name" \. \}}-(view|use|admin|super-admin)'

# Also search for any role bindings or cluster role bindings
rg -A 5 'kind:\s*(RoleBinding|ClusterRoleBinding).*\{\{ include "tenant\.name" \. \}}-(view|use|admin|super-admin)'

# Look for any RBAC-related files
fd 'role|rbac' --type f

# Check for any documentation about roles and groups
rg -i '(group|role|permission|access)' README.md docs/ 2>/dev/null

Length of output: 13856


Script:

#!/bin/bash
# Let's check the keycloak templates directory for any role or permission definitions
rg -A 10 'kind:' packages/apps/tenant/templates/

# Check for any documentation in the tenant package
fd README.md -p packages/apps/tenant/

# Look for any keycloak-related configuration files that might define permissions
fd -e yaml -p packages/apps/tenant/ | xargs rg -l 'keycloak|role|permission|group' 2>/dev/null

Length of output: 32207

🧰 Tools
🪛 yamllint (1.29.0-1)

[error] 4-4: syntax error: expected , but found ''

(syntax)

packages/system/keycloak-configure/templates/configure-kk.yaml (4)

6-13: Verify compatibility with v1alpha1 API version

The migration to v1alpha1 API version and ClusterKeycloak kind indicates usage of an alpha API, which may be subject to breaking changes. Ensure that:

  1. The Keycloak operator version supports these changes
  2. There's a migration plan if the API changes in future releases
✅ Verification successful

API version v1.edp.epam.com/v1alpha1 is supported for ClusterKeycloak

The CRD definitions and examples in the repository confirm that the v1.edp.epam.com/v1alpha1 API version is fully supported by the Keycloak operator version 1.23.0. The migration from Keycloak to ClusterKeycloak kind is an intentional change, with proper CRD definitions and example configurations available in:

  • packages/system/keycloak-operator/charts/keycloak-operator/crds/v1.edp.epam.com_clusterkeycloaks.yaml
  • packages/system/keycloak-operator/charts/keycloak-operator/_crd_examples/clusterkeycloak.yaml
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check if the Keycloak operator CRDs support the new API version
kubectl get crd -o json | jq -r '.items[] | select(.spec.group == "v1.edp.epam.com") | .spec.versions[].name'

Length of output: 209


Script:

#!/bin/bash
# Let's search for any operator-related configuration or CRD definitions in the repository
fd -e yaml -e yml | xargs grep -l "v1.edp.epam.com" || true

# Also search for any documentation or references about the API version
rg -l "v1.edp.epam.com" || true

# Look for any operator version or compatibility information
fd "Chart.yaml" | xargs grep -A5 -B5 "keycloak" || true

Length of output: 21387


35-36: Verify client scope reference compatibility

The change of realmRef.kind to ClusterKeycloakRealm needs verification to ensure the KeycloakClientScope controller supports referencing cluster-scoped realms.

✅ Verification successful

Change to ClusterKeycloakRealm is supported

The CRD definition explicitly allows both KeycloakRealm and ClusterKeycloakRealm in the realmRef.kind enum field, confirming that the KeycloakClientScope controller fully supports referencing cluster-scoped realms.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check if KeycloakClientScope CRD supports referencing ClusterKeycloakRealm
kubectl get crd -o json | jq -r '.items[] | select(.spec.names.kind == "KeycloakClientScope") | .spec.versions[].schema.openAPIV3Schema.properties.spec.properties.realmRef'

Length of output: 272


Script:

#!/bin/bash
# Let's search for KeycloakClientScope CRD definition in the codebase
fd ".*\.yaml$" | xargs rg -l "kind:\s*KeycloakClientScope" -A 10

# Also search for any references to ClusterKeycloakRealm in relation to KeycloakClientScope
rg "KeycloakClientScope.*ClusterKeycloakRealm|ClusterKeycloakRealm.*KeycloakClientScope" -A 5

Length of output: 499


Script:

#!/bin/bash
# Let's examine the CRD definition and example to understand the realmRef structure
cat packages/system/keycloak-operator/charts/keycloak-operator/crds/v1.edp.epam.com_keycloakclientscopes.yaml | rg -A 10 "realmRef:"

# Also check the example usage
cat packages/system/keycloak-operator/charts/keycloak-operator/_crd_examples/keycloakclientscope.yaml

Length of output: 1273


70-71: Verify OAuth2 configuration and realm reference

The client configuration includes important security features (service account, authorization services). Please verify:

  1. The change to ClusterKeycloakRealm reference is supported
  2. The OAuth2 callback URL configuration remains valid with the new cluster-scoped setup
✅ Verification successful

OAuth2 configuration and realm reference changes are valid and secure

The verification shows that:

  1. The ClusterKeycloakRealm reference is fully supported - the CRD definitions explicitly include it as a valid kind in the realmRef field for KeycloakClient and other resources
  2. The OAuth2 callback configuration is secure and compatible with the cluster setup:
    • The client uses https:// protocol for callback URL
    • The redirect URIs are properly scoped to specific paths
    • The configuration is consistent with other cluster-scoped Keycloak resources in the codebase
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check if KeycloakClient CRD supports the new realm reference
kubectl get crd -o json | jq -r '.items[] | select(.spec.names.kind == "KeycloakClient") | .spec.versions[].schema.openAPIV3Schema.properties.spec.properties.realmRef'

# Verify no other clients are using potentially conflicting redirect URIs
kubectl get keycloakclients -o json | jq -r '.items[] | select(.spec.redirectUris != null) | {name: .metadata.name, uris: .spec.redirectUris}'

Length of output: 510


Script:

#!/bin/bash
# Let's analyze the repository content to find CRD definitions and client configurations
# Search for CRD definitions related to KeycloakClient
fd "keycloakclient.*\.yaml" --type f --exec cat {}

# Search for any realm reference configurations
rg -A 5 -B 5 "realmRef|ClusterKeycloakRealm" --type yaml

# Search for OAuth callback/redirect URI configurations
rg -A 5 -B 5 "redirectUris|callbackURL" --type yaml

Length of output: 96031


17-24: ⚠️ Potential issue

Fix indentation and verify realm configuration

  1. The realmName field has incorrect indentation (should be 2 spaces)
  2. The change from keycloakRef to clusterKeycloakRef needs verification to ensure proper resource linking

Fix the indentation:

 spec:
-realmName: cozy
-clusterKeycloakRef: keycloak-cozy
+  realmName: cozy
+  clusterKeycloakRef: keycloak-cozy
✅ Verification successful

Fix indentation and update reference field

  1. The indentation issue needs to be fixed as specified
  2. The change from keycloakRef to clusterKeycloakRef is correct and aligns with the CRD definition:
    • ClusterKeycloakRealm CRD requires clusterKeycloakRef field
    • The field is mandatory as shown in the required section of the CRD schema

Fix the indentation:

 spec:
-realmName: cozy
-clusterKeycloakRef: keycloak-cozy
+  realmName: cozy
+  clusterKeycloakRef: keycloak-cozy
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Verify the ClusterKeycloakRealm CRD definition includes the new reference field
kubectl get crd -o json | jq -r '.items[] | select(.spec.names.kind == "ClusterKeycloakRealm") | .spec.versions[].schema.openAPIV3Schema.properties.spec.properties'

Length of output: 264


Script:

#!/bin/bash
# Search for ClusterKeycloakRealm CRD definition files
fd -e yaml -e yml | xargs rg -l "kind:\s*ClusterKeycloakRealm"

# Search for any other realm resources to compare the structure
fd -e yaml -e yml | xargs rg -l "kind:\s*(Cluster)?KeycloakRealm"

# Look for any documentation or examples
fd README.md | xargs rg -l "KeycloakRealm|ClusterKeycloakRealm"

Length of output: 2883


Script:

#!/bin/bash
# Check the CRD definition for ClusterKeycloakRealm
cat packages/system/keycloak-operator/charts/keycloak-operator/crds/v1.edp.epam.com_clusterkeycloakrealms.yaml

# Check example usage
cat packages/system/keycloak-operator/charts/keycloak-operator/_crd_examples/clusterkeycloakrealm.yaml

# Compare with old KeycloakRealm CRD
cat packages/system/keycloak-operator/charts/keycloak-operator/crds/v1.edp.epam.com_keycloakrealms.yaml

Length of output: 23935

🧰 Tools
🪛 yamllint (1.29.0-1)

[warning] 23-23: wrong indentation: expected 2 but found 1

(indentation)

packages/core/platform/bundles/distro-hosted.yaml (1)

142-147: Verify dependency chain completeness

While the immediate dependency on keycloak-operator is correct, consider whether cert-manager should be explicitly included in the dependency chain since it's required by upstream components (postgres-operator).

✅ Verification successful

Dependency chain is correctly configured

The dependency chain is properly set up and doesn't require explicit cert-manager dependency:

  • keycloak-configure depends on keycloak-operator
  • keycloak-operator depends on keycloak
  • postgres-operator (used by keycloak) already depends on cert-manager
  • All components maintain consistent optional flags

The transitive dependencies ensure cert-manager is available when needed, making an explicit dependency unnecessary.
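
Read together, the relevant bundle entries form the following chain (only name and dependsOn shown; other fields omitted for brevity):

- name: postgres-operator
  dependsOn: [cilium, cert-manager]
- name: keycloak
  dependsOn: [postgres-operator]
- name: keycloak-operator
  dependsOn: [keycloak]
- name: keycloak-configure
  dependsOn: [keycloak-operator]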

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check the dependency chain in other similar configurations
rg -A 5 "name: keycloak-configure" packages/core/platform/bundles/

Length of output: 1785


Script:

#!/bin/bash
# Check the dependency chain for keycloak-operator and postgres-operator
rg "name: (keycloak-operator|postgres-operator)" -A 5 packages/core/platform/bundles/distro-hosted.yaml

Length of output: 439

packages/core/platform/bundles/paas-hosted.yaml (2)

161-165: LGTM! The new release configuration follows best practices.

The keycloak-configure release is well-structured with:

  • Proper dependency chain through keycloak-operator
  • Consistent namespace with related components
  • Adherence to naming conventions

161-165: Verify chart existence and version compatibility.

Let's ensure the referenced chart exists and is compatible with the dependency chain.

packages/core/platform/bundles/distro-full.yaml (1)

191-197: Verify the dependency chain for the Keycloak components

The new keycloak-configure release introduces a three-level dependency chain:

postgres-operator <- keycloak <- keycloak-operator <- keycloak-configure

While the configuration follows the established pattern, please verify:

  1. The order of deployment is correct for your use case
  2. The keycloak service is fully ready before keycloak-operator starts
  3. The keycloak-operator is fully ready before keycloak-configure starts

Consider implementing readiness probes or health checks in your charts to ensure proper sequencing of deployments.
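
As a rough illustration only, a readiness gate on the Keycloak container could look like the snippet below; the endpoint path and port are assumptions (Keycloak serves /health/ready only when health checks are enabled) and must be matched to the image and chart actually in use:

containers:
  - name: keycloak
    readinessProbe:
      httpGet:
        path: /health/ready   # assumed health endpoint; requires health checks enabled in Keycloak
        port: 9000            # assumed management port; older images expose health on 8080
      initialDelaySeconds: 30
      periodSeconds: 10
      failureThreshold: 6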

✅ Verification successful

Let me analyze the results and generate one more verification script to complete our analysis.

This script will help us verify the complete dependency chain and ensure there are no missing or incorrect dependencies in the sequence. The previous results showed the direct dependencies, but we need to confirm the complete chain including postgres-operator to ensure the deployment order is correct.


Dependency chain for keycloak-configure is correctly configured

The verification confirms that the dependency chain is properly structured across all bundle configurations:

  1. postgres-operator (depends on cert-manager)
  2. keycloak (depends on postgres-operator)
  3. keycloak-operator (depends on keycloak)
  4. keycloak-configure (depends on keycloak-operator)

The sequential deployment order is maintained consistently across all bundle variants (distro-full, distro-hosted, paas-full, paas-hosted), ensuring that each component will be fully ready before the next one starts.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for potential circular dependencies and verify the dependency chain

# Search for any references to these components that might indicate circular dependencies
rg -l 'keycloak-configure|keycloak-operator|keycloak' | while read -r file; do
  echo "=== $file ==="
  rg 'dependsOn.*\[(.*keycloak.*)\]' "$file" || true
done

Length of output: 19269


Script:

#!/bin/bash
# Let's verify the complete dependency chain by checking the dependencies of keycloak and postgres-operator

# Check all bundle files for the complete dependency chain
echo "=== Checking bundle files for complete dependency chain ==="
for file in packages/core/platform/bundles/*.yaml; do
  echo "=== $file ==="
  rg "name: (keycloak|postgres-operator|keycloak-operator|keycloak-configure)" "$file" -A 5 || true
done

Length of output: 3836

packages/apps/tenant/templates/tenant.yaml (2)

108-120: LGTM: Role binding structure is correct

The role binding correctly associates the view role with the corresponding group.


176-178: LGTM: Good practice using explicit resource list

Explicitly listing resources instead of using wildcards follows security best practices by implementing the principle of least privilege.

Comment on lines +264 to +269

- name: keycloak-configure
  releaseName: keycloak-configure
  chart: cozy-keycloak-configure
  namespace: cozy-keycloak
  dependsOn: [keycloak-operator]

💡 Codebase verification

Based on the gathered information, I can now provide the final response:

The keycloak-configure component configuration is incomplete

The configuration needs the following adjustments based on the patterns observed in the codebase:

  1. Add optional: true as this is consistently set for all Keycloak components in the distro bundles
  2. No privileged flag is required as none of the other Keycloak components use it
  3. No values configuration is needed in the bundle as the default values in the component's values.yaml file are sufficient

Example:

- name: keycloak-configure
  releaseName: keycloak-configure
  chart: cozy-keycloak-configure
  namespace: cozy-keycloak
  optional: true
  dependsOn: [keycloak-operator]
🔗 Analysis chain

Verify configuration requirements for keycloak-configure.

The basic structure and dependency chain look good. However, please verify the following requirements:

  1. Does this component require privileged access? If yes, add privileged: true
  2. Should this be an optional component like telepresence or external-dns? If yes, add optional: true
  3. Are there any values that need to be configured, such as environment-specific settings?

Let's check other similar components in the codebase:


Let me gather more specific information about the Keycloak components and their configurations.


Let me check the values.yaml files to understand if any specific configuration is needed.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check if other Keycloak-related components use privileged or optional flags
rg -A 5 "name: keycloak" | grep -E "privileged:|optional:"

# Check if there are any values configurations in other files
fd -t f . | grep -i keycloak | xargs rg "values:"

Length of output: 721


Script:

#!/bin/bash
# Check the complete configuration of existing Keycloak components
rg -A 10 "name: keycloak" packages/core/platform/bundles/

# Check if there are any values.yaml files for Keycloak components
fd values.yaml | grep -i keycloak

# Look for any helm chart dependencies related to Keycloak
fd Chart.yaml | xargs rg -l "keycloak"

Length of output: 5842


Script:

#!/bin/bash
# Check the content of values.yaml files for Keycloak components
cat packages/system/keycloak/values.yaml packages/system/keycloak-configure/values.yaml 2>/dev/null

# Check if there are any environment-specific values in the bundle files
rg "values:" -A 5 packages/core/platform/bundles/paas-full.yaml

Length of output: 867

Comment on lines +137 to +142
- apiGroups: ["kubevirt.io"]
resources: ["virtualmachines"]
verbs: ["get", "list"]
- apiGroups: ["subresources.kubevirt.io"]
resources: ["virtualmachineinstances/console", "virtualmachineinstances/vnc"]
verbs: ["get", "list"]

💡 Codebase verification

VM console access permissions need to be expanded

Based on the kubevirt-operator's role definition, the current permission set for VM console access is incomplete. The role should include:

  • Additional subresources like virtualmachineinstances/portforward for proper console functionality
  • pods/exec permissions which are required by the kubevirt implementation
  • create verb in addition to current get and list verbs
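
A hedged sketch of the expanded rule set implied by these findings (subresources and verbs should be verified against the kubevirt-operator role before adopting):

- apiGroups: ["subresources.kubevirt.io"]
  resources: ["virtualmachineinstances/console", "virtualmachineinstances/vnc", "virtualmachineinstances/portforward"]
  verbs: ["get", "list", "create"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create", "get", "list"]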
🔗 Analysis chain

Verify VM console access functionality

The role includes VM console and VNC access permissions but should be tested to ensure proper functionality with the current permission set.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check if any existing implementations require additional permissions
# for VM console access

# Search for VM console/VNC related code
rg -A 5 "virtualmachineinstances/(console|vnc)"

# Search for pod exec usage in console access
rg -A 5 "pods/exec.*console"

Length of output: 3457


Script:

#!/bin/bash
# Let's search for any console/VNC related code implementations to understand the required permissions

# Search for console/VNC related code in Go/JavaScript files
rg -t go -t js -A 10 "console.*virtualmachine|vnc.*virtualmachine"

# Search for any console-related Kubernetes API calls
rg -A 10 "k8s.*console.*virtual|kubevirt.*console"

# Look for any console-related configuration or documentation
fd -e md -e yaml -e yml -X rg -A 5 "console.*virtual|vnc.*virtual"

Length of output: 1889


Script:

#!/bin/bash
# Let's check the kubevirt operator's role to understand the complete set of permissions needed for console access

# Search for the complete role definition in kubevirt operator
rg -A 20 "virtualmachineinstances/console" packages/system/kubevirt-operator/templates/kubevirt-operator.yaml

# Look for any documentation about console access requirements
fd README.md -X rg -A 10 "console.*virtual|vnc.*virtual"

# Search for any pod exec related permissions in kubevirt roles
rg -A 5 "pods/exec" packages/system/kubevirt-operator/templates/

Length of output: 2109
