New Feature: Use KubeDownscaler Without Cluster Wide Access (Namespaced Access) (#73)

* facilitates backward compatibility

* docs refactoring

* refactored chart to include constrained-namespace installation

* refactored chart to include constrained-namespace installation

* added constrained downscaler arg inside code

* refactor --namespace argument to be able to handle a list of strings

* refactored autoscale_jobs function

* --namespace argument can only be used with constrainted_downscaler API calls; RBAC refactored to only include namespaced resources

* fixed constraintedNamespaces test namespaces inside values.yaml

* added try-except to better log RBAC errors

* Helm chart will automatically add the --constrainted-downscaler arg if the constraintedDownscaler value is set to true

* added tests

* added log to inform user that using --namespace argument will deactivate --exclude-namespaces

* added error handling inside scaler.py

* refactored namespace argument to namespaces to clarify that the variable can contain more than one element; refactored helm chart

* refactored docs

* refactored migrate from codeberg docs section

* removed --constrainted-downscaler argument to make user configuration easier

* refactored docs

* refactored docs

* refactored docs and constrained variable name

* refactored docs

* added release namespace inside helm chart (deployment, configmap and serviceaccount)

* refactored variable names inside helm chart

* refactored variable names inside docs

* improved 403 error handling, changed log.error to log.warning
samuel-esp authored Jul 22, 2024
1 parent 9aa2078 commit c217f31
Showing 9 changed files with 534 additions and 64 deletions.
126 changes: 115 additions & 11 deletions README.md
@@ -20,10 +20,14 @@ Scale down / "pause" Kubernetes workload (`Deployments`, `StatefulSets`,
- [Helm Chart](#helm-chart)
- [Example configuration](#example-configuration)
- [Notes](#notes)
- [Installation](#installation)
- [Cluster Wide Access Installation](#cluster-wide-access-installation)
- [Limited Access Installation](#limited-access-installation)
- [Configuration](#configuration)
- [Uptime / downtime spec](#uptime--downtime-spec)
- [Alternative Logic, Based on Periods](#alternative-logic-based-on-periods)
- [Command Line Options](#command-line-options)
- [Constrained Mode (Limited Access Mode)](#constrained-mode-limited-access-mode)
- [Scaling Jobs: Overview](#scaling-jobs-overview)
- [Scaling Jobs Natively](#scaling-jobs-natively)
- [Scaling Jobs With Admission Controller](#scaling-jobs-with-admission-controller)
@@ -188,6 +192,62 @@ $ kubectl annotate hpa nginx 'downscaler/downtime-replicas=1'
$ kubectl annotate hpa nginx 'downscaler/uptime=Mon-Fri 09:00-17:00 America/Buenos_Aires'
```

## Installation

KubeDownscaler offers two installation methods:
- **Cluster Wide Access**: intended for users who have full access to the cluster and want to adopt the
tool throughout the cluster
- **Limited Access**: intended for users who only have access to a limited number of namespaces and can
adopt the tool only within them

### Cluster Wide Access Installation

**RBAC Prerequisite**: This installation mode requires permission to deploy a ServiceAccount, ClusterRole, ClusterRoleBinding, and CRDs

The basic Cluster Wide installation is straightforward:

```bash
$ helm install py-kube-downscaler py-kube-downscaler/py-kube-downscaler
```

This command will deploy:

- **Deployment**: main deployment
- **ConfigMap**: used to supply parameters to the deployment
- **ServiceAccount**: represents the cluster identity of KubeDownscaler
- **ClusterRole**: needed to access all the resources that can be modified by the KubeDownscaler
- **ClusterRoleBinding**: links the ServiceAccount used by KubeDownscaler to the ClusterRole

You can customize the installation further by changing the parameters in the chart's values.yaml file
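For example, chart parameters can be overridden at install time instead of editing values.yaml. A sketch: the keys shown (`image.tag`, `excludedNamespaces`) exist in this chart's values.yaml, but the values themselves are illustrative.

```shell
# Cluster-wide install with a couple of chart parameters overridden.
# Values are illustrative; any key from values.yaml can be set this way.
helm install py-kube-downscaler py-kube-downscaler/py-kube-downscaler \
  --set "excludedNamespaces={kube-system,monitoring}" \
  --set image.tag=latest
```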

### Limited Access Installation

**RBAC Prerequisite**: This installation mode requires permission to deploy a ServiceAccount, Roles, and RoleBindings

The Limited Access installation requires the user to set the following parameters inside values.yaml:
- **constrainedDownscaler**: true (mandatory)
- **constrainedNamespaces**: [namespace1,namespace2,namespace3,...] (list of namespaces - mandatory)

It is also recommended to explicitly set the namespace where KubeDownscaler will be installed:

```bash
$ helm install py-kube-downscaler py-kube-downscaler/py-kube-downscaler --namespace my-release-namespace --set constrainedDownscaler=true --set "constrainedNamespaces={namespace1,namespace2,namespace3}"
```

This command will deploy:

- **Deployment**: main deployment
- **ConfigMap**: used to supply parameters to the deployment
- **ServiceAccount**: represents the cluster identity of KubeDownscaler

For each namespace inside constrainedNamespaces, the chart will deploy:

- **Role**: needed to access all the resources that can be modified by the KubeDownscaler (inside that namespace)
- **RoleBinding**: links the ServiceAccount used by KubeDownscaler to the Role inside that namespace

If RBAC permissions are misconfigured and KubeDownscaler is unable to access resources in one of the specified namespaces,
a warning indicating a `403 Error` will appear in the logs.
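To troubleshoot such warnings, you can check the ServiceAccount's permissions per namespace with `kubectl auth can-i` (a sketch; the release namespace, ServiceAccount name, and namespace list below are illustrative):

```shell
# Verify the downscaler's ServiceAccount can patch workloads in each
# constrained namespace; a "no" answer points at a missing Role/RoleBinding.
for ns in namespace1 namespace2 namespace3; do
  echo "checking $ns"
  kubectl auth can-i update deployments \
    --as="system:serviceaccount:my-release-namespace:py-kube-downscaler" \
    -n "$ns"
done
```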

## Configuration

### Uptime / downtime spec
@@ -264,11 +324,18 @@ Available command line options:
`--namespace`
- : Restrict the downscaler to work only in a single namespace (default:
+ : Restrict the downscaler to work only in some namespaces (default:
all namespaces). This is mainly useful for deployment scenarios
- where the deployer of py-kube-downscaler only has access to a given
- namespace (instead of cluster access). If used simultaneously with
- `--exclude-namespaces`, none is applied.
+ where the deployer of py-kube-downscaler only has access to some
+ namespaces (instead of cluster wide access). If used simultaneously with
+ `--exclude-namespaces`, `--namespace` will take precedence overriding
+ its value. This argument takes a comma separated list of namespaces
+ (example: --namespace=default,test-namespace1,test-namespace2)
+ > [!IMPORTANT]
+ > It's strongly not advised to use this argument in a Cluster
+ > Wide Access installation, see the [Constrained Mode](#constrained-mode-limited-access-mode) section
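The flag's value is a plain comma-separated string with no spaces; a quick, generic shell check (not part of the tool) shows how such a value splits into individual namespaces:

```shell
# Split a --namespace style value into one namespace per line.
namespaces="default,test-namespace1,test-namespace2"
echo "$namespaces" | tr ',' '\n'
```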
`--include-resources`
@@ -314,7 +381,8 @@ Available command line options:
: Exclude namespaces from downscaling (list of regex patterns,
default: kube-system), can also be configured via environment
variable `EXCLUDE_NAMESPACES`. If used simultaneously with
- `--namespace`, none is applied.
+ `--exclude-namespaces`, `--namespace` will take precedence
+ overriding its value.
`--exclude-deployments`
@@ -349,13 +417,27 @@ Available command line options:
`--admission-controller`
: Optional: admission controller used by the kube-downscaler to downscale and upscale
- jobs. Required only if "jobs" are specified inside "--include-resources" arg.
+ jobs. This argument won't take effect if used in conjunction with `--namespace` argument.
Supported Admission Controllers are
\[gatekeeper, kyverno*\]
> [!IMPORTANT]
- > Make sure to read the dedicated section below to understand how to use the
- > `--admission-controller` feature correctly
+ > Make sure to read the [Scaling Jobs With Admission Controller](#scaling-jobs-with-admission-controller) section
+ > to understand how to use the `--admission-controller` feature correctly
### Constrained Mode (Limited Access Mode)
The Constrained Mode (also known as Limited Access Mode) is designed for users who do not have full cluster access.
It is automatically activated when the `--namespace` argument is specified.
This mode utilizes a different set of API calls optimized for environments where users cannot deploy ClusterRole and ClusterRoleBinding.
Additionally, this mode disables the ability to scale Jobs via Admission Controllers since the
"[Scaling Jobs With Admission Controllers](#scaling-jobs-with-admission-controller)" feature requires Full
Cluster Access to operate.
If you are installing KubeDownscaler with full cluster access,
it is strongly recommended to use the `--exclude-namespaces` parameter instead of `--namespace`. Using `--namespace`
in a cluster wide access installation will make API calls less efficient and
will disable the ability to scale Jobs with Admission Controllers for the reason specified above.
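The two modes map onto the two Helm installation styles described earlier; a sketch (namespace names illustrative):

```shell
# Cluster-wide access: keep ClusterRole/ClusterRoleBinding and opt
# namespaces out via excludedNamespaces (renders EXCLUDE_NAMESPACES).
helm install py-kube-downscaler py-kube-downscaler/py-kube-downscaler \
  --set "excludedNamespaces={kube-system,monitoring}"

# Limited access: constrainedNamespaces renders a --namespace arg on the
# deployment, which in turn activates constrained mode automatically.
helm install py-kube-downscaler py-kube-downscaler/py-kube-downscaler \
  --namespace my-release-namespace \
  --set constrainedDownscaler=true \
  --set "constrainedNamespaces={team-a,team-b}"
```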
### Scaling Jobs: Overview
@@ -366,7 +448,7 @@ within the job's yaml file. The spec.suspend parameter will be set to True and t
will be automatically deleted.
2) Downscaling Jobs With Admission Controllers: Kube Downscaler will block the creation of all new Jobs using
- Admission Policies created with an Admission Controller (Kyverno or Gatekeeper, depending on the user's choice)
+ Admission Policies created with an Admission Controller (Kyverno or Gatekeeper, depending on the user's choice).
In both cases, all Jobs created by CronJobs will not be modified unless the user specifies via the
`--include-resources` argument that they want to downscale both Jobs and CronJobs
@@ -402,6 +484,11 @@ arg to exclude `"kyverno"` or `"gatekeeper-system"` namespaces.
Alternatively `EXCLUDE_DEPLOYMENTS` environment variable
or `--exclude-deployments` arg to exclude only certain resources inside `"kyverno"` or `"gatekeeper-system"` namespaces
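For instance, the exclusion can be applied through the chart's `excludedNamespaces` value, which feeds `EXCLUDE_NAMESPACES` (a sketch; namespace values illustrative):

```shell
# Keep the admission controller's own namespace out of downscaling scope.
helm upgrade py-kube-downscaler py-kube-downscaler/py-kube-downscaler \
  --reuse-values \
  --set "excludedNamespaces={kube-system,kyverno}"
```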
**<u>Important</u>**: the `--admission-controller` argument won't take effect if used in conjunction with the `--namespace` argument.
If you specified `jobs` inside the `--include-resources` argument, KubeDownscaler will
still [downscale jobs natively](#scaling-jobs-natively).
Please read the [Constrained Mode](#constrained-mode-limited-access-mode) section to understand why.
The workflow for blocking Jobs differs depending on whether you use Gatekeeper or Kyverno; both are described below.
**Blocking Jobs: Gatekeeper Workflow**
@@ -559,7 +646,12 @@ The following annotations are supported on the Namespace level:
## Migrate From Codeberg
For all users who come from the Codeberg repository (no longer maintained by the original author)
- it is possible to migrate to this new version of the kube-downscaler by installing the Helm chart in this way:
+ it is possible to migrate to this new version of the kube-downscaler by installing the Helm chart in these ways that will
+ preserve the old nomenclature already present inside your cluster
### Migrate From Codeberg - Cluster Wide Installation
Read the [Installation](#installation) section to understand what is meant by **Cluster Wide Installation**
```bash
$ helm install kube-downscaler py-kube-downscaler/py-kube-downscaler --set nameOverride=kube-downscaler --set configMapName=kube-downscaler
```

@@ -571,7 +663,19 @@

or extracting and applying the template manually:

```bash
$ helm template kube-downscaler py-kube-downscaler/py-kube-downscaler --set nameOverride=kube-downscaler --set configMapName=kube-downscaler
```
- Installing the chart in this way will preserve the old nomenclature already present in your cluster
### Migrate From Codeberg - Limited Access Installation
Read the [Installation](#installation) section to understand what is meant by **Limited Access Installation**
```bash
$ helm install kube-downscaler py-kube-downscaler/py-kube-downscaler --set nameOverride=kube-downscaler --set configMapName=kube-downscaler --set constrainedDownscaler=true --set "constrainedNamespaces={namespace1,namespace2,namespace3}"
```
or extracting and applying the template manually:
```bash
$ helm template kube-downscaler py-kube-downscaler/py-kube-downscaler --set nameOverride=kube-downscaler --set configMapName=kube-downscaler --set constrainedDownscaler=true --set "constrainedNamespaces={namespace1,namespace2,namespace3}"
```
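With either variant, the rendered manifests can be inspected locally before anything is applied (a sketch; the `grep` filter is illustrative):

```shell
# Confirm the migration settings really keep the old resource names.
helm template kube-downscaler py-kube-downscaler/py-kube-downscaler \
  --set nameOverride=kube-downscaler \
  --set configMapName=kube-downscaler \
  | grep 'name: kube-downscaler'
```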
## Contributing
1 change: 1 addition & 0 deletions chart/templates/configmap.yaml
@@ -2,6 +2,7 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Values.configMapName }}
namespace: {{ .Release.Namespace }}
data:
# downscale for non-work hours
EXCLUDE_NAMESPACES: '{{- join "," .Values.excludedNamespaces }}'
4 changes: 4 additions & 0 deletions chart/templates/deployment.yaml
@@ -4,6 +4,7 @@ metadata:
labels:
{{- include "py-kube-downscaler.labels" . | nindent 4 }}
name: {{ include "py-kube-downscaler.name" . }}
namespace: {{ .Release.Namespace }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
@@ -22,6 +23,9 @@ spec:
image: {{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}
args:
{{- toYaml .Values.arguments | nindent 8 }}
{{- if .Values.constrainedDownscaler }}
- --namespace={{ join "," .Values.constrainedNamespaces }}
{{- end }}
envFrom:
- configMapRef:
name: {{ .Values.configMapName }}
126 changes: 125 additions & 1 deletion chart/templates/rbac.yaml
@@ -2,7 +2,131 @@ apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "py-kube-downscaler.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
---
{{- if .Values.constrainedDownscaler }}
{{- if not .Values.constrainedNamespaces }}
{{- fail "Error: 'constrainedNamespaces' must not be empty or null when 'constrainedDownscaler' is true." }}
{{- end }}

{{- $releaseNamespace := .Release.Namespace }}
{{- $name := include "py-kube-downscaler.name" . }}
{{- $serviceAccountName := include "py-kube-downscaler.serviceAccountName" . }}
{{- range $namespace := .Values.constrainedNamespaces }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ $name }}
namespace: {{ $namespace }}
rules:
- apiGroups:
- ""
resources:
- pods
- namespaces
verbs:
- get
- watch
- list
- apiGroups:
- apps
resources:
- deployments
- statefulsets
- daemonsets
verbs:
- get
- watch
- list
- update
- patch
- apiGroups:
- argoproj.io
resources:
- rollouts
verbs:
- get
- watch
- list
- update
- patch
- apiGroups:
- autoscaling
resources:
- horizontalpodautoscalers
verbs:
- get
- watch
- list
- update
- patch
- apiGroups:
- batch
resources:
- cronjobs
verbs:
- get
- watch
- list
- update
- patch
- apiGroups:
- keda.sh
resources:
- scaledobjects
verbs:
- get
- watch
- list
- update
- patch
- apiGroups:
- zalando.org
resources:
- stacks
verbs:
- get
- watch
- list
- update
- patch
- apiGroups:
- ""
resources:
- events
verbs:
- get
- create
- watch
- list
- update
- patch
- apiGroups:
- policy
resources:
- poddisruptionbudgets
verbs:
- get
- watch
- list
- update
- patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ $name }}
namespace: {{ $namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ $name }}
subjects:
- kind: ServiceAccount
name: {{ $serviceAccountName }}
namespace: {{ $releaseNamespace }}
{{- end }}
{{- else }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
@@ -53,7 +177,6 @@ rules:
- batch
resources:
- cronjobs
- jobs
verbs:
- get
- watch
@@ -167,3 +290,4 @@ subjects:
- kind: ServiceAccount
name: {{ include "py-kube-downscaler.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}
3 changes: 3 additions & 0 deletions chart/values.yaml
@@ -13,6 +13,9 @@ imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

constrainedDownscaler: false
constrainedNamespaces: []

serviceAccount:
# Specifies whether a service account should be created
create: true
6 changes: 5 additions & 1 deletion kube_downscaler/cmd.py
@@ -44,7 +44,11 @@ def get_parser():
parser.add_argument(
"--interval", type=int, help="Loop interval (default: 30s)", default=30
)
- parser.add_argument("--namespace", help="Namespace")
+ parser.add_argument(
+     "--namespace",
+     help="Namespace",
+     default=os.getenv("NAMESPACE", "")
+ )
parser.add_argument(
"--include-resources",
type=check_include_resources,
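A consequence of the new `default=os.getenv("NAMESPACE", "")` is that the namespace list can also be injected via environment variable instead of the CLI flag. A sketch, assuming the package is invocable as `python -m kube_downscaler` (the invocation style and namespace values are illustrative):

```shell
# The NAMESPACE env var only seeds the flag's default, so these two
# invocations should be equivalent; an explicit --namespace overrides it.
NAMESPACE="team-a,team-b" python -m kube_downscaler --dry-run
python -m kube_downscaler --namespace=team-a,team-b --dry-run
```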