The `github-app-operator` is a Kubernetes operator that generates an access token for a GitHub App and stores it in a secret for authenticated requests to GitHub. It reconciles a new access token before the current one expires (GitHub App access tokens are valid for 1 hour).
- Uses a custom resource `GithubApp` in your destination namespace.
- Reads `appId`, `installId`, and either `privateKeySecret`, `googlePrivateKeySecret`, or `vaultPrivateKey` defined in a `GithubApp` resource to request an access token from GitHub.
- Stores the access token in a secret specified by `accessTokenSecret`.
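Once the operator has reconciled a token, consuming it is plain `kubectl`. A sketch with illustrative names, assuming the token is stored under a key named `token` in the access token secret (inspect the generated secret to confirm the actual key):

```shell
# Hypothetical namespace/secret names; confirm the data key with:
#   kubectl -n team-1 get secret github-app-access-token-123123 -o yaml
TOKEN=$(kubectl -n team-1 get secret github-app-access-token-123123 \
  -o jsonpath='{.data.token}' | base64 -d)

# Use the installation token for an authenticated GitHub request, e.g. a clone
git clone "https://x-access-token:${TOKEN}@github.com/my-org/my-repo.git"
```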
Tip: There is a sample Gatekeeper constraint template and constraint in the `gatekeeper-policy` folder to restrict the type of private key source, if you don't want to use the built-in validating webhook.
- Configuration:
  - Use `privateKeySecret` - refers to an existing secret in the namespace holding the base64-encoded PEM of the GitHub App's private key.
  - The secret expects the field `data.privateKey`.
- Configuration:
  - Note: You must base64 encode your private key before saving it in Secret Manager.
  - Configure with `googlePrivateKeySecret` - the full secret path in Secret Manager for your GitHub App secret, e.g. `projects/xxxxxxxxxx/secrets/my-gh-app/versions/latest`.
  - Configure Workload Identity to bind secret access permissions to the operator's Kubernetes Service Account.
    - Tested with the role `roles/secretmanager.secretAccessor`.
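The Secret Manager entry can be created from an existing PEM with the `gcloud` CLI. A sketch, where the key file `private-key.pem` and secret name `my-gh-app` are placeholders:

```shell
# Base64 encode the PEM first - the operator expects the stored value to be base64
base64 -w 0 private-key.pem > private-key.b64

# Create the secret with the encoded key as its first version
gcloud secrets create my-gh-app --data-file=private-key.b64

# On key rotation, add a new version; referencing "versions/latest" in
# googlePrivateKeySecret picks it up automatically
gcloud secrets versions add my-gh-app --data-file=private-key.b64
```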
- Configuration:
  - Note: You must base64 encode your private key before saving it in Vault.
  - The operator uses a short-lived JWT (10-minute TTL) requested via the Kubernetes TokenRequest API, with a defined audience.
  - It uses the JWT and a Vault role to authenticate with Vault and pull the secret containing the private key.
  - Configure with the `vaultPrivateKey` block:
    - `spec.vaultPrivateKey.mountPath` - secret mount path, e.g. `secret`
    - `spec.vaultPrivateKey.secretPath` - secret path, e.g. `githubapps/{App ID}`
    - `spec.vaultPrivateKey.secretKey` - secret key, e.g. `privateKey`
  - Configure Kubernetes auth with Vault.
  - Define a role, and bind the audience, service account, namespace, etc. to it as required.
  - Configure environment variables in the controller deployment spec:
    - `VAULT_ROLE` - the role bound for Kubernetes auth for the operator.
    - `VAULT_ROLE_AUDIENCE` - the audience bound in Vault.
    - `VAULT_ADDR` - FQDN of your Vault server, e.g. `http://vault.default:8200`.
    - Additional Vault env vars can be set, e.g. `VAULT_NAMESPACE` for Enterprise Vault (see the Vault API docs).
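The Vault-side setup can be sketched with the Vault CLI. The role name, policy, service account, and secret path below are illustrative placeholders, not names the operator requires:

```shell
# Enable Kubernetes auth and point it at the cluster (run from inside the
# cluster, or supply kubernetes_ca_cert/token_reviewer_jwt explicitly)
vault auth enable kubernetes
vault write auth/kubernetes/config \
  kubernetes_host="https://kubernetes.default.svc"

# Role bound to the operator's service account, with the audience the
# operator requests in its TokenRequest JWT
vault write auth/kubernetes/role/githubapp \
  bound_service_account_names=github-app-operator-controller-manager \
  bound_service_account_namespaces=github-app-operator-system \
  audience=githubapp \
  policies=githubapp-read \
  ttl=10m

# Store the base64-encoded private key at the path referenced by spec.vaultPrivateKey
vault kv put secret/githubapps/123123 privateKey="$(base64 -w 0 private-key.pem)"
```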
- Cleans up the access token secret owned by a `GithubApp` object if the object is deleted.
- Reconciles an access token for a `GithubApp` when:
  - Modifications are made to the access token secret owned by a `GithubApp`.
  - Modifications are made to a `GithubApp` object.
  - The access token secret does not exist or lacks a `status.expiresAt` value.
- Periodically checks the expiry time of the access token and reconciles a new one if the threshold is met or if the access token is invalid (checked against GitHub API).
- Stores the expiry time of the access token in the `status.expiresAt` field of the `GithubApp` object.
- Sets errors in the `status.error` field of the `GithubApp` object during reconciliation.
- Skips requesting a new access token if the expiry threshold is not reached/exceeded.
- Allows overriding the check interval and expiry threshold using deployment env vars:
  - `CHECK_INTERVAL` - e.g., to check every 5 minutes, set the value to `5m` (default: `5m`).
  - `EXPIRY_THRESHOLD` - e.g., to reconcile a new access token when less than 10 minutes remain before expiry, set the value to `10m` (default: `15m`).
- Specify a proxy for GitHub and Vault using the env vars:
  - `GITHUB_PROXY` - e.g. `http://myproxy.com:8080`.
  - `VAULT_PROXY_ADDR` - e.g. `http://myproxy.com:8080`.
- Optionally enable rolling upgrades of deployments in the same namespace as the `GithubApp` that match any of the labels defined in `spec.rolloutDeployment.labels`.
  - Useful for recreating pods to pick up new secret data.
- By default, logs are JSON formatted and the log level is set to info and error.
- Set `DEBUG_LOG` to `true` in the manager deployment environment variables for debug-level logs.
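Put together, the tuning env vars above might look like this in the manager container spec (values shown are the defaults except `DEBUG_LOG`; the proxy URL is a placeholder):

```yaml
# Snippet from the manager Deployment's container spec
env:
  - name: CHECK_INTERVAL
    value: "5m"
  - name: EXPIRY_THRESHOLD
    value: "15m"
  - name: GITHUB_PROXY
    value: "http://myproxy.com:8080"
  - name: DEBUG_LOG
    value: "true"
```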
- The CRD includes extra data printed with `kubectl get`:
  - App ID
  - Install ID
  - Expires At
  - Error
  - Access Token Secret
- Events are recorded for:
  - Any error on reconcile for a `GithubApp`.
  - Creation of an access token secret.
  - Updating an access token secret.
  - Updating a deployment for rolling upgrade.
Here is an example of how to define the `GithubApp` resource:
```yaml
apiVersion: githubapp.samir.io/v1
kind: GithubApp
metadata:
  name: example-githubapp
spec:
  appId: <your-github-app-id>
  installId: <your-github-app-installation-id>
  privateKeySecret: <your-private-key-secret-name> # If using a Kubernetes secret
  googlePrivateKeySecret: <your-google-secret-path> # If using GCP Secret Manager
  vaultPrivateKey: # If using HashiCorp Vault
    mountPath: <your-vault-mount-path>
    secretPath: <your-vault-secret-path>
    secretKey: <your-vault-secret-key>
  accessTokenSecret: <your-access-token-secret-name>
```
- Get your GitHub App private key and encode it to base64:

```shell
base64 -w 0 private-key.pem
```
- Create a secret to hold the private key:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: github-app-secret
  namespace: YOUR_NAMESPACE
type: Opaque
data:
  privateKey: BASE64_ENCODED_PRIVATE_KEY
```
- The example below will set up a `GithubApp` and reconcile an access token in the `team-1` namespace; the access token will be available in the secret `github-app-access-token-123123`.
- It authenticates with the GitHub API using your private key secret, as in the example above.
```shell
kubectl apply -f - <<EOF
apiVersion: githubapp.samir.io/v1
kind: GithubApp
metadata:
  name: githubapp-sample
  namespace: team-1
spec:
  appId: 123123
  installId: 12312312
  privateKeySecret: github-app-secret
  accessTokenSecret: github-app-access-token-123123
EOF
```
- The example below will upgrade deployments in the `team-1` namespace when the GitHub token is modified, matching any of the labels:
  - foo: bar
  - foo2: bar2
```shell
kubectl apply -f - <<EOF
apiVersion: githubapp.samir.io/v1
kind: GithubApp
metadata:
  name: githubapp-sample
  namespace: team-1
spec:
  appId: 123123
  installId: 12312312
  privateKeySecret: github-app-secret
  accessTokenSecret: github-app-access-token-123123
  rolloutDeployment:
    labels:
      foo: bar
      foo2: bar2
EOF
```
- The example below will request a new JWT from Kubernetes and use it to fetch the private key from Vault when the GitHub access token expires.
```shell
kubectl apply -f - <<EOF
apiVersion: githubapp.samir.io/v1
kind: GithubApp
metadata:
  name: githubapp-sample
  namespace: team-1
spec:
  appId: 123123
  installId: 12312312
  accessTokenSecret: github-app-access-token-123123
  vaultPrivateKey:
    mountPath: secret
    secretPath: githubapp/123123
    secretKey: privateKey
EOF
```
- The example below will fetch the private key from GCP Secret Manager when the GitHub access token expires.
- It requires that the Kubernetes Service Account has permissions on the secret in Secret Manager, e.g. via Workload Identity.
```shell
kubectl apply -f - <<EOF
apiVersion: githubapp.samir.io/v1
kind: GithubApp
metadata:
  name: githubapp-sample
  namespace: team-1
spec:
  appId: 123123
  installId: 12312312
  accessTokenSecret: github-app-access-token-123123
  googlePrivateKeySecret: "projects/123123123123/secrets/gh-app-123123/versions/latest"
EOF
```
- go version v1.22.0+
- docker version 17.03+
- kubectl version v1.22.0+
- Access to a Kubernetes v1.22.0+ cluster
A Helm chart is generated using `make helm` when a new tag is pushed, i.e. a release. This chart has webhooks and cert-manager enabled. If you want to install without webhooks and cert-manager, use the local manual chart:
```shell
cd charts/github-app-operator
helm upgrade --install -n github-app-operator-system <release_name> . --create-namespace \
  --set webhook.enabled=false \
  --set controllerManager.manager.env.enableWebhooks="false"
```
You can pull the automatically built Helm chart from this repo's packages:

- See the packages.
- Pull with Helm:

```shell
helm pull oci://ghcr.io/samirtahir91/github-app-operator/helm-charts/github-app-operator --version <TAG>
```

- Untar the chart and edit the `values.yaml` as required.
- You can use the latest public image on DockerHub - `samirtahir91076/github-app-operator:latest` (see tags).
- Deploy the chart with Helm.
Build and push your image to the location specified by `IMG`:

```shell
make docker-build docker-push IMG=<some-registry>/github-app-operator:tag
```
Install the CRDs into the cluster:

```shell
make install
```
Deploy the manager to the cluster with the image specified by `IMG`:

```shell
make deploy IMG=<some-registry>/github-app-operator:tag
```
Create instances of your solution. You can apply the samples (examples) from `config/samples`:

```shell
kubectl apply -k config/samples/
```
Current integration tests cover the following scenarios:

- Modifying an access token secret triggers reconciliation of a new access token.
- Deleting an access token secret triggers reconciliation of a new access token secret.
- The reconciled access token is valid.
- A reconcile error is recorded in a `GithubApp` object's `status.error` field.
- The `status.error` field is cleared on successful reconcile for a `GithubApp` object.
- Deployments matching a label defined in `spec.rolloutDeployment.labels` for a `GithubApp` are upgraded via rolling upgrade (requires USE_EXISTING_CLUSTER=true).
- Vault integration for the private key secret using Kubernetes auth (requires USE_EXISTING_CLUSTER=true).
Run the controller in the foreground for testing:

```shell
# PRIVATE_KEY_CACHE_PATH is the folder used to temporarily store private keys on the local file system
# /tmp/github-test is fine for testing
export PRIVATE_KEY_CACHE_PATH=/tmp/github-test/
# run
make run
```
Run integration tests against a real cluster, e.g. Minikube:

- Export your GitHub App private key as a `base64` string, then run the tests:

```shell
export GITHUB_PRIVATE_KEY=<YOUR_BASE64_ENCODED_GH_APP_PRIVATE_KEY>
export GH_APP_ID=<YOUR GITHUB APP ID>
export GH_INSTALL_ID=<YOUR GITHUB APP INSTALL ID>
export "VAULT_ADDR=http://localhost:8200" # this can be local k8s Vault or some other Vault
export "VAULT_ROLE_AUDIENCE=githubapp"
export "VAULT_ROLE=githubapp"
export "ENABLE_WEBHOOKS=false"
```
- These tests use Vault; you can spin up a simple Vault server using this script.
- It will use Helm and configure the Vault server with a test private key from the env var `${GITHUB_PRIVATE_KEY}`.

```shell
cd scripts
./install_and_setup_vault_k8s.sh
# Run vault port forward
kubectl port-forward vault-0 8200:8200
```
- Run the tests:

```shell
cd ..
USE_EXISTING_CLUSTER=true make test
```
Run integration tests using envtest (without a real cluster):

- Export your GitHub App private key as a `base64` string, then run the tests.
- This will skip the Vault and deployment rollout test cases.

```shell
export GITHUB_PRIVATE_KEY=<YOUR_BASE64_ENCODED_GH_APP_PRIVATE_KEY>
export GH_APP_ID=<YOUR GITHUB APP ID>
export GH_INSTALL_ID=<YOUR GITHUB APP INSTALL ID>
USE_EXISTING_CLUSTER=false make test
USE_EXISTING_CLUSTER=false make test-webhooks
```
Generate a coverage HTML report:

```shell
go tool cover -html=cover.out -o coverage.html
```
Delete the instances (CRs) from the cluster:

```shell
kubectl delete -k config/samples/
```

Delete the APIs (CRDs) from the cluster:

```shell
make uninstall
```

Undeploy the controller from the cluster:

```shell
make undeploy
```
NOTE: Run `make --help` for more information on all potential `make` targets.
More information can be found via the Kubebuilder Documentation