Merge pull request #56 from ibuildthecloud/master
Update documentation
ibuildthecloud authored Sep 1, 2020
2 parents 0def054 + 0b7db59 commit 418d678
Showing 75 changed files with 1,067 additions and 3,810 deletions.
3 changes: 3 additions & 0 deletions Dockerfile.docs
@@ -0,0 +1,3 @@
FROM squidfunk/mkdocs-material
RUN pip install mkdocs-markdownextradata-plugin
RUN apk add -U git openssh
9 changes: 9 additions & 0 deletions Makefile
@@ -7,6 +7,15 @@ TARGETS := $(shell ls scripts)
	@./.dapper.tmp -v
	@mv .dapper.tmp .dapper

serve-docs: mkdocs
	docker run --net=host --rm -it -v $${PWD}:/docs mkdocs serve

deploy-docs: mkdocs
	docker run -v $${HOME}/.ssh:/root/.ssh --rm -it -v $${PWD}:/docs mkdocs gh-deploy -r rancher

mkdocs:
	docker build -t mkdocs -f Dockerfile.docs .

$(TARGETS): .dapper
	./.dapper $@

148 changes: 68 additions & 80 deletions README.md
@@ -1,89 +1,77 @@
Fleet
=============
# Introduction

### Status: early-ALPHA (actively looking for feedback)
### Fleet is currently alpha quality and actively being developed.

![](docs/arch.png)
![](./docs/arch.png)

Fleet is GitOps at scale. Fleet is designed to manage up to a million clusters. It's also lightweight
enough that it works great for a [single cluster](./single-cluster-install.md) too, but it really shines
when you get to a large scale. By large scale we mean either a lot of clusters, a lot of deployments, or a lot of
teams in a single organization.

Fleet can manage deployments from git of raw Kubernetes YAML, Helm charts, or Kustomize, or any combination of the three.
Regardless of the source, all resources are dynamically turned into Helm charts, and Helm is used as the engine to
deploy everything in the cluster. This gives a high degree of control, consistency, and auditability. Fleet focuses not only on
the ability to scale, but also on giving one a high degree of control and visibility into exactly what is installed on the cluster.

# Quick Start
Who needs documentation, let's just run this thing!

## Install

Get helm if you don't have it. Helm 3 is just a CLI and won't do bad insecure
things to your cluster.

```shell
brew install helm
```
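
If you're not on macOS or don't use Homebrew, Helm's upstream install script is an alternative (the URL below is an assumption based on Helm's current docs, so verify it there first):

```shell
# Fetch and run Helm's official installer script; inspect it first if you prefer
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
```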

Install the Fleet Helm charts (there are two because we separate out CRDs for ultimate flexibility).

```shell
helm -n fleet-system install --create-namespace --wait \
fleet-crd https://github.com/rancher/fleet/releases/download/{{fleet.version}}/fleet-crd-{{fleet.helmversion}}.tgz
helm -n fleet-system install --create-namespace --wait \
fleet https://github.com/rancher/fleet/releases/download/{{fleet.version}}/fleet-{{fleet.helmversion}}.tgz
```

## Add a Git Repo to watch

Change `spec.repo` to your git repo of choice. Kubernetes manifests that should
be deployed should live under `/manifests` in your repo.
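
For example, a repo laid out like this (the file names are hypothetical) would have both manifests deployed:

```
simple/
├── README.md
└── manifests/
    ├── deployment.yaml
    └── service.yaml
```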

```bash
cat > example.yaml << "EOF"
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: sample
  # This namespace is special and auto-wired to deploy to the local cluster
  namespace: fleet-local
spec:
  # Everything from this repo will be run in this cluster. You trust me right?
  repo: "https://github.com/fleet-demo/simple"
EOF

kubectl apply -f example.yaml
```
## Get Status

Get the status of what Fleet is doing:

```shell
kubectl -n fleet-local get fleet
```

Example output (this sample is from a multi-cluster setup) looks like:

```
NAME                                     CLUSTERS-READY   CLUSTERS-DESIRED   STATUS
bundle.fleet.cattle.io/helm-download     0                3                  NotApplied: 3 (default-bobby-group/cluster-93d18642-217a-486b-9a5d-be06762443b2... )
bundle.fleet.cattle.io/fleet-agent       3                3
bundle.fleet.cattle.io/helm-kustomize    0                3                  NotApplied: 3 (default-bobby-group/cluster-93d18642-217a-486b-9a5d-be06762443b2... )
bundle.fleet.cattle.io/helm              0                3                  NotApplied: 3 (default-bobby-group/cluster-93d18642-217a-486b-9a5d-be06762443b2... )
bundle.fleet.cattle.io/kustomize         0                3                  NotApplied: 3 (default-bobby-group/cluster-93d18642-217a-486b-9a5d-be06762443b2... )
bundle.fleet.cattle.io/yaml              0                3                  NotApplied: 3 (default-bobby-group/cluster-93d18642-217a-486b-9a5d-be06762443b2... )

NAME                                      CLUSTER-COUNT   NONREADY-CLUSTERS                                                                             BUNDLES-READY   BUNDLES-DESIRED   STATUS
clustergroup.fleet.cattle.io/othergroup   1               [cluster-f6a0e6da-ff49-4aab-9a21-fbe4687dd25b]                                                1               6                 NotApplied: 5 (helm... )
clustergroup.fleet.cattle.io/bobby        2               [cluster-93d18642-217a-486b-9a5d-be06762443b2 cluster-d7b5d925-fc56-45ca-92d5-de98f6728dd5]   2               12                NotApplied: 10 (helm... )
```

You should see something like this created in your cluster:

```
kubectl get deploy frontend
```
```
NAME READY UP-TO-DATE AVAILABLE AGE
frontend 3/3 3 3 116m
```

## Introduction

Fleet is a Kubernetes cluster fleet controller specifically designed to address the challenges of running
thousands to millions of clusters across the world. While it's designed for massive scale, the concepts still
apply for even small deployments of less than 10 clusters. Fleet is lightweight enough to run on the smallest of
deployments, and even has merit in a single-node cluster managing only itself. The primary use case of Fleet is
to ensure that deployments are consistent across clusters. One can deploy applications or easily enforce standards
such as "every cluster must have X security tool installed."

Fleet has two simple high-level concepts: cluster groups and bundles. Bundles are collections of resources that
are deployed to clusters. Bundles are defined in the fleet controller and are then deployed to target clusters using
selectors and per-target customization. While bundles can be deployed to any cluster using powerful selectors,
each cluster is a member of one cluster group. By looking at the status of bundles and cluster groups one can
get a quick overview of the status of large deployments. After a bundle is deployed it is constantly monitored
to ensure that it is Ready and its resources have not been modified.
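
As a rough sketch of selector-based targeting (the names, namespace, and label below are illustrative assumptions, not a canonical example), a bundle declares which clusters it lands on:

```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: Bundle
metadata:
  name: sample            # illustrative name
  namespace: clusters     # assumed to match the namespace clusters registered into
spec:
  resources:
  - name: deployment.yaml
    content: |
      # ... raw Kubernetes YAML to deploy ...
  targets:
  # Hypothetical label: only clusters labeled env=prod receive this bundle
  - name: prod
    clusterSelector:
      matchLabels:
        env: prod
```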

A bundle can be plain Kubernetes YAML, Helm, or Kustomize based. Helm and Kustomize can be combined to create very
powerful workflows too. Regardless of the approach chosen to create bundles, all resources are deployed to a cluster as
Helm charts. Using Fleet to manage clusters means all your clusters are easily auditable, because every resource is
carefully managed in a chart and a simple `helm -n fleet-system ls` will give you an accurate overview of what is
installed.
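
For example, pointing `helm` at any managed cluster (the release names you see will vary):

```shell
# Every Fleet-managed resource lands in a Helm release in fleet-system,
# so this lists exactly what Fleet has installed on this cluster
helm -n fleet-system ls
```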

Combining Fleet with a Git-based workflow like GitHub Actions, one can automate massive scale with ease.

## Documentation

1. [Understanding Bundles](./docs/bundles.md) - Very important read
1. [Example Bundles](./docs/examples.md)
1. [CLI](./docs/cli.md)
1. [Architecture and Installation](./docs/install.md)
1. [GitOps and CI/CD](./docs/gitops.md)

## Quick Start

1. Download the `fleet` CLI from [releases](https://github.com/rancher/fleet/releases/latest),
or run
```bash
curl -sfL https://raw.githubusercontent.com/rancher/fleet/master/install.sh | sh -
```

2. Install the Fleet Manager on a Kubernetes cluster. The `fleet` CLI will use your current `kubectl` config
to access the cluster.
```shell
# Kubeconfig should point to CONTROLLER cluster
fleet install manager | kubectl apply -f -
```
3. Generate a cluster group token to register clusters
```shell
# Kubeconfig should point to CONTROLLER cluster
fleet install agent-token > token
```
4. Apply the token to clusters to register them
```shell
# Kubeconfig should point to AGENT cluster
kubectl apply -f token
```
5. Deploy some bundles
```shell
# Kubeconfig should point to CONTROLLER cluster
fleet apply ./examples/helm-kustomize
```
6. Check status
```shell
kubectl get fleet
```
Enjoy and read the [docs](https://fleet.rancher.io/).
9 changes: 1 addition & 8 deletions charts/fleet/templates/rbac.yaml
@@ -26,16 +26,9 @@ rules:
  - ""
  resources:
  - secrets
  verbs:
  - '*'
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - list
  - watch
  - get
  - '*'
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
1 change: 1 addition & 0 deletions docs/CNAME
@@ -0,0 +1 @@
fleet.rancher.io
71 changes: 71 additions & 0 deletions docs/agent-initiated.md
@@ -0,0 +1,71 @@
# Agent Initiated

Refer to the [overview page](./cluster-overview.md#agent-initiated-registration) for background information on the agent-initiated registration style.

## Cluster Registration Token and Client ID

A downstream cluster is registered using two pieces of information: the **cluster registration token** and the **client ID**.

The **cluster registration token** is a credential that authorizes the downstream cluster agent to
initiate the registration process. Refer to the [cluster registration token page](./cluster-tokens.md) for more information
on how to create tokens and obtain the values. The cluster registration token is manifested as a `values.yaml` file that will
be passed to the `helm install` process.
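
As a sketch (the token name, `clusters` namespace, and TTL below are illustrative; the cluster registration token page is authoritative), a token is just another Kubernetes resource created in the Fleet manager:

```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: ClusterRegistrationToken
metadata:
  name: new-token       # illustrative name
  namespace: clusters   # namespace the clusters will register into
spec:
  ttl: 240h             # optional: the token expires after this window
```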

The **client ID** is a unique string that identifies the cluster. This string is user generated and opaque to the Fleet manager and
agent; it is only assumed to be sufficiently unique. For security reasons this value should not be easy to guess,
as one cluster could otherwise impersonate another. The client ID is optional; if not specified, the UID field of the `kube-system` namespace
resource will be used as the client ID. Upon registration, if the client ID is found on a `Cluster` resource in the Fleet manager, the agent will be associated
with that `Cluster`. If no `Cluster` resource is found with that client ID, a new `Cluster` resource will be created with the specified
client ID. Client IDs matter mostly so that when a cluster registers it can immediately be identified, assigned labels, and have git
repos deployed to it.
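
For instance, a `Cluster` resource can be pre-created with a known client ID so that labels are already in place the moment the agent registers; the name, namespace, and label below are illustrative assumptions:

```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: Cluster
metadata:
  name: my-cluster      # illustrative name
  namespace: clusters   # namespace the cluster registers into
  labels:
    env: prod           # labels like this can later drive targeting
spec:
  clientID: "a-unique-value-for-this-cluster"
```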

## Install agent for registration

The Fleet agent is installed as a Helm chart. The only parameters to the Helm chart installation should be the cluster registration token, which
is represented by the `values.yaml` file, and the client ID. The client ID is optional.

First follow the [cluster registration token page](./cluster-tokens.md) to obtain the `values.yaml` file to be used.

Second, set up your environment to use a client ID.

```shell
# If no client ID is going to be used then leave the value blank
CLUSTER_CLIENT_ID="a-unique-value-for-this-cluster"
```

Finally, install the agent using Helm.

!!! hint "Use proper namespace and release name"
For the agent chart the namespace must be `fleet-system` and the release name `fleet-agent`

!!! hint "Ensure you are installing to the right cluster"
Helm will use the default context in `${HOME}/.kube/config` to deploy the agent. Use `--kubeconfig` and `--kube-context`
to change which cluster Helm is installing to.

```shell
helm -n fleet-system install --create-namespace --wait \
--set clientID="${CLUSTER_CLIENT_ID}" \
--values values.yaml \
fleet-agent https://github.com/rancher/fleet/releases/download/{{fleet.version}}/fleet-agent-{{fleet.helmversion}}.tgz
```

The agent should now be deployed. You can check the status of the Fleet pods by running the commands below.

```shell
# Ensure kubectl is pointing to the right cluster
kubectl -n fleet-system logs -l app=fleet-agent
kubectl -n fleet-system get pods -l app=fleet-agent
```

Additionally, you should see a new cluster registered in the Fleet manager. Below is an example of checking that a new cluster
was registered in the `clusters` [namespace](./namespaces.md). Please ensure your `${HOME}/.kube/config` points to the Fleet
manager when running this command.

```shell
kubectl -n clusters get clusters.fleet.cattle.io
```
```
NAME BUNDLES-READY NODES-READY SAMPLE-NODE LAST-SEEN STATUS
cluster-ab13e54400f1 1/1 1/1 k3d-cluster2-server-0 2020-08-31T19:23:10Z
```

45 changes: 45 additions & 0 deletions docs/architecture.md
@@ -0,0 +1,45 @@
# Architecture

![](./arch.png)

Fleet has two primary components: the Fleet manager and the cluster agents. These
components work in a two-stage pull model: the Fleet manager pulls from git, and the
cluster agents pull from the Fleet manager.

## Fleet Manager

The Fleet manager is a set of Kubernetes controllers running in any standard Kubernetes
cluster. The only API exposed by the Fleet manager is the Kubernetes API; there is no
custom API for the fleet controller.

## Cluster Agents

One cluster agent runs in each cluster and is responsible for talking to the Fleet manager.
The only communication from a cluster to the Fleet manager is through this agent, and all communication
goes from the managed cluster to the Fleet manager. The Fleet manager does not initiate
connections to downstream clusters. This means managed clusters can run in private networks and behind
NATs. The only requirement is that the cluster agent must be able to communicate with the
Kubernetes API of the cluster running the Fleet manager. The one exception to this is the
[manager initiated](./manager-initiated.md) cluster registration flow, which is not required, but
an optional pattern.

The cluster agents are not assumed to have an "always on" connection. They will resume operation as
soon as they can connect. Future enhancements will probably add the ability to schedule times when
the agent checks in; as it stands right now they will always attempt to connect.

## Security

The Fleet manager dynamically creates service accounts, manages their RBAC, and then gives the
tokens to the downstream clusters. Clusters are registered using cluster registration tokens, which can optionally expire.
The cluster registration token is used only during the registration process to generate a credential specific
to that cluster. After the cluster credential is established the cluster "forgets" the cluster registration
token.

The service accounts given to the clusters only have privileges to list `BundleDeployment` resources in the namespace created
specifically for that cluster. They can also update the `status` subresource of `BundleDeployment` and the `status`
subresource of their `Cluster` resource.
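
As a rough illustration (not the literal generated object; the names and exact verbs are assumptions), the per-cluster permissions are shaped something like:

```yaml
# Illustrative sketch only; the real Role is generated by the Fleet manager
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: fleet-agent-access    # hypothetical name
  namespace: cluster-abc123   # the namespace created specifically for this cluster
rules:
- apiGroups:
  - fleet.cattle.io
  resources:
  - bundledeployments
  verbs:
  - list
  - watch   # get/watch assumed alongside list
  - get
- apiGroups:
  - fleet.cattle.io
  resources:
  - bundledeployments/status
  verbs:
  - update
```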

## Scalability

Fleet is designed to scale up to 1 million clusters. More details are to come on how we expect to scale
a Kubernetes controller-based architecture to hundreds of millions of objects and beyond.
1 change: 1 addition & 0 deletions docs/assets/logo.svg
1 change: 0 additions & 1 deletion docs/bundleflow.drawio

This file was deleted.

