docs: Initial user facing docs
Signed-off-by: jannfis <[email protected]>
jannfis committed Jan 14, 2025
1 parent d4f380d commit 097e6d8
Showing 28 changed files with 294 additions and 0 deletions.
10 changes: 10 additions & 0 deletions .github/workflows/ci.yaml
@@ -114,6 +114,16 @@ jobs:
- name: Compile all packages
run: make build

build-docs:
name: Build the documentation
runs-on: ubuntu-22.04
steps:
- name: Checkout code
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
- name: Build the documentation
run: |
make build-docs
test:
name: Run unit tests
if: ${{ needs.changes.outputs.code == 'true' }}
14 changes: 14 additions & 0 deletions Makefile
@@ -7,6 +7,10 @@ IMAGE_NAME_PRINCIPAL=argocd-agent-principal
IMAGE_PLATFORMS?=linux/amd64
IMAGE_TAG?=latest

# mkdocs related configuration
MKDOCS_DOCKER_IMAGE?=squidfunk/mkdocs-material:9
MKDOCS_RUN_ARGS?=

# Binary names
BIN_NAME_AGENT=argocd-agent-agent
BIN_NAME_PRINCIPAL=argocd-agent-principal
@@ -138,10 +142,20 @@ image-agent:
image-principal:
$(DOCKER_BIN) build -f Dockerfile.principal --platform $(IMAGE_PLATFORMS) -t $(IMAGE_REPOSITORY)/$(IMAGE_NAME_PRINCIPAL):$(IMAGE_TAG) .

.PHONY: push-images
push-images:
$(DOCKER_BIN) push $(IMAGE_REPOSITORY)/$(IMAGE_NAME_AGENT):$(IMAGE_TAG)
$(DOCKER_BIN) push $(IMAGE_REPOSITORY)/$(IMAGE_NAME_PRINCIPAL):$(IMAGE_TAG)

.PHONY: serve-docs
serve-docs:
${DOCKER_BIN} run ${MKDOCS_RUN_ARGS} --rm -it -p 8000:8000 -v ${current_dir}:/docs ${MKDOCS_DOCKER_IMAGE} serve -a 0.0.0.0:8000

.PHONY: build-docs
build-docs:
${DOCKER_BIN} run ${MKDOCS_RUN_ARGS} --rm -v ${current_dir}:/docs ${MKDOCS_DOCKER_IMAGE} build
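
These targets wrap the mkdocs-material container so no local Python toolchain is needed. A usage sketch, assuming Docker is installed and you invoke make from the repository root (the version tag in the override example is illustrative):

```sh
# Build the static documentation site
make build-docs

# Serve the documentation with live reload at http://localhost:8000
make serve-docs

# The variables are declared with '?=', so they can be overridden per
# invocation, e.g. to pin a specific mkdocs-material image:
make build-docs MKDOCS_DOCKER_IMAGE=squidfunk/mkdocs-material:9.5
```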


.PHONY: help
help:
@echo "Not yet, sorry."
Binary file added docs/assets/01-architecture.png
25 changes: 25 additions & 0 deletions docs/assets/extra.css
@@ -0,0 +1,25 @@
.codehilite {
background-color: hsla(0,0%,92.5%,.5);
overflow: auto;
-webkit-overflow-scrolling: touch;
}

.codehilite pre {
background-color: transparent;
padding: .525rem .6rem;
}

@media only screen and (min-width: 76.25em) {
.md-main__inner {
max-width: none;
}
.md-sidebar--primary {
left: 0;
}
.md-sidebar--secondary {
right: 0;
margin-left: 0;
-webkit-transform: none;
transform: none;
}
}
Binary file added docs/assets/logo.png
13 changes: 13 additions & 0 deletions docs/concepts/agent-modes/autonomous.md
@@ -0,0 +1,13 @@
# Autonomous mode

## Overview of autonomous mode

In *autonomous mode*, the workload cluster is wholly responsible for maintaining its own configuration. In contrast to [managed mode](./managed.md), all configuration is first created on the workload cluster. The agent on the workload cluster observes the creation, modification, and deletion of configuration and transmits these changes to the principal on the control plane cluster.

The principal will then create, update, or delete this configuration on the control plane cluster. Users can use the Argo CD UI, CLI, or API to inspect the status of the configuration, but they cannot change or delete it.

## Architectural considerations

## Why choose this mode

## Why not choose this mode
7 changes: 7 additions & 0 deletions docs/concepts/agent-modes/index.md
@@ -0,0 +1,7 @@
# Agent modes

The main purpose of the [agent](../components-terminology.md#agent) and the [principal](../components-terminology.md#principal) components is to keep configuration in sync between the [workload clusters](../components-terminology.md#workload-cluster) and the [control plane cluster](../components-terminology.md#control-plane-cluster).

Each agent can operate in one of two distinct configuration modes: *managed* or *autonomous*. These modes define the general sync direction: From the workload cluster to the control plane cluster (*autonomous*), or from the control plane cluster to the workload cluster (*managed*).

Please refer to the sub-chapters [Managed mode](./managed.md) and [Autonomous mode](./autonomous.md) for detailed information, architectural considerations, and constraints to help you choose the mode most appropriate for your agents.
33 changes: 33 additions & 0 deletions docs/concepts/agent-modes/managed.md
@@ -0,0 +1,33 @@
# Managed mode

## Overview

In *managed mode*, the control plane cluster is responsible for maintaining the configuration of an agent. The agent receives all of its configuration (e.g. `Applications` and `AppProjects`) from the [principal](../components-terminology.md#principal).

For example, to create a new Argo CD `Application` on the workload cluster, it must be created on the control plane cluster. The principal will observe the creation of a new `Application` and emit a creation event that the agent on the workload cluster will pick up. The agent will then create the `Application` in its local cluster. From there, it will be picked up by the Argo CD *application controller* for reconciliation. The agent will observe any changes to the `Application`'s status field and transmit them to the principal, which merges them into the leading copy of the `Application`.

Likewise, if an `Application` is to be deleted, it must be deleted on the control plane cluster. Once the principal observes the deletion, it will emit a deletion event that the agent on the workload cluster will pick up. The agent then deletes the `Application` from its local cluster and transmits the result back to the principal.

Similar procedures apply to modifications of an `Application` in this mode.

Changes to `Application` resources on the workload cluster that do not originate from the principal will be reverted.
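
As a sketch of this flow, an ordinary Argo CD `Application` manifest such as the one below would be created on the control plane cluster and propagated to the workload cluster by the agent. The names, namespace mapping, and repository are made up for illustration; the exact fields an agent-managed `Application` requires may differ:

```yaml
# Created on the control plane cluster. The principal emits a creation
# event, and the agent creates the same Application on its workload
# cluster, where the local application controller reconciles it.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: agent-one        # hypothetical namespace associated with one agent
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps
    targetRevision: HEAD
    path: guestbook
  destination:
    server: https://kubernetes.default.svc   # reconciled locally on the workload cluster
    namespace: guestbook
```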

## Architectural considerations

* The minimum requirement for any workload cluster in *managed* mode is to have an agent and the Argo CD *application-controller* installed.
* The *application-controller* can be configured to use the *repository-server* and *redis-server* on either the control plane cluster, or on the local workload cluster.
* The Argo CD *applicationset-controller* must be running on the control plane cluster, if you intend to use `ApplicationSets`.
* If the Argo CD *application-controller* is configured to use the *redis-server* or the *repository-server* on the control plane cluster, the control plane cluster becomes a single point of failure (SPoF) for the workload cluster.

## Why choose this mode

* Provides the classical Argo CD experience
* Create and manage applications from the Argo CD UI, CLI or API
* Allows the use of ApplicationSet generators that span multiple clusters, such as the cluster or cluster-decision generators

## Why not choose this mode

* Very limited support for the app-of-apps pattern
* If the control plane cluster is compromised, workload clusters in managed mode are affected, too
* As noted [previously](#architectural-considerations), the control plane cluster might become a SPoF

50 changes: 50 additions & 0 deletions docs/concepts/architecture.md
@@ -0,0 +1,50 @@
# Architectural overview

This section of the documentation gives a broad overview of *argocd-agent*'s architecture, the terminology used, and how the pieces fit together. To get started, and to understand the functionality and limitations of *argocd-agent*, it is important to become familiar with the architecture and the components that make up the project.

## Architectural diagram

The following diagram shows a very simplified overview of *argocd-agent*'s architecture. In this particular example, there are three [workload clusters](./components-terminology.md#workload-cluster) connected to a single [control plane cluster](./components-terminology.md#control-plane-cluster). The light blue boxes are existing Argo CD components and assets, while the light green boxes are components added by *argocd-agent*.

![Architectural overview](../assets/01-architecture.png)

In the context of the diagram, the term "Argo CD components" means one or more Argo CD workloads (such as the application controller, applicationset controller, or repository server), depending on the concrete setup, and the term "configuration" means the configuration required to reconcile resources with Argo CD, e.g. `Applications`, `AppProjects`, etc. What exactly this means in each scenario is described in more detail [here TODO](TODO)

As can be seen in the diagram, there is no connection between the central control plane cluster and the workload clusters except through the components of *argocd-agent*: specifically, a connection from each workload cluster's *agent* to the control plane's *principal* component. Reconciliation happens locally on the [workload clusters](./components-terminology.md#workload-cluster), as an *application controller* runs on each of them.

## Scope and function

The agent-based architecture is a major shift from the classical Argo CD architecture, in which the central *application controller* establishes and maintains connections to all remote clusters. To do so, the *application controller* needs a highly privileged service account on each remote cluster and must maintain the credentials for it. It is also no secret that connecting a remote cluster is one of the most expensive operations in Argo CD: the *application controller* watches and caches all resources on all remote clusters, which takes a significant amount of time and compute resources.

The connection between an [agent](./components-terminology.md#agent) and the [principal](./components-terminology.md#principal) is much cheaper to establish and maintain than the connection from an *application controller* to the Kubernetes API endpoint of a remote cluster, as it does not need to watch or cache every resource on the cluster. Instead, it focuses on Argo CD configuration such as `Applications`, `AppProjects`, repository configuration, and the like. It is also much more resilient to bad or slow networks, connection drops, and high-latency transmissions. And last but not least, the [control plane cluster](./components-terminology.md#control-plane-cluster) no longer needs to maintain credentials for any of the workload clusters. Instead, the workload clusters authenticate to the central control plane.

*argocd-agent* is not designed to, nor does it intend to, replace any existing functionality in Argo CD. Its scope is to change the way Applications are deployed in a multi-cluster scenario, especially when more than a couple of clusters are involved. The project intends to require as few changes to Argo CD as possible, using any out-of-the-box Argo CD installation as the ultimate target.

Under the hood, *argocd-agent* uses a message based protocol to establish a *bi-directional* exchange of messages. Bi-directional in this context means that both, the [principal](./components-terminology.md#principal) and the [agents](./components-terminology.md#agent) can send and receive messages simultaneously using the same connection, which is established exclusively by the agents. As of today, the underlying transport is gRPC based, but there are [plans](https://github.com/argoproj-labs/argocd-agent/issues/260) to make this extensible. The vision for this is that one could use a message bus implementation such as Kafka for the transport of messages if they wanted to.
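
As an illustration only (this is not the project's actual protocol definition), a bi-directional message exchange of this kind is declared in gRPC with a single RPC that streams in both directions:

```proto
syntax = "proto3";

package example;

// A hypothetical event message; the real project's message types differ.
message Event {
  string id      = 1;
  bytes  payload = 2;
}

// One RPC over which both peers can send and receive Event messages
// independently for the lifetime of the connection, even though the
// connection itself is only ever initiated by the client (the agent).
service EventStream {
  rpc Subscribe (stream Event) returns (stream Event);
}
```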

## Design principles

The following describes the guiding design principles upon which *argocd-agent* is built. All enhancements and contributions should follow those principles.

**A permanent network connection is neither expected nor required**

It is understood that workload clusters can be everywhere: in your dark-fibre connected data centres, across different cloud providers, regions, and availability zones, in moving things such as cars, ships, or trains, or wherever else they are needed. Not all of these locations will have a permanent, reliable, low-latency network connection.

Thus, *argocd-agent* is designed around the assumption that the connection between [workload clusters](./components-terminology.md#workload-cluster) and the [control plane cluster](./components-terminology.md#control-plane-cluster) is not always available, and that it might not be possible to keep up a stable, well-performing network connection between the components. The system will benefit from a stable network connection with low latency, but it does not require one to function properly.

**Workload clusters are and will stay autonomous**

When the [agent](./components-terminology.md#agent) cannot communicate with the [principal](./components-terminology.md#principal) for whatever reason, the [workload cluster](./components-terminology.md#workload-cluster) will still be able to perform its operations (i.e. reconciliation) in an autonomous way, if set up correctly. Also, depending on the agent's mode of operation, cluster admins may still be able to perform configuration tasks (e.g. create, update, and delete applications), but those changes will only take effect once the agent is connected again.

There are architectural variants in which a workload cluster will be dependent upon the availability of the control plane, for example when the workload cluster uses a repository server or Redis cache on the control plane. However, there will always be a variant where fully autonomous workload clusters are supported.

**The initiating component is always the agent, not the control plane**

Connections are established in one direction only: from the agent to the control plane. Neither the control plane nor the agents need to know exact details about the topology of the system, as long as the agents know which control plane to connect to. In some parts of this documentation, we mention something called a _bi-directional stream_. This refers to a gRPC mechanism where either party may transmit and receive data to and from its peer at any time, all while the connection is established in one direction only.

**Be lightweight by default but keep extensibility in mind**

*argocd-agent* should not impose any mandatory, heavy runtime dependencies or operational patterns. The hurdle of getting started should be as low as possible. The project should stay unencumbered of requirements such as persistent storage or relational databases by default.

We are aware that at some point in time we may hit a scaling limit, especially when it comes to etcd and the Kubernetes API. Thus, major parts such as Application backends on the principal are designed to be pluggable, so users can contribute and use different kinds of backends according to their scalability requirements and operational preferences.

1 change: 1 addition & 0 deletions docs/concepts/argocd-integration.md
@@ -0,0 +1 @@
# Integration with Argo CD
35 changes: 35 additions & 0 deletions docs/concepts/components-terminology.md
@@ -0,0 +1,35 @@
# Components and terminology

This chapter introduces the reader to the components and terminology of *argocd-agent*.

## Control plane cluster

The *control plane* or *control plane cluster* is the heart of the architecture. It holds all configuration, is responsible for distributing the configuration among agents, and hosts all central components used for observability and management. At a minimum, this cluster will host *argocd-agent*'s [principal](#principal) component and the Argo CD API server, including the Argo CD web UI.

The control plane cluster itself will not perform reconciliation of Argo CD `Applications` to any cluster, including itself.

Typically, there exists only one control plane cluster in any given setup.

## Workload cluster

A *workload cluster* is a cluster that is the target for applications. In the classical Argo CD architecture, a workload cluster is one that you connect your Argo CD installation to, i.e. using `argocd cluster add` or its declarative equivalent.

In the case of *argocd-agent*, the workload cluster will host at least *argocd-agent*'s [agent](#agent) component and Argo CD's application controller for local reconciliation. In a nutshell, each workload cluster needs a low-footprint version of Argo CD installed to it.

## Principal

The *principal* is a central component of *argocd-agent*. It runs on the control plane cluster, and its main tasks are to distribute configuration to the [agents](#agent) and to receive status information from them.

*Agents* need to be registered with the *principal*, and the *principal* will authenticate agents once they connect.

The *principal* provides additional services to the Argo CD API server, so that users will be able to view live resources and container logs from workload clusters.

The *principal* typically requires a limited set of privileges on the control plane cluster.

## Agent

The *agent* is a component that gets installed to each workload cluster. It will be configured to connect to a specific *principal*, from which it will receive all configuration and to which it will send status updates.

Depending on the features an *agent* should provide, it will require a limited to extended set of privileges on the workload cluster.

Each *agent* can run in one of the following modes: *managed* or *autonomous*. For more information, refer to the chapter about [agent modes](./agent-modes/index.md).
2 changes: 2 additions & 0 deletions docs/contributing/code.md
@@ -0,0 +1,2 @@
# Code contributions

Empty file added docs/contributing/docs.md
30 changes: 30 additions & 0 deletions docs/contributing/index.md
@@ -0,0 +1,30 @@
# Contributing to argocd-agent

## Preface

*argocd-agent* is a community-driven Open Source project, published under the liberal Apache-2.0 license.

The project lives from the contributions of its community, and everybody is invited to contribute to it and to become part of its community.

Contributions can come in many forms, for example, but not limited to:

* Contributing code to implement new features or fix existing bugs
* Contributing documentation, so the project becomes easier to use for everyone
* Taking part in technical discussions, so the project becomes better
* Raising bug reports or feature requests
* Reviewing other people's PRs
* Triaging and participating in bug resolution
* Engaging with the community to discuss use cases, issues, etc.

To keep the project focused and on track, all contributions must adhere to a set of simple rules in order to be accepted. This ensures that everyone's time is used efficiently: both for potential contributors whose submissions may be rejected, and for the reviewers and maintainers involved in the process.

A prerequisite for all existing and would-be contributors is to abide by the Argo project's code of conduct. In general, that means: please be kind to each other and keep things focused, constructive, and on-topic.

This chapter of the documentation will guide any contributor, whether brand new to the project or a seasoned veteran, toward making good contributions to the project.

## Connecting with the community

Currently, the ways to connect with the community are:

* Join the [#argo-cd-agent channel](https://cloud-native.slack.com/archives/C07L5SX6A9J) on the CNCF Slack. If you don't have an account yet, you can request one for free [here](https://communityinviter.com/apps/cloud-native/cncf)
* Open an [issue](https://github.com/argoproj-labs/argocd-agent/issues) or [discussion](https://github.com/argoproj-labs/argocd-agent/discussions) on our GitHub project page
Empty file added docs/contributing/prs.md
1 change: 1 addition & 0 deletions docs/contributing/review.md
@@ -0,0 +1 @@
# Code reviewing guide
Empty file added docs/faq/index.md
7 changes: 7 additions & 0 deletions docs/getting-started/index.md
@@ -0,0 +1,7 @@
# Getting started

!!! warning
    The argocd-agent project is currently rather complex to set up and operate. Right now, it is not for the faint of heart. As the project progresses and hopefully gains more contributors, we will come up with ways to smooth out both the initial installation and day-2 operations.

## Prerequisites
