Add reference implementation example #138

Merged · 8 commits · Feb 19, 2024
2 changes: 1 addition & 1 deletion Makefile
@@ -39,7 +39,7 @@ endif

.PHONY: generate
generate: controller-gen ## Generate code containing DeepCopy, DeepCopyInto, and DeepCopyObject method implementations.
$(CONTROLLER_GEN) object:headerFile="hack/boilerplate.go.txt" paths="./..."
$(CONTROLLER_GEN) object:headerFile="hack/boilerplate.go.txt" paths="./api/..."

.PHONY: manifests
manifests: controller-gen ## Generate WebhookConfiguration, ClusterRole and CustomResourceDefinition objects.
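This change narrows `controller-gen` deepcopy generation from the whole module to the `api/` tree. A quick, hedged way to sanity-check the effect locally (assuming a standard Kubebuilder-style layout where the generated `zz_generated.deepcopy.go` files live under `api/`; this repo's layout may differ):

```bash
# Regenerate deepcopy code with the narrowed paths, then confirm that
# any regenerated files appear only under api/ (sketch; assumes make and git).
make generate
git status --short
```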
121 changes: 121 additions & 0 deletions examples/ref-implementation/README.md
@@ -0,0 +1,121 @@
# Reference implementation

This example creates a local version of the CNOE reference implementation.

## Installation
Contributor

Can you also add a prerequisites section? I suspect that we need:

  • kubectl installed + the kube OIDC login plugin (?),
  • idpbuilder version 0.x (the x should be defined ;-)),
  • An AWS account and an S3 bucket,

Contributor

Also, do we need something to put in there with regards to system specs, or at least memory requirements? My 16 GB Core i7 can't handle it well.


Run the following command from the root of this repository.

```bash
idpbuilder create --package-dir examples/ref-implementation
```

This will take ~6 minutes for everything to come up. To track the progress, you can go to the [ArgoCD UI](https://argocd.cnoe.localtest.me:8443/applications).
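If you prefer the command line over the ArgoCD UI, here is a minimal sketch for watching the bootstrap progress (assuming your kubeconfig points at the idpbuilder-created cluster; the `argocd` namespace matches the Application manifests in this package):

```bash
# Watch the ArgoCD Application resources until they report Synced/Healthy.
kubectl get applications -n argocd -w

# Optionally, check that pods across all namespaces are coming up.
kubectl get pods -A
```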

### What was installed?

1. **Argo Workflows** to enable workflow orchestration.
2. **Backstage** as the UI for the software catalog and templating. Source is available [here](https://github.com/cnoe-io/backstage-app).
3. **Crossplane**, AWS providers, and basic compositions for deploying cloud-related resources (needs your credentials for this to work).
4. **External Secrets** to generate secrets and coordinate secrets between applications.
5. **Keycloak** as the identity provider for applications.
6. **Spark Operator** to demonstrate an example Spark workload through Backstage.
Contributor

If someone doesn't want Spark, for example, is it as easy as removing the manifests for it? Or is there extra configuration in other components that they need to tweak?

Collaborator Author

> Also, do we need something to put in there with regards to system specs, or at least memory requirements? My 16 GB Core i7 can't handle it well.

That's actually concerning. On my Ubuntu machine it consumes about 5 GB of memory. How much memory are you allocating to the Docker Desktop VM?

> If someone doesn't want Spark, for example, is it as easy as removing the manifests for it? Or is there extra configuration in other components that they need to tweak?

Yeah, you just need to remove the ArgoCD Application spec for it (see the sketch below).
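A minimal sketch of that removal, assuming the Spark operator ships as its own ArgoCD Application manifest in the package directory (the exact file name below is hypothetical; list the directory first):

```bash
# See which ArgoCD Application specs the package contains.
ls examples/ref-implementation/*.yaml

# Remove the Spark operator's Application spec (hypothetical file name),
# then re-create the environment without it.
rm examples/ref-implementation/spark-operator.yaml
idpbuilder create --package-dir examples/ref-implementation
```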

Contributor

Why do you use Apache Spark as an example? AFAIK, there is no Backstage plugin for it, so what would be the added value of proposing it as an example?
Remark: Should we instead propose a microservice such as Spring Boot, Quarkus, or Node.js (or a combination) as the example?


#### Accessing UIs
- Argo CD: https://argocd.cnoe.localtest.me:8443
- Argo Workflows: https://argo.cnoe.localtest.me:8443
- Backstage: https://backstage.cnoe.localtest.me:8443
- Gitea: https://gitea.cnoe.localtest.me:8443
- Keycloak: https://keycloak.cnoe.localtest.me:8443/admin/master/console/

# Using it

For this example, we will walk through a few demonstrations. Once the applications are ready, go to the [Backstage URL](https://backstage.cnoe.localtest.me:8443).

Click on the Sign-In button; you will be asked to log into the Keycloak instance. There are two users set up in this
configuration, and their passwords can be retrieved with the following command:

```bash
kubectl -n keycloak get secret keycloak-config \
-o go-template='{{ range $key, $value := .data }}{{ printf "%s: %s\n" $key ($value | base64decode) }}{{ end }}'
```

Use the username **`user1`** and the password value given by the `USER_PASSWORD` field to log in to the Backstage instance.
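If you only want that single value, a shorter sketch (assuming the password is stored under the `USER_PASSWORD` key of the same secret, as the command above suggests):

```bash
# Print only the user1 password from the keycloak-config secret.
kubectl -n keycloak get secret keycloak-config \
  -o jsonpath='{.data.USER_PASSWORD}' | base64 -d; echo
```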

## Basic Deployment

Let's start by deploying a simple application to the cluster through Backstage.

Click on the `Create...` button on the left, then select the `Create a Basic Deployment` template.

![img.png](images/backstage-templates.png)


In the next screen, type `demo` for the name field, then click create.
Contributor

Suggested change
- In the next screen, type `demo` for the name field, then click create.
+ In the next screen, type `demo` for the name field, then click Review, then Create.

Once the steps complete, click the `Open In Catalog` button to go to the entity page.

![img.png](images/basic-template-flow.png)

In the demo entity page, you will notice an ArgoCD overview card associated with this entity.
You can click on the ArgoCD Application name to see more details.

![img.png](images/demo-entity.png)

### What just happened?

1. Backstage created [a git repository](https://gitea.cnoe.localtest.me:8443/giteaAdmin/demo), then pushed templated contents to it.
2. Backstage created [an ArgoCD Application](https://argocd.cnoe.localtest.me:8443/applications/argocd/demo?) and pointed it to the git repository.
3. Backstage registered the application as [a component](https://gitea.cnoe.localtest.me:8443/giteaAdmin/demo/src/branch/main/catalog-info.yaml) in Backstage.
4. ArgoCD deployed the manifests stored in the repo to the cluster.
5. Backstage retrieved application health from the ArgoCD API, then displayed it.

![image.png](images/basic-deployment.png)
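To trace the same chain from the command line, a small sketch (the ArgoCD Application name `demo` comes from the name you typed above; inspecting its YAML shows the Gitea repository and destination it was wired to):

```bash
# Check the ArgoCD Application Backstage created, with its sync/health status.
kubectl -n argocd get application demo

# Inspect which repository and destination namespace it points at.
kubectl -n argocd get application demo -o yaml
```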


## Argo Workflows and Spark Operator

In this example, we will deploy a simple Apache Spark job through Argo Workflows.

Click on the `Create...` button on the left, then select the `Basic Argo Workflow with a Spark Job` template.

![img.png](images/backstage-templates-spark.png)

Type `demo2` for the name field, then click create. You will notice that the Backstage templating steps are very similar to the basic example above.
Click on the Open In Catalog button to go to the entity page.

![img.png](images/demo2-entity.png)

Deployment processes are the same as the first example. Instead of deploying a pod, we deployed a workflow to create a Spark job.
Contributor

Can we link to the location in the Git repo where this workflow can be found?


In the entity page, there is a card for Argo Workflows, and it should say running or succeeded.
You can click the name in the card to go to the Argo Workflows UI to view more details about this workflow run.
When prompted to log in, click the login button under single sign-on. Argo Workflows is configured to use SSO with Keycloak, allowing you to log in with the same credentials as your Backstage login.

Note that Argo Workflows are not usually deployed this way. This is just an example to show you how you can integrate workflows, Backstage, and Spark.

Back in the entity page, you can view more details about Spark jobs by navigating to the Spark tab.
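If you'd rather check from the terminal, a minimal sketch (assuming the workflow runs in the `argo` namespace, as in the Application manifest below, and that the Spark operator registers the usual `SparkApplication` CRD):

```bash
# List the workflow the template submitted and its current phase.
kubectl -n argo get workflows

# List the Spark jobs it created (cluster-wide, since the target
# namespace depends on the template).
kubectl get sparkapplications -A
```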

## Application with cloud resources

Similarly to the above, we can deploy an application with cloud resources using Backstage templates.
In this example, we will create an application with an S3 bucket.

Choose a template named `App with S3 bucket`, type `demo3` as the name, then choose a region to create this bucket in.

Once you click the create button, you will have a very similar setup to the basic example.
The only difference is that we now have a resource for an S3 bucket, which is managed by Crossplane.

Note that the bucket is **not** created, because Crossplane doesn't have the necessary credentials to do so.
If you'd like it to actually create a bucket, update [the credentials secret file](crossplane-providers/provider-secret.yaml), then run `idpbuilder create --package-dir examples/ref-implementation` again.
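A rough sketch of that flow (the editor step is illustrative; use whatever keys `provider-secret.yaml` actually defines):

```bash
# Put real AWS credentials into the provider secret file.
$EDITOR examples/ref-implementation/crossplane-providers/provider-secret.yaml

# Re-run idpbuilder so the updated secret reaches the cluster.
idpbuilder create --package-dir examples/ref-implementation

# Watch Crossplane-managed resources (including the bucket) reconcile.
kubectl get managed
```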

In this example, we used Crossplane to provision resources, but you can use other cloud resource management tools such as Terraform instead.
Regardless of your tool choice, the concepts are the same: we use Backstage as the templating mechanism and UI for users, then use the Kubernetes API with GitOps to deploy resources.

## Notes

- In these examples, we have used the pattern of creating a new repository for every app, then having ArgoCD deploy it.
This is done for convenience and demonstration purposes only. There are alternative actions that you can use.
For example, you can create a PR to an existing repository, or create a repository but not deploy it yet.

- If Backstage's pipelining and templating mechanisms are too simple, you can use more advanced workflow engines like Tekton or Argo Workflows.
You can invoke them from Backstage templates, then track progress similarly to how it was described above.
23 changes: 23 additions & 0 deletions examples/ref-implementation/argo-workflows.yaml
@@ -0,0 +1,23 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argo-workflows
  namespace: argocd
  labels:
    env: dev
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: cnoe://argo-workflows/manifests
    targetRevision: HEAD
    path: "dev"
  destination:
    server: "https://kubernetes.default.svc"
    namespace: argo
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
    automated:
      selfHeal: true