[
{
"uri": "https://devsecops-workshop.github.io/1-intro/",
"title": "The DevSecOps Workshop",
"tags": [],
"description": "",
"content": "Intro This is the storyline you are going to follow:\n Create an application using the browser based development environment Red Hat OpenShift Dev Spaces Setting up the Inner Development Loop for the individual developer Use the cli tool odo to create, push, change apps on the fly Setting up the Outer Development Loop for the CI/CD team Learn to work with OpenShift Pipelines based on Tekton Use OpenShift GitOps based on ArgoCD Secure your app and OpenShift cluster with ACS Introduction to ACS Example use cases Add ACS scanning to a Tekton Pipeline What to Expect This workshop is for intermediate OpenShift users. A good understanding of how OpenShift works along with hands-on experience is expected. For example we will not tell you how to log in with oc to your cluster or tell you what it is\u0026hellip; ;)\n We try to balance guided workshop steps and challenge you to use your knowledge to learn new skills. This means you\u0026rsquo;ll get detailed step-by-step instructions for every new chapter/task, later on the guide will become less verbose and we\u0026rsquo;ll weave in some challenges.\nWorkshop Environment As Part of a Red Hat Workshop For Attendees As part of the workshop you will be provided with freshly installed OpenShift 4.10 clusters. Depending on attendee numbers we might ask you to gather in teams. Some workshop tasks must be done only once for the cluster (e.g. installing Operators), others like deploying and securing the application can be done by every team member separately in their own Project. This will be mentioned in the guide.\nYou\u0026rsquo;ll get all access details for your lab cluster from the facilitators. This includes the URL to the OpenShift console and information about how to SSH into your bastion host to run oc if asked to.\nFor Facilitators The easiest way to provide this environment is through the Red Hat Demo System. Provision catalog item Red Hat OpenShift Container Platform 4 Demo for the the attendees.\nSelf Hosted While the workshop is designed to be run on Red Hat Demo System (RHDS) and the environment AWS with OpenShift Open Environment, you should be able to run the workshop on a 4.14 cluster of your own.\nJust make sure :\n You have cluster admin privileges Sizing 3 Controlplane Nodes (Similar to AWS m5.2x.large) 3 Worker (Similar to AWS m5.4x.large) Authentication htpasswd enabled For the ACM chapter you will need AWS credentials to automatically deploy a SingleNode OpenShift Some names in the workshop may need to be customized for your environment (e.g. storage naming) This workshop was tested with these versions :\n Red Hat OpenShift : 4.14.18 Red Hat Advanced Cluster Security for Kubernetes: 4.4.0 Red Hat OpenShift Dev Spaces : 3.12.0 Red Hat OpenShift Pipelines: 1.14.3 Red Hat OpenShift GitOps: 1.12.0 Red Hat Quay: 3.8.15 Red Hat Quay Bridge Operator: 3.7.11 Red Hat Data Foundation : 4.14.06 Gitea Operator: 1.3.0 Web Terminal: 1.9.0 Workshop Flow We\u0026rsquo;ll tackle the topics at hand step by step with an introduction covering the things worked on before each section.\nAnd finally a sprinkle of JavaScript magic You\u0026rsquo;ll notice placeholders for cluster access details, mainly the part of the domain that is specific to your cluster. There are two options:\n Whenever you see the placeholder \u0026lt;DOMAIN\u0026gt; replace it with the value for your environment This is the part to the right of apps. e.g. 
for console-openshift-console.apps.cluster-t50z9.t50z9.sandbox4711.opentlc.com replace with cluster-t50z9.t50z9.sandbox4711.opentlc.com Use a query parameter in the URL of this lab guide to have all occurrences replaced automagically, e.g.: http://devsecops-workshop.github.io/?domain=cluster-t50z9.t50z9.sandbox4711.opentlc.com You can use the Link Generator in the next chapter to create the URL for you URL Generator for Custom Lab Guide Enter your OpenShift URL after the apps part (e.g. cluster-t50z9.t50z9.sandbox4711.opentlc.com) and click the button to generate a link that will customize your lab guide.\nClick the generated link once to apply it to the current guide.\n function replaceURLParameter(url, parameter) { //prefer to use l.search if you have a location/link object console.log(\"ReplaceURLParameter in - \" + url + \" \" + parameter); var urlparts = url.split('?'); if (urlparts.length \u0026gt;= 2) { var prefix = encodeURIComponent(\"domain\") + '='; var pars = urlparts[1].split(/[\u0026;]/g); //reverse iteration as may be destructive for (var i = pars.length; i-- \u0026gt; 0;) { //idiom for string.startsWith if (pars[i].lastIndexOf(prefix, 0) !== -1) { pars.splice(i, 1); } } pars.push(\"domain=\" + parameter); return urlparts[0] + (pars.length \u0026gt; 0 ? '?' + pars.join('\u0026') : ''); } else { url = url + \"?domain=\" + parameter; } console.log(\"Returning - \" + url); return url; } function get_domain() { var domainVal = document.getElementById(\"domain\").value; var url = replaceURLParameter(window.location.href, domainVal); var a = document.createElement('a'); var linkText = document.createTextNode(url); a.appendChild(linkText); a.title = \"Custom Lab Guide\"; a.href = url; a.id = \"dynamicUrl\"; var parentElem = document.getElementById(\"dynamicLink\"); var elem = document.getElementById(\"dynamicUrl\"); if (elem != null) { console.log(\"Replacing in - \" + parentElem + \" \" + elem); parentElem.replaceChild(a, elem); } else { parentElem.appendChild(a); } } Generate URL Current Domain replacement Check to see if replacement is active -\u0026gt; \u0026lt;DOMAIN\u0026gt;\n"
},
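If you prefer the command line, here is a minimal sketch for deriving the <DOMAIN> value, assuming you are logged in with oc and the console route follows the usual console-openshift-console.apps.<DOMAIN> pattern:

CONSOLE_URL=$(oc whoami --show-console)   # e.g. https://console-openshift-console.apps.cluster-t50z9.t50z9.sandbox4711.opentlc.com
DOMAIN=${CONSOLE_URL#*apps.}              # keep only the part to the right of "apps."
DOMAIN=${DOMAIN%/}                        # drop a trailing slash, if any
echo "https://devsecops-workshop.github.io/?domain=${DOMAIN}"   # customized lab guide URL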
{
"uri": "https://devsecops-workshop.github.io/2-install-prerequisites/",
"title": "Install Prerequisites",
"tags": [],
"description": "",
"content": "Install Prerequisites During this workshop you\u0026rsquo;ll install and use a good number of software components. The first one is OpenShift Data Foundation for providing storage. We\u0026rsquo;ll start with it because the install takes a fair amount of time. Number two is Gitea for providing Git services in your cluster with more to follow in subsequent chapters.\nBut fear not, all are managed by Kubernetes Operators on OpenShift.\nInstall OpenShift Data Foundation Let\u0026rsquo;s install OpenShift Data Foundation which you might know under the old name OpenShift Container Storage. It is engineered as the data and storage services platform for OpenShift and provides software-defined storage for containers.\n Login to the OpenShift Webconsole with your cluster admin credentials In the Web Console, go to Operators \u0026gt; OperatorHub and search for the OpenShift Data Foundation operator Click image to enlarge Install the operator with default settings After the operator has been installed it will inform you to install a StorageSystem. From the operator overview page click Create StorageSystem with the following settings:\n Backing storage: Leave Deployment Type Full deployment and for Backing storage type make sure gp2 is selected. Click Next Capacity and nodes: Leave the Requested capacity as is (2 TiB) and select all nodes. Click Next Security and network: Leave set to Default (SDN) Click Next You\u0026rsquo;ll see a review of your settings, hit Create StorageSystem. Don\u0026rsquo;t worry if you see a temporary 404 Page. Just releod the browser page once and you will see the System Overview\n Click image to enlarge As mentioned already this takes some time, so go ahead and install the other prerequisites. We\u0026rsquo;ll come back later.\nPrepare to run oc commands You will be asked to run oc (the OpenShift commandline tool) commands a couple of times. We will do this by using the OpenShift Web Terminal. This is the easiest way because you don\u0026rsquo;t have to install oc or an SSH client.\nInstall OpenShift Web Terminal To extend OpenShift with the Web Terminal option, install the Web Terminal operator:\n Login to the OpenShift Webconsole with your cluster admin credentials In the Web Console, go to Operators \u0026gt; OperatorHub and search for the Web Terminal operator Install the operator with the default settings This will take some time and installs another operator as a dependency.\nAfter the operator has installed, reload the OCP Web Console browser window. You will now have a new button (\u0026gt;_) in the upper right. Click it to start a new web terminal. From here you can run the oc commands when the lab guide requests it (copy/paste might depend on your laptop OS and browser settings, e.g. try Ctrl-Shift-V for pasting).\n Click image to enlarge The terminal is not persistent, so if it was closed for any reason anything you did in the terminal is gone after re-opening.\n If for any reason you can\u0026rsquo;t use the webterminal, your options are:\n Install and run oc on your laptop SSH into the bastion host, if running on a Red Hat RHDP lab environment. From here you can just run oc without login. TODO: Change yaml applies to direct git download\nInstall and Prepare Gitea We\u0026rsquo;ll need Git repository services to keep our app and infrastructure source code, so let\u0026rsquo;s just install trusted Gitea using an operator:\nGitea is an OpenSource Git Server similar to GitHub. A team at Red Hat was so nice to create an Operator for it. 
This is a good example of how you can integrate an operator into your catalog that is not part of the default OperatorHub already.\n To integrate the Gitea operator into your Operator catalog you need to access your cluster with the oc client. You can do this in two ways:\n If you don\u0026rsquo;t already have the oc client installed, you can download the matching version for your operating system here Login to the OpenShift Web Console with your cluster admin credentials On the top right click on your username and then Copy login command to copy your login token On your local machine open a terminal and login with the oc command you copied above, you may need to add --insecure-skip-tls-verify at the end of the line Or, if working on a Red Hat RHPDS environment:\n Use the information provided to login to your bastion host via SSH When logged in as lab-user you will be able to run oc commands without additional login. Now using oc add the Gitea Operator to your OpenShift OperatorHub catalog:\noc apply -f https://raw.githubusercontent.com/rhpds/gitea-operator/ded5474ee40515c07211a192f35fb32974a2adf9/catalog_source.yaml In the Web Console, go to Operators \u0026gt; OperatorHub and search for Gitea (You may need to disable search filters) Install the Gitea Operator with default settings Go to Installed Operators \u0026gt; Gitea Operator Create a new OpenShift project called git with the Project selection menu at the top TODO : Screenshot Make sure you are in the git project via the top Project selection menu! Click on Create new instance (while in project git) Click image to enlarge TODO : Replace Screenshot\n On the Create Gitea page switch to the YAML view and add the following spec values: spec: giteaAdminUser: gitea giteaAdminPassword: \u0026quot;gitea\u0026quot; giteaAdminEmail: [email protected] Click Create After creation has finished:\n Access the route URL (you\u0026rsquo;ll find it e.g. in Networking \u0026gt; Routes \u0026gt; repository \u0026gt; Location). If the Route is not yet there just wait a couple of minutes. This will take you to the Gitea web UI Sign in to Gitea with user gitea and password gitea If your Gitea UI appears in a language other than English (depending on your locale settings), switch it to English. Change the language in your Gitea UI; the example below shows a German UI: Click image to enlarge Click image to enlarge Import the Required Repositories Now we will clone a git repository of a sample application into our Gitea, so we have some code to work with\n Clone the example repo: Click the + dropdown and choose New Migration As type choose Git URL: https://github.com/devsecops-workshop/quarkus-build-options.git Click Migrate Repository In the cloned repository you\u0026rsquo;ll find a devfile.yaml. We will need the URL to the file soon, so keep the tab open.\nIn later chapters we will need a second repository to hold your GitOps YAML resources. Let\u0026rsquo;s create this now as well.\n In Gitea create a New Migration and clone the Config GitOps Repo which will be the repository that contains our GitOps infrastructure components and state The URL is https://github.com/devsecops-workshop/openshift-gitops-getting-started.git Check OpenShift Data Foundation (ODF) Storage Deployment Now it\u0026rsquo;s time to check if the StorageSystem deployment from ODF completed successfully. 
In the OpenShift web console:\n Open Storage-\u0026gt;DataFoundation On the overview page go to the Storage Systems tab Click ocs-storagecluster-storagesystem On the next page make sure the status indicators on the Block and File and Object tabs are green! Click image to enlarge Click image to enlarge Your container storage is ready to go, explore the information on the overview pages if you\u0026rsquo;d like.\nInstall Red Hat Quay Container Registry The image that we have just deployed was pushed to the internal OpenShift Registry which is a great starting point for your cloud native journey. But if you require more control over your image repos, a graphical UI, scalability, internal security scanning and the like you may want to upgrade to Red Hat Quay. So as a next step we want to replace the internal registry with Quay.\nQuay installation is done through an operator, too:\n In Operators-\u0026gt;OperatorHub filter for Quay Install the Red Hat Quay Operator with the default settings Create a new project called quay at the top Project selection menu While in the project quay go to Administration-\u0026gt;LimitRanges and delete the quay-core-resource-limits Click image to enlarge In the operator overview of the Quay Operator on the Quay Registry tile click Create instance If the YAML view is shown switch to Form view Make sure you are in the quay project Change the name to quay Click Create Click the new QuayRegistry, scroll down to Conditions and wait until the Available type changes to True Click image to enlarge Now that the Registry is installed you have to configure a superuser:\n Make sure you are in the quay Project Go to Networking-\u0026gt;Routes, access the Quay portal using the URL of the first route (quay-quay) Click Create Account As username put in quayadmin, a (fake) email address and quayadmin as password. Click Create Account again In the OpenShift Web Console open Workloads-\u0026gt;Secrets Search for quay-config-editor-credentials-..., open the secret and copy the values, you\u0026rsquo;ll need them in a second. Go back to the Routes and open the quay-quay-config-editor route Login with the values of the secret from above Click Sign in Scroll down to Access Settings As Super User put in quayadmin, click Validate Configuration Changes and after the validation click Reconfigure Quay Reconfiguring Quay takes some time. The easiest way to determine if it has finished is to open the Quay portal (using the quay-quay Route). At the upper right you\u0026rsquo;ll see the username (quayadmin), if you click the username the drop-down should show a link Super User Admin Panel. When it shows up you can proceed.\n Click image to enlarge Integrate Quay as Registry into OpenShift To synchronize the internal default OpenShift Registry with the Quay Registry, Quay Bridge is used.\n In the OperatorHub of your cluster, search for the Quay Bridge Operator Install it with default settings Now we finally create a Quay Bridge instance:\n Go to the Red Hat Quay Bridge Operator overview (make sure you are in the quay namespace) On the Quay Integration tile click Create Instance Open Credentials secret Namespace containing the secret: quay Key within the secret: token Copy the Quay Portal hostname (including https://) and paste it into the Quay Hostname field Set Insecure registry to true Click Create Architecture recap Click image to enlarge "
},
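The console checks from this chapter can also be scripted from the Web Terminal. A rough sketch, assuming the default resource names used above (the ODF StorageCluster in openshift-storage, the Gitea instance in the git project, Quay in the quay project):

oc get storagecluster -n openshift-storage   # wait until ocs-storagecluster reports phase Ready
oc get pods -n quay                          # the Quay pods should settle into Running after the reconfigure
oc get route -n git                          # the repository route exposing the Gitea web UI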
{
"uri": "https://devsecops-workshop.github.io/2.1-prepare-cluster/",
"title": "Prepare Cluster",
"tags": [],
"description": "",
"content": "Prepare Cluster Integrate Quay as Registry into OpenShift To synchronize the internal default OpenShift Registry with the Quay Registry, the Quay Bridge is used. Now we need to create a new Organization in Quay:\n To access the Quay Portal make sure you are in the quay Project Go to Networking-\u0026gt;Routes, access the Quay portal using the URL of the first route (quay-quay) Login with User: quayadmin Password: quayadmin In the top + menu click Create New Organization Name it openshift_integration Click Create Organization We need an OAuth Application in Quay for the integration:\n Again In the Quay Portal, click the Applications icon in the menubar to the left Click Create New Application at the upper right Name it openshift, press Enter and click on the new openshift item by clicking it In the menubar to the left click the Generate Token icon Check all boxes and click Generate Access token Click image to enlarge In the next view click Authorize Application and confirm In the next view copy the Access Token and save it somewhere, we\u0026rsquo;ll need it again Now create a new secret for the Quay Bridge to access Quay. In the OpenShift web console make sure you are in the quay Project. Then:\n Go to Workloads-\u0026gt;Secrets and click Create-\u0026gt;Key/value secret Secret name: quay-credentials Key: token Value: paste the Access Token you generated in the Quay Portal in the text field below the grey Value field Click Create And you are done with the installation and integration of Quay as your registry!\nTest if the integration works:\n In the Quay Portal you should see your OpenShift Projects are synced and represented as Quay Organizations, prefixed with openshift_ (you might have to reload the browser). E.g. there should be a openshift_git Quay Organization. In the OpenShift web console create a new test Project, make sure it\u0026rsquo;s synced to Quay as an Organization and delete it again. Architecture recap Click image to enlarge "
},
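If you would rather create the token secret from the Web Terminal instead of the console, here is a minimal sketch; the token value is a placeholder for the Access Token generated in the Quay Portal above:

oc create secret generic quay-credentials -n quay \
  --from-literal=token=<QUAY_OAUTH_ACCESS_TOKEN>        # same name, key and project as in the console steps
oc get secret quay-credentials -n quay -o jsonpath='{.data.token}' | base64 -d   # verify what was stored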
{
"uri": "https://devsecops-workshop.github.io/3-inner-loop/",
"title": "Inner Loop",
"tags": [],
"description": "",
"content": "In this part of the workshop you\u0026rsquo;ll experience how modern software development using the OpenShift tooling can be done in a fast, iterative way. Inner loop here means this is the way, sorry, process, for developers to try out new things and quickly change and test their code on OpenShift without having to build new images all the time or being a Kubernetes expert. Install and Prepare Red Hat OpenShift Dev Spaces OpenShift Dev Spaces is a browser-based IDE for Cloud Native Development. All the heavy lifting is done through a container running your workspace on OpenShift. All you really need is a laptop. You can easily switch and setup a customized environment, plugin, build tools and runtimes. So switching from one project context to another is as easy a switching a website. No more endless installation and configuration marathons on your dev laptop. It is already part of your OpenShift subscription. If you want to find out more have a look here\n Install the Red Hat OpenShift Dev Spaces Operator from OperatorHub with default settings Go to Installed Operators -\u0026gt; Red Hat OpenShift Dev Spaces and create a new instance (Red Hat OpenShift Dev Spaces instance Specification) using the default settings in the project openshift-operators Wait until deployment has finished. This may take a couple of minutes as several components will be deployed. Once the instance status is ready (You can check the YAML of the instance: status \u0026gt; chePhase: Active), look up the devspaces Route in the openshift-workspaces project (If you can see the openshift-workspaces, you may need to toggle the Show default project button). Open the link in a new browser tab, click on Log in with OpenShift and log in with your OpenShift credentials Allow selected permissions We could create a workspace from one of the templates that come with Dev Spaces, but we want to use a customized workspace with some additionally defined plugins in a v2 devfile in our git repo. With devfiles you can share a complete workspace setup and with the click of a link and you will end up in a fully configured project in your browser.\n You will now need to access the Gitea repository where your Quarkus app resides and specifically get the path to the devfile.\n Find the Gitea URL by selecting the git project in openshift and then Networking \u0026gt; Routes Click on the URL and login to Gitea with username : gitea password : gitea On the right side click on the repository gitea/quarkus-build-options Then click on the devfile devspaces_devfile.yml Now click on button Raw (or Originalformat in German) and copy this URL It is important that you have the URL to the Raw version, otherwise DevSpace will recieve a website that it cannot parse.\nNow back in your DevSpaces Workspace :\n In the left menu click on Create Workspace Paste the full URL of the devfile that you just copied into the Git Repo URL field and click Create \u0026amp; Open Click image to enlarge You\u0026rsquo;ll get into the Creating a workspace \u0026hellip; view, give the workspace containers some time to spin up. If a popup appears asking you to \u0026ldquo;trust the authors of the files\u0026rdquo; click Yes, I trust the authors Click image to enlarge When your workspace has finally started, have a good look around in the UI. 
It should look familiar if you have ever worked with VSCode or similar IDEs.\nWhile working with Dev Spaces make sure you have AdBlockers disabled, you are not on a VPN and a have good internet connection to ensure a stable setup. If you are facing any issues try to reload the Browser window. If that doesn\u0026rsquo;t help restart the workspace in the main DevSpaces Web Console under Workspaces and then menu Restart Workspace\n Clone the Quarkus Application Code As an example you\u0026rsquo;ll create a new Java application. You don\u0026rsquo;t need to have prior experience programming in Java as this will be kept really simple.\nWe will use a Java application based on the Quarkus stack. Quarkus enables you to create much smaller and faster containerized Java applications than ever before. You can even transcompile these apps to native Linux binaries that start blazingly fast. The app that we will use is just a basic example created with the Quarkus Generator with a simple RESTful API that answers to http requests. But at the end of the day this setup will work with any Java application.\nFun fact: Every OpenShift Subscription already provides a Quarkus Subscription.\n Let\u0026rsquo;s clone our project into our workspace :\n Bring up your OpenShift DevSpaces in your browser Click on the the \u0026ldquo;Hamburger\u0026rdquo; menu in the top left, then View \u0026gt; Command Palette In the Command Palette prompt that appears on the top, start typing git clone until you can select the Git: Clone item Click image to enlarge Enter the Git URL to your Gitea Repository (You can copy the URL by clicking on the clipboard icon in Gitea) and press enter Click image to enlarge In the following dialog Choose a folder to clone \u0026hellip;, move up the dirs and select the /projects dir, then click the button OK In the following dialog when asked how to open the code, click on Open Click image to enlarge The windows will briefly reload and then you will be in the cloned project folder You may have to check \u0026ldquo;Trust the authors \u0026hellip;\u0026rdquo; and click Yes, I trust the authors again. Last time, promise :) Click image to enlarge Access OpenShift and Create the Development Stage Project Now we want to create a new OpenShift project for our app:\n Open a terminal in your DevSpaces IDE In the top left \u0026lsquo;hamburger\u0026rsquo; menu click on Terminal \u0026gt; New Terminal) The oc OpenShift cli client is already installed and you are already logged into the cluster So go ahead and create a new project workshop-dev oc new-project workshop-dev Use odo to Deploy and Update our Application odo or \u0026lsquo;OpenShift do\u0026rsquo; is a cli that enables developers to quickly get started with cloud native app development without being a Kubernetes expert. It offers support for multiple runtimes and you can easily setup microservice components, push code changes into running containers and debug remotely with just a few simple commands. To find out more, have look here\nFirst we need to make sure we are in the folder of the cloned project.\nEnter the following command in the terminal:\npwd if you are not in the /projects/quarkus-build-options folder, change into with the cd command\nodo is smart enough to figure out what programming language and frameworks you are using. So let\u0026rsquo;s let initialize our project\nodo init You can then opt-into telemetry (Y/n) A matching Quarkus DevFile is found in the odo repository. 
Choose Y to download You can select a container in which odo will be started. Hit Enter (None) As component name keep the suggestion. Hit Enter odo is now initialized for your app. Let\u0026rsquo;s deploy the app to OpenShift in odo dev mode\nodo dev This will compile the app, start a pod in the OpenShift project and inject the app.\nThere will be a couple of popups in the bottom right corner (Click on all of them as explained below)\n \u0026ldquo;A new process is listening \u0026hellip;\u0026rdquo; -\u0026gt; Choose Yes \u0026ldquo;Redirect is not enabled \u0026hellip;\u0026rdquo; \u0026ndash;\u0026gt; Click on Open in New Tab \u0026ldquo;Do you want VS Code - Open Source to open an external website\u0026rdquo; \u0026ndash;\u0026gt; Choose Open New tabs will open. One with the DevFile Editor and one showing the Quarkus webpage of your app. You may have to wait for a reload after a few seconds.\nTo test the app in the Quarkus App tab:\nYour app should be displayed as a simple web page. In the RESTEasy JAX-RS section click the @Path endpoint /hello to see the result.\nNow for the fun part:\nUsing odo you can dynamically change your code and push it again without the need to build a new container image! No dev magic involved:\n In your DevWorkspace on the left, expand the file tree to open file src/main/java/org/acme/GreetingRessource.java and change the string \u0026ldquo;Hello RESTEasy\u0026rdquo; to \u0026ldquo;Hello Workshop\u0026rdquo; (DevSpaces auto-saves every edit directly. No need to save the file manually.)\n And reload the app webpage.\n Bam! The change should be there in a matter of seconds.\n Architecture recap Click image to enlarge "
},
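Condensed, the inner-loop sequence from the Dev Spaces terminal is just a handful of commands. A sketch, assuming the repository was cloned to /projects/quarkus-build-options as described above:

cd /projects/quarkus-build-options   # the cloned Quarkus project
oc new-project workshop-dev          # development stage project
odo init                             # accept the detected Quarkus devfile and the suggested component name
odo dev                              # build, deploy to the current project and watch for live code changes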
{
"uri": "https://devsecops-workshop.github.io/4-outer-loop/",
"title": "Outer Loop",
"tags": [],
"description": "",
"content": "Now that you have seen how a developer can quickly start to code using modern cloud native tooling, it\u0026rsquo;s time to learn how to push the application towards a production environment. The first step is to implement a CI/CD pipeline to automate new builds. Let\u0026rsquo;s call this stage int for integration.\nInstall OpenShift Pipelines To create and run the build pipeline you\u0026rsquo;ll use OpenShift Pipelines based on project Tekton. The first step is to install it:\n Install the Red Hat OpenShift Pipelines Operator In the OpenShift Web Console select Operators \u0026gt; OperatorHub Find the Red Hat OpenShift Pipelines Operator and install it with the default settings Since the Piplines assets are installed asynchronously it is possible that the Pipeline Templates are not yet setup when proceeding immedately to the next step. So now is good time to grab a coffee.\n Create App Deployment and Build Pipeline After installing the Operator, create a new deployment of your game-changing application:\n Create a new OpenShift project called workshop-int (e.g. using the Projects menu item at the top) In the left menu at the top switch to the OpenShift Developer Console by clicking on Administrator \u0026gt; Developer Close the welcome pop up Make sure you are still in the workshop-int project by verifying in the top Project menu Click the +Add menu entry to the left and choose the Import from Git card As Git Repo URL enter the clone URL for the quarkus-build-options repo in your Gitea instance Click image to enlarge There might be a warning about the repo url that you can ignore Leave Git type set to other Click Show advanced Git options and for Git reference enter master As Import Strategy select Builder Image As Builder Image select Java and openjdk-11-el7 / Red Hat OpenJDK 11 (RHEL 7) We are choosing this image as we have prepared some CVEs in a later chapter based on this image As Application Name enter workshop-app As Name enter workshop Check Add pipeline If you don\u0026rsquo;t have the checkbox Add pipeline and get the message There are no pipeline templates available for Java and Deployment combination in the next step then just give it few more minutes and reload the page.\n Click Create In the main menu left, click on Pipelines and then the instance in column Last run and observe how the Pipeline is executed Click image to enlarge Adjust the Pipeline to Deploy to Quay The current Pipeline deploys to the Internal Registry by default. The image that was just created by the first run was pushed there.\nTo leverage our brand new Quay registry we need to modify the Pipeline in order to push the images to the Quay registry. In addition the OpenShift ImageStream must be modified to point to the Quay registry, too.\nCreate a new s2i-java ClusterTask The first thing is to create a new Source-To-Image Pipeline Task to automatically update the ImageStream to point to Quay. You could of course copy and modify the default s2i-java task using the built-in YAML editor of the OpenShift Web Console. But to make this as painless as possible we have prepared the needed YAML object definition for you already.\n Open a Web Terminal by clicking the \u0026gt;_ in the upper right of the web console, Click Start and wait for the terminal to initialize From here you can run oc commands The YAML object definitions for this lab are in the repo https://github.com/devsecops-workshop/yaml.git, go there and review the YAML definition. 
Apply the YAML for the new ClusterTask: oc create -f https://raw.githubusercontent.com/devsecops-workshop/yaml/main/s2i-java-workshop.yml Click image to enlarge To make this lab pretty much self-contained, we run oc commands from the OCP Web Terminal. But of course you can do the above steps from any Linux system where you set up the oc command.\n You should now have a new ClusterTask named s2i-java-workshop, go to the OpenShift Web Console and check:\n Switch to the Administrator view Switch to the workshop-int Project Go to Pipelines-\u0026gt;Tasks-\u0026gt;ClusterTasks Search for the s2i-java-workshop ClusterTask and open it Switch to the YAML view Please take the time to review the additions to the default s2i-java task:\nIn the params section there are two new parameters, that will tell the pipeline which ImageStream and tag to update.\n- default: \u0026#39;\u0026#39; description: The name of the ImageStream which should be updated name: IMAGESTREAM type: string - default: \u0026#39;\u0026#39; description: The Tag of the ImageStream which should be updated name: IMAGESTREAMTAG type: string At the end of the steps section is a new step that takes care of actaully creating the ImageStream tag that points to the image in Quay\n- env: - name: HOME value: /tekton/home image: \u0026#39;image-registry.openshift-image-registry.svc:5000/openshift/cli:latest\u0026#39; name: update-image-stream resources: {} script: \u0026gt; #!/usr/bin/env bash oc tag --source=docker $(params.IMAGE) $(params.IMAGESTREAM):$(params.IMAGESTREAMTAG) --insecure securityContext: runAsNonRoot: true runAsUser: 65532 Modify the Pipeline Now that we have our new build tasks we need to modify the pipeline to:\n Introduce the new parameters into the Pipeline configuration Use the new s2i-java-workshop task To make this easier we again provide you with a full YAML definition for the Pipeline.\nDo the following:\n Go to your Web Terminal (if it timed out just start it again) If you use this lab guide with your domain as query parameter (see here), you are good to go with the command below because your domain was already inserted into the command. If not, you have to replace \u0026lt;DOMAIN\u0026gt; manually.\n First get the YAML file: curl https://raw.githubusercontent.com/devsecops-workshop/yaml/main/workshop-pipeline-without-git-update.yml -o workshop-pipeline-without-git-update.yml Now we have to replace the REPLACEME placeholders in the YAML file with your lab domain. Run (Insert your domain if not done automatically): sed -i \u0026#39;s/REPLACEME/\u0026lt;DOMAIN\u0026gt;/g\u0026#39; workshop-pipeline-without-git-update.yml Apply the new definition: oc replace -f workshop-pipeline-without-git-update.yml Again take the time to review the changes in the web console:\n In the OpenShift Web Console menu go to Pipelines-\u0026gt;Pipelines Click the workshop Pipeline and switch to YAML These are the new parameters that have been added to the pipeline: - default: workshop name: IMAGESTREAM type: string - default: latest name: IMAGESTREAMTAG type: string The preexisting parameter IMAGE_NAME now points to your local Quay registry:\n- default: \u0026gt;- quay-quay-quay.apps.\u0026lt;DOMAIN\u0026gt;/openshift_workshop-int/workshop name: IMAGE_NAME type: string And finally the build task was modified to work with the two new parameters:\ntasks: - name: build params: [...] 
- name: IMAGESTREAM value: $(params.IMAGESTREAM) - name: IMAGESTREAMTAG value: $(params.IMAGESTREAMTAG) The name of the taskRef was changed to s2i-java-workshop, in order to use our custom Pipeline Task: taskRef: kind: ClusterTask name: s2i-java-workshop You are done with adapting the Pipeline to use the Quay registry!\nWe are ready to give it a try, but first let\u0026rsquo;s have quick look at our target Quay repository\n Go to the Quay portal and there to the openshift_workshop-int organization. In the openshift_workshop-int / workshop repository access the Tags in the menu to the left. There should be no container image (yet) Click image to enlarge Now it\u0026rsquo;s time to configure and start the Pipeline.\n In OpenShift Web Console open the Pipelines view Open the workshop Pipeline Go to the top right menu and choose Actions -\u0026gt; Start In the Start Pipeline window that opens, but before (!) starting the actual pipeline, we need to add a Secret so the pipeline can authenticate and push to the Quay repository:\n Switch to the Quay Web Portal and click on the openshift_workshop-int / workshop repository On the left click on Settings Click on the openshift_workshop-int+builder Robot Account and copy the token Click image to enlarge Back in the Start Pipeline form At the buttom, click on Show credential options and then Add secret Set these values Secret name : quay-workshop-int-token Access to : Image Registry Authentication type : Basic Authentication Server URL : quay-quay-quay.apps.\u0026lt;DOMAIN\u0026gt;/openshift_workshop-int (replace your cluster domain if necessary) Username : openshift_workshop-int+builder Password or token : the token you copied from the Quay Robot Account before \u0026hellip; Then click on the checkmark below to add the secret The secret has just been added and will be mounted automatically everytime the pipeline runs Hit Start If the pipeline fails you may have to recheck the Secret quay-workshop-int-token directly if the username and password are set correctly.\nOnce the Pipeline run has finished, go to the Quay Portal and check the Repository openshift_workshop-int/workshop again. Under Tags you should now see a new workshop Image version that was just pushed by the pipeline.\nCongratulations: Quay is now a first level citizen of your pipeline build strategy.\nCreate an ImageStream Tag with an Old Image Version Now that your build pipeline has been set up and is ready. There is one more step in preparation of the security part of this workshop. We need a way to build and deploy from an older image with some security issues in it. For this we will add another ImageStream Tag in the default Java ImageStream that points to an older version with a known CVE issue in it.\n Using the OpenShift Administrator view, switch to the project openshift and under Builds click on ImageStreams Search and open the ImageStream java Switch to YAML view and add the following snippet to the spec \u0026gt; tags: section. Be careful to keep the needed indentation! - name: java-old-image annotations: description: Build and run Java applications using Maven and OpenJDK 8. 
iconClass: icon-rh-openjdk openshift.io/display-name: Red Hat OpenJDK 8 (UBI 8) sampleContextDir: undertow-servlet sampleRepo: \u0026#34;https://github.com/jboss-openshift/openshift-quickstarts\u0026#34; supports: \u0026#34;java:8,java\u0026#34; tags: \u0026#34;builder,java,openjdk\u0026#34; version: \u0026#34;8\u0026#34; from: kind: DockerImage name: \u0026#34;registry.redhat.io/openjdk/openjdk-11-rhel7:1.10-1\u0026#34; generation: 4 importPolicy: {} referencePolicy: type: Local This will add a tag java-old-image that points to an older version of the RHEL Java image. The image and security vulnerabilities can be inspected in the Red Hat Software Catalog here\n Have a look at version 1.10-1 We will use this tag to test our security setup in a later chapter.\nCreate a new Project For the subsequent exercises we need a new project:\n Create a new OpenShift Project workshop-prod Architecture recap Click image to enlarge "
},
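The pieces added in this chapter can be double-checked from the Web Terminal. A sketch, assuming the resource names used above:

oc get clustertask s2i-java-workshop                                     # the customized Source-To-Image task
oc get pipeline workshop -n workshop-int -o yaml | grep -i imagestream   # the new IMAGESTREAM/IMAGESTREAMTAG parameters
oc describe is java -n openshift | grep java-old-image                   # the extra tag pointing at the older OpenJDK image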
{
"uri": "https://devsecops-workshop.github.io/5-gitops/",
"title": "Configure GitOps",
"tags": [],
"description": "",
"content": "Now that our CI/CD build and integration stage is ready we could promote the app version directly to a production stage. But with the help of the GitOps approach, we can leverage our Git system to handle promotion that is tracked through commits and can deploy and configure the whole production environment. This stage is just too critical to configure manually and without an audit.\nInstall OpenShift GitOps So let\u0026rsquo;s start be installing the OpenShift GitOps Operator based on the project ArgoCD.\n Install the Red Hat OpenShift GitOps Operator from OperatorHub with the default settings The installation of the GitOps Operator will give you a clusterwide ArgoCD instance available at the link in the top right menu, but since we want to have an instance to manage just our prod project we will create another ArgoCD instance in that specific project.\n You should already have created an OpenShift Project workshop-prod With the project workshop-prod selected in the top menu click on Installed Operators and then Red Hat OpenShift GitOps. On the ArgoCD \u0026ldquo;tile\u0026rdquo; click on Create instance to create an ArgoCD instance in the workshop-prod project. Click image to enlarge Keep the settings as they are and click Create Check the GitOps Config Repository We already have a second repository, called openshift-gitops-getting-started in Gitea that holds the required Gitops yaml resources. We will use this repo to push changes to our workshop-prod enivronment.\nHave a quick look at the structure of this git project:\napp - contains yaml files for the deployment, service and route resources needed by our application. These will be applied to the cluster. There is also a kustomization.yaml defining that kustomize layers will be applied to all yamls\nenvironments/dev - contains the kustomization.yaml which will be modified by our builds with new Image versions. ArgoCD will pick up these changes and trigger new deployments.\nSetup the GitOps Project in ArgoCD Let\u0026rsquo;s setup the project that tells ArgoCD to watch our configuration repository and update resources in the workshop-prod project accordingly.\n Find the local ArgoCD URL (not the global instance) by going to Networking \u0026gt; Routes in namespace workshop-prod Open the ArgoCD website, ignoring the certificate warning Don\u0026rsquo;t login with OpenShift but with username and password User is admin and password will be in Secret argocd-cluster in the Project workspace-prod ArgoCD works with the concept of Applications. We will create an Application and point it to the configuration Git repository. ArgoCD will look for Kubernetes yaml files in the repository and path and deploy them to the defined project. Additionally, ArgoCD will also react to changes to the repository and reflect these to the project. You can also enable self-healing to prevent configuration drift. If you want find out more about OpenShift GitOps have look here.\n Create Application Click the Applications icon on the left Click Create Application Application Name: workshop Project: default SYNC POLICY: Automatic Repository URL: Copy the URL of your config repo openshift-gitops-getting-started from Gitea Path: environments/dev Cluster URL: https://kubernetes.default.svc Namespace: workshop-prod Click Create Click on Sync and then Synchronize to manually trigger the first sync Watch the resources (Deployment, Service, Route) get rolled out to the project workshop-prod. 
Notice, we also scaling our app to 2 pods in the production stage as we want some high availability. But the actual deployment will not succeed as shown by the \u0026lsquo;broken heart\u0026rsquo; icons!\nSince we have not published our image to the Quay workshop-prod repository the initial Deployment will try to roll out non existant image from Quay. Once the first pipeline run is complete, our newly built image will be replaced in the Deployment and rolled out.\n Our complete production stage is now configured and controlled through GitOps. But how do we tell ArgoCD that there is a new version of our app to deploy? Well, we will add a step to our build pipeline updating the configuration repository.\nAs we do not want to modify our original repository file we will use a tool called Kustomize that can add incremental change layers to YAML files. Since ArgoCD permanently watches this repository, it will pick up these Kustomize changes.\nIt is also possible to update the repository with a Pull request. Then you have an approval process for your production deployment.\n Initialize the workshop-prod/workshop Repository in Quay We will need to initialize the workshop-prod/workshop in Quay so the robo user will be able to push images there later on.\n In Quay select the organization openshift_workshop-prod on the right Click on + Create New Repository on the top left Click image to enlarge Make sure to select openshift_workshop-prod as Organization Enter workshop as repo name Set the repo to Public Click Create Public Repository Click image to enlarge Add Kustomize and Git Push Tekton Task Let\u0026rsquo;s add a new custom Tekton task to the workshop-int project that can update the Image tag via Kustomize after the build process completed and then push the change to our git configuration repository.\nWe could add this through the OpenShift Web Console as well but to save time we will apply the file directly via the oc command.\n Go to your Web Terminal or open a new one. Apply the task via YAML: oc create -f https://raw.githubusercontent.com/devsecops-workshop/yaml/main/tekton-kustomize.yml In the OpenShift Web Console switch back to project workshop-int and then go to Pipelines \u0026gt; Tasks \u0026gt; Tasks and have a look at the just imported task git-update-deployment. You should see the git commands how the configuration repository will be cloned, patched by Kustomize and then pushed again. Add Tekton Tasks to your Pipeline to Promote your Image to workshop-prod So now we have a new Tekton Task in our task catalog to update a GitOps Git repository, but we still need to promote the actual image from our workshop-int to workshop-prod project. Otherwise the image will not be available for our deployment.\n In the workshop_int project, go to Pipelines \u0026gt; Pipelines \u0026gt; workshop and then YAML You can edit pipelines either directly in YAML or in the visual Pipeline Builder. 
We will see how to use the Builder later on, so let\u0026rsquo;s edit the YAML for now.\n Add the new Task to your Pipeline by adding it to the YAML like this:\n First we will add a new Pipeline Parameter \u0026lsquo;GIT_CONFIG_REPO\u0026rsquo; at the beginning of the pipeline and set it by default to our GitOps configuration repository (This will be updated by the Pipeline and then trigger ArgoCD to deploy to Production) So in the YAML view at the end of the spec \u0026gt; params section add the following (if the \u0026lt;DOMAIN\u0026gt; placeholder hasn\u0026rsquo;t been replaced automatically, do it manually): - default: \u0026gt;- https://repository-git.apps.\u0026lt;DOMAIN\u0026gt;/gitea/openshift-gitops-getting-started.git name: GIT_CONFIG_REPO type: string Next insert the new tasks at the tasks level right after the deploy task We will map the Pipeline parameter GIT_CONFIG_REPO to the Task parameter GIT_REPOSITORY Make sure to fix indentation after pasting into the YAML! In the OpenShift YAML viewer/editor you can mark multiple lines and use tab to indent this lines for one step.\n - name: skopeo-copy params: - name: srcImageURL value: \u0026#34;docker://$(params.QUAY_URL)/openshift_workshop-int/workshop:latest\u0026#34; - name: destImageURL value: \u0026#34;docker://$(params.QUAY_URL)/openshift_workshop-prod/workshop:latest\u0026#34; - name: srcTLSverify value: \u0026#34;false\u0026#34; - name: destTLSverify value: \u0026#34;false\u0026#34; runAfter: - build taskRef: kind: ClusterTask name: skopeo-copy workspaces: - name: images-url workspace: workspace - name: git-update-deployment params: - name: GIT_REPOSITORY value: $(params.GIT_CONFIG_REPO) - name: CURRENT_IMAGE value: \u0026#34;quay.io/nexus6/hello-microshift:1.0.0-SNAPSHOT\u0026#34; - name: NEW_IMAGE value: $(params.QUAY_URL)/openshift_workshop-prod/workshop - name: NEW_DIGEST value: $(tasks.build.results.IMAGE_DIGEST) - name: KUSTOMIZATION_PATH value: environments/dev runAfter: - skopeo-copy taskRef: kind: Task name: git-update-deployment workspaces: - name: workspace workspace: workspace The Pipeline should now look like this. Notice that the new tasks runs in parallel to the deploy task\n Click image to enlarge Now, the pipeline is set. The last thing we need is authentication against the Gitea repository and the workshop-prod Quay org. We will add those from the start pipeline form next. Make sure to replace the placeholder if required.\nUpdate our Prod Stage via Pipeline and GitOps Click on Pipeline Start\n In the form go down and expand Show credential options Click Add Secret, then enter Secret name : quay-workshop-prod-token Access to: Image Registry Authentication type: Basic Authentication Server URL: quay-quay-quay.apps.\u0026lt;DOMAIN\u0026gt;/openshift_workshop-prod Username: openshift_workshop-prod+builder Password : (Retrieve this from the Quay organization openshift_workshop-prod robo account openshift_workshop-prod+builder as before) Click the checkmark Then click Add Secret again Secret name : gitea-secret Access to: Git Server Authentication type: Basic Authentication Server URL: https://repository-git.apps.\u0026lt;DOMAIN\u0026gt;/gitea/openshift-gitops-getting-started.git (replace url if necassary) Username: gitea Password : gitea Click the checkmark Run the pipeline by clicking Start and see that in your Gitea configuration repository the file /environment/dev/kustomize.yaml is updated with the new image version Notice that the deploy and the git-update steps now run in parallel. 
This is one of the strengths of Tekton. It can scale natively with pods on OpenShift.\n This will tell ArgoCD to update the Deployment with this new image version.\n Check that the new image is rolled out successfully now (you may need to sync manually in ArgoCD to speed things up)\n Architecture recap Click image to enlarge "
},
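If you prefer the CLI for looking up the ArgoCD login details, here is a minimal sketch using the route and secret named above (the admin password key created by the GitOps operator is assumed to be admin.password):

oc get route -n workshop-prod                              # URL of the local ArgoCD instance
oc extract secret/argocd-cluster -n workshop-prod --to=-   # prints the admin password to stdout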
{
"uri": "https://devsecops-workshop.github.io/10-rhacs-setup/",
"title": "Install and Configure ACS",
"tags": [],
"description": "",
"content": "During the workshop you went through the OpenShift developer experience starting from software development using Quarkus and odo, moving on to automating build and deployment using Tekton pipelines and finally using GitOps for production deployments.\nNow it\u0026rsquo;s time to add another extremely important piece to the setup: enhancing application security in a containerized world. Using Red Hat Advanced Cluster Security for Kubernetes, of course!\nInstall RHACS RHCAS Operator Install the Advanced Cluster Security for Kubernetes operator from the OperatorHub:\n Switch Update approval to Manual Apart from this use the default settings Approve the installation when asked Red Hat recommends installing the Red Hat Advanced Cluster Security for Kubernetes Operator in the rhacs-operator namespace. This will happen by default..\n Installing the main component Central You must install the ACS Central instance in its own project and not in the rhacs-operator and openshift-operator projects, or in any project in which you have installed the ACS Operator!\n Navigate to Operators → Installed Operators Select the ACS operator You should now be in the rhacs-operator project the Operator created, create a new OpenShift Project for the Central instance: Create a new project called stackrox (Red Hat recommends using stackrox as the project name.) by selecting Projects: Create project In the Operator view under Provided APIs on the tile Central click Create Instance Switch to the YAML View. Replace the YAML content with the following: apiVersion: platform.stackrox.io/v1alpha1 kind: Central metadata: name: stackrox-central-services namespace: stackrox spec: monitoring: openshift: enabled: true central: notifierSecretsEncryption: enabled: false exposure: loadBalancer: enabled: false port: 443 nodePort: enabled: false route: enabled: true telemetry: enabled: true db: isEnabled: Default persistence: persistentVolumeClaim: claimName: central-db resources: limits: cpu: 2 memory: 6Gi requests: cpu: 500m memory: 1Gi persistence: persistentVolumeClaim: claimName: stackrox-db egress: connectivityPolicy: Online scannerV4: db: persistence: persistentVolumeClaim: claimName: scanner-v4-db indexer: scaling: autoScaling: Disabled maxReplicas: 2 minReplicas: 1 replicas: 1 matcher: scaling: autoScaling: Disabled maxReplicas: 2 minReplicas: 1 replicas: 1 scannerComponent: Default scanner: analyzer: scaling: autoScaling: Disabled maxReplicas: 2 minReplicas: 1 replicas: 1 Click Create After the deployment has finished (Status Conditions: Deployed, Initialized in the Operator view on the Central tab), it can take some time until the application is completely up and running. One easy way to check the state, is to switch to the Developer console view on the upper left. Then make sure you are in the stackrox project and open the Topology map. You\u0026rsquo;ll see the three deployments of the Central instance:\n scanner scanner-db central central-db Wait until all Pods have been scaled up properly.\nVerify the Installation\nSwitch to the Administrator console view again. Now to check the installation of your Central instance, access the ACS Portal:\n Look up the central-htpasswd secret that was created to get the password If you access the details of your Central instance in the Operator page you\u0026rsquo;ll find the complete commandline using oc to retrieve the password from the secret under Admin Credentials Info. 
Just sayin\u0026hellip; ;)\n Look up and access the route central which was also generated automatically. This will get you to the ACS Portal, accept the self-signed certificate and login as user admin with the password from the secret.\nNow you have a Central instance that provides the following services in an RHACS setup:\n The application management interface and services. It handles data persistence, API interactions, and user interface access. You can use the same Central instance to secure multiple OpenShift or Kubernetes clusters.\n Scanner, which is a vulnerability scanner for scanning container images. It analyzes all image layers for known vulnerabilities from the Common Vulnerabilities and Exposures (CVEs) list. Scanner also identifies vulnerabilities in packages installed by package managers and in dependencies for multiple programming languages.\n To actually do and see anything you need to add a SecuredCluster (be it the same or another OpenShift cluster). For effect go to the ACS Portal, the Dashboard should by pretty empty, click on either of the Compliance link in the menu to the left, lots of zero\u0026rsquo;s and empty panels, too.\nThis is because you don\u0026rsquo;t have a monitored and secured OpenShift cluster yet.\nPrepare to add Secured Clusters Now we\u0026rsquo;ll add your OpenShift cluster as Secured Cluster to ACS.\nFirst, you have to generate an init bundle which contains certificates and is used to authenticate a SecuredCluster to the Central instance, regardless if it\u0026rsquo;s the same cluster as the Central instance or a remote/other cluster.\nWe are using the API to create the init bundle in this workshop, because if we use the Web Terminal we can\u0026rsquo;t upload and downloaded file to it. For the steps to create the init bundle in the ACS Portal see the appendix.\nLet\u0026rsquo;s create the init bundle using the ACS API on the commandline:\nGo to your Web Terminal (if it timed out just start it again), then paste, edit and execute the following lines:\n Set the ACS API endpoint, replace \u0026lt;central_url\u0026gt; with the base URL of your ACS portal (without \u0026lsquo;https://\u0026rsquo; e.g. central-stackrox.apps.cluster-cqtsh.cqtsh.example.com) export ROX_ENDPOINT=\u0026lt;central_url\u0026gt;:443 Set the admin password (same as for the portal, look up the secrets again) export PASSWORD=\u0026lt;password\u0026gt; Give the init bundle a name export DATA={\\\u0026#34;name\\\u0026#34;:\\\u0026#34;my-init-bundle\\\u0026#34;} Finally run the curl command against the API to create the init bundle using the variables set above curl -k -o bundle.json -X POST -u \u0026#34;admin:$PASSWORD\u0026#34; -H \u0026#34;Content-Type: application/json\u0026#34; --data $DATA https://${ROX_ENDPOINT}/v1/cluster-init/init-bundles Convert it to the needed format cat bundle.json | jq -r \u0026#39;.kubectlBundle\u0026#39; \u0026gt; bundle64 base64 -d bundle64 \u0026gt; kube-secrets.bundle You should now have these two files in your Web Terminal session: bundle.json and kube-secrets.bundle.\nThe init bundle needs to be applied to all OpenShift clusters you want to secure and monitor.\nAs said, you can create an init bundle in the ACS Portal, download it and apply it from any terminal where you can run oc against your cluster. We used the API method to show you how to use it and to enable you to use the Web Terminal.\n Prepare the Secured Cluster For this workshop we run Central and SecuredCluster on one OpenShift cluster. E.g. 
we monitor and secure the same cluster the central services live on.\nApply the init bundle\nAgain in the web terminal:\n Run oc create -f kube-secrets.bundle -n stackrox pointing to the init bundle you downloaded from the Central instance or created via the API as above. This will create a number of secrets, the output should be: secret/collector-tls created secret/sensor-tls created secret/admission-control-tls created Add the Cluster as SecuredCluster to ACS Central You are ready to install the SecuredCluster instance, this will deploy the secured cluster services:\n In the OpenShift Web Console go to the ACS Operator in Operators-\u0026gt;Installed Operators Using the Operator create an instance of the Secured Cluster type in the Project you created (should be stackrox) If you are in the YAML view switch to the Form view Change the Cluster Name for the cluster if you want, it\u0026rsquo;ll appear under this name in the ACS Portal And most importantly, for Central Endpoint enter the address and port number of your Central instance, this is the same as the ACS Portal. If your ACS Portal is available at https://central-stackrox.apps.\u0026lt;DOMAIN\u0026gt; the endpoint is central-stackrox.apps.\u0026lt;DOMAIN\u0026gt;:443. Under Admission Control Settings make sure listenOnCreates, listenOnUpdates and listenOnEvents are enabled Set Contact Image Scanners to ScanIfMissing Under Collector Settings change the value for Collection from EBPF to KernelModule. This is a workaround for a known issue. Click Create Now go to your ACS Portal again, after a couple of minutes you should see your secured cluster under Platform Configuration-\u0026gt;Clusters. Wait until all Cluster Status indicators become green.\nConfigure Quay Integrations in ACS Create an integration to scan the Quay registry To enable scanning of images in your Quay registry, you\u0026rsquo;ll have to configure an Integration with valid credentials, so this is what you\u0026rsquo;ll do.\nNow, create a new Integration:\n Access the RHACS Portal and configure the already existing integrations of type Generic Docker Registry. Go to Platform Configuration -\u0026gt; Integrations -\u0026gt; Generic Docker Registry. Click the New integration button Integration name: Quay local Endpoint: https://quay-quay-quay.apps.\u0026lt;DOMAIN\u0026gt; (replace domain if required) Username: quayadmin Password: quayadmin Press the Test button to validate the connection and press Save when the test is successful. Architecture recap Click image to enlarge "
},
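If you prefer to create the SecuredCluster from the command line instead of the operator form view, the resource looks roughly like the sketch below. Treat it as an illustration only: the API group platform.stackrox.io/v1alpha1 and the exact field names are assumptions based on the RHACS operator, so compare with what oc explain securedcluster.spec reports on your cluster before applying.

```bash
# Sketch, not a drop-in replacement for the form view described above.
oc apply -f - <<'EOF'
apiVersion: platform.stackrox.io/v1alpha1
kind: SecuredCluster
metadata:
  name: stackrox-secured-cluster-services
  namespace: stackrox
spec:
  clusterName: my-cluster                               # name shown in the ACS Portal
  centralEndpoint: central-stackrox.apps.<DOMAIN>:443   # same host as the ACS Portal
  admissionControl:
    listenOnCreates: true
    listenOnUpdates: true
    listenOnEvents: true
    contactImageScanners: ScanIfMissing
  perNode:
    collector:
      collection: KernelModule   # workaround for the known issue mentioned above
EOF
```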
{
"uri": "https://devsecops-workshop.github.io/11-rhacs-warmup/",
"title": "Getting to know ACS",
"tags": [],
"description": "",
"content": "Before we start to integrate Red Hat Advanced Cluster Security in our setup, you should become familiar with the basic concepts.\nACS Features ACS delivers on these security use cases:\n Vulnerability Management: Protect the software supply chain and prevent known vulnerabilities from being used as an entry point in your applications. Configuration Management: Leverage the OpenShift platform for declarative security to prevent or limit attacks, even in the presence of exploitable vulnerabilities. Network Segmentation: Using Kubernetes network policies in OpenShift, restrict open network paths for isolation and prevent lateral movement by attackers. Risk Profiling: Prioritize applications and security risks automatically to focus investigation and mitigation efforts. Threat detection and incident response: Continuous observation and response in order to take action on attack-related activities, and to use observed behavior to inform mitigation efforts to harden security. Compliance: Making sure that industry and regulatory standards are being met in your OpenShift environments. UI Overview Click image to enlarge Dashboard: The dashboard serves as the security overview - helping the security team understand what the sources of risk are, categories of violations, and gaps in compliance. All of the elements are clickable for more information and categories are customizable.\n Top bar: Near the top, we see a condensed overview of the status. It provides insight into the status of clusters, nodes, violations and so on. The top bar provides links to Search, Command-line tools, Cluster Health, Documentation, API Reference, and the logged-in user account.\n Left menus: The left hands side menus provide navigation into each of the security use-cases, as well as product configuration to integrate with your existing tooling.\n Global Search: On every page throughout the UI, the global search allows you to search for any data that ACS tracks.\n Exploring the Security Use Cases Now start to explore the Security Use Cases ACS targets as provided in the left side menu.\n Network Graph:\n The Network Graph is a flow diagram, firewall diagram, and firewall rule builder in one. The default view Active shows the actual traffic for the past hour between the deployments in all namespaces. Violations:\n Violations record all times where a policy criteria was triggered by any of the objects in your cluster - images, components, deployments, runtime activity. Compliance:\n The compliance reports gather information for configuration, industry standards, and best practices for container-based workloads running in OpenShift. Vulnerability Management:\n Vulnerability Management provides several important reports - where the vulnerabilities are, which are the most widespread or the most recent, where my images are coming from. In the upper right are buttons to link to all policies, CVEs, and images, and a menu to bring you to reports by cluster, namespace, deployment, and so on. Configuration Management:\n Configuration management provides visibility into a number of infrastructure components: clusters and nodes, namespaces and deployments, and Kubernetes systems like RBAC and secrets. Risk:\n The Risk view goes beyond the basics of vulnerabilities. It helps to understand how deployment configuration and runtime activity impact the likelihood of an exploit to occurr and how successful those exploits might be. This list view shows all deployments, in all clusters and namespaces, ordered by Risk priority. 
Filters Most UI pages have a filter section at the top that allows you to narrow the view to matching or non-matching criteria. Almost all of the attributes that ACS gathers can be filtered, try it out:\n Go to the Risk view Click in the Filters Bar Start typing Process Name and select the Process Name key Type java and press enter: click away to get the filters dropdown to clear You should see your deployment that has been “seen” running Java since it started Try another one: limit the filter to your Project namespace only Note the Create Policy button. It can be used to create a policy from the search filter to automatically identify these criteria. System Policies The system policies are the foundation of ACS, so have a good look around:\n Navigate to the Policy Management section from Platform Configuration in the left side menu. You will get an overview of the built-in policies All of the policies that ship with the product are designed to provide targeted remediation strategies that improve security hardening. You’ll see this list contains many Build and Deploy time policies to catch misconfigurations early in the pipeline, but also Runtime policies. These policies come from us at Red Hat - our expertise, our interpretation of industry best practice, and our interpretation of common compliance standards, but you can modify them or create your own. By default only some policies are enforced. If you want to get an overview of which ones, you can use the filter view introduced above. Use Enforcement as filter key and FAIL_BUILD_ENFORCEMENT as value. (A small command line sketch for listing policies via the API follows below.)\n Architecture recap Click image to enlarge "
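If you want to poke at the same data from a terminal, the policy list is also available via the ACS API. The sketch below assumes that the /v1/policies endpoint of your ACS version accepts the portal's search syntax in a query parameter and returns a policies array; adjust the jq expressions if your release returns a different shape:

```bash
# Sketch: explore policies from the command line instead of the portal.
ROX_ENDPOINT="central-stackrox.apps.<DOMAIN>:443"
PASSWORD="<password>"

# List names and severities of the built-in and custom policies
curl -sk -u "admin:${PASSWORD}" "https://${ROX_ENDPOINT}/v1/policies" \
  | jq -r '.policies[] | "\(.severity)\t\(.name)"' | sort | head -20

# Narrow the list the same way the Policy Management filter bar does
curl -sk -u "admin:${PASSWORD}" \
  "https://${ROX_ENDPOINT}/v1/policies?query=Lifecycle%20Stage:RUNTIME" \
  | jq -r '.policies[].name'
```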
},
{
"uri": "https://devsecops-workshop.github.io/12-create-policy/",
"title": "Create a Custom Security Policy",
"tags": [],
"description": "",
"content": "Objective You should have one or more pipelines to build your application from the first workshop part, now we want to secure the build and deployment of it. For the sake of this workshop we\u0026rsquo;ll take a somewhat simplified use case:\nWe want to scan our application image for the Red Hat Security Advisory RHSA-2021:4904 concerning openssl-lib.\nIf this RHSA is found in an image we don\u0026rsquo;t want to deploy the application.\nThese are the steps you will go through:\n Create a custom Security Policy to check for the advisory Test if the policy is triggered in non-enforcing mode with an older image version that contains the issue and then with a newer version with the issue fixed The final goal is to integrate the policy into the build pipeline Create a Custom System Policy First create a new policy category and the system policy. In the ACS Portal do the following:\n Platform Configuration-\u0026gt;Policy Management-\u0026gt;Policy categories tab-\u0026gt;Create category Enter Workshop as Category name Click Create Platform Configuration-\u0026gt;Policy Management-\u0026gt;Policies tab-\u0026gt;Create policy Policy Details Name: Workshop RHSA-2021:4904 Severity: Critical Categories: Workshop Click Next Policy Behaviour Lifecycle Stages: Build, Deploy Response method: Inform Click Next Policy Criteria Find the CVE policy criteria under Drag out policy fields in Image contents Drag \u0026amp; drop it on the drop zone of Policy Section 1 Put RHSA-2021:4904 into the CVE identifier field Click Next Policy Scope You could limit the scope the policy is applied in, do nothing for now Click Next Review Policy Have a quick look around, if the policy would create a violation you get a preview here Click Save Click image to enlarge Currently there is an issue with persisting the group change to the central instance. As a workaround run this in your Web Terminal zu restart the central instance:\noc delete pod -n stackrox -l app=central Test the Policy Start the pipeline with the affected image version:\n In the OpenShift Web Console go to the Pipeline in your workshop-int project, start it and set Version to java-old-image (Remember how we set up this ImageStream tag to point to an old and vulnerable version of the image?) In the ACS Portal follow the Violations view To make it easier spotting the violations for this deployment you can filter the list by entering namespace and then workshop-int in the filter bar.\n Expected result: You\u0026rsquo;ll see the build deployments (Quarkus-Build-Options-Git-Gsklhg-Build-...) come and go when they are finished. When the final build is deployed you\u0026rsquo;ll see a violation in ACS Portal for policy Workshop RHSA-2021:4904 (Check the Time of the violation) There will be other policy violations listed, triggered by default policies, have a look around. Note that none of the policies are enforced (so that the pipeline build would be stopped) yet!\n Now start the pipeline with the fixed image version that doesn\u0026rsquo;t contain the CVE anymore:\n Start the pipeline again but this time leave the Java Version as is (openjdk-11-el7). Follow the Violations in the ACS Portal Expected result: You\u0026rsquo;ll see the build deployments come up and go When the final build is deployed you\u0026rsquo;ll see the policy violation for Workshop RHSA-2021:4904 for your deployment is gone because the image no longer contains it This shows how ACS is automatically scanning images when they become active against all enabled policies. 
But we don\u0026rsquo;t want to just admire a violation after the image has been deployed, we want to prevent the deployment already at build time! So the next step is to integrate the check into the build pipeline and enforce it (i.e. don\u0026rsquo;t deploy the application). A command line sketch of such an image check follows below.\nArchitecture recap Click image to enlarge "
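To get a feel for what the pipeline will do in the next chapter, you can run the same kind of check by hand. The sketch below uses the roxctl flags that appear later in the Tekton task; the image reference and the :latest tag are examples only, so point roxctl at a tag or digest that actually exists in your Quay registry and use an API token with at least CI permissions:

```bash
# Sketch: manually check an image against only the 'Workshop' policy category.
export ROX_CENTRAL_ENDPOINT="central-stackrox.apps.<DOMAIN>:443"
export ROX_API_TOKEN="<api token>"   # created in the next chapter

# Fetch a roxctl that matches your Central version
curl -k -L -H "Authorization: Bearer ${ROX_API_TOKEN}" \
  "https://${ROX_CENTRAL_ENDPOINT}/api/cli/download/roxctl-linux" -o roxctl
chmod +x roxctl

# Evaluate the image against policies in the 'Workshop' category only
./roxctl image check -c Workshop --insecure-skip-tls-verify \
  -e "${ROX_CENTRAL_ENDPOINT}" \
  --image "quay-quay-quay.apps.<DOMAIN>/openshift_workshop-int/workshop:latest"
```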
},
{
"uri": "https://devsecops-workshop.github.io/13-rhacs-pipeline/",
"title": "Integrating ACS into the Pipeline",
"tags": [],
"description": "",
"content": "Finally: Putting the Sec in DevSecOps! There are basically two ways to interface with ACS. The UI, which focuses on the needs of the security team, and a separate \u0026ldquo;interface\u0026rdquo; for developers to integrate into their existing toolset (CI/CD pipeline, consoles, ticketing systems etc): The roxctl commandline tool. This way ACS provides a familiar interface to understand and address issues that the security team considers important.\n ACS policies can act during the CI/CD pipeline to identify security risk in container images before they are started.\nIntegrate Image Scan into the Pipeline You should have created and build a custom policy in ACS and tested it to trigger violations. Now you will integrate it into the build pipeline.\nOur task will use the roxctl cli Build-time policies require the use of the roxctl command-line tool which is available for download from the ACS Central UI, in the upper right corner of the dashboard. You don\u0026rsquo;t need to to download this now as our Tekton task will do this automatically.\nroxctl needs to authenticate to ACS Central to do anything. You can use either username and password or API tokens to authenticate against ACS Central. It\u0026rsquo;s good practice to use a token so that\u0026rsquo;s what we\u0026rsquo;ll do.\nLet\u0026rsquo;s Go : Create the roxctl token In the ACS portal:\n Navigate to Platform Configuration \u0026gt; Integrations. Scroll down to the Authentication Tokens category, and select API Token. Click Generate Token. Enter the name pipeline for the token and select the role Admin. Select Generate Save the contents of the token somewhere! Create OCP secret with token Change to the OpenShift Web Console and create a secret with the API token in the project your pipeline lives in:\n In the UI switch to your workshop-int Project Create a new key/value Secret named roxsecrets Introduce these key/values into the secret: rox_central_endpoint: \u0026lt;the URL to your ACS Portal\u0026gt; (without https:// but adding the port, e.g. entral-stackrox.apps.cluster-cqtsh.cqtsh.example.com:443) If the DOMAIN placeholder was automatically replaced it should be: central-stackrox.apps.\u0026lt;DOMAIN\u0026gt;:443 If not, replace it manually with your DOMAIN rox_api_token: \u0026lt;the API token you generated\u0026gt; Even if the form says Drag and drop file with your value here\u0026hellip; you can just paste the text.\n Remove ImageStream Change Trigger There is one more thing you have to do before integrating the image scanning into your build pipeline:\nWhen you created your deployment, a trigger was automatically added that deploys a new version when the image referenced by the ImageStream changes.\nThis is not what we want! Because this way a newly build image would be deployed immediately even if the roxctl scan detects a policy violation and terminates the pipeline.\nHave a look for yourself:\n In the OCP console go to Workloads-\u0026gt;Deployments and open the workshop Deployment Switch to the YAML view Near the top under annotations (around lines 11-12) you\u0026rsquo;ll find an annotation image.openshift.io/triggers. 
Remove exactly these lines and click Save:\nimage.openshift.io/triggers: \u0026gt;- [{\u0026#34;from\u0026#34;:{\u0026#34;kind\u0026#34;:\u0026#34;ImageStreamTag\u0026#34;,\u0026#34;name\u0026#34;:\u0026#34;workshop2:latest\u0026#34;,\u0026#34;namespace\u0026#34;:\u0026#34;workshop-int\u0026#34;},\u0026#34;fieldPath\u0026#34;:\u0026#34;spec.template.spec.containers[?(@.name==\\\u0026#34;workshop2\\\u0026#34;)].image\u0026#34;,\u0026#34;pause\u0026#34;:\u0026#34;false\u0026#34; This way we make sure that a new image won\u0026rsquo;t be deployed automatically right after the build task which also updates the ImageStream.\nCreate a Scan Task You are now ready to create a new pipeline task that will use roxctl to scan the image built in your pipeline before the deploy step:\n In the OpenShift UI, make sure you are still in the project with your pipeline and the secret roxsecrets Go to Pipelines-\u0026gt;Tasks Click Create-\u0026gt; ClusterTask Replace the YAML displayed with this: apiVersion: tekton.dev/v1beta1 kind: ClusterTask metadata: name: rox-image-check spec: params: - description: \u0026gt;- Secret containing the address:port tuple for StackRox Central (example - rox.stackrox.io:443) name: rox_central_endpoint type: string - description: Secret containing the StackRox API token with CI permissions name: rox_api_token type: string - description: \u0026#34;Full name of image to scan (example -- gcr.io/rox/sample:5.0-rc1)\u0026#34; name: image type: string - description: Use image digest result from s2i-java build task name: image_digest type: string results: - description: Output of `roxctl image check` name: check_output steps: - env: - name: ROX_API_TOKEN valueFrom: secretKeyRef: key: rox_api_token name: $(params.rox_api_token) - name: ROX_CENTRAL_ENDPOINT valueFrom: secretKeyRef: key: rox_central_endpoint name: $(params.rox_central_endpoint) image: registry.access.redhat.com/ubi8/ubi-minimal:latest name: rox-image-check resources: {} script: \u0026gt; #!/usr/bin/env bash set +x curl -k -L -H \u0026#34;Authorization: Bearer $ROX_API_TOKEN\u0026#34; https://$ROX_CENTRAL_ENDPOINT/api/cli/download/roxctl-linux --output ./roxctl \u0026gt; /dev/null; echo \u0026#34;Getting roxctl\u0026#34; chmod +x ./roxctl \u0026gt; /dev/null ./roxctl image check -c Workshop --insecure-skip-tls-verify -e $ROX_CENTRAL_ENDPOINT --image $(params.image)@$(params.image_digest) Take your time to understand the Tekton task definition:\n First, some parameters are defined, it\u0026rsquo;s important to understand that some of these are taken from or depend on the build task that ran before. The script action pulls the roxctl binary into the pipeline workspace so you\u0026rsquo;ll always have a version compatible with your ACS version. The most important bit is the roxctl execution, of course: it executes the image check command, checks only against policies from the category Workshop created above (this way you can check against a subset of policies!) and defines the image to check and its digest Add the Scan Task to the Pipeline Now add the rox-image-check task to your pipeline between the build and deploy steps.\n In the Pipelines view of your project click the three dots to the right and then Edit Pipeline Remember how we edited the pipeline directly in yaml before? 
OpenShift comes with a graphical Pipeline editor that we will use this time.\n Hover your mouse over the build task and click the + at the right side of it to add a task Click on Add task Then enter rox-image-check in the search box Click image to enlarge Click the Add button to add to the pipeline To add the required parameters from the pipeline for the task so the ACS client can connect to central, click the rox-image-check task. Click image to enlarge A form with the parameters will open, fill it in: rox_central_endpoint: roxsecrets rox_api_token: roxsecrets image: quay-quay-quay.apps.\u0026lt;DOMAIN\u0026gt;/openshift_workshop-int/workshop (if the DOMAIN placeholder hasn\u0026rsquo;t been replaced automatically, do it manually) Adapt the Project name if you changed it image_digest: $(tasks.build.results.IMAGE_DIGEST) This variable takes the result of the build task and uses it in the scan task. Click image to enlarge Don\u0026rsquo;t save yet Add the oc patch Task to the Pipeline As you remember we removed the trigger that updates the Deployment on ImageStream changes. Now the Deployment will never be updated and our new Image version will never be deployed to workshop-int.\nTo fix this we will add a new oc client Task that updates the Deployment, only after the Scan Task has run.\n While still in the visual pipeline editor Click on the + button to the left of the deploy Task Click on Add Task In the search window enter openshift and select the openshift-client from Red Hat Click on Add Click on the new openshift-client Task In the Task form on the right enter Display name : update-deploy SCRIPT : oc patch deploy/workshop -p '{\u0026quot;spec\u0026quot;:{\u0026quot;template\u0026quot;:{\u0026quot;spec\u0026quot;:{\u0026quot;containers\u0026quot;:[{\u0026quot;name\u0026quot;:\u0026quot;workshop\u0026quot;,\u0026quot;image\u0026quot;:\u0026quot;$(params.QUAY_URL)/openshift_workshop-int/workshop@$(tasks.build.results.IMAGE_DIGEST)\u0026quot;}]}}}}' Click image to enlarge Now save the pipeline Test the Scan Task With our custom System Policy still not set to enforce, we will first test the pipeline integration. Go to Pipelines and next to your pipeline click on the three dots and then Start. Now in the pipeline start form enter java-old-image in the Version field. (If you prefer the command line, a tkn sketch follows below.)\n Expected Result: The rox-image-check task should succeed, but if you have a look at the output (click the task in the visual representation) you should see that the build violated our policy! Enforce the Policy The last step is to enforce the System Policy. If the policy is violated the pipeline should be stopped and the application should not be deployed.\n Edit your custom System Policy Workshop RHSA-2021:4904 in the ACS Portal and set Response Method to Inform and enforce and then switch on Build and Deploy below. Run the pipeline again, first with Version java-old-image and then with Version openjdk-11-el7 (default) Expected results: We are sure you know by now what to expect! The pipeline should fail with the old image version and succeed with the latest image version! Make sure you run the pipeline successfully once, otherwise your application will not have a valid image tag when you kill the running pod in the next chapter Click image to enlarge Architecture recap Click image to enlarge "
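The same test run can be started from the command line with the Tekton CLI. The pipeline name workshop and the parameter name Version below are assumptions based on the earlier chapters, so use the names of your own pipeline, and add -w/--workspace flags if your pipeline requires workspaces:

```bash
# Sketch: start the pipeline with the vulnerable image version from a terminal.
tkn pipeline start workshop -n workshop-int \
  -p Version=java-old-image \
  --use-param-defaults

# Follow the logs of the run just started; the rox-image-check step prints the
# policy evaluation it receives from Central.
tkn pipelinerun logs -n workshop-int --last -f
```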
},
{
"uri": "https://devsecops-workshop.github.io/15-runtime-security/",
"title": "Securing Runtime Events",
"tags": [],
"description": "",
"content": "So far you\u0026rsquo;ve seen how ACS can handle security issues concerning Build and Deploy stages. But ACS is also able to detect and secure container runtime behaviour. Let\u0026rsquo;s have a look\u0026hellip;\nHandling Security Issues at Runtime As a scenario let\u0026rsquo;s assume you want to protect container workloads against attackers who are trying to install software. ACS comes with pre-configured policies for Ubuntu and Red Hat-based containers to detect if a package management tool is installed, this can be used in the Build and Deploy stages:\n Red Hat Package Manager in Image And, more important for this section about runtime security, a policy to detect the execution of a package manager as a runtime violation, using Kernel instrumentation:\n Red Hat Package Manager Execution In the ACS Portal, go to Platform Configuration-\u0026gt;Policy Management, search for the policies by e.g. typing policy and then red hat into the filter. Open the policy detail view by clicking it and have a look at what they do.\nYou can use the included policies as they are but you can always e.g. clone and adapt them to your needs or write completely new ones.\n As you can see the Red Hat Package Manager Execution policy will alert as soon as a process rpm or dnf or yum is executed.\nLike with most included policies it is not set to enforce!\n Test the Runtime Policy To see how the alert would look like, we have to trigger the condition:\n You should have a namespace with your Quarkus application runnning In the OpenShift Web Console navigate to the pod and open a terminal into the container Run yum search test Go to the Violations view in the ACS Portal. You should see a violation of the policy, if you click it, you\u0026rsquo;ll get the details. Run several yum commands in the terminal and check back with the Violations view: As long as you stay in the same deployment, there won\u0026rsquo;t be a new violation but you will see the details for every new violation of the same type in the details. Enforce Runtime Protection But the real fun starts when you enforce the policy. Using the included policy, it\u0026rsquo;s easy to just \u0026ldquo;switch it on\u0026rdquo;:\n In the ACS Portal bring up the Red Hat Package Manager Execution Policy again. Click the Edit Policy button in the Actions drop-down to the upper right. Click Next until you arrive at the Policy behaviour page. Under Response Method select Inform and enforce Set Configure enforcement behaviour for Runtime to Enforce on Runtime Click Next until you arrive at the last page and click Save Now trigger the policy again by opening a terminal into the pod in the OpenShift Web Console and executing yum. See what happens:\n Runtime enforcement will kill the pod immediately (via k8s). OpenShift will scale it up again automatically This is expected and allows to contain a potential compromise while not causing a production outage. Architecture recap Click image to enlarge "
},
{
"uri": "https://devsecops-workshop.github.io/16-acm/",
"title": "Advanced Cluster Management",
"tags": [],
"description": "",
"content": "Advanced Cluster Management Overview Red Hat Advanced Cluster Management for Kubernetes (ACM) provides management, visibility and control for your OpenShift and Kubernetes environments. It provides management capabilities for:\n cluster creation application lifecycle security and compliance All across hybrid cloud environments.\nClusters and applications are visible and managed from a single console, with built-in security policies. Run your operations from anywhere that Red Hat OpenShift runs, and manage any Kubernetes cluster in your fleet.\nInstall Advanced Cluster Managagement Before you can start using ACM, you have to install it using an Operator on your OpenShift cluster.\n Login to the OpenShift Web Console with you cluster admin credentials In the Web Console, go to Operators \u0026gt; OperatorHub and search for the Advanced Cluster Management for Kubernetes operator. Install the operator with the default settings It will install into a new Project open-cluster-management by default. After the operator has been installed it will inform you to create a MultiClusterHub, the central component of ACM.\n Click image to enlarge Click the Create MultiClusterHub button and have a look at the available installation parameters, but don\u0026rsquo;t change anything.\nClick Create.\nAt some point you will be asked to refresh the web console. Do this, you\u0026rsquo;ll notice a new drop-down menu at the top of the left menu bar. If left set to local-cluster you get the standard console view, switching to All Clusters takes you to a view provided by ACM covering all your clusters.\nOkay, right now you\u0026rsquo;ll only see one, your local-cluster listed here.\nA first look at Advanced Cluster Management Now let\u0026rsquo;s change to the full ACM console:\n Switch back to the local-clusters view Go to Operators-\u0026gt;Installed operators and click the Advanced Cluster Management for Kubernetes operator In the operator overview page choose the MultiClusterHub tab. The multiclusterhub instance you deployed should be in Status Running by now. Switch back to the All Clusters You are now in your ACM dashboard!\n Click image to enlarge Have a look around:\n Go to Infrastructure-\u0026gt;Clusters You\u0026rsquo;ll see your lab OpenShift cluster here, the infrastructure it\u0026rsquo;s running on and the version. There might be a version update available, don\u0026rsquo;t run it please\u0026hellip; ;) If you click the cluster name, you\u0026rsquo;ll get even more information, explore! Manage Cluster Lifecycle One of the main features of Advanced Cluster Management is cluster lifecycle management. 
ACM can help to:\n manage credentials deploy clusters to various cloud providers and on-premises import existing clusters use labels on clusters for management purposes Let\u0026rsquo;s give this a try!\nDeploy an OpenShift Cluster Okay, to not overstress our cloud resources, and for the fun of it, we\u0026rsquo;ll deploy a Single Node OpenShift (SNO) cluster to the same AWS account your lab cluster is running in.\nCreate Cloud Credentials The first step is to create credentials in ACM to deploy to the Amazon Web Services account.\nYou\u0026rsquo;ll get the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY needed to deploy to AWS from your facilitators.\n On your OpenShift cluster, create a new namespace sno In the ACM web console, navigate to Credentials and click Add credential: As Credential type select AWS Credential name: sno Namespace: Choose the sno namespace Base DNS domain: sandbox\u0026lt;NNNN\u0026gt;.opentlc.com, replace \u0026lt;NNNN\u0026gt; with your id, you can find it e.g. in the URL Click Next Now you need to enter the AWS credentials, enter the Access key ID and Secret access key as provided. Click Next Click Next again for proxy settings Now you need to enter an OpenShift Pull Secret, copy it from your OpenShift cluster: Switch to the project openshift-config and copy the content of the secret pull-secret To connect to the managed SNO you need to enter an SSH private key ($HOME/.ssh/\u0026lt;LABID\u0026gt;key.pem) and public key ($HOME/.ssh/\u0026lt;LABID\u0026gt;key.pub). Use the respective keys from your lab environment\u0026rsquo;s bastion host; the access details will be provided. The \u0026lt;LABID\u0026gt; can be found in the URL, e.g. multicloud-console.apps.cluster-z48z9.z48z9.sandbox910.opentlc.com Click Next Click Add You have created a new set of credentials to deploy to the AWS account you are using.\nDeploy Single Node OpenShift Now you\u0026rsquo;ll deploy a new OpenShift instance:\n In the ACM console, navigate to Infrastructure -\u0026gt; Clusters and click Create cluster. As provider choose Amazon Web Services Infrastructure provider credential: Select the sno credential you created. Control plane type - AWS: Standalone Cluster name: aws-sno Cluster set: Leave empty \u0026hellip; Base DNS Domain: Set automatically from the credentials Release name: Use the latest 4.14.19 release available Additional Label: sno=true Click Next On the Node pools view leave the Region set to us-east-1 Architecture: amd64 Expand Control plane pool, read the information for Zones (and leave the setting empty) and change Instance Type to m5.2xlarge. Expand Worker pool 1: Set Node count to 0 (we want a single node OCP\u0026hellip;). Click Next Have a look at the network screen but don\u0026rsquo;t change anything Now click Next until you arrive at the Review. Do the following:\n Set YAML: On In the cluster YAML editor select the install-config tab In the controlPlane section change the replicas field to 1. It\u0026rsquo;s time to deploy your cluster, click Create!\nACM monitors the installation of the new cluster and finally imports it. Click View logs under Cluster install to follow the installation log.\nInstallation of a SNO takes around 30 minutes in our lab environment.\n After the installation has finished, access the Clusters section in the ACM portal again.\n Click image to enlarge Explore the information ACM is providing, including the Console URL and the access credentials of your shiny new SNO instance. 
Use them to login to the SNO Web Console.\nApplication Lifecycle Management In the previous lab, you explored the Cluster Lifecycle functionality of RHACM by deploying a new OpenShift single-node instance to AWS. Now let\u0026rsquo;s have a look at another capability, Application Lifecycle management.\nApplication Lifecycle management is used to manage applications on your clusters. This allows you to define a single or multi-cluster application using Kubernetes specifications, but with additional automation of the deployment and lifecycle management of resources to individual clusters. An application designed to run on a single cluster is straightforward and something you ought to be familiar with from working with OpenShift fundamentals. A multi-cluster application allows you to orchestrate the deployment of these same resources to multiple clusters, based on a set of rules you define for which clusters run the application components.\nThe naming convention of the different components of the Application Lifecycle model in RHACM is as follows:\n Channel: Defines a place where deployable resources are stored, such as an object store, Kubernetes namespace, Helm repository, or GitHub repository. Subscription: Definitions that identify deployable resources available in a Channel resource that are to be deployed to a target cluster. PlacementRule: Defines the target clusters where subscriptions deploy and maintain the application. It is composed of Kubernetes resources identified by the Subscription resource and pulled from the location defined in the Channel resource. Application: A way to group the components here into a more easily viewable single resource. An Application resource typically references a Subscription resource. Creating a Simple Application with ACM Start with adding labels to your two OpenShift clusters in your ACM console:\n On the local cluster add a label: environment=prod On the new SNO deployment add label: environment=dev Click image to enlarge Now it\u0026rsquo;s time to actually deploy the application. But first have a look at the manifest definitions ACM will use as deployables at https://github.com/devsecops-workshop/book-import/tree/master/book-import.\nThen in the ACM console navigate to Applications:\n Click Create application, select Subscription Make sure the view is set to YAML Name: book-import Namespace: book-import Under Repository location for resources -\u0026gt; Repository types, select GIT URL: https://github.com/devsecops-workshop/book-import.git Branch: master Path: book-import Select Deploy application resources only on clusters with all specified labels Cluster sets: default Label: environment Value: dev Click image to enlarge Click Create and then the topology tab to view the application being deployed:\n Click image to enlarge Select Cluster, the application should have been deployed to the SNO cluster because of the label environment=dev Select the Route and click on the URL, this should take you to the Book Import application Explore the other objects Now edit the application in the ACM console and change the label to environment=prod. What happens?\nIn this simple example you have seen how to deploy an application to an OpenShift cluster using ACM. 
All manifests defining the application were kept in a Git repo, and ACM then used the manifests to deploy the required objects into the target cluster.\nBonus Chapter : Pre/Post Tasks with Ansible Automation Platform 2 You can integrate Ansible Automation Platform and the Automation Controller (formerly known as Ansible Tower) with ACM to perform pre / post tasks within the application lifecycle engine. The prehook and posthook tasks allow you to trigger an Ansible playbook before and after the application is deployed, respectively.\nNote that you will need a Red Hat Account with a valid Ansible subscription for this part.\nInstall Automation Controller To give this a try you need an Automation Controller instance. So let\u0026rsquo;s deploy one on your cluster using the AAP Operator:\n In OperatorHub search for the Ansible Automation Platform operator and install it using the default settings. After installation has finished create an Automation Controller instance using the Operator, name it automationcontroller When the instance is ready, look up the automationcontroller-admin-password secret Then look up the automationcontroller route, access it and login as user admin using the password from the secret Apply a manifest or use the username/password login to the Red Hat Customer Portal and add a subscription You are now set with a shiny new Ansible Automation Platform Controller!\nAdd Auth Token In the Automation Controller web UI, generate a token for the admin user:\n Go to Users Click admin and select Tokens Click the Add button As description add Token for use by ACM Update the Scope to Write Click Save Save the token value to a text file, you will need this token later!\nConfigure Template in Automation Controller For Automation Controller to run something we must configure a Project and a Template first.\nCreate an Ansible Project:\n Select Projects in the left menu Click Add Name: ACM Test Organization: Default SCM Type: Git SCM URL: https://github.com/devsecops-workshop/ansible-acm.git Click Save Create an Ansible Job Template:\n Select Templates in the left menu. Click Add then Add Job Template Name: acm-test Inventory: Demo Inventory Project: ACM Test Playbook: message.yml Check Prompt on launch for Variables Click Save Click Launch Verify that the Job ran by going to Jobs and looking for an acm-test job showing a successful Playbook run.\nCreate AAP credentials in ACM Set up the credential which is going to allow ACM to interact with your AAP instance in your ACM Portal:\n Click on Credentials on the left menu and select the Add credential button. Credential type: Red Hat Ansible Automation Platform Credential name: appaccess Namespace: open-cluster-management Click Next Ansible Tower Host: Ansible Tower token: Click Next and Add Use the ACM - Ansible integration And now let\u0026rsquo;s configure the ACM integration with Ansible Automation Platform to kick off a job in Automation Controller. 
In this case the Ansible job will just run our simple playbook that will only output a message.\nIn the ACM Portal:\n Go to the Applications menu on the left and click Create application → Subscription Enter the following information: Name: book-import2 Namespace: book-import2 Under repository types, select GIT repository URL: https://github.com/devsecops-workshop/book-import.git Branch: prehook Path: book-import Expand the Configure automation for prehook and posthook dropdown menu Ansible Automation Platform credential: appaccess Select Deploy application resources only on clusters matching specified labels Label: environment Value: dev Click Create Give this a few minutes. The application will complete and in the application topology view you will see the Ansible prehook. In Automation Controller go to Jobs and verify the Automation Job run.\n"
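By the way, the labels used for placement in this chapter can also be managed with oc on the hub cluster, since ACM represents every imported cluster as a cluster-scoped ManagedCluster resource. The cluster names below are assumptions, check oc get managedclusters for the names in your environment:

```bash
# Sketch: set the placement labels from the command line instead of the ACM console.
oc get managedclusters

oc label managedcluster local-cluster environment=prod --overwrite
oc label managedcluster aws-sno environment=dev --overwrite
```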
},
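For reference, the Create application wizard essentially generates the Channel, PlacementRule and Subscription resources described above. The sketch below illustrates that wiring for book-import; it is not the exact YAML the wizard produces, and the API versions and annotation names are assumptions, so compare with the YAML view in the wizard before relying on it:

```bash
# Sketch: the subscription model behind the book-import application.
oc apply -f - <<'EOF'
---
apiVersion: v1
kind: Namespace
metadata:
  name: book-import
---
apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: book-import-channel
  namespace: book-import
spec:
  type: Git
  pathname: https://github.com/devsecops-workshop/book-import.git
---
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: book-import-placement
  namespace: book-import
spec:
  clusterSelector:
    matchLabels:
      environment: dev        # switch to 'prod' to move the app to the other cluster
---
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: book-import-subscription
  namespace: book-import
  annotations:
    apps.open-cluster-management.io/git-branch: master
    apps.open-cluster-management.io/git-path: book-import
spec:
  channel: book-import/book-import-channel
  placement:
    placementRef:
      kind: PlacementRule
      name: book-import-placement
EOF
```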
{
"uri": "https://devsecops-workshop.github.io/20-appendix/",
"title": "Appendix",
"tags": [],
"description": "",
"content": "Create ACS init bundle in ACS Portal Create the init bundle using the ACS Portal:\n Navigate to Platform Configuration → Integrations. Under the Authentication Tokens section, click on Cluster Init Bundle. Click Generate bundle Enter a name for the cluster init bundle and click Generate. Click Download Kubernetes Secret File to download the generated bundle. If you are running oc on your laptop, you are set. If you are SSH-ing to another host (like the bastion host) to run oc, you have to scp the init bundle file over there. If you are using the OpenShift Web Terminal you have to use the API method.\n Create a serviceaccount to scan the internal OpenShift registry The integrations to the internal registry were created automatically. But to enable scanning of images in the internal registry, you\u0026rsquo;ll have to configure valid credentials, so this is what you\u0026rsquo;ll do:\n add a serviceaccount assign it the needed privileges configure the Integrations in ACS with the new credentials But the first step is to disable the auto-generate mechanism, otherwise your updated credentials would be set back automatically:\n In the OpenShift Web Console, switch to the project stackrox, go to Installed Operators-\u0026gt;Advanced Cluster Security for Kubernetes Open your Central instance stackrox-central-services Switch to the YAML view, under spec: add the following YAML snippet (one indent): customize: envVars: - name: ROX_DISABLE_AUTOGENERATED_REGISTRIES value: 'true' Click Save Create ServiceAccount to read images from Registry\n In the OpenShift Web Console make sure you are still in the stackrox Project User Management -\u0026gt; ServiceAccounts -\u0026gt; Create ServiceAccount Replace the example name in the YAML with acs-registry-reader and click Create In the new ServiceAccount, under Secrets click one of the acs-registry-reader-token-... secrets Under Data copy the Token Using oc give the ServiceAccount the right to read images from all projects: oc adm policy add-cluster-role-to-user 'system:image-puller' system:serviceaccount:stackrox:acs-registry-reader -n stackrox Configure Registry Integrations in ACS\nAccess the ACS Portal and configure the already existing integrations of type Generic Docker Registry. Go to Platform Configuration -\u0026gt; Integrations -\u0026gt; Generic Docker Registry. You should see a number of autogenerated (from existing pull-secrets) entries.\nYou have to change four entries pointing to the internal registry, you can easily recognize them by the placeholder Username serviceaccount.\nFor each of the four local registry integrations click Edit integration using the three dots at the right:\n Put in acs-registry-reader as Username Paste the token you copied from the secret into the Password field Select Disable TLS certificate validation Press the Test button to validate the connection and press Save when the test is successful. ACS is now able to scan images in the internal registry!\n"
},
{
"uri": "https://devsecops-workshop.github.io/",
"title": "",
"tags": [],
"description": "",
"content": "DevSecOps Workshop What is it about This workshop will introduce you to the application development cycle leveraging OpenShift\u0026rsquo;s tooling \u0026amp; features with a special focus on securing your environment using Advanced Cluster Security for Kubernetes (ACS). You will get a brief introduction in several OpenShift features like OpenShift Pipelines, OpenShift GitOps, OpenShift DevSpaces. And all in a fun way.\nArchitecture overview Click image to enlarge Credits \u0026amp; Contribution This workshop was created by\n Daniel Brintzinger Goetz Rieger Sebastian Dehn with contributions from\n Tobias Michelis Feel free to open an issue or create a pull request in GitHub\n"
},
{
"uri": "https://devsecops-workshop.github.io/categories/",
"title": "Categories",
"tags": [],
"description": "",
"content": ""
},
{
"uri": "https://devsecops-workshop.github.io/tags/",
"title": "Tags",
"tags": [],
"description": "",
"content": ""
}]