
Initial working version of automated Tower on OpenShift #9

Status: open, wants to merge 2 commits into master.
tower-on-openshift/README.adoc (new file, 166 additions):
= Install Ansible Tower on OpenShift

== Prerequisites

- Ansible
- python3-docker on the control host
- a RHEL 7 ISO image downloaded, e.g. to /var/lib/libvirt/images/rhel-server-7.5-x86_64-dvd.iso

== Create a fitting VM

Review the memory and vCPU settings (see the sizing considerations below):

------------------------------------------------------------------------
virt-install \
--name tower_on_openshift \
--os-variant rhel7 \
--initrd-inject=/home/elavarde/Documents/Work/Ansible/tower_on_openshift.ks \
--location /var/lib/libvirt/images/rhel-server-7.5-x86_64-dvd.iso \
--extra-args "ks=file:/tower_on_openshift.ks" \
--ram 8192 \
--disk pool=default,boot_order=1,format=qcow2,bus=virtio,discard=unmap,sparse=yes,size=10 \
--disk pool=default,boot_order=2,format=qcow2,bus=virtio,discard=unmap,sparse=yes,size=50 \
--controller scsi,model=virtio-scsi \
--rng /dev/random \
--vcpus 3 \
--cpu host \
--accelerate \
--network network=default,model=virtio
# --extra-args "ks=file:/tower_on_openshift.ks console=ttyS0,115200" \
# --nographics \
# --boot useserial=on \
------------------------------------------------------------------------

FIXME: as a first step, use only one disk of 60GB to avoid having to fiddle with the 2nd disk for now...

WAIT...

In the meantime, download the Linux OpenShift client from https://access.redhat.com/downloads/content/290, e.g. `oc-3.11.59-linux.tar.gz`.

When finished:

. enter hostname/IP address into /etc/hosts on the control host
. clean-up `.ssh/known_hosts`
. ssh-copy-id a key which you've also ssh-add'ed (there are other ways to address this but let's keep it simple)
. copy `tower_on_openshift.example.inventory` to `tower_on_openshift.local.inventory` and adapt to your needs.
. call the following command:
+
------------------------------------------------------------------------
ansible-playbook -i tower_on_openshift.local.inventory tower_on_openshift.yml \
-e 'rhsm_username=YOUR_RHSM_USER' \
    -e 'rhsm_password=YOUR_RHSM_PASSWORD'
------------------------------------------------------------------------
+
. Follow the instructions from the playbook as to what to do next (basically call `oc cluster up` to start the OpenShift cluster, followed by `./setup_openshift.sh` to install Tower).
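The post-install steps above can be sketched as shell commands on the control host. This is a sketch, not part of the playbook: the IP address, hostname, and key path are examples consistent with the rest of this README.

```shell
# Assumption: the VM got 192.168.122.226 and the hostname used throughout this README
echo "192.168.122.226 tower-ocp.ewl.example.com" | sudo tee -a /etc/hosts

# Clean up any stale host key, then install and load an SSH key for the VM
ssh-keygen -R tower-ocp.ewl.example.com
ssh-copy-id -i ~/.ssh/id_rsa.pub root@tower-ocp.ewl.example.com
ssh-add ~/.ssh/id_rsa

# Create a local inventory and run the playbook
cp tower_on_openshift.example.inventory tower_on_openshift.local.inventory
ansible-playbook -i tower_on_openshift.local.inventory tower_on_openshift.yml \
  -e 'rhsm_username=YOUR_RHSM_USER' -e 'rhsm_password=YOUR_RHSM_PASSWORD'
```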

== Manual steps

CAUTION: The following lines are probably not needed anymore: they describe the manual steps, which are now almost completely replaced by the playbook. Certain steps might still be of interest, though (e.g. how to create a fitting "external" PostgreSQL pod in OpenShift).

=== Install OpenShift as a one-node cluster

Log in as root (password `redhat`), then register the system:

------------------------------------------------------------------------
subscription-manager register                 # enter CDN user and password
subscription-manager list --available | less  # search for "Employee SKU", note the pool ID
subscription-manager attach --pool=8a85f9833e1404a9013e3cddf99305e6
------------------------------------------------------------------------

------------------------------------------------------------------------
yum repolist                                                   # rhel-7-server-rpms only...
subscription-manager repos --enable=rhel-7-server-extras-rpms  # for Docker
yum -y update                                                  # wait for 265 packages to be installed...

yum -y install docker

scp oc-3.11.59-linux.tar.gz [email protected]:/tmp
cd /usr/local/bin
tar xvzf /tmp/oc-3.11.59-linux.tar.gz  # the oc binary should be executable!
------------------------------------------------------------------------

See https://github.com/openshift/origin/blob/release-3.11/docs/cluster_up_down.md for reference.

------------------------------------------------------------------------
echo '{ "insecure-registries": ["172.30.0.0/16"] }' > /etc/docker/daemon.json

systemctl start docker && systemctl enable docker

iprange=$(docker network inspect -f "{{range .IPAM.Config }}{{ .Subnet }}{{end}}" bridge)
firewall-cmd --permanent --new-zone dockerc
firewall-cmd --permanent --zone dockerc --add-source ${iprange}
firewall-cmd --permanent --zone dockerc --add-port 8443/tcp
firewall-cmd --permanent --zone dockerc --add-port 53/udp
firewall-cmd --permanent --zone dockerc --add-port 8053/udp
firewall-cmd --reload
------------------------------------------------------------------------

------------------------------------------------------------------------
pvcreate /dev/vdb
vgcreate vgvols /dev/vdb
lvcreate --name lvvols --size 20G vgvols
mkfs.ext4 -L VOLSFS /dev/mapper/vgvols-lvvols
mount /dev/mapper/vgvols-lvvols /srv
# --host-pv-dir no longer exists, and neither does --host-data-dir,
# hence we just add the disk to the already existing VG/LV instead:
pvcreate /dev/vdb
vgextend vg1 /dev/vdb
lvextend -r -l100%VG /dev/vg1/root
# FIXME this could be simpler; see the note with virt-install about using only one disk
------------------------------------------------------------------------

Following https://access.redhat.com/RegistryAuthentication:

------------------------------------------------------------------------
docker login https://registry.redhat.io  # enter your RHN credentials -> Login Succeeded
cat ~/.docker/config.json                # now contains an auth token from registry.redhat.io

# make sure you are in a directory you don't mind being filled...
oc cluster up \
  --public-hostname=tower-ocp.ewl.example.com \
  --routing-suffix=192.168.122.226.nip.io  # the IP address of the VM
------------------------------------------------------------------------

WAIT...

Then give the admin user cluster-admin rights:

------------------------------------------------------------------------
oc login -u system:admin
oc adm policy add-cluster-role-to-user cluster-admin admin
------------------------------------------------------------------------

=== Install PostgreSQL

- Connect to the UI as `developer`: https://tower-ocp.ewl.example.com:8443/console/
- Select the `myproject` project.
- Add a PostgreSQL 9.6 DB to the project:
* Memory Limit: 512Mi (default)
* Namespace: openshift (default)
* Database Service Name: pgawx
* PG Connection Username: awx
* PG Connection Password: pgredhat
* PG Database Name: awxdb
* Volume Capacity: 10Gi
* Version of PG Image: 9.6
- Result:
* Username: awx
* Password: pgredhat
* Database Name: awxdb
* Connection URL: postgresql://pgawx:5432/
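The same database can also be created from the CLI using the `postgresql-persistent` template shipped with OpenShift 3.x. The parameter names below come from that template; verify them for your cluster with `oc process --parameters -n openshift postgresql-persistent` before relying on this sketch.

```shell
oc new-app --template=postgresql-persistent -n myproject \
  -p DATABASE_SERVICE_NAME=pgawx \
  -p POSTGRESQL_USER=awx \
  -p POSTGRESQL_PASSWORD=pgredhat \
  -p POSTGRESQL_DATABASE=awxdb \
  -p VOLUME_CAPACITY=10Gi \
  -p POSTGRESQL_VERSION=9.6

# wait until the database pod is rolled out
oc rollout status dc/pgawx -n myproject
```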

=== Install Ansible Tower

Reference:: https://docs.ansible.com/ansible-tower/latest/html/administration/openshift_configuration.html

------------------------------------------------------------------------
cd
wget https://releases.ansible.com/ansible-tower/setup_openshift/ansible-tower-openshift-setup-3.4.0.tar.gz
tar xvzf ansible-tower-openshift-setup-3.4.0.tar.gz
cd ansible-tower-openshift-setup-3.4.0
------------------------------------------------------------------------

Edit the inventory file as in tower.inventory:

- `openshift_host=https://tower-ocp.ewl.example.com:8443` - the port is important, or 443 will be used!
- `openshift_skip_tls_verify=true`, because our setup is only for testing; otherwise you get __"error: The server is using a certificate that does not match its hostname: x509: certificate is valid for *.router.default.svc.cluster.local, router.default.svc.cluster.local, not localhost"__.
- Increase these for customer requirements (or decrease them for test setups):
* `task_cpu_request=1500` is the default, sufficient for 6 simultaneous forks (1000 should be sufficient for 4 forks).
* `task_mem_request=2` is the default, sufficient for 20 simultaneous forks (1 should be sufficient for 10 forks).
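The defaults above imply roughly 250 millicores of CPU and 100MB of task memory per simultaneous fork (1500m for 6 forks, 2GB for 20 forks). A small helper to derive request values for a target fork count; linear scaling is an assumption here, not a documented guarantee:

```shell
forks=8

# ~250 millicores of CPU per fork (1500m serves 6 forks)
task_cpu_request=$(( forks * 250 ))

# ~100MB of memory per fork, rounded up to whole GB (2GB serves 20 forks)
task_mem_request=$(( (forks + 9) / 10 ))

echo "task_cpu_request=${task_cpu_request} task_mem_request=${task_mem_request}"
# -> task_cpu_request=2000 task_mem_request=1
```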



4 vCPUs are required with the defaults, because we also need some headroom for PostgreSQL on top of the 3 vCPUs required by Tower:

------------------------------------------------------------------------
# oc describe pod ansible-tower-0 | grep -e cpu -e memory
cpu: 500m
memory: 1Gi
cpu: 1500m
memory: 2Gi
cpu: 500m
memory: 2Gi
cpu: 500m
memory: 1Gi
Tolerations: node.kubernetes.io/memory-pressure:NoSchedule
Warning FailedScheduling 3m (x212 over 38m) default-scheduler 0/1 nodes are available: 1 Insufficient cpu.
------------------------------------------------------------------------

tower-on-openshift/TODO.adoc (new file, 5 additions):
- automate oc completion to /etc/profile.d/oc.sh
- automate the provisioning and the VM
- make the whole thing less specific to my environment
- replace oc client command with k8s module
- make the setup persistent e.g. with https://github.com/openshift-evangelists/oc-cluster-wrapper
tower-on-openshift/postgresql-pvc.yml (new file, 10 additions):

------------------------------------------------------------------------
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgresql-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
------------------------------------------------------------------------
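For the `db_type=intern` case, this claim can also be created and checked by hand; the project name `myproject` matches the one used elsewhere in this PR:

```shell
oc create -f tower-on-openshift/postgresql-pvc.yml -n myproject
oc get pvc postgresql-pvc -n myproject  # STATUS should eventually be Bound
```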
tower-on-openshift/tower.inventory.j2 (new file, 80 additions):
localhost ansible_connection=local ansible_python_interpreter="/usr/bin/env python"

[all:vars]

# This will create or update a default admin (superuser) account in Tower
admin_user=admin
admin_password='redhat'

# Tower Secret key
# It's *very* important that this stay the same between upgrades or you will lose
# the ability to decrypt your credentials
secret_key='verysecretkey'

# Database Settings
# =================

# Set pg_hostname if you have an external postgres server, otherwise
# a new postgres service will be created
{% if db_type == "extern" %}
pg_hostname=pgawx
{% else %}
#pg_hostname=pgawx
{% endif %}

# If using an external database, provide your existing credentials.
# If you choose to use the provided containerized Postgres deployment, these
# values will be used when provisioning the database.
pg_username='awx'
pg_password='pgredhat'
pg_database='awxdb'
pg_port=5432

rabbitmq_password='rmqredhat'
rabbitmq_erlang_cookie='rmqcookie'

# Note: The user running this installer will need cluster-admin privileges.
# Tower's job execution container requires running in privileged mode,
# and a service account must be created for auto peer-discovery to work.

# Deploy into Openshift
# =====================

openshift_host=https://tower-ocp.ewl.example.com:8443
openshift_skip_tls_verify=true
openshift_project=myproject
openshift_user=admin
openshift_password=redhat

# If you don't want to hardcode a password here, just do:
# ./setup_openshift.sh -e openshift_token=$TOKEN

# Skip this section if you BYO database. This is only used when you want the installer to deploy
# a containerized Postgres deployment inside of your OpenShift cluster.
# This is only recommended if you have experience storing and managing persistent
# data in containerized environments.
#
# Name of a PVC you've already provisioned for database:
{% if db_type == "intern" %}
openshift_pg_pvc_name=postgresql-pvc
{% else %}
# openshift_pg_pvc_name=postgresql-pvc
{% endif %}
#
# Or... use an emptyDir volume for the OpenShift Postgres pod.
# Useful for demos or testing purposes.
{% if db_type == "empty" %}
openshift_pg_emptydir=true
{% else %}
# openshift_pg_emptydir=false
{% endif %}

# Deploy into Vanilla Kubernetes
# ==============================

# kubernetes_context=test-cluster
# kubernetes_namespace=tower

# reduce (or increase) the memory and CPU size required
task_cpu_request=1000
task_mem_request=1
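A quick way to render this Jinja2 template outside the playbook is Ansible's ad-hoc `template` module (the playbook itself presumably does something equivalent; `db_type` must be supplied, as the template branches on it):

```shell
ansible localhost -c local -m template \
  -a "src=tower.inventory.j2 dest=./inventory" \
  -e db_type=intern
```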
tower-on-openshift/tower_on_openshift.example.inventory (new file, 14 additions):
[all]
# we assume SSH key has been copied and is ssh-added...
tower-ocp.ewl.example.com ansible_user=root

[all:vars]

rhsm_pool_pattern="^Employee SKU$"
# rhsm_pool_id: any pool id as alternative to rhsm_pool_pattern
oc_tar_source=~/Downloads/oc-3.11.59-linux.tar.gz

tower_version=3.4.1
tower_extract_dir=/root
# db_type can be intern, extern or empty
db_type=intern
tower-on-openshift/tower_on_openshift.ks (new file, 37 additions):
install
lang en_US.UTF-8
keyboard --vckeymap=us --xlayouts='us'
firstboot --enable
auth --enableshadow --passalgo=sha512
services --enabled=chronyd
eula --agreed
reboot

# network
network --bootproto=dhcp --device=eth0 --noipv6 --activate --hostname=tower-ocp.ewl.example.com

# System timezone
timezone Europe/Berlin --isUtc

# Disks
bootloader --location=mbr --boot-drive=vda
ignoredisk --only-use=vda
zerombr
clearpart --all --initlabel --drives=vda
part /boot/efi --fstype="vfat" --size=200 --ondisk=vda
part /boot --fstype="ext2" --size=512 --ondisk=vda --asprimary
part pv.10 --fstype="lvmpv" --size=1 --grow --ondisk=vda

# LVMs
volgroup vg1 pv.10
logvol / --fstype=xfs --name=root --vgname=vg1 --size=1 --grow
#logvol swap --fstype=swap --size=2048 --vgname=vg1

rootpw --plaintext redhat

%packages
@base
net-tools
wget

%end
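The kickstart file can be syntax-checked before use with `ksvalidator` from the `pykickstart` package; the version string below is an assumption, pick the one matching your RHEL 7 release:

```shell
yum -y install pykickstart
ksvalidator --version RHEL7 tower_on_openshift.ks
```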