Shxco apache to nginx upgrade #154

Merged
merged 12 commits on Dec 20, 2023
16 changes: 9 additions & 7 deletions group_vars/shxco/vars.yml
@@ -12,19 +12,21 @@ django_app: "{{ python_app }}"
 symlink: mep
 # wsgi file relative to deploy location
 wsgi_path: "{{ django_app }}/wsgi.py"
-# use python 3.6
-python_version: 3.6
+# use python 3.8
+python_version: 3.8
 # nodejs version
-node_version: "10"
+node_version: "18"

 # Override clone root to use deploy user home instead of root
 clone_root: "/home/{{ deploy_user }}/repos"
 # don't distinguish between qa/prod paths
 install_root: "/srv/www/{{ app_name }}"
-# apache location
-apache_app_path: "/var/www/{{ app_name }}"
-# use site name instead of 'mep' for apache site config
-apache_conf_name: "shakespeareandco"
+# set passenger defaults for production; override for other environments
+passenger_app_root: "/var/www/{{ app_name }}"
+passenger_server_name: "shakespeareandco.princeton.edu"
+passenger_startup_file: "{{ app_name }}/wsgi.py"
+passenger_python: "{{ passenger_app_root }}/env/bin/python"

 # pul deploy user
 deploy_user: "conan"
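For orientation, the passenger_* variables above presumably feed the nginx/Passenger site configuration rendered by the passenger role. A minimal sketch of such a template task follows; the template name, destination path, and handler name are illustrative assumptions, not taken from this repo.

# Hypothetical sketch only: src, dest, and handler names are assumptions.
- name: Render nginx/Passenger site config from the passenger_* vars
  become: true
  ansible.builtin.template:
    src: passenger_site.conf.j2
    dest: "/etc/nginx/sites-available/{{ app_name }}"
    owner: root
    group: root
    mode: "0644"
  notify: restart nginx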
13 changes: 12 additions & 1 deletion group_vars/shxco_qa/vars.yml
@@ -12,6 +12,17 @@ zk_host: "lib-zk-staging1:2181,lib-zk-staging2:2181,lib-zk-staging3:2181/solr8"
 solr_url: "http://lib-solr8-staging.princeton.edu:8983/solr/"
 solr_server: "{{ groups['solr_staging'][0] }}"

+# override passenger server name with qa hostname
+application_url: "test-shakespeareandco.cdh.princeton.edu"
+
+# passenger settings
+passenger_server_name: "{{ application_url }}"
+
+# source host when replicating data/media (from ansible host inventory file)
+replication_source_host: shxco_prod

 # configure scripts to run as cron jobs
 crontab:
@@ -21,4 +32,4 @@ crontab:
   minute: 30
   hour: 2
   job: "bin/cron-wrapper {{ deploy }}/env/bin/python {{ deploy }}/manage.py twitterbot_100years schedule >> {{ logging_dir }}/twitterbot_100years.log 2>&1"
-  state: absent
+  state: absent
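For context, the crontab entries above are presumably applied by the configure_crontab role via the ansible.builtin.cron module. A minimal sketch of such a loop is below; the task name and the assumption that crontab is a list of dicts with these keys are illustrative, not taken from this repo.

# Hypothetical sketch only: assumes crontab is a list of dicts with these keys.
- name: Manage project cron jobs
  ansible.builtin.cron:
    name: "{{ item.name }}"
    minute: "{{ item.minute | default('*') }}"
    hour: "{{ item.hour | default('*') }}"
    job: "{{ item.job }}"
    user: "{{ deploy_user }}"
    state: "{{ item.state | default('present') }}"
  loop: "{{ crontab }}"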
4 changes: 2 additions & 2 deletions playbooks/replicate.yml
@@ -14,7 +14,7 @@

 # source hosts tasks
 - name: Generate database and media backups on source host
-  hosts: geniza_prod, cdhweb_prod, prosody_prod
+  hosts: geniza_prod, cdhweb_prod, prosody_prod, shxco_prod
   connection: ssh
   remote_user: pulsys
   # generate backups on source host
@@ -48,7 +48,7 @@
 - name: Restore database and media backups on target host
   # hosts: dev
   # connection: local
-  hosts: geniza_qa, cdhweb_qa, prosody_qa
+  hosts: geniza_qa, cdhweb_qa, prosody_qa, shxco_qa
   connection: ssh
   remote_user: pulsys
   vars:
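For orientation, the "generate backups" play typically dumps the database on the source host before the restore play pulls it onto the target. A minimal sketch of what such a task could look like is below; the db_name variable, dump path, and filename are assumptions, not taken from this repo.

# Hypothetical sketch only: db_name and the dump path are assumed.
- name: Dump the application database to a backup file
  become: true
  become_user: postgres
  ansible.builtin.command: "pg_dump --format=custom --file=/tmp/{{ db_name }}.dump {{ db_name }}"
  args:
    creates: "/tmp/{{ db_name }}.dump"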
3 changes: 2 additions & 1 deletion playbooks/shxco_qa.yml
@@ -18,12 +18,13 @@
       name: pulibrary.princeton_ansible.timezone
     - build_project_repo
     - postgresql
+    - passenger
     - build_npm
     - run_webpack # dependency for collectstatic
     - configure_logging # logging directory must exist before running django commands
     - django
     - solr_collection
-    - configure_apache
+    # - configure_apache
Contributor (review comment): left a suggested change on the commented-out "# - configure_apache" line.
     - finalize_deploy
     - configure_crontab # used in prod but not always in qa
     - close_deployment
15 changes: 9 additions & 6 deletions roles/build_npm/tasks/main.yml
@@ -19,14 +19,17 @@
     channel: "{{ node_version }}/stable"
     state: present
   when: ansible_distribution == "Ubuntu"
+  register: snap_results

-# NOTE: upgrading nodejs on cdh-geniza1 failed;
-# was able to get it working with refresh
-# instead of install:
-# sudo snap refresh node --channel=16
-# We may want to run refresh as a command, see
-# https://serverfault.com/a/1025300
+# NOTE: upgrading nodejs with the ansible snap module fails, even though
+# the documentation claims it should refresh when the channel changes.
+# Manually run a refresh command to ensure version changes take effect, e.g.:
+# sudo snap refresh node --channel=18
+# NOTE2: could add a node -v check and only refresh on mismatch

+- name: Refresh nodejs to ensure version changes take effect
+  become: true
+  ansible.builtin.command: "snap refresh node --channel={{ node_version }}/stable"
Contributor:
@kayiwa FYI, we (@quadrismegistus and I) needed to solve the node upgrade problem (#143) and came up with a simpler solution than switching to the princeton ansible role (as previously proposed). Running the refresh command here resolves the problem, and doesn't seem to cause any problems or delay when the refresh has already been done. The comment above should make it clear why this is here; if/when ansible actually runs the refresh as it claims to, this stanza can be removed.
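A sketch of the NOTE2 idea above (only refresh when the installed major version differs) might look like the following; it is not part of the PR and assumes node is already on the PATH.

# Hypothetical sketch only: implements the "node -v check" from NOTE2.
- name: Check the installed node version
  ansible.builtin.command: node -v
  register: node_version_check
  changed_when: false
  failed_when: false

- name: Refresh nodejs only when the installed major version differs
  become: true
  ansible.builtin.command: "snap refresh node --channel={{ node_version }}/stable"
  when: not node_version_check.stdout.startswith('v' ~ node_version ~ '.')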


- name: install javascript dependencies with npm
become: true
3 changes: 3 additions & 0 deletions roles/passenger/tasks/main.yml
@@ -54,6 +54,9 @@
   become: true
   ansible.builtin.apt:
     update_cache: true
+  # updating the cache fails on bionic because postgres no longer publishes a release for it;
+  # only update the cache on newer VMs and skip it on older ones
+  when: ansible_distribution_version != "18.04"

# Nginx and passenger installation.
- name: Install Nginx and Passenger.
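An alternative to pinning the exact bionic version string in the when condition above would be Ansible's version test, so the cache update is skipped on anything older than a chosen release. A sketch, not part of the PR:

# Hypothetical sketch only: skip the cache update on any release older than 20.04.
- name: Update apt cache on releases that still receive postgres packages
  become: true
  ansible.builtin.apt:
    update_cache: true
  when: ansible_distribution_version is version('20.04', '>=')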