This is a project used by the Ministry of Justice UK and its agencies: https://intranet.justice.gov.uk/

N.B. `README.md` is located in `.github/`, the preferred location for a clean repository.
The application uses Docker. This repository provides two separate local test environments:

- Docker Compose
- Kubernetes

Where Docker Compose provides a pre-production environment to develop features and apply upgrades, Kubernetes allows us to test and debug our deployments to the Cloud Platform.
In a terminal, move to the directory where you want to install the application, then run:

```
git clone https://github.com/ministryofjustice/intranet.git
```

Change directories:

```
cd intranet
```

Next, depending on the environment you would like to launch, do one of the following.
This environment has been set up to develop and improve the application.

The following make command will get you up and running. It creates the environment, starts all services and opens a command prompt on the container that houses our PHP code (the service is called `php-fpm`):

```
make
```
During the `make` process, the Dory proxy will attempt to install. You will be guided through an installation, if needed.
You will have five services running with different access points. They are:

**Nginx**

http://intranet.docker/

**PHP-FPM**

```
make bash
```
On first use, the application will need initializing with the following command:

```
composer install
```
**Node**

This service watches and compiles our assets; there is no need to access it. The output of this service is available on STDOUT.

When working with JS files in the `src` directory it can be useful to develop from inside the node container. Using a devcontainer will allow the editor to have access to the `node_modules` directory, which is good for IntelliSense and type-safety.
When using a devcontainer, first start the required services with `make` and then open the project in the devcontainer.
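A minimal sketch of such a devcontainer configuration (the service name `node` and the paths here are assumptions for illustration, not the project's actual config):

```
{
  "name": "intranet-node",
  "dockerComposeFile": "../docker-compose.yml",
  "service": "node",
  "workspaceFolder": "/home/node/app"
}
```

Pointing `dockerComposeFile` at the existing Compose file means the devcontainer attaches to the already-running node service rather than building a separate image.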
Be sure to keep an eye on the node container's terminal output for any Laravel Mix errors.
The folder `src/components` is used when it makes sense to keep a group of SCSS/JS/PHP files together. The folder `src/components/post-meta` is an example where PHP is required to register fields in the backend, and JS is used to register fields in the frontend.
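As an illustration (the file names here are hypothetical), such a component folder might look like:

```
src/components/post-meta/
├── post-meta.php   # registers the meta fields in the backend
├── post-meta.js    # registers the fields in the frontend editor
└── post-meta.scss  # styles scoped to the component
```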
**MariaDB**

Internally accessed by PHP-FPM on port 3306

**PHPMyAdmin**

http://intranet.docker:9191/

Login details are located in `docker-compose.yml`
There is no need to install application software on your computer. All required software is built within the services, and all services are ephemeral.

There are multiple volume mounts created in this project and shared across the services. This approach speeds up and optimises the development experience.
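As a hedged sketch of the idea (the paths and mounts below are illustrative assumptions, not the project's actual `docker-compose.yml`), a shared volume mount looks like:

```
services:
  php-fpm:
    volumes:
      - ./public:/var/www/html/public   # source files shared into the PHP container
  nginx:
    volumes:
      - ./public:/var/www/html/public   # the same files, served directly by nginx
```

Because both services mount the same host directory, an edit on the host is visible to PHP and Nginx immediately, with no image rebuild.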
This environment is useful for testing Kubernetes deployment scripts. The local setup attempts to get as close to development on the Cloud Platform as possible, with a production-first approach.
- Docker
- kubectl
- Kind
- Hosts file update; you could run:

```
sudo nano /etc/hosts
```

... and, on a new line, add:

```
127.0.0.1 intranet.local
```
Once the above requirements have been met, we are able to launch our application by executing the following make command:

```
make local-kube
```
The following will take place:

- If running, the Dory proxy is stopped
- A Kind cluster is created with configuration from `deploy/config/local/cluster.yml`
- The cluster Ingress is configured
- Nginx and PHP-FPM images are built
- Images are transferred to the Kind Control Plane
- Local deployment is applied using `kubectl apply -f deploy/local`
- Pods are verified using `kubectl get pods -w`
Access the running application here: http://intranet.local/
In the MariaDB YAML file you will notice a persistent volume claim. This will assist you in keeping application data, preventing you from having to reinstall WordPress every time you stop and start the service.
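As a hedged sketch of what such a claim looks like (the name and size are illustrative; see the actual manifest in `deploy/local`):

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

The claim survives pod restarts, so the database files it backs are retained when services stop and start.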
Most secrets are managed via GitHub settings.

The intention was for WordPress keys and salts to be auto-generated before the initial GHA build stage. After extensive testing, the result wasn't as desired: dynamically generated secrets could not be hidden in the log outputs. Because of this, secrets are managed in settings.
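For background, values of this kind can be generated locally with `openssl`; a minimal sketch (the key names are a subset of WordPress's, and this is not how the project's real secrets are produced):

```shell
# Sketch: generate WordPress-style keys/salts locally.
# gen_salt emits 48 random bytes, base64-encoded (a 64-character value).
gen_salt() {
  openssl rand -base64 48
}

for key in AUTH_KEY SECURE_AUTH_KEY LOGGED_IN_KEY NONCE_KEY; do
  printf '%s=%s\n' "$key" "$(gen_salt)"
done
```

Values produced this way can then be pasted into GitHub settings by hand, keeping them out of build logs.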
```
# Make interaction a little easier; we can create repeatable
# variables. Our namespace is the same name as the app, defined
# in ./deploy/development/deployment.tpl
#
# If interacting with a different stack, change the NSP var.
# For example:
# - production, change to 'intranet-prod'

# Set some vars; gets the first available pod
NSP="intranet-dev"; \
POD=$(kubectl -n $NSP get pod -l app=$NSP -o jsonpath="{.items[0].metadata.name}");
```

```
# Local interaction is a little different:
# - local, change NSP to `default` and app to `intranet-local`
NSP="default"; \
POD=$(kubectl -n $NSP get pod -l app=intranet-local -o jsonpath="{.items[0].metadata.name}");
```

After setting the above variables (via copy -> paste -> execute), the following blocks of commands will also work using copy -> paste -> execute.
```
# list available pods and their status for the namespace
kubectl get pods -n $NSP

# to watch for updates, add the -w flag
kubectl get pods -w -n $NSP

# describe the first available pod
kubectl describe pods -n $NSP

# monitor the system log of the first pod container
kubectl logs -f $POD -n $NSP

# monitor the system log of the fpm container
kubectl logs -f $POD -n $NSP fpm

# open an interactive shell on an active pod
kubectl exec -it $POD -n $NSP -- ash

# open an interactive shell on the FPM container
kubectl exec -it $POD -n $NSP -c fpm -- ash
```
Create a bucket with the following settings:

- Region: `eu-west-2`
- Object Ownership:
  - ACLs enabled
  - Bucket owner preferred
- Block all public access:
  - Block public access to buckets and objects granted through new access control lists (ACLs): NO
  - Block public access to buckets and objects granted through any access control lists (ACLs): YES
  - Block public access to buckets and objects granted through new public bucket or access point policies: YES
  - Block public and cross-account access to buckets and objects through any public bucket or access point policies: YES
Create a deployment with the following settings:

- Cache key and origin requests
  - Legacy cache settings
    - Query strings: All
To restrict access to the Amazon S3 bucket, follow the guide to implement origin access control (OAC): https://repost.aws/knowledge-center/cloudfront-access-to-amazon-s3
To use a user's keys, create a user with a policy similar to the following:

```
{
    "Sid": "s3-bucket-access",
    "Effect": "Allow",
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::bucket-name"
}
```
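Note that bucket-level and object-level permissions take different resource ARNs in S3; if object actions (such as `s3:PutObject`) are needed, the statement can cover both. A sketch using the same placeholder bucket name:

```
{
    "Sid": "s3-bucket-and-object-access",
    "Effect": "Allow",
    "Action": "s3:*",
    "Resource": [
        "arn:aws:s3:::bucket-name",
        "arn:aws:s3:::bucket-name/*"
    ]
}
```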
An access key can then be used for testing actions related to the S3 bucket; use the env vars:

- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`

When deployed, server roles should be used instead of user keys.
To verify that S3 & CloudFront are working correctly:

- Go to the WP Offload Media Lite settings page. There should be green checks for the Storage & Delivery settings.
- Upload an image via the Media Library.
- The image should be shown correctly in the Media Library.
- The img source domain should be CloudFront.
- Directly trying to access an image via the S3 bucket URL should return an access denied message.