Architectural Differences
- Custom Python application that listens for incoming gRPC (HTTP/2) requests and serves a similar role to the TFRS backend.
- Will eventually be supplemented by Celery workers.
- Stateless, so we can run multiple pods if necessary for performance or HA.
- Runs Node.js. Similar to the TFRS frontend, but not served statically.
Envoy is the main entry point to the application. It redirects all incoming requests to the appropriate resources per the rules defined in `dockerfiles/envoy/envoy.yaml`, and it should be the destination for all incoming HTTP requests to the application after SSL termination. The config I supplied for Skaffold listens on port `10000`, but this is arbitrary and can be changed. There is also an admin listener on port `9901` that provides diagnostic information; it does not need public routing within Openshift.
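For orientation, here is a minimal sketch of the listener and admin sections such a config might contain. This is illustrative only; the authoritative version is `dockerfiles/envoy/envoy.yaml`:

```yaml
# Sketch only; the real configuration lives in dockerfiles/envoy/envoy.yaml.
admin:
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }  # diagnostics; keep internal
static_resources:
  listeners:
    - name: main_ingress
      address:
        socket_address: { address: 0.0.0.0, port_value: 10000 }  # arbitrary; can be changed
      # filter_chains (HTTP connection manager, routes, gRPC-web filter) go here
```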
- Requests matching the prefix `/sockjs-node/` are redirected to the frontend container. This is for hot-reload support and should be removed in the Openshift environments. We will also need to route websocket connections for notifications (but this is not built just yet).
- Requests matching the prefix `/grpc/` are redirected to the backend container, with the prefix rewritten to `/` for the benefit of the code running on the backend. The gRPC-web protocol (HTTP/1.1 encoded, or HTTP/2 if supported by the browser) is translated to normal gRPC (strict HTTP/2); Envoy has a module that handles this transparently, and it is enabled by the config (sketched below).
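As a sketch of how those two rules and the gRPC-web filter might look inside Envoy's HTTP connection manager (`frontend_service` is an assumed cluster name; `python_transaction_service` is covered below):

```yaml
# Illustrative route rules and filters; names other than python_transaction_service are assumed.
route_config:
  virtual_hosts:
    - name: app
      domains: ["*"]
      routes:
        - match: { prefix: "/sockjs-node/" }   # hot-reload only; remove in Openshift
          route: { cluster: frontend_service }
        - match: { prefix: "/grpc/" }
          route:
            cluster: python_transaction_service
            prefix_rewrite: "/"                # backend code never sees the /grpc/ prefix
        - match: { prefix: "/" }
          route: { cluster: frontend_service }
http_filters:
  - name: envoy.filters.http.grpc_web          # translates gRPC-web to strict HTTP/2 gRPC
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb
  - name: envoy.filters.http.router
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```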
Routes are sent to the DNS name as resolved by Openshift for the target service. Backend service requests, for example, are sent to an Envoy cluster named `python_transaction_service`, which uses Openshift DNS to find containers with the DNS name `python-backend`. Envoy will load balance across them if there is more than one.
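A hedged sketch of what the corresponding cluster definition might look like; `STRICT_DNS` makes Envoy re-resolve the service name so each pod becomes an endpoint, and the port number here is an assumption:

```yaml
# Sketch of the backend cluster; HTTP/2 is forced upstream because gRPC requires it.
clusters:
  - name: python_transaction_service
    type: STRICT_DNS                 # resolve Openshift DNS; one endpoint per pod
    lb_policy: ROUND_ROBIN           # spread requests when more than one pod answers
    typed_extension_protocol_options:
      envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
        "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
        explicit_http_config:
          http2_protocol_options: {}
    load_assignment:
      cluster_name: python_transaction_service
      endpoints:
        - lb_endpoints:
            - endpoint:
                address:
                  socket_address: { address: python-backend, port_value: 50051 }  # port assumed
```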
- SMTPLogger - For local development only. Takes the place of an SMTP server and logs messages to the console, for debugging any emails we send.
- Keycloak - For local development only. Provided by the Openshift environment but not by local Kubernetes.
These services are not yet part of either the local or Skaffold development setups, but they will be soon. Their configurations are essentially the same as they were in TFRS and could be deployed now.
- RabbitMQ - No public routing (see the Service sketch below). Incoming AMQP connections from the frontend, backend, and (future) Celery workers.
- Minio - Needs public routing (either direct, or via Envoy if you want everything under a single subdomain).
- ClamAV - No public routing.
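In Kubernetes terms, "no public routing" just means a ClusterIP Service with no Route or Ingress attached. A minimal sketch for RabbitMQ, where the label is an assumption and 5672 is the standard AMQP port:

```yaml
# Sketch: a plain ClusterIP Service keeps RabbitMQ reachable only inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  type: ClusterIP        # no Route/Ingress attached, so no public routing
  selector:
    app: rabbitmq        # assumed pod label
  ports:
    - name: amqp
      port: 5672         # standard AMQP port
      targetPort: 5672
```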
I think using the `k8s-*` files as a starting point for Openshift deployment makes a lot of sense. We might even consider parameterizing them with a tool like ytt to improve manageability (see the sketch after the list below), but there are some key differences:
- Envoy doesn't need Keycloak rules in Openshift, since Keycloak is a hosted service there. We could consider parameterizing `envoy.yaml` to handle this.
- In local development, we care about hot-reloading the frontend since we need it to respond quickly to changes, so we need to change our frontend build scripts to operate differently when running in Openshift. However, I think we should serve our content using `node` directly, and not with an s2i-built image, because we need the frontend to respond to websocket requests for notifications. (Doing it this way will allow us to use a single deployment for both the web content and notifications, saving us one deployment.)
- We run the migrations (`alembic upgrade head`) as part of the entry point in Skaffold (see the Dockerfile). In Openshift this could instead be part of the deployment process, since it shouldn't run on every startup; one possible shape is the Job sketch below.
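To make the ytt idea concrete: a template is plain YAML annotated with `#@` directives that pull values from a separate data-values file. Everything below (file and value names) is illustrative:

```yaml
#! Sketch of a ytt-annotated Deployment; the value name is an assumption.
#@ load("@ytt:data", "data")
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-backend
spec:
  #! e.g. 1 replica locally, more in Openshift, supplied per environment
  replicas: #@ data.values.backend_replicas
```

The per-environment values would live in a small file marked `#@data/values`, and the rendered output can be piped straight to `oc apply -f -`.

And for the migration point, one way to move `alembic upgrade head` out of the container entry point is a one-off Job applied during deployment; the image reference here is an assumption:

```yaml
# Sketch: run migrations once per deploy rather than on every container start.
apiVersion: batch/v1
kind: Job
metadata:
  name: backend-migrate
spec:
  backoffLimit: 2                         # retry a couple of times on transient DB errors
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: python-backend:latest    # assumed image reference
          command: ["alembic", "upgrade", "head"]
```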