diff --git a/api/index.html b/api/index.html
index 7ac0897..50ef2e0 100644
--- a/api/index.html
+++ b/api/index.html
@@ -434,6 +434,15 @@
sleep client
Test individual service endpoints¶
We assume that you have the excellent jq utility already installed.
This can also be performed directly from the UI.
Test one of the visits-service
endpoints:
Now that we have some familiarity with some of the API endpoints that make up this application, let's turn our attention to configuring a small aspect of resilience: timeouts.
diff --git a/ingress/index.html b/ingress/index.html
index 8b34032..3f151eb 100644
--- a/ingress/index.html
+++ b/ingress/index.html
@@ -568,44 +568,175 @@
+The below configuration creates a listener on the ingress gateway for HTTP traffic on port 80.
+Apply the gateway configuration to your cluster:
-The above configuration creates a listener on the ingress gateway for HTTP traffic on port 80.
Since no routing has been configured yet for the gateway, a request to the gateway should return an HTTP 404 response:
The original Spring Cloud Gateway routing rules were replaced and are now captured with a standard Istio VirtualService CRD in manifests/ingress/routes.yaml
.
Apply the routing rules for the gateway:
-
+The original Spring Cloud Gateway routing rules were replaced and are now captured with a standard Istio VirtualService in manifests/ingress/routes.yaml
:
routes.yaml
configures routing for the Istio ingress gateway (which replaces Spring Cloud Gateway) to the application's API endpoints.
It exposes endpoints to each of the services, and in addition, routes requests with the /api/gateway
prefix to the petclinic-frontend
application. In the original version, the petclinic-frontend application and the gateway "proper" were bundled together as a single microservice.
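Here is an excerpt showing the shape of those rules (two of the routes; the full manifest is in manifests/ingress/routes.yaml):
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: petclinic-routes
spec:
  hosts:
  - "*"
  gateways:
  - main-gateway
  http:
  - match:
    - uri:
        prefix: "/api/customer/"
    rewrite:
      uri: "/"
    route:
    - destination:
        host: customers-service.default.svc.cluster.local
        port:
          number: 8080
  - match:
    - uri:
        prefix: "/api/gateway"
    route:
    - destination:
        host: petclinic-frontend.default.svc.cluster.local
        port:
          number: 8080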
Apply the routing rules for the gateway:
+With the application deployed, and ingress configured, we can finally view the application's user interface.
+With the application deployed and ingress configured, we can finally view the application's user interface.
To see the running PetClinic application, open a browser tab and visit http://$LB_IP/.
You should see a home page. Navigate to the Vets page, then the Pet Owners page, and finally, drill down to a specific pet owner, and otherwise get acquainted with the UI.
Prior to Istio, the common solution in the Spring ecosystem to issues of service discovery, resilience, and load balancing was Spring Cloud. Spring Cloud consists of multiple projects that provide dependencies that developers add to their applications to help them deal with issues of client-side load-balancing, retries, circuit-breaking, service discovery, and so on.
In spring-petclinic-istio
, those dependencies have been removed. What remains as dependencies inside each service are what you'd expect to find:
Spring Cloud was removed. We can replace this configuration with an Istio Custom Resource.
The file timeouts.yaml
configures the equivalent 4s timeout on requests to the visits
service, replacing the previous Resilience4j-based implementation.
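For reference, the VirtualService in that file looks like this:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: visits
spec:
  hosts:
  - visits-service.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: visits-service.default.svc.cluster.local
    timeout: 4s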
Apply the timeout configuration to your cluster:
-kubectl apply -f manifests/config/timeouts.yaml
+
The fallback logic in PetClinicController.getOwnerDetails
was retrofitted to detect the Gateway Timeout (504) response code instead of using a Resilience4j API.
To test this feature, the environment variable DELAY_MILLIS was introduced into the visits service to insert a delay when fetching visits.
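For illustration, setting the delay in the visits deployment's container spec looks roughly like this (a hypothetical excerpt, not the literal manifest):
env:
- name: DELAY_MILLIS
  value: "5000"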
@@ -515,29 +544,29 @@ Test resilience and fallback
-kubectl exec deploy/sleep -- curl -s visits-service:8080/pets/visits?petId=8 | jq
+
Observe the call succeed and return a list of visits for this particular pet.
-
Call the petclinic-frontend
endpoint, and note that for each pet, we see a list of visits:
-kubectl exec deploy/sleep -- curl -s petclinic-frontend:8080/api/gateway/owners/6 | jq
+
-
Edit the deployment manifest for the visits-service
so that the environment variable DELAY_MILLIS
is set to the value "5000" (which is 5 seconds). One way to do this is to edit the file with (then save and exit):
-kubectl edit deploy visits-v1
+
-
Once the new visits-service
pod reaches Ready status, make the same call again:
-kubectl exec deploy/sleep -- curl -v visits-service:8080/pets/visits?petId=8 | jq
+
Observe the 504 (Gateway timeout) response this time around (because it exceeds the 4-second timeout).
-
Call the petclinic-frontend
endpoint once more, and note that for each pet, the list of visits is empty:
-kubectl exec deploy/sleep -- curl -s petclinic-frontend:8080/api/gateway/owners/6 | jq
+
That is, the call succeeds, the timeout is caught, and the fallback empty list of visits is returned in its place.
Tail the logs of petclinic-frontend
and observe a log message indicating the fallback was triggered.
diff --git a/search/search_index.json b/search/search_index.json
index 28eb88b..959ee65 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Introduction","text":"A great deal of \"cruft\" accumulates inside many files in spring-petclinic-cloud
: configuration for service discovery, load balancing, routing, retries, resilience, and so on.
When you move to Istio, you get separation of concerns. It's ironic that the Spring framework's raison d'\u00eatre was separation of concerns, but its focus is inside a monolithic application, not between microservices. When you move to cloud-native applications, you end up with a tangle of concerns that Istio helps you untangle.
And, little by little, our apps become sane again. It reminds me of one of Antoine de Saint-Exup\u00e9ry's famous quotes:
Perfection is finally attained not when there is no longer anything to add, but when there is no longer anything to take away
The following instructions will walk you through deploying spring-petclinic-istio
either using a local Kubernetes cluster or a remote, cloud-based cluster.
After the application is deployed, I walk you through some aspects of the application and additional benefits gained from running on the Istio platform: orthogonal configuration of traffic management and resilience concerns, stronger security and workload identity, and observability.
Let's get started.
"},{"location":"api/","title":"API Endpoints","text":"Below, we demonstrate calling endpoints on the application in either of two ways:
- Internally from within the Kubernetes cluster
-
Through the \"front door\", via the ingress gateway
The environment variable LB_IP
captures the public IP address of the load balancer fronting the ingress gateway. We can access the service endpoints through that IP address.
"},{"location":"api/#deploy-the-sleep-client","title":"Deploy the sleep
client","text":"We make use of Istio's sleep sample application to facilitate the task of making calls to workloads from inside the cluster.
The sleep
deployment is a blank client Pod that can be used to send direct calls to specific microservices from within the Kubernetes cluster.
Deploy sleep
to your cluster:
kubectl apply -f manifests/sleep.yaml\n
Wait for the sleep pod to be ready (2/2 containers).
"},{"location":"api/#test-individual-service-endpoints","title":"Test individual service endpoints","text":""},{"location":"api/#call-the-vets-controller-endpoint","title":"Call the \"Vets\" controller endpoint","text":"InternalExternal kubectl exec deploy/sleep -- curl -s vets-service:8080/vets | jq\n
curl -s http://$LB_IP/api/vet/vets | jq\n
"},{"location":"api/#customers-service-endpoints","title":"Customers service endpoints","text":"Here are a couple of customers-service
endpoints to test:
InternalExternal kubectl exec deploy/sleep -- curl -s customers-service:8080/owners | jq\n
kubectl exec deploy/sleep -- curl -s customers-service:8080/owners/1/pets/1 | jq\n
curl -s http://$LB_IP/api/customer/owners | jq\n
curl -s http://$LB_IP/api/customer/owners/1/pets/1 | jq\n
Give the owner George Franklin a new pet, Sir Hiss (a snake):
InternalExternal kubectl exec deploy/sleep -- curl -s \\\n -X POST -H 'Content-Type: application/json' \\\n customers-service:8080/owners/1/pets \\\n -d '{ \"name\": \"Sir Hiss\", \"typeId\": 4, \"birthDate\": \"2020-01-01\" }'\n
curl -X POST -H 'Content-Type: application/json' \\\n http://$LB_IP/api/customer/owners/1/pets \\\n -d '{ \"name\": \"Sir Hiss\", \"typeId\": 4, \"birthDate\": \"2020-01-01\" }'\n
This can also be performed directly from the UI.
"},{"location":"api/#the-visits-service","title":"The Visits service","text":"Test one of the visits-service
endpoints:
InternalExternal kubectl exec deploy/sleep -- curl -s visits-service:8080/pets/visits?petId=8 | jq\n
curl -s http://$LB_IP/api/visit/pets/visits?petId=8 | jq\n
"},{"location":"api/#petclinic-frontend","title":"PetClinic Frontend","text":"Call petclinic-frontend
endpoint that calls both the customers and visits services:
InternalExternal kubectl exec deploy/sleep -- curl -s petclinic-frontend:8080/api/gateway/owners/6 | jq\n
curl -s http://$LB_IP/api/gateway/owners/6 | jq\n
"},{"location":"deploy/","title":"Build and Deploy PetClinic","text":""},{"location":"deploy/#deploy-each-microservices-backing-database","title":"Deploy each microservice's backing database","text":"Deployment decisions:
- We use MySQL, which can be installed with Helm; its charts are in the Bitnami repository.
- We deploy a separate database statefulset for each service.
- Inside each statefulset we name the database \"service_instance_db\".
- Apps use the root username \"root\".
- The helm installation will generate a root user password in a secret.
- The applications reference the secret name to get at the database credentials.
"},{"location":"deploy/#preparatory-steps","title":"Preparatory steps","text":"We assume you already have helm installed.
-
Add the helm repository:
helm repo add bitnami https://charts.bitnami.com/bitnami\n
-
Update it:
helm repo update\n
"},{"location":"deploy/#deploy-the-databases","title":"Deploy the databases","text":"Deploy the databases with a helm install
command, one for each app/service:
-
Vets:
helm install vets-db-mysql bitnami/mysql --set auth.database=service_instance_db\n
-
Visits:
helm install visits-db-mysql bitnami/mysql --set auth.database=service_instance_db\n
-
Customers:
helm install customers-db-mysql bitnami/mysql --set auth.database=service_instance_db\n
The databases should be up after ~ 1-2 minutes.
Wait for the pods to be ready (2/2 containers).
"},{"location":"deploy/#build-the-apps-create-the-docker-images-push-them-to-the-local-registry","title":"Build the apps, create the docker images, push them to the local registry","text":"We assume you already have maven installed locally.
-
Compile the apps and run the tests:
mvn clean package\n
-
Build the images
mvn spring-boot:build-image\n
-
Publish the images
./push-images.sh\n
"},{"location":"deploy/#deploy-the-apps","title":"Deploy the apps","text":"The deployment manifests are located in manifests/deploy
.
The services are vets
, visits
, customers
, and petclinic-frontend
. For each service we create a Kubernetes ServiceAccount, a Deployment, and a ClusterIP service.
Apply the deployment manifests:
cat manifests/deploy/*.yaml | envsubst | kubectl apply -f -\n
The manifests reference the image registry environment variable, and so are passed through envsubst
for resolution before being applied to the Kubernetes cluster.
Wait for the pods to be ready (2/2 containers).
Here is a simple diagnostic command that tails the logs of the customers service pod, showing that the Spring Boot application has come up and is listening on port 8080.
kubectl logs --follow svc/customers-service\n
"},{"location":"deploy/#test-database-connectivity","title":"Test database connectivity","text":"The below instructions are taken from the output from the prior helm install
command.
Connect directly to the vets-db-mysql
database:
-
Obtain the root password from the Kubernetes secret:
bash shellfish shell MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default \\\n vets-db-mysql -o jsonpath=\"{.data.mysql-root-password}\" | base64 -d)\n
set MYSQL_ROOT_PASSWORD $(kubectl get secret --namespace default \\\n vets-db-mysql -o jsonpath=\"{.data.mysql-root-password}\" | base64 -d)\n
-
Create and shell into a mysql client pod:
kubectl run vets-db-mysql-client \\\n --rm --tty -i --restart='Never' \\\n --image docker.io/bitnami/mysql:8.0.36-debian-11-r2 \\\n --namespace default \\\n --env MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD \\\n --command -- bash\n
-
Use the mysql
client to connect to the database:
mysql -h vets-db-mysql.default.svc.cluster.local -uroot -p\"$MYSQL_ROOT_PASSWORD\"\n
At the mysql prompt, select the database, list the tables, and query vet records:
use service_instance_db;\n
show tables;\n
select * from vets;\n
Exit the mysql prompt with \\q
, then exit the pod with exit
.
One can similarly connect to and inspect the customers-db-mysql
and visits-db-mysql
databases.
"},{"location":"deploy/#summary","title":"Summary","text":"At this point you should have all applications deployed and running, connected to their respective databases.
But we cannot access the application's UI until we configure ingress, which is our next topic.
"},{"location":"ingress/","title":"Configure Ingress","text":"The original project made use of the Spring Cloud Gateway project to configure ingress and routing.
Ingress is Istio's bread and butter. Envoy provides those capabilities. And so the dependency was removed and replaced with a standard Istio Ingress Gateway.
The Istio installation includes the Ingress Gateway component. You should be able to see the deployment in the istio-system
namespace with:
kubectl get deploy -n istio-system\n
Ingress is configured with Istio in two parts: the gateway configuration proper, and the configuration to route requests to backing services.
"},{"location":"ingress/#configure-the-gateway","title":"Configure the Gateway","text":"---\napiVersion: networking.istio.io/v1beta1\nkind: Gateway\nmetadata:\n name: main-gateway\nspec:\n selector:\n istio: ingressgateway\n servers:\n - port:\n number: 80\n name: http\n protocol: HTTP\n hosts:\n - \"*\"\n
kubectl apply -f manifests/ingress/gateway.yaml\n
The above configuration creates a listener on the ingress gateway for HTTP traffic on port 80.
Since no routing has been configured yet for the gateway, a request to the gateway should return an HTTP 404 response:
curl -v http://$LB_IP/\n
"},{"location":"ingress/#configure-routing","title":"Configure routing","text":"The original Spring Cloud Gateway routing rules were replaced and are now captured with a standard Istio VirtualService CRD in manifests/ingress/routes.yaml
.
Apply the routing rules for the gateway:
kubectl apply -f manifests/ingress/routes.yaml\n
routes.yaml
configures routing for the Istio ingress gateway (which replaces Spring Cloud Gateway) to the application's API endpoints.
It exposes endpoints to each of the services, and in addition, routes requests with the /api/gateway
prefix to the petclinic-frontend
application. In the original version, the petclinic-frontend application and the gateway \"proper\" were bundled together as a single microservice.
"},{"location":"ingress/#visit-the-app","title":"Visit the app","text":"With the application deployed, and ingress configured, we can finally view the application's user interface.
To see the running PetClinic application, open a browser tab and visit http://$LB_IP/.
You should see a home page. Navigate to the Vets page, then the Pet Owners page, and finally, drill down to a specific pet owner, and otherwise get acquainted with the UI.
"},{"location":"ingress/#analysis","title":"Analysis","text":"Prior to Istio, the common solution in the Spring ecosystem to issues of service discovery, resilience, load balancing was Spring Cloud. Spring Cloud consists of multiple projects that provide dependencies that developers add to their applications to help them deal with issues of client-side load-balancing, retries, circuit-breaking, service discovery and so on.
In spring-petclinic-istio
, those dependencies have been removed. What remains as dependencies inside each service are what you'd expect to find:
- Spring boot and actuator are the foundation of modern Spring applications.
- Spring Data JPA and the mysql connector for database access.
- Micrometer for exposing application metrics via a Prometheus endpoint.
- Micrometer-tracing for propagating trace headers through these applications.
"},{"location":"observability/","title":"Observability","text":""},{"location":"observability/#observe-distributed-traces","title":"Observe Distributed Traces","text":"All boot apps are configured to propagate trace headers using micrometer-tracing, per the Istio documentation.
See the application.yaml
resource files and the property management.tracing.baggage.remote-fields
which configures the fields to propagate.
To make testing this easier, Istio is configured with 100% trace sampling.
"},{"location":"observability/#steps","title":"Steps","text":" -
Navigate to the base directory of your Istio distribution:
cd istio-1.20.2\n
-
Deploy Istio observability samples:
kubectl apply -f samples/addons/prometheus.yaml\nkubectl apply -f samples/addons/jaeger.yaml\nkubectl apply -f samples/addons/kiali.yaml\nkubectl apply -f samples/addons/grafana.yaml\n
Wait for the observability pods to be ready:
kubectl get pod -n istio-system\n
-
Call the petclinic-frontend
endpoint that calls both the customers and visits services, perhaps two or three times so that the requests get sampled:
curl -s http://$LB_IP/api/gateway/owners/6 | jq\n
-
Start the jaeger dashboard:
istioctl dashboard jaeger\n
-
In Jaeger, search for traces involving the services petclinic-frontend, customers, and visits.
You should see new traces, with six spans, showing the full end-to-end request-response flow across all three services.
Close the jaeger dashboard.
"},{"location":"observability/#kiali","title":"Kiali","text":"The Kiali dashboard can likewise be used to display visualizations of such end-to-end flows.
-
Send a light load of requests against your application.
We provide a simple siege script to send requests through to the petclinic-frontend
endpoint that aggregates responses from both customers
and visits
services.
./siege.sh\n
-
Launch the Kiali dashboard:
istioctl dashboard kiali\n
Select the Graph view and the default
namespace. The flow of requests through the application's call graph will be rendered.
"},{"location":"observability/#exposing-metrics","title":"Exposing metrics","text":"Istio has built-in support for Prometheus as a mechanism for metrics collection.
Each Spring Boot application is configured with a micrometer dependency to expose a scrape endpoint for Prometheus to collect metrics.
Call the scrape endpoint and inspect the metrics exposed directly by the Spring Boot application:
kubectl exec deploy/customers-v1 -c istio-proxy -- curl -s localhost:8080/actuator/prometheus\n
Separately, Envoy collects a variety of metrics, often referred to as RED metrics (Requests, Errors, Durations).
Inspect the metrics collected and exposed by the Envoy sidecar:
kubectl exec deploy/customers-v1 -c istio-proxy -- curl -s localhost:15090/stats/prometheus\n
One common metric to note is istio_requests_total
kubectl exec deploy/customers-v1 -c istio-proxy -- curl -s localhost:15090/stats/prometheus | grep istio_requests_total\n
Both sets of metrics are aggregated (merged) and exposed on port 15020:
kubectl exec deploy/customers-v1 -c istio-proxy -- curl -s localhost:15020/stats/prometheus\n
For this to work, Envoy must be given the URL (endpoint) where the application's metrics are exposed.
This is done with a set of annotations on the deployment.
See the Istio documentation for more information.
"},{"location":"observability/#viewing-the-istio-grafana-metrics-dashboards","title":"Viewing the Istio Grafana metrics dashboards","text":"Launch the grafana dashboard, while maintaining a request load against the application.
./siege.sh\n
Then:
istioctl dash grafana\n
Review the Istio Service Dashboards for the three services petclinic-frontend
, customers
, and visits
.
You can also load the legacy dashboard that came with the petclinic application. You'll find the Grafana dashboard definition (a json file) here. Import the dashboard and then view it.
This dashboard has a couple of now-redundant panels showing the request volume and latencies, both of which are now subsumed by standard Istio dashboards.
But it also shows business metrics exposed by the application, such as the number of owners, pets, and visits created or updated.
"},{"location":"resilience/","title":"Resilience","text":""},{"location":"resilience/#test-resilience-and-fallback","title":"Test resilience and fallback","text":"The original Spring Cloud version of PetClinic used Resilience4j to configure calls to the visit service with a timeout of 4 seconds, and a fallback to return an empty list of visits in the event that the request to get visits timed out (took longer).
Spring Cloud was removed. We can replace this configuration with an Istio Custom Resource.
The file timeouts.yaml
configures the equivalent 4s timeout on requests to the visits
service, replacing the previous Resilience4j-based implementation.
Apply the timeout configuration to your cluster:
kubectl apply -f manifests/config/timeouts.yaml\n
The fallback logic in PetClinicController.getOwnerDetails
was retrofitted to detect the Gateway Timeout (504) response code instead of using a Resilience4j API.
To test this feature, the environment variable DELAY_MILLIS was introduced into the visits service to insert a delay when fetching visits.
Here is how to test the behavior:
-
Call visits-service
directly:
kubectl exec deploy/sleep -- curl -s visits-service:8080/pets/visits?petId=8 | jq\n
Observe the call succeed and return a list of visits for this particular pet.
-
Call the petclinic-frontend
endpoint, and note that for each pet, we see a list of visits:
kubectl exec deploy/sleep -- curl -s petclinic-frontend:8080/api/gateway/owners/6 | jq\n
-
Edit the deployment manifest for the visits-service
so that the environment variable DELAY_MILLIS
is set to the value \"5000\" (which is 5 seconds). One way to do this is to edit the file with (then save and exit):
kubectl edit deploy visits-v1\n
-
Once the new visits-service
pod reaches Ready status, make the same call again:
kubectl exec deploy/sleep -- curl -v visits-service:8080/pets/visits?petId=8 | jq\n
Observe the 504 (Gateway timeout) response this time around (because it exceeds the 4-second timeout).
-
Call the petclinic-frontend
endpoint once more, and note that for each pet, the list of visits is empty:
kubectl exec deploy/sleep -- curl -s petclinic-frontend:8080/api/gateway/owners/6 | jq\n
That is, the call succeeds, the timeout is caught, and the fallback empty list of visits is returned in its place. Tail the logs of petclinic-frontend
and observe a log message indicating the fallback was triggered.
To restore the original behavior with no delay, edit the visits-v1
deployment again and set the environment variable value to \"0\".
"},{"location":"security/","title":"Security","text":""},{"location":"security/#leverage-workload-identity","title":"Leverage workload identity","text":"Workloads in Istio are assigned a SPIFFE identity.
Authorization policies can be applied that allow or deny access to a service as a function of that identity.
For example, we can restrict access to each database exclusively to its corresponding service, i.e.:
- Only the visits service can access the visits db
- Only the vets service can access the vets db
- Only the customers service can access the customers db
The above policy is specified in the file authorization-policies.yaml
.
"},{"location":"security/#exercise","title":"Exercise","text":" -
Use the previous Test database connectivity instructions to create a client pod and use it to connect to the \"vets\" database. This operation should succeed: you should be able to see the \"service_instance_db\" database, list its tables, and query them.
-
Apply the authorization policies:
kubectl apply -f manifests/config/authorization-policies.yaml\n
-
Attempt once more to create a client pod to connect to the \"vets\" database. This time the operation will fail. That's because only the vets service is now allowed to connect to the database.
-
Also verify that the application itself continues to function because all database queries are performed via its associated service.
"},{"location":"setup/","title":"Setup","text":"Begin by cloning a local copy of the Spring PetClinic Istio repository from GitHub.
"},{"location":"setup/#kubernetes","title":"Kubernetes","text":"Select whether you wish to provision Kubernetes locally or remotely using a cloud provider.
Local SetupRemote Setup On a Mac running Docker Desktop or Rancher Desktop, make sure to give your VM plenty of CPU and memory. 16GB of memory and 6 CPUs seems to work for me.
Deploy a local K3D Kubernetes cluster with a local registry:
k3d cluster create my-istio-cluster \\\n --api-port 6443 \\\n --k3s-arg \"--disable=traefik@server:0\" \\\n --port 80:80@loadbalancer \\\n --registry-create my-cluster-registry:0.0.0.0:5010\n
Above, we:
- Disable the default traefik load balancer and configure local port 80 to instead forward to the \"istio-ingressgateway\" load balancer.
- Create a registry we can push to locally on port 5010 that is accessible from the Kubernetes cluster at \"my-cluster-registry:5000\".
Provision a k8s cluster in the cloud of your choice. For example, on GCP:
gcloud container clusters create my-istio-cluster \\\n --cluster-version latest \\\n --machine-type \"e2-standard-2\" \\\n --num-nodes \"3\" \\\n --network \"default\"\n
"},{"location":"setup/#environment-variables","title":"Environment variables","text":"Use envrc-template.sh
as the basis for configuring environment variables.
Be sure to:
- Set the local variable
local_setup
to either \"true\" or \"false\", depending on your choice of a local or remote cluster. - If using a remote setup, set the value of PUSH_IMAGE_REGISTRY to the value of your image registry URL.
I highly recommend using direnv
, a convenient way of associating setting environment variables with a specific directory.
If you choose to use direnv
, then the variables can be automatically set by renaming the file to .envrc
and running the command direnv allow
.
"},{"location":"setup/#istio","title":"Istio","text":" -
Follow the Istio documentation's instructions to download Istio.
-
After you have added the istioctl
CLI to your PATH, run the following installation command:
istioctl install -f manifests/istio-install-manifest.yaml\n
The above-referenced configuration manifest configures certain facets of the mesh, namely:
- Setting trace sampling at 100%, for ease of obtaining distributed traces
- Deploying sidecars (envoy proxies) not only alongside workloads, but also in front of mysql databases.
Once Istio is installed, feel free to verify the installation with:
istioctl verify-install\n
In the next section, you will work on deploying the microservices to the default
namespace.
As a final step, label the default
namespace for sidecar injection with:
kubectl label ns default istio-injection=enabled\n
"}]}
\ No newline at end of file
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Introduction","text":"A great deal of \"cruft\" accumulates inside many files in spring-petclinic-cloud
: configuration for service discovery, load balancing, routing, retries, resilience, and so on.
When you move to Istio, you get separation of concerns. It's ironic that the Spring framework's raison d'\u00eatre was separation of concerns, but its focus is inside a monolithic application, not between microservices. When you move to cloud-native applications, you end up with a tangle of concerns that Istio helps you untangle.
And, little by little, our apps become sane again. It reminds me of one of Antoine de Saint-Exup\u00e9ry's famous quotes:
Perfection is finally attained not when there is no longer anything to add, but when there is no longer anything to take away
The following instructions will walk you through deploying spring-petclinic-istio
either using a local Kubernetes cluster or a remote, cloud-based cluster.
After the application is deployed, I walk you through some aspects of the application and additional benefits gained from running on the Istio platform: orthogonal configuration of traffic management and resilience concerns, stronger security and workload identity, and observability.
Let's get started.
"},{"location":"api/","title":"API Endpoints","text":"Below, we demonstrate calling endpoints on the application in either of two ways:
- Internally from within the Kubernetes cluster
-
Through the \"front door\", via the ingress gateway
The environment variable LB_IP
captures the public IP address of the load balancer fronting the ingress gateway. We can access the service endpoints through that IP address.
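One way to set it, assuming Istio's default istio-ingressgateway service in the istio-system namespace (adjust the names to your installation; the envrc-template.sh described in the setup section may already handle this):
LB_IP=$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $LB_IP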
"},{"location":"api/#deploy-the-sleep-client","title":"Deploy the sleep
client","text":"We make use of Istio's sleep sample application to facilitate the task of making calls to workloads from inside the cluster.
The sleep
deployment is a blank client Pod that can be used to send direct calls to specific microservices from within the Kubernetes cluster.
Deploy sleep
to your cluster:
kubectl apply -f manifests/sleep.yaml\n
Wait for the sleep pod to be ready (2/2 containers).
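One way to wait, assuming the sample's standard app=sleep label:
kubectl wait --for=condition=Ready pod -l app=sleep --timeout=120s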
"},{"location":"api/#test-individual-service-endpoints","title":"Test individual service endpoints","text":"We assume that you have the excellent jq utility already installed.
"},{"location":"api/#call-the-vets-controller-endpoint","title":"Call the \"Vets\" controller endpoint","text":"InternalExternal kubectl exec deploy/sleep -- curl -s vets-service:8080/vets | jq\n
curl -s http://$LB_IP/api/vet/vets | jq\n
"},{"location":"api/#customers-service-endpoints","title":"Customers service endpoints","text":"Here are a couple of customers-service
endpoints to test:
InternalExternal kubectl exec deploy/sleep -- curl -s customers-service:8080/owners | jq\n
kubectl exec deploy/sleep -- curl -s customers-service:8080/owners/1/pets/1 | jq\n
curl -s http://$LB_IP/api/customer/owners | jq\n
curl -s http://$LB_IP/api/customer/owners/1/pets/1 | jq\n
Give the owner George Franklin a new pet, Sir Hiss (a snake):
InternalExternal kubectl exec deploy/sleep -- curl -s -v \\\n -X POST -H 'Content-Type: application/json' \\\n customers-service:8080/owners/1/pets \\\n -d '{ \"name\": \"Sir Hiss\", \"typeId\": 4, \"birthDate\": \"2020-01-01\" }'\n
curl -v -X POST -H 'Content-Type: application/json' \\\n http://$LB_IP/api/customer/owners/1/pets \\\n -d '{ \"name\": \"Sir Hiss\", \"typeId\": 4, \"birthDate\": \"2020-01-01\" }'\n
This can also be performed directly from the UI.
"},{"location":"api/#the-visits-service","title":"The Visits service","text":"Test one of the visits-service
endpoints:
InternalExternal kubectl exec deploy/sleep -- curl -s visits-service:8080/pets/visits?petId=8 | jq\n
curl -s http://$LB_IP/api/visit/pets/visits?petId=8 | jq\n
"},{"location":"api/#petclinic-frontend","title":"PetClinic Frontend","text":"Call petclinic-frontend
endpoint that calls both the customers and visits services:
InternalExternal kubectl exec deploy/sleep -- curl -s petclinic-frontend:8080/api/gateway/owners/6 | jq\n
curl -s http://$LB_IP/api/gateway/owners/6 | jq\n
"},{"location":"api/#summary","title":"Summary","text":"Now that we have some familiarity with some of the API endpoints that make up this application, let's turn our attention to configuring a small aspect of resilience: timeouts.
"},{"location":"deploy/","title":"Build and Deploy PetClinic","text":""},{"location":"deploy/#deploy-each-microservices-backing-database","title":"Deploy each microservice's backing database","text":"Deployment decisions:
- We use MySQL, which can be installed with Helm; its charts are in the Bitnami repository.
- We deploy a separate database statefulset for each service.
- Inside each statefulset we name the database \"service_instance_db\".
- Apps use the root username \"root\".
- The helm installation will generate a root user password in a secret.
- The applications reference the secret name to get at the database credentials.
"},{"location":"deploy/#preparatory-steps","title":"Preparatory steps","text":"We assume you already have helm installed.
-
Add the helm repository:
helm repo add bitnami https://charts.bitnami.com/bitnami\n
-
Update it:
helm repo update\n
"},{"location":"deploy/#deploy-the-databases","title":"Deploy the databases","text":"Deploy the databases with a helm install
command, one for each app/service:
-
Vets:
helm install vets-db-mysql bitnami/mysql --set auth.database=service_instance_db\n
-
Visits:
helm install visits-db-mysql bitnami/mysql --set auth.database=service_instance_db\n
-
Customers:
helm install customers-db-mysql bitnami/mysql --set auth.database=service_instance_db\n
The databases should be up after ~ 1-2 minutes.
Wait for the pods to be ready (2/2 containers).
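For example, using the Bitnami chart's standard labels (an assumption; a plain kubectl get pods --watch works just as well):
kubectl wait --for=condition=Ready pod -l app.kubernetes.io/name=mysql --timeout=300s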
"},{"location":"deploy/#build-the-apps-create-the-docker-images-push-them-to-the-local-registry","title":"Build the apps, create the docker images, push them to the local registry","text":"We assume you already have maven installed locally.
-
Compile the apps and run the tests:
mvn clean package\n
-
Build the images
mvn spring-boot:build-image\n
-
Publish the images
./push-images.sh\n
"},{"location":"deploy/#deploy-the-apps","title":"Deploy the apps","text":"The deployment manifests are located in manifests/deploy
.
The services are vets
, visits
, customers
, and petclinic-frontend
. For each service we create a Kubernetes ServiceAccount, a Deployment, and a ClusterIP service.
Apply the deployment manifests:
cat manifests/deploy/*.yaml | envsubst | kubectl apply -f -\n
The manifests reference the image registry environment variable, and so are passed through envsubst
for resolution before being applied to the Kubernetes cluster.
Wait for the pods to be ready (2/2 containers).
Here is a simple diagnostic command that tails the logs of the customers service pod, showing that the Spring Boot application has come up and is listening on port 8080.
kubectl logs --follow svc/customers-service\n
"},{"location":"deploy/#test-database-connectivity","title":"Test database connectivity","text":"The below instructions are taken from the output from the prior helm install
command.
Connect directly to the vets-db-mysql
database:
-
Obtain the root password from the Kubernetes secret:
bash shellfish shell MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default \\\n vets-db-mysql -o jsonpath=\"{.data.mysql-root-password}\" | base64 -d)\n
set MYSQL_ROOT_PASSWORD $(kubectl get secret --namespace default \\\n vets-db-mysql -o jsonpath=\"{.data.mysql-root-password}\" | base64 -d)\n
-
Create and shell into a mysql client pod:
kubectl run vets-db-mysql-client \\\n --rm --tty -i --restart='Never' \\\n --image docker.io/bitnami/mysql:8.0.36-debian-11-r2 \\\n --namespace default \\\n --env MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD \\\n --command -- bash\n
-
Use the mysql
client to connect to the database:
mysql -h vets-db-mysql.default.svc.cluster.local -uroot -p\"$MYSQL_ROOT_PASSWORD\"\n
At the mysql prompt, select the database, list the tables, and query vet records:
use service_instance_db;\n
show tables;\n
select * from vets;\n
Exit the mysql prompt with \\q
, then exit the pod with exit
.
One can similarly connect to and inspect the customers-db-mysql
and visits-db-mysql
databases.
"},{"location":"deploy/#summary","title":"Summary","text":"At this point you should have all applications deployed and running, connected to their respective databases.
But we cannot access the application's UI until we configure ingress, which is our next topic.
"},{"location":"ingress/","title":"Configure Ingress","text":"The original project made use of the Spring Cloud Gateway project to configure ingress and routing.
Ingress is Istio's bread and butter. Envoy provides those capabilities. And so the dependency was removed and replaced with a standard Istio Ingress Gateway.
The Istio installation includes the Ingress Gateway component. You should be able to see the deployment in the istio-system
namespace with:
kubectl get deploy -n istio-system\n
Ingress is configured with Istio in two parts: the gateway configuration proper, and the configuration to route requests to backing services.
"},{"location":"ingress/#configure-the-gateway","title":"Configure the Gateway","text":"The below configuration creates a listener on the ingress gateway for HTTP traffic on port 80.
gateway.yaml ---\napiVersion: networking.istio.io/v1beta1\nkind: Gateway\nmetadata:\n name: main-gateway\nspec:\n selector:\n istio: ingressgateway\n servers:\n - port:\n number: 80\n name: http\n protocol: HTTP\n hosts:\n - \"*\"\n
Apply the gateway configuration to your cluster:
kubectl apply -f manifests/ingress/gateway.yaml\n
Since no routing has been configured yet for the gateway, a request to the gateway should return an HTTP 404 response:
curl -v http://$LB_IP/\n
"},{"location":"ingress/#configure-routing","title":"Configure routing","text":"The original Spring Cloud Gateway routing rules were replaced and are now captured with a standard Istio VirtualService in manifests/ingress/routes.yaml
:
routes.yaml
configures routing for the Istio ingress gateway (which replaces spring cloud gateway) to the application's API endpoints.
It exposes endpoints to each of the services, and in addition, routes requests with the /api/gateway
prefix to the petclinic-frontend
application. In the original version, the petclinic-frontend application and the gateway \"proper\" were bundled together as a single microservice.
routes.yaml ---\napiVersion: networking.istio.io/v1beta1\nkind: VirtualService\nmetadata:\n name: petclinic-routes\nspec:\n hosts:\n - \"*\"\n gateways:\n - main-gateway\n http:\n - match:\n - uri:\n prefix: \"/api/customer/\"\n rewrite:\n uri: \"/\"\n route:\n - destination:\n host: customers-service.default.svc.cluster.local\n port:\n number: 8080\n - match:\n - uri:\n prefix: \"/api/visit/\"\n rewrite:\n uri: \"/\"\n route:\n - destination:\n host: visits-service.default.svc.cluster.local\n port:\n number: 8080\n timeout: 4s\n - match:\n - uri:\n prefix: \"/api/vet/\"\n rewrite:\n uri: \"/\"\n route:\n - destination:\n host: vets-service.default.svc.cluster.local\n port:\n number: 8080\n - match:\n - uri:\n prefix: \"/api/gateway\"\n route:\n - destination:\n host: petclinic-frontend.default.svc.cluster.local\n port:\n number: 8080\n - route:\n - destination:\n host: petclinic-frontend.default.svc.cluster.local\n port:\n number: 8080\n
Apply the routing rules for the gateway:
kubectl apply -f manifests/ingress/routes.yaml\n
"},{"location":"ingress/#visit-the-app","title":"Visit the app","text":"With the application deployed and ingress configured, we can finally view the application's user interface.
To see the running PetClinic application, open a browser tab and visit http://$LB_IP/.
You should see a home page. Navigate to the Vets page, then the Pet Owners page, and finally, drill down to a specific pet owner, and otherwise get acquainted with the UI.
"},{"location":"ingress/#analysis","title":"Analysis","text":"Prior to Istio, the common solution in the Spring ecosystem to issues of service discovery, resilience, load balancing was Spring Cloud. Spring Cloud consists of multiple projects that provide dependencies that developers add to their applications to help them deal with issues of client-side load-balancing, retries, circuit-breaking, service discovery and so on.
In spring-petclinic-istio
, those dependencies have been removed. What remains as dependencies inside each service are what you'd expect to find:
- Spring Boot and actuator are the foundation of modern Spring applications.
- Spring Data JPA and the mysql connector for database access.
- Micrometer for exposing application metrics via a Prometheus endpoint.
- Micrometer-tracing for propagating trace headers through these applications.
"},{"location":"observability/","title":"Observability","text":""},{"location":"observability/#observe-distributed-traces","title":"Observe Distributed Traces","text":"All boot apps are configured to propagate trace headers using micrometer-tracing, per the Istio documentation.
See the application.yaml
resource files and the property management.tracing.baggage.remote-fields
which configures the fields to propagate.
To make testing this easier, Istio is configured with 100% trace sampling.
"},{"location":"observability/#steps","title":"Steps","text":" -
Navigate to the base directory of your Istio distribution:
cd istio-1.20.2\n
-
Deploy Istio observability samples:
kubectl apply -f samples/addons/prometheus.yaml\nkubectl apply -f samples/addons/jaeger.yaml\nkubectl apply -f samples/addons/kiali.yaml\nkubectl apply -f samples/addons/grafana.yaml\n
Wait for the observability pods to be ready:
kubectl get pod -n istio-system\n
-
Call the petclinic-frontend
endpoint that calls both the customers and visits services, perhaps two or three times so that the requests get sampled:
curl -s http://$LB_IP/api/gateway/owners/6 | jq\n
-
Start the jaeger dashboard:
istioctl dashboard jaeger\n
-
In Jaeger, search for traces involving the services petclinic-frontend, customers, and visits.
You should see new traces, with six spans, showing the full end-to-end request-response flow across all three services.
Close the jaeger dashboard.
"},{"location":"observability/#kiali","title":"Kiali","text":"The Kiali dashboard can likewise be used to display visualizations of such end-to-end flows.
-
Send a light load of requests against your application.
We provide a simple siege script to send requests through to the petclinic-frontend
endpoint that aggregates responses from both customers
and visits
services.
./siege.sh\n
-
Launch the Kiali dashboard:
istioctl dashboard kiali\n
Select the Graph view and the default
namespace. The flow of requests through the application's call graph will be rendered.
"},{"location":"observability/#exposing-metrics","title":"Exposing metrics","text":"Istio has built-in support for Prometheus as a mechanism for metrics collection.
Each Spring Boot application is configured with a micrometer dependency to expose a scrape endpoint for Prometheus to collect metrics.
Call the scrape endpoint and inspect the metrics exposed directly by the Spring Boot application:
kubectl exec deploy/customers-v1 -c istio-proxy -- curl -s localhost:8080/actuator/prometheus\n
Separately, Envoy collects a variety of metrics, often referred to as RED metrics (Requests, Errors, Durations).
Inspect the metrics collected and exposed by the Envoy sidecar:
kubectl exec deploy/customers-v1 -c istio-proxy -- curl -s localhost:15090/stats/prometheus\n
One common metric to note is istio_requests_total
kubectl exec deploy/customers-v1 -c istio-proxy -- curl -s localhost:15090/stats/prometheus | grep istio_requests_total\n
Both sets of metrics are aggregated (merged) and exposed on port 15020:
kubectl exec deploy/customers-v1 -c istio-proxy -- curl -s localhost:15020/stats/prometheus\n
For this to work, Envoy must be given the URL (endpoint) where the application's metrics are exposed.
This is done with a set of annotations on the deployment.
See the Istio documentation for more information.
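Those are the standard Prometheus scrape hints on the pod template; a sketch of what they typically look like for these Spring Boot workloads (values assumed, not copied from the repo's manifests):
template:
  metadata:
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "8080"
      prometheus.io/path: "/actuator/prometheus"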
"},{"location":"observability/#viewing-the-istio-grafana-metrics-dashboards","title":"Viewing the Istio Grafana metrics dashboards","text":"Launch the grafana dashboard, while maintaining a request load against the application.
./siege.sh\n
Then:
istioctl dash grafana\n
Review the Istio Service Dashboards for the three services petclinic-frontend
, customers
, and visits
.
You can also load the legacy dashboard that came with the petclinic application. You'll find the Grafana dashboard definition (a json file) here. Import the dashboard and then view it.
This dashboard has a couple of now-redundant panels showing the request volume and latencies, both of which are now subsumed by standard Istio dashboards.
But it also shows business metrics exposed by the application, such as the number of owners, pets, and visits created or updated.
"},{"location":"resilience/","title":"Resilience","text":""},{"location":"resilience/#test-resilience-and-fallback","title":"Test resilience and fallback","text":"The original Spring Cloud version of PetClinic used Resilience4j to configure calls to the visit service with a timeout of 4 seconds, and a fallback to return an empty list of visits in the event that the request to get visits timed out (took longer).
Spring Cloud was removed. We can replace this configuration with an Istio Custom Resource.
The file timeouts.yaml
configures the equivalent 4s timeout on requests to the visits
service, replacing the previous Resilience4j-based implementation.
timeouts.yaml ---\napiVersion: networking.istio.io/v1beta1\nkind: VirtualService\nmetadata:\n name: visits\nspec:\n hosts:\n - visits-service.default.svc.cluster.local\n http:\n - route:\n - destination:\n host: visits-service.default.svc.cluster.local\n timeout: 4s\n
Apply the timeout configuration to your cluster:
kubectl apply -f manifests/config/timeouts.yaml\n
The fallback logic in PetClinicController.getOwnerDetails
was retrofitted to detect the Gateway Timeout (504) response code instead of using a Resilience4j API.
To test this feature, the environment variable DELAY_MILLIS was introduced into the visits service to insert a delay when fetching visits.
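For illustration, the variable sits in the visits deployment's container spec roughly like this (a hypothetical excerpt, not the literal manifest):
env:
- name: DELAY_MILLIS
  value: "5000"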
Here is how to test the behavior:
-
Call visits-service
directly:
kubectl exec deploy/sleep -- curl -s visits-service:8080/pets/visits?petId=8 | jq\n
Observe the call succeed and return a list of visits for this particular pet.
-
Call the petclinic-frontend
endpoint, and note that for each pet, we see a list of visits:
kubectl exec deploy/sleep -- curl -s petclinic-frontend:8080/api/gateway/owners/6 | jq\n
-
Edit the deployment manifest for the visits-service
so that the environment variable DELAY_MILLIS
is set to the value \"5000\" (which is 5 seconds). One way to do this is to edit the file with (then save and exit):
kubectl edit deploy visits-v1\n
-
Once the new visits-service
pod reaches Ready status, make the same call again:
kubectl exec deploy/sleep -- curl -v visits-service:8080/pets/visits?petId=8 | jq\n
Observe the 504 (Gateway timeout) response this time around (because it exceeds the 4-second timeout).
-
Call the petclinic-frontend
endpoint once more, and note that for each pet, the list of visits is empty:
kubectl exec deploy/sleep -- curl -s petclinic-frontend:8080/api/gateway/owners/6 | jq\n
That is, the call succeeds, the timeout is caught, and the fallback empty list of visits is returned in its place. Tail the logs of petclinic-frontend
and observe a log message indicating the fallback was triggered.
To restore the original behavior with no delay, edit the visits-v1
deployment again and set the environment variable value to \"0\".
"},{"location":"security/","title":"Security","text":""},{"location":"security/#leverage-workload-identity","title":"Leverage workload identity","text":"Workloads in Istio are assigned a SPIFFE identity.
Authorization policies can be applied that allow or deny access to a service as a function of that identity.
For example, we can restrict access to each database exclusively to its corresponding service, i.e.:
- Only the visits service can access the visits db
- Only the vets service can access the vets db
- Only the customers service can access the customers db
The above policy is specified in the file authorization-policies.yaml
:
authorization-policies.yaml ---\napiVersion: security.istio.io/v1beta1\nkind: AuthorizationPolicy\nmetadata:\n name: vets-db-allow-vets-service\nspec:\n selector:\n matchLabels:\n app.kubernetes.io/instance: vets-db-mysql\n action: ALLOW\n rules:\n - from:\n - source:\n principals: [\"cluster.local/ns/default/sa/vets-service\"]\n to:\n - operation:\n ports: [\"3306\"]\n---\napiVersion: security.istio.io/v1beta1\nkind: AuthorizationPolicy\nmetadata:\n name: customers-db-allow-customers-service\nspec:\n selector:\n matchLabels:\n app.kubernetes.io/instance: customers-db-mysql\n action: ALLOW\n rules:\n - from:\n - source:\n principals: [\"cluster.local/ns/default/sa/customers-service\"]\n to:\n - operation:\n ports: [\"3306\"]\n---\napiVersion: security.istio.io/v1beta1\nkind: AuthorizationPolicy\nmetadata:\n name: visits-db-allow-visits-service\nspec:\n selector:\n matchLabels:\n app.kubernetes.io/instance: visits-db-mysql\n action: ALLOW\n rules:\n - from:\n - source:\n principals: [\"cluster.local/ns/default/sa/visits-service\"]\n to:\n - operation:\n ports: [\"3306\"]\n
"},{"location":"security/#exercise","title":"Exercise","text":" -
Use the previous \"Test database connectivity\" instructions to create a client pod and to use it to connect to the \"vets\" database. This operation should succeed. You should be able to see the \"service_instance_db\" and see the tables and query them.
-
Apply the authorization policies:
kubectl apply -f manifests/config/authorization-policies.yaml\n
-
Attempt once more to create a client pod to connect to the \"vets\" database. This time the operation will fail. That's because only the vets service is now allowed to connect to the database.
-
Verify that the application itself continues to function because all database queries are performed via its associated service.
"},{"location":"setup/","title":"Setup","text":"Begin by cloning a local copy of the Spring PetClinic Istio repository from GitHub.
"},{"location":"setup/#kubernetes","title":"Kubernetes","text":"Select whether you wish to provision Kubernetes locally or remotely using a cloud provider.
Local SetupRemote Setup On a Mac running Docker Desktop or Rancher Desktop, make sure to give your VM plenty of CPU and memory. 16GB of memory and 6 CPUs seems to work for me.
Deploy a local K3D Kubernetes cluster with a local registry:
k3d cluster create my-istio-cluster \\\n --api-port 6443 \\\n --k3s-arg \"--disable=traefik@server:0\" \\\n --port 80:80@loadbalancer \\\n --registry-create my-cluster-registry:0.0.0.0:5010\n
Above, we:
- Disable the default traefik load balancer and configure local port 80 to instead forward to the \"istio-ingressgateway\" load balancer.
- Create a registry we can push to locally on port 5010 that is accessible from the Kubernetes cluster at \"my-cluster-registry:5000\".
Provision a k8s cluster in the cloud of your choice. For example, on GCP:
gcloud container clusters create my-istio-cluster \\\n --cluster-version latest \\\n --machine-type \"e2-standard-2\" \\\n --num-nodes \"3\" \\\n --network \"default\"\n
"},{"location":"setup/#environment-variables","title":"Environment variables","text":"Use envrc-template.sh
as the basis for configuring environment variables.
Be sure to:
- Set the local variable
local_setup
to either \"true\" or \"false\", depending on your choice of a local or remote cluster. - If using a remote setup, set the value of PUSH_IMAGE_REGISTRY to the value of your image registry URL.
I highly recommend using direnv
, a convenient way of associating environment variable settings with a specific directory.
If you choose to use direnv
, then the variables can be automatically set by renaming the file to .envrc
and running the command direnv allow
.
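In practice that amounts to something like:
mv envrc-template.sh .envrc
# edit .envrc to set local_setup (and PUSH_IMAGE_REGISTRY for a remote setup), then:
direnv allow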
"},{"location":"setup/#istio","title":"Istio","text":" -
Follow the Istio documentation's instructions to download Istio.
-
After you have added the istioctl
CLI to your PATH, run the following installation command:
istioctl install -f manifests/istio-install-manifest.yaml\n
The above-referenced configuration manifest configures certain facets of the mesh, namely:
- Setting trace sampling at 100%, for ease of obtaining distributed traces
- Deploying sidecars (envoy proxies) not only alongside workloads, but also in front of mysql databases.
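Only the trace-sampling facet is easy to sketch; here is an assumed shape for such an install manifest, based on standard IstioOperator fields rather than the actual manifests/istio-install-manifest.yaml:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      tracing:
        sampling: 100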
Once Istio is installed, feel free to verify the installation with:
istioctl verify-install\n
In the next section, you will work on deploying the microservices to the default
namespace.
As a final step, label the default
namespace for sidecar injection with:
kubectl label ns default istio-injection=enabled\n
"}]}
\ No newline at end of file
diff --git a/security/index.html b/security/index.html
index ffb454e..54c9532 100644
--- a/security/index.html
+++ b/security/index.html
@@ -541,22 +541,127 @@ Leverage workload identity
+authorization-policies.yaml
+
+
Exercise¶
-
-
Use the previous Test database connectivity instructions to create a client pod and to use it to connect to the "vets" database. This operation should succeed. You should be able to see the "service_instance_db" and see the tables and query them.
+Use the previous "Test database connectivity" instructions to create a client pod and to use it to connect to the "vets" database. This operation should succeed. You should be able to see the "service_instance_db" and see the tables and query them.
-
Apply the authorization policies:
-kubectl apply -f manifests/config/authorization-policies.yaml
+
-
Attempt once more to create a client pod to connect to the "vets" database. This time the operation will fail. That's because only the vets service is now allowed to connect to the database.
-
-
Also verify that the application itself continues to function because all database queries are performed via its associated service.
+Verify that the application itself continues to function because all database queries are performed via its associated service.
diff --git a/sitemap.xml.gz b/sitemap.xml.gz
index ed4f21e..7b8a09e 100644
Binary files a/sitemap.xml.gz and b/sitemap.xml.gz differ