Prior to Istio, the common solution in the Spring ecosystem to the problems of service discovery, resilience, and load balancing was Spring Cloud. Spring Cloud consists of multiple projects that provide dependencies developers add to their applications to deal with client-side load balancing, retries, circuit breaking, service discovery, and so on.
In spring-petclinic-istio, those dependencies have been removed. What remains inside each service is the set of dependencies you'd expect to find:

- Spring Boot and Actuator, the foundation of modern Spring applications.
- Spring Data JPA and the MySQL connector for database access.
- Micrometer for exposing application metrics via a Prometheus endpoint.
- Micrometer Tracing for propagating trace headers through these applications.
A great deal of "cruft" accumulates inside many files in spring-petclinic-cloud: configuration for service discovery, load balancing, routing, retries, resilience, and so on.
When you move to Istio, you get separation of concerns. It's ironic that the Spring framework's raison d'être was separation of concerns, but its focus is inside a monolithic application, not between microservices. When you move to cloud-native applications, you end up with a tangle of concerns that Istio helps you untangle.
And, little by little, our apps become sane again. It reminds me of one of Antoine de Saint-Exupéry's famous quotes:

> Perfection is finally attained not when there is no longer anything to add, but when there is no longer anything to take away.
The following instructions will walk you through deploying spring-petclinic-istio using either a local Kubernetes cluster or a remote, cloud-based cluster.
After the application is deployed, I walk you through some aspects of the application and additional benefits gained from running on the Istio platform: orthogonal configuration of traffic management and resilience concerns, stronger security and workload identity, and observability.
Below, we demonstrate calling endpoints on the application in either of two ways:

- internally, from within the Kubernetes cluster
- through the "front door", via the ingress gateway
The environment variable LB_IP captures the public IP address of the load balancer fronting the ingress gateway. We can access the service endpoints through that IP address.
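If you haven't already captured that address, a command along these lines will do. This is a sketch that assumes the default istio-ingressgateway service in the istio-system namespace; on some platforms the load balancer exposes a hostname rather than an IP.

```bash
# Assumption: default Istio ingress gateway service name and namespace.
export LB_IP=$(kubectl get svc istio-ingressgateway -n istio-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $LB_IP
```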
"},{"location":"api/#deploy-the-sleep-client","title":"Deploy the sleep client","text":"
We make use of Istio's sleep sample application to facilitate the task of making calls to workloads from inside the cluster.
The sleep deployment is a blank client Pod that can be used to send direct calls to specific microservices from within the Kubernetes cluster.
Deploy sleep to your cluster:
```bash
kubectl apply -f manifests/sleep.yaml
```
Wait for the sleep pod to be ready (2/2 containers).
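One way to wait, assuming the sample's standard app=sleep label:

```bash
kubectl wait --for=condition=Ready pod -l app=sleep --timeout=120s
kubectl get pod -l app=sleep   # expect READY 2/2
```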
"},{"location":"api/#test-individual-service-endpoints","title":"Test individual service endpoints","text":"
We assume that you have the excellent jq utility already installed.
"},{"location":"api/#call-the-vets-controller-endpoint","title":"Call the \"Vets\" controller endpoint","text":"InternalExternal
Now that we have some familiarity with some of the API endpoints that make up this application, let's turn our attention to configuring a small aspect of resilience: timeouts.
"},{"location":"deploy/","title":"Build and Deploy PetClinic","text":""},{"location":"deploy/#deploy-each-microservices-backing-database","title":"Deploy each microservice's backing database","text":"
Deployment decisions:

- We use MySQL, installed with Helm using the charts from the Bitnami repository.
- We deploy a separate database StatefulSet for each service.
- Inside each StatefulSet, the database is named "service_instance_db".
- The applications connect with the root username "root".
- The Helm installation generates a root user password and stores it in a Secret.
- The applications reference the Secret name to obtain the database credentials.

A sketch of such an installation follows.
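For example, something along these lines; the release names and values here are assumptions, not the project's exact commands:

```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
# One MySQL instance per service; the chart generates the root password
# and stores it in a Secret named after the release.
helm install vets-db-mysql bitnami/mysql --set auth.database=service_instance_db
helm install visits-db-mysql bitnami/mysql --set auth.database=service_instance_db
helm install customers-db-mysql bitnami/mysql --set auth.database=service_instance_db
```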
"},{"location":"deploy/#build-the-apps-create-the-docker-images-push-them-to-the-local-registry","title":"Build the apps, create the docker images, push them to the local registry","text":"
We assume you already have maven installed locally.
Compile the apps and run the tests:

```bash
mvn clean package
```
Build the images:

```bash
mvn spring-boot:build-image
```
Publish the images:

```bash
./push-images.sh
```
"},{"location":"deploy/#deploy-the-apps","title":"Deploy the apps","text":"
The deployment manifests are located in manifests/deploy.
The services are vets, visits, customers, and petclinic-frontend. For each service we create a Kubernetes ServiceAccount, a Deployment, and a ClusterIP service.
The manifests reference the image registry environment variable, and so are passed through envsubst for resolution before being applied to the Kubernetes cluster.
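Concretely, the deploy step can look something like this sketch; the path glob and the exact environment variable name are assumptions:

```bash
# Substitute the image registry variable and apply all deployment manifests.
cat manifests/deploy/*.yaml | envsubst | kubectl apply -f -
```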
Wait for the pods to be ready (2/2 containers).
Here is a simple diagnostic command that tails the logs of the customers service pod, showing that the Spring Boot application has come up and is listening on port 8080.
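For example (the deployment name customers-v1 is an assumption, mirroring the visits-v1 naming used later in this guide):

```bash
kubectl logs --follow deploy/customers-v1
```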
The original project made use of the Spring Cloud Gateway project to configure ingress and routing.
Ingress is Istio's bread and butter: Envoy provides these capabilities out of the box. And so the Spring Cloud Gateway dependency was removed and replaced with a standard Istio Ingress Gateway.
The Istio installation includes the Ingress Gateway component. You should be able to see the deployment in the istio-system namespace with:
```bash
kubectl get deploy -n istio-system
```
Ingress is configured with Istio in two parts: the gateway configuration proper, and the configuration to route requests to backing services.
"},{"location":"ingress/#configure-the-gateway","title":"Configure the Gateway","text":"
The below configuration creates a listener on the ingress gateway for HTTP traffic on port 80.
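The actual manifest lives in the repository; the following is a minimal sketch of such a Gateway, where the resource name is an assumption:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: main-gateway
spec:
  selector:
    istio: ingressgateway   # bind to the default ingress gateway workload
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
```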
The original Spring Cloud Gateway routing rules were replaced; they are now captured in a standard Istio VirtualService, manifests/ingress/routes.yaml.
routes.yaml configures routing from the Istio ingress gateway (which replaces Spring Cloud Gateway) to the application's API endpoints.
It exposes endpoints for each of the services and, in addition, routes requests with the /api/gateway prefix to the petclinic-frontend application. In the original version, the petclinic-frontend application and the gateway "proper" were bundled together as a single microservice.
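As a rough sketch of what routes.yaml captures — the hosts, prefixes, and service names below are assumptions:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: petclinic-routes
spec:
  hosts:
  - "*"
  gateways:
  - main-gateway
  http:
  - match:
    - uri:
        prefix: /api/gateway      # frontend aggregation endpoints
    route:
    - destination:
        host: petclinic-frontend
  - match:
    - uri:
        prefix: /api/vet          # one such rule per backing service
    route:
    - destination:
        host: vets-service
```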
"},{"location":"ingress/#visit-the-app","title":"Visit the app","text":"
With the application deployed and ingress configured, we can finally view the application's user interface.
To see the running PetClinic application, open a browser tab and visit http://$LB_IP/.
You should see a home page. Navigate to the Vets page, then the Pet Owners page, and finally, drill down to a specific pet owner, and otherwise get acquainted with the UI.
Let's turn next to distributed tracing. One of the dependencies retained in each service, Micrometer Tracing, deserves a closer look: it is what propagates trace headers through these applications.
The Istio documentation dedicates a page to guide users on how to propagate trace headers in calls between microservices, in order to support distributed tracing.
In this version of PetClinic, all Spring Boot microservices have been configured to propagate trace headers using micrometer-tracing.
Micrometer tracing is an elegant solution, in that we do not have to couple the trace header propagation with the application logic. Instead, it becomes a simple matter of static configuration.
See the application.yaml resource files and the property management.tracing.baggage.remote-fields, which configures the fields to propagate.
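Illustratively, the relevant configuration looks something like the snippet below; the specific field names are assumptions, not the project's exact list:

```yaml
management:
  tracing:
    baggage:
      remote-fields:
        - x-request-id   # example field only; consult the project's application.yaml
```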
To make testing this easier, Istio is configured with 100% trace sampling.
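One way such a sampling rate can be set is via the mesh's default proxy configuration; this is a sketch, and the project may configure it differently:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      tracing:
        sampling: 100   # sample 100% of requests
```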
In its samples directory, Istio provides sample deployment manifests for various observability tools, including Zipkin and Jaeger.
Deploy Jaeger to your Kubernetes cluster:
Navigate to the base directory of your Istio distribution:
```bash
cd istio-1.20.2
```
Deploy Jaeger:

```bash
kubectl apply -f samples/addons/jaeger.yaml
```
Wait for the Jaeger pod to be ready:
```bash
kubectl get pod -n istio-system
```
Next, let us turn our attention to calling an endpoint that will generate a trace capture, and observe it in the Jaeger dashboard:
Call the petclinic-frontend endpoint that calls both the customers and visits services. Feel free to make multiple requests to generate multiple traces.
```bash
curl -s http://$LB_IP/api/gateway/owners/6 | jq
```
Launch the Jaeger dashboard:

```bash
istioctl dashboard jaeger
```
In Jaeger, search for traces involving the services petclinic-frontend, customers, and visits.
You should see one or more traces, each with six spans. Click on any one of them to display the full end-to-end request-response flow across all three services.
Istio supports merging the application's Prometheus metrics with those of the Envoy sidecar ("metrics merging"). What allows Istio to aggregate both scrape endpoints are annotations placed in the pod template specification of each application, communicating the URL of the application's Prometheus scrape endpoint.
For example, here are the Prometheus annotations for the customers service:
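They look roughly like this; the port and path shown are assumptions based on Spring Boot Actuator defaults:

```yaml
template:
  metadata:
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "8080"
      prometheus.io/path: "/actuator/prometheus"
```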
For more information on metrics merging and Prometheus, see the Istio documentation.
"},{"location":"observability/#send-requests-to-the-application","title":"Send requests to the application","text":"
To send a steady stream of requests through the petclinic-frontend application, we use siege. Feel free to use other tools, or maybe a simple bash while loop.
Run the following siege command to send requests to various endpoints in our application:
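For example, a sketch along these lines; adjust the concurrency, delay, and target URLs to taste (the owners endpoint is the one used earlier in this guide):

```bash
siege --concurrent=3 --delay=1 http://$LB_IP/api/gateway/owners/6
```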
The latest version of Spring Boot (3.2) takes advantage of a relatively recent feature of Prometheus known as "exemplars." The current version of Istio uses an older version of Prometheus (2.41) that does not yet support exemplars.
Before deploying Prometheus, patch the prometheus deployment to use the latest version of the image:
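One way to do this, as a sketch only — the exact image reference inside the sample manifest is an assumption:

```bash
# Bump the image tag in Istio's sample manifest, then apply it.
sed 's|prom/prometheus:v2.41.0|prom/prometheus:latest|' samples/addons/prometheus.yaml \
  | kubectl apply -f -
```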
The version of PetClinic from which this version derives already contained a custom Grafana dashboard.
To import the dashboard into Grafana:
1. Navigate to "Dashboards".
2. Click the "New" pulldown button and select "Import".
3. Select "Upload dashboard JSON file", and choose the file grafana-petclinic-dashboard.json from the repository's base directory.
4. Select "Prometheus" as the data source.
5. Finally, click "Import".
The top two panels showing request latencies and request volumes are technically now redundant: both are now subsumed by the standard Istio dashboards.
Below those panels are custom application metrics, such as the number of owners, pets, and visits created or updated.
Create a new Owner, give an existing owner a new pet, or add a visit for a pet, and watch those counters increment in Grafana.
Kiali is a bespoke "console" for Istio Service Mesh. One of the features of Kiali that stands out is its visualization of requests making their way through the call graph.
Cancel the currently-running siege command, then relaunch siege with a different set of target endpoints to exercise a larger portion of the call graph.
The original Spring Cloud version of PetClinic used Resilience4j to configure calls to the visit service with a timeout of 4 seconds, and a fallback to return an empty list of visits in the event that the request to get visits timed out.
In this version of the application, the Spring Cloud dependencies were removed. We can replace this configuration with an Istio Custom Resource.
The file timeouts.yaml configures the equivalent 4s timeout on requests to the visits service, replacing the previous Resilience4j-based implementation.
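A sketch of what such a resource looks like; the resource name and destination host are assumptions:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: visits
spec:
  hosts:
  - visits-service
  http:
  - route:
    - destination:
        host: visits-service
    timeout: 4s   # fail requests to the visits service after 4 seconds
```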
The fallback logic in PetClinicController.getOwnerDetails was retrofitted to detect the Gateway Timeout (504) response code instead of using a Resilience4j API.
To test this feature, the environment variable DELAY_MILLIS was introduced into the visits service to insert a delay when fetching visits.
Edit the deployment manifest for the visits-service so that the environment variable DELAY_MILLIS is set to the value "5000" (5 seconds). One way to do this is with kubectl edit (then save and exit):
```bash
kubectl edit deploy visits-v1
```
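Alternatively, a one-liner achieves the same thing (assuming the same deployment name):

```bash
kubectl set env deploy/visits-v1 DELAY_MILLIS=5000
```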
Wait until the new pod has rolled out and become ready.
Once the new visits-service pod reaches Ready status, make the same call again:
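That is, repeat the petclinic-frontend call from before; with the injected delay, expect the visits portion of the response to fall back to an empty list:

```bash
curl -s http://$LB_IP/api/gateway/owners/6 | jq
```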
Use the previous "Test database connectivity" instructions to create a client pod and use it to connect to the "vets" database. This operation should succeed: you should be able to see the "service_instance_db" database, list its tables, and query them.
Attempt once more to create a client pod and connect to the "vets" database. This time the operation will fail, because only the vets service is now allowed to connect to the database.
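The kind of policy that enforces this looks roughly as follows. This is a sketch: the names, namespace, workload labels, and service account are assumptions, not the project's exact manifest.

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-vets-only
spec:
  selector:
    matchLabels:
      app.kubernetes.io/instance: vets-db-mysql   # the vets database workload
  action: ALLOW
  rules:
  - from:
    - source:
        principals:
        - cluster.local/ns/default/sa/vets-service   # only the vets service identity
```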
Verify that the application itself continues to function, since each database is queried only by its associated service.
A common problem in the enterprise is enforcing that data is accessed only through its owning microservice. Giving another team direct access to your data is a well-known anti-pattern: it couples multiple applications to a specific storage technology and a specific database schema, one that can no longer evolve without impacting everyone.
With the aid of Istio and workload identity, we can ensure that the way a microservice stores its data is an entirely internal concern, one that can be modified later, perhaps to use a different storage backend, or simply to allow the schema to evolve without "breaking all the clients".
After traffic management, resilience, and security, it is time to discuss the other important facet that service meshes help with: observability.
"},{"location":"setup/","title":"Setup","text":"
Begin by cloning a local copy of the Spring PetClinic Istio repository from GitHub.
Select whether you wish to provision Kubernetes locally or remotely using a cloud provider.
On a Mac running Docker Desktop or Rancher Desktop, make sure to give your VM plenty of CPU and memory. 16GB of memory and 6 CPUs seems to work for me.
Deploy a local K3D Kubernetes cluster with a local registry:
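For example, something along these lines; this is a sketch only — the project provides its own setup script, and the cluster and registry names here are assumptions:

```bash
k3d cluster create petclinic --registry-create petclinic-registry:0.0.0.0:5000
```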
"}]}
\ No newline at end of file
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Introduction","text":"
A great deal of \"cruft\" accumulates inside many files in spring-petclinic-cloud: configuration for service discovery, load balancing, routing, retries, resilience, and so on.
When you move to Istio, you get separation of concerns. It's ironic that the Spring framework's raison d'\u00eatre was separation of concerns, but its focus is inside a monolithic application, not between microservices. When you move to cloud-native applications, you end up with a tangle of concerns that Istio helps you untangle.
And, little by little, our apps become sane again. It reminds me of one of Antoine de Saint-Exup\u00e9ry's famous quotes:
Perfection is finally attained not when there is no longer anything to add, but when there is no longer anything to take away
The following instructions will walk you through deploying spring-petclinic-istio either using a local Kubernetes cluster or a remote, cloud-based cluster.
After the application is deployed, I walk you through some aspects of the application and additional benefits gained from running on the Istio platform: orthogonal configuration of traffic management and resilience concerns, stronger security and workload identity, and observability.
Below, we demonstrate calling endpoints on the application in either of two ways:
Internally from within the Kubernetes cluster
Through the \"front door\", via the ingress gateway
The environment variable LB_IP captures the public IP address of the load balancer fronting the ingress gateway. We can access the service endpoints through that IP address.
"},{"location":"api/#deploy-the-sleep-client","title":"Deploy the sleep client","text":"
We make use of Istio's sleep sample application to facilitate the task of making calls to workloads from inside the cluster.
The sleep deployment is a blank client Pod that can be used to send direct calls to specific microservices from within the Kubernetes cluster.
Deploy sleep to your cluster:
kubectl apply -f manifests/sleep.yaml\n
Wait for the sleep pod to be ready (2/2 containers).
"},{"location":"api/#test-individual-service-endpoints","title":"Test individual service endpoints","text":"
We assume that you have the excellent jq utility already installed.
"},{"location":"api/#call-the-vets-controller-endpoint","title":"Call the \"Vets\" controller endpoint","text":"InternalExternal
Now that we have some familiarity with some of the API endpoints that make up this application, let's turn our attention to configuring a small aspect of resilience: timeouts.
"},{"location":"deploy/","title":"Build and Deploy PetClinic","text":""},{"location":"deploy/#deploy-each-microservices-backing-database","title":"Deploy each microservice's backing database","text":"
Deployment decisions:
We use mysql. Mysql can be installed with helm. Its charts are in the bitnami repository.
We deploy a separate database statefulset for each service.
Inside each statefulset we name the database \"service_instance_db\".
Apps use the root username \"root\".
The helm installation will generate a root user password in a secret.
The applications reference the secret name to get at the database credentials.
"},{"location":"deploy/#build-the-apps-create-the-docker-images-push-them-to-the-local-registry","title":"Build the apps, create the docker images, push them to the local registry","text":"
We assume you already have maven installed locally.
Compile the apps and run the tests:
mvn clean package\n
Build the images
mvn spring-boot:build-image\n
Publish the images
./push-images.sh\n
"},{"location":"deploy/#deploy-the-apps","title":"Deploy the apps","text":"
The deployment manifests are located in manifests/deploy.
The services are vets, visits, customers, and petclinic-frontend. For each service we create a Kubernetes ServiceAccount, a Deployment, and a ClusterIP service.
The manifests reference the image registry environment variable, and so are passed through envsubst for resolution before being applied to the Kubernetes cluster.
Wait for the pods to be ready (2/2 containers).
Here is a simple diagnostic command that tails the logs of the customers service pod, showing that the Spring Boot application has come up and is listening on port 8080.
The original project made use of the Spring Cloud Gateway project to configure ingress and routing.
Ingress is Istio's bread and butter. Envoy provides those capabilities. And so the dependency was removed and replaced with a standard Istio Ingress Gateway.
The Istio installation includes the Ingress Gateway component. You should be able to see the deployment in the istio-system namespace with:
kubectl get deploy -n istio-system\n
Ingress is configured with Istio in two parts: the gateway configuration proper, and the configuration to route requests to backing services.
"},{"location":"ingress/#configure-the-gateway","title":"Configure the Gateway","text":"
The below configuration creates a listener on the ingress gateway for HTTP traffic on port 80.
The original Spring Cloud Gateway routing rules were replaced and are now captured with a standard Istio VirtualService in manifests/ingress/routes.yaml:
routes.yaml configures routing for the Istio ingress gateway (which replaces spring cloud gateway) to the application's API endpoints.
It exposes endpoints to each of the services, and in addition, routes requests with the /api/gateway prefix to the petclinic-frontend application. In the original version, the petclinic-frontend application and the gateway \"proper\" were bundled together as a single microservice.
"},{"location":"ingress/#visit-the-app","title":"Visit the app","text":"
With the application deployed and ingress configured, we can finally view the application's user interface.
To see the running PetClinic application, open a browser tab and visit http://$LB_IP/.
You should see a home page. Navigate to the Vets page, then the Pet Owners page, and finally, drill down to a specific pet owner, and otherwise get acquainted with the UI.
Next, let us explore some of the API endpoints exposed by the PetClinic application.
The Istio documentation dedicates a page to guide users on how to propagate trace headers in calls between microservices, in order to support distributed tracing.
In this version of PetClinic, all Spring Boot microservices have been configured to propagate trace headers using micrometer-tracing.
Micrometer tracing is an elegant solution, in that we do not have to couple the trace header propagation with the application logic. Instead, it becomes a simple matter of static configuration.
See the application.yaml resource files and the property management.tracing.baggage.remote-fields which configures the fields to propagate.
To make testing this easier, Istio is configured with 100% trace sampling.
In its samples directory, Istio provides sample deployment manifests for various observability tools, including Zipkin and Jaeger.
Deploy Jaeger to your Kubernetes cluster:
Navigate to the base directory of your Istio distribution:
cd istio-1.20.2\n
Deploy jaeger:
kubectl apply -f samples/addons/jaeger.yaml\n
Wait for the Jaeger pod to be ready:
kubectl get pod -n istio-system\n
Next, let us turn our attention to calling an endpoint that will generate a trace capture, and observe it in the Jaeger dashboard:
Call the petclinic-frontend endpoint that calls both the customers and visits services. Feel free to make mulitple requests to generate multiple traces.
curl -s http://$LB_IP/api/gateway/owners/6 | jq\n
Launch the jaeger dashboard:
istioctl dashboard jaeger\n
In Jaeger, search for traces involving the services petclinic-frontend, customers, and visits.
You should see one or more traces, each with six spans. Click on any one of them to display the full end-to-end request-response flow across all three services.
What allows Istio to aggregate both scrape endpoints are annotations placed in the pod template specification for each application, communicating the URL of the Prometheus scrape endpoint.
For example, here are the prometheus annotations for the customers service.
For more information on metrics merging and Prometheus, see the Istio documentation.
"},{"location":"observability/#send-requests-to-the-application","title":"Send requests to the application","text":"
To send a steady stream of requests through the petclinic-frontend application, we use siege. Feel free to use other tools, or maybe a simple bash while loop.
Run the following siege command to send requests to various endpoints in our application:
The latest version of Spring Boot (3.2) takes advantage of a relatively recent feature of Prometheus known as \"exemplars.\" The current version of Istio uses an older version of Prometheus (2.41) that does not yet support exemplars.
Before deploying Prometheus, patch the prometheus deployment to use the latest version of the image:
The version of PetClinic from which this version derives already contained a custom Grafana dashboard.
To import the dashboard into Grafana:
Navigate to \"Dashboards\"
Click the \"New\" pulldown button, and select \"Import\"
Select \"Upload dashboard JSON file\", and select the file grafana-petclinic-dashboard.json from the repository's base directory.
Select \"Prometheus\" as the data source
Finally, click \"Import\"
The top two panels showing request latencies and request volumes are technically now redundant: both are now subsumed by the standard Istio dashboards.
Below those panels are custom application metrics. Metrics such as number of owners, pets, and visits created or updated.
Create a new Owner, give an existing owner a new pet, or add a visit for a pet, and watch those counters increment in Grafana.
Kiali is a bespoke \"console\" for Istio Service Mesh. One of the features of Kiali that stands out are the visualizations of requests making their way through the call graph.
Cancel the currently-running siege command. Relaunch siege, but with a different set of target endpoints:
The original Spring Cloud version of PetClinic used Resilience4j to configure calls to the visit service with a timeout of 4 seconds, and a fallback to return an empty list of visits in the event that the request to get visits timed out.
In this version of the application, the Spring Cloud dependencies were removed. We can replace this configuration with an Istio Custom Resource.
The file timeouts.yaml configures the equivalent 4s timeout on requests to the visits service, replacing the previous Resilience4j-based implementation.
The fallback logic in PetClinicController.getOwnerDetails was retrofitted to detect the Gateway Timeout (504) response code instead of using a Resilience4j API.
To test this feature, the environment variable DELAY_MILLIS was introduced into the visits service to insert a delay when fetching visits.
Edit the deployment manifest for the visits-service so that the environment variable DELAY_MILLIS is set to the value \"5000\" (which is 5 seconds). One way to do this is to edit the file with (then save and exit):
kubectl edit deploy visits-v1\n
Wait until the new pod has rolled out and become ready.
Once the new visits-service pod reaches Ready status, make the same call again:
Use the previous \"Test database connectivity\" instructions to create a client pod and to use it to connect to the \"vets\" database. This operation should succeed. You should be able to see the \"service_instance_db\" and see the tables and query them.
Attempt once more to create a client pod to connect to the \"vets\" database. This time the operation will fail. That's because only the vets service is now allowed to connect to the database.
Verify that the application itself continues to function because all database queries are performed via its associated service.
One problem in the enterprise is enforcing access to data via microservices. Giving another team direct access to data is a well-known anti-pattern, as it couples multiple applications to a specific storage technology, a specific database schema, one that cannot be allowed to evolve without impacting everyone.
With the aid of Istio and workload identity, we can make sure that the manner in which data is stored by a microservice is an entirely internal concern, one that can be modified at a later time, perhaps to use a different storage backend, or perhaps simply to allow for the evolution of the schema without \"breaking all the clients\".
After traffic management, resilience, and security, it is time to discuss the other important facet that servicec meshes help with: Observability.
"},{"location":"setup/","title":"Setup","text":"
Begin by cloning a local copy of the Spring PetClinic Istio repository from GitHub.
Select whether you wish to provision Kubernetes locally or remotely using a cloud provider.
Local SetupRemote Setup
On a Mac running Docker Desktop or Rancher Desktop, make sure to give your VM plenty of CPU and memory. 16GB of memory and 6 CPUs seems to work for me.
Deploy a local K3D Kubernetes cluster with a local registry:
Prior to Istio, the common solution in the Spring ecosystem to issues of service discovery, resilience, load balancing was Spring Cloud. Spring Cloud consists of multiple projects that provide dependencies that developers add to their applications to help them deal with issues of client-side load-balancing, retries, circuit-breaking, service discovery and so on.
In spring-petclinic-istio, those dependencies have been removed. What remains as dependencies inside each service are what you'd expect to find:
Spring Boot and actuator are the foundation of modern Spring applications.
Spring Data JPA and the mysql connector for database access.
Micrometer for exposing application metrics via a Prometheus endpoint.
Micrometer-tracing for propagating trace headers through these applications.
"}]}
\ No newline at end of file
diff --git a/security/index.html b/security/index.html
index 6199243..3ecb5e7 100644
--- a/security/index.html
+++ b/security/index.html
@@ -473,6 +473,26 @@
+
+
+
+
+
+
+
Prior to Istio, the common solution in the Spring ecosystem to issues of service discovery, resilience, load balancing was Spring Cloud. Spring Cloud consists of multiple projects that provide dependencies that developers add to their applications to help them deal with issues of client-side load-balancing, retries, circuit-breaking, service discovery and so on.
+
In spring-petclinic-istio, those dependencies have been removed. What remains as dependencies inside each service are what you'd expect to find:
+
+
Spring Boot and actuator are the foundation of modern Spring applications.
+
Spring Data JPA and the mysql connector for database access.
+
Micrometer for exposing application metrics via a Prometheus endpoint.