diff --git a/404.html b/404.html index 5bc4b46..9c055d0 100644 --- a/404.html +++ b/404.html @@ -395,6 +395,26 @@ + + + + + + +
  • + + + + + Summary + + + + +
  • + + + diff --git a/api/index.html b/api/index.html index 50ef2e0..6b55ec2 100644 --- a/api/index.html +++ b/api/index.html @@ -513,6 +513,26 @@ + + + + + + +
  • + + + + + Summary + + + + +
  • + + + diff --git a/deploy/index.html b/deploy/index.html index f153ddb..688c15b 100644 --- a/deploy/index.html +++ b/deploy/index.html @@ -513,6 +513,26 @@ + + + + + + +
  • + + + + + Summary + + + + +
  • + + + diff --git a/index.html b/index.html index 45fa3d7..8e36007 100644 --- a/index.html +++ b/index.html @@ -412,6 +412,26 @@ + + + + + + +
  • + + + + + Summary + + + + +
  • + + + diff --git a/ingress/index.html b/ingress/index.html index 3f151eb..306e85e 100644 --- a/ingress/index.html +++ b/ingress/index.html @@ -381,15 +381,6 @@ - - -
  • - - - Analysis - - -
  • @@ -480,6 +471,26 @@ + + + + + + +
  • + + + + + Summary + + + + +
  • + + + @@ -531,15 +542,6 @@ - - -
  • - - - Analysis - - -
  • @@ -732,15 +734,7 @@

    Visit the app

    With the application deployed and ingress configured, we can finally view the application's user interface.

    To see the running PetClinic application, open a browser tab and visit http://$LB_IP/.

    You should see a home page. Navigate to the Vets page, then the Pet Owners page, and finally, drill down to a specific pet owner, and otherwise get acquainted with the UI.

    -

    Analysis

    -

    Prior to Istio, the common solution in the Spring ecosystem to issues of service discovery, resilience, and load balancing was Spring Cloud. Spring Cloud consists of multiple projects that provide dependencies developers add to their applications to handle client-side load balancing, retries, circuit breaking, service discovery, and so on.

    -

    In spring-petclinic-istio, those dependencies have been removed. The dependencies that remain inside each service are what you'd expect to find:

    - +

    Next, let us explore some of the API endpoints exposed by the PetClinic application.

    diff --git a/observability/index.html b/observability/index.html index e774fde..a1df958 100644 --- a/observability/index.html +++ b/observability/index.html @@ -12,6 +12,8 @@ + + @@ -516,6 +518,26 @@ + + + + + + +
  • + + + + + Summary + + + + +
  • + + + @@ -833,6 +855,22 @@

    Kiali& + + + + + + diff --git a/resilience/index.html b/resilience/index.html index fe47dc1..ea41574 100644 --- a/resilience/index.html +++ b/resilience/index.html @@ -407,6 +407,26 @@ + + + + + + +
  • + + + + + Summary + + + + +
  • + + + diff --git a/search/search_index.json b/search/search_index.json index 1449df1..0f39ead 100644 --- a/search/search_index.json +++ b/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Introduction","text":"

    A great deal of \"cruft\" accumulates inside many files in spring-petclinic-cloud: configuration for service discovery, load balancing, routing, retries, resilience, and so on.

    When you move to Istio, you get separation of concerns. It's ironic that the Spring framework's raison d'\u00eatre was separation of concerns, but its focus is inside a monolithic application, not between microservices. When you move to cloud-native applications, you end up with a tangle of concerns that Istio helps you untangle.

    And, little by little, our apps become sane again. It reminds me of one of Antoine de Saint-Exup\u00e9ry's famous quotes:

    Perfection is finally attained not when there is no longer anything to add, but when there is no longer anything to take away

    The following instructions will walk you through deploying spring-petclinic-istio either using a local Kubernetes cluster or a remote, cloud-based cluster.

    After the application is deployed, I walk you through some aspects of the application and additional benefits gained from running on the Istio platform: orthogonal configuration of traffic management and resilience concerns, stronger security and workload identity, and observability.

    Let's get started.

    "},{"location":"api/","title":"API Endpoints","text":"

    Below, we demonstrate calling endpoints on the application in either of two ways:

    1. Internally from within the Kubernetes cluster
    2. Through the \"front door\", via the ingress gateway

      The environment variable LB_IP captures the public IP address of the load balancer fronting the ingress gateway. We can access the service endpoints through that IP address.
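
      If LB_IP is not already set in your shell (the setup section's envrc-template.sh normally takes care of this), here is a minimal sketch of how it might be captured, assuming the default istio-ingressgateway service in the istio-system namespace:

      export LB_IP=$(kubectl get svc -n istio-system istio-ingressgateway \\\n  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')\necho \"$LB_IP\"\n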

    "},{"location":"api/#deploy-the-sleep-client","title":"Deploy the sleep client","text":"

    We make use of Istio's sleep sample application to facilitate the task of making calls to workloads from inside the cluster.

    The sleep deployment is a blank client Pod that can be used to send direct calls to specific microservices from within the Kubernetes cluster.

    Deploy sleep to your cluster:

    kubectl apply -f manifests/sleep.yaml\n

    Wait for the sleep pod to be ready (2/2 containers).
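
    One way to wait is sketched below; it assumes the pod carries the app=sleep label used by the Istio sample:

    kubectl wait --for=condition=Ready pod -l app=sleep --timeout=120s\n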

    "},{"location":"api/#test-individual-service-endpoints","title":"Test individual service endpoints","text":"

    We assume that you have the excellent jq utility already installed.

    "},{"location":"api/#call-the-vets-controller-endpoint","title":"Call the \"Vets\" controller endpoint","text":"InternalExternal
    kubectl exec deploy/sleep -- curl -s vets-service:8080/vets | jq\n
    curl -s http://$LB_IP/api/vet/vets | jq\n
    "},{"location":"api/#customers-service-endpoints","title":"Customers service endpoints","text":"

    Here are a couple of customers-service endpoints to test:

    InternalExternal
    kubectl exec deploy/sleep -- curl -s customers-service:8080/owners | jq\n
    kubectl exec deploy/sleep -- curl -s customers-service:8080/owners/1/pets/1 | jq\n
    curl -s http://$LB_IP/api/customer/owners | jq\n
    curl -s http://$LB_IP/api/customer/owners/1/pets/1 | jq\n

    Give the owner George Franklin a new pet, Sir Hiss (a snake):

    InternalExternal
    kubectl exec deploy/sleep -- curl -s -v \\\n  -X POST -H 'Content-Type: application/json' \\\n  customers-service:8080/owners/1/pets \\\n  -d '{ \"name\": \"Sir Hiss\", \"typeId\": 4, \"birthDate\": \"2020-01-01\" }'\n
    curl -v -X POST -H 'Content-Type: application/json' \\\n  http://$LB_IP/api/customer/owners/1/pets \\\n  -d '{ \"name\": \"Sir Hiss\", \"typeId\": 4, \"birthDate\": \"2020-01-01\" }'\n

    This can also be performed directly from the UI.

    "},{"location":"api/#the-visits-service","title":"The Visits service","text":"

    Test one of the visits-service endpoints:

    InternalExternal
    kubectl exec deploy/sleep -- curl -s visits-service:8080/pets/visits?petId=8 | jq\n
    curl -s http://$LB_IP/api/visit/pets/visits?petId=8 | jq\n
    "},{"location":"api/#petclinic-frontend","title":"PetClinic Frontend","text":"

    Call the petclinic-frontend endpoint that calls both the customers and visits services:

    InternalExternal
    kubectl exec deploy/sleep -- curl -s petclinic-frontend:8080/api/gateway/owners/6 | jq\n
    curl -s http://$LB_IP/api/gateway/owners/6 | jq\n
    "},{"location":"api/#summary","title":"Summary","text":"

    Now that we have some familiarity with the API endpoints that make up this application, let's turn our attention to configuring a small aspect of resilience: timeouts.

    "},{"location":"deploy/","title":"Build and Deploy PetClinic","text":""},{"location":"deploy/#deploy-each-microservices-backing-database","title":"Deploy each microservice's backing database","text":"

    Deployment decisions:

    "},{"location":"deploy/#preparatory-steps","title":"Preparatory steps","text":"

    We assume you already have helm installed.

    1. Add the helm repository:

      helm repo add bitnami https://charts.bitnami.com/bitnami\n
    2. Update it:

      helm repo update\n
    "},{"location":"deploy/#deploy-the-databases","title":"Deploy the databases","text":"

    Deploy the databases with a helm install command, one for each app/service:

    1. Vets:

      helm install vets-db-mysql bitnami/mysql --set auth.database=service_instance_db\n
    2. Visits:

      helm install visits-db-mysql bitnami/mysql --set auth.database=service_instance_db\n
    3. Customers:

      helm install customers-db-mysql bitnami/mysql --set auth.database=service_instance_db\n

    The databases should be up after ~ 1-2 minutes.

    Wait for the pods to be ready (2/2 containers).

    "},{"location":"deploy/#build-the-apps-create-the-docker-images-push-them-to-the-local-registry","title":"Build the apps, create the docker images, push them to the local registry","text":"

    We assume you already have maven installed locally.

    1. Compile the apps and run the tests:

      mvn clean package\n
    2. Build the images

      mvn spring-boot:build-image\n
    3. Publish the images

      ./push-images.sh\n
    "},{"location":"deploy/#deploy-the-apps","title":"Deploy the apps","text":"

    The deployment manifests are located in manifests/deploy.

    The services are vets, visits, customers, and petclinic-frontend. For each service we create a Kubernetes ServiceAccount, a Deployment, and a ClusterIP service.

    Apply the deployment manifests:

    cat manifests/deploy/*.yaml | envsubst | kubectl apply -f -\n

    The manifests reference the image registry environment variable, and so are passed through envsubst for resolution before being applied to the Kubernetes cluster.
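
    To illustrate the mechanic only (the variable name and image below are hypothetical; the real names are defined in envrc-template.sh and in the manifests themselves):

    export IMAGE_REGISTRY=registry.example.com:5000   # hypothetical value\necho 'image: ${IMAGE_REGISTRY}/customers-service:latest' | envsubst\n# prints: image: registry.example.com:5000/customers-service:latest\n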

    Wait for the pods to be ready (2/2 containers).

    Here is a simple diagnostic command that tails the logs of the customers service pod, showing that the Spring Boot application has come up and is listening on port 8080.

    kubectl logs --follow svc/customers-service\n
    "},{"location":"deploy/#test-database-connectivity","title":"Test database connectivity","text":"

    The instructions below are taken from the output of the prior helm install command.

    Connect directly to the vets-db-mysql database:

    1. Obtain the root password from the Kubernetes secret:

      bash shellfish shell
      MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default \\\n  vets-db-mysql -o jsonpath=\"{.data.mysql-root-password}\" | base64 -d)\n
      set MYSQL_ROOT_PASSWORD $(kubectl get secret --namespace default \\\n  vets-db-mysql -o jsonpath=\"{.data.mysql-root-password}\" | base64 -d)\n
    2. Create, and shell into a mysql client pod:

      kubectl run vets-db-mysql-client \\\n  --rm --tty -i --restart='Never' \\\n  --image docker.io/bitnami/mysql:8.0.36-debian-11-r2 \\\n  --namespace default \\\n  --env MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD \\\n  --command -- bash\n
    3. Use the mysql client to connect to the database:

      mysql -h vets-db-mysql.default.svc.cluster.local -uroot -p\"$MYSQL_ROOT_PASSWORD\"\n

    At the mysql prompt, select the database, list the tables, and query vet records:

    use service_instance_db;\n
    show tables;\n
    select * from vets;\n

    Exit the mysql prompt with \\q, then exit the pod with exit.

    One can similarly connect to and inspect the customers-db-mysql and visits-db-mysql databases.

    "},{"location":"deploy/#summary","title":"Summary","text":"

    At this point you should have all applications deployed and running, connected to their respective databases.

    But we cannot access the application's UI until we configure ingress, which is our next topic.

    "},{"location":"ingress/","title":"Configure Ingress","text":"

    The original project made use of the Spring Cloud Gateway project to configure ingress and routing.

    Ingress is Istio's bread and butter. Envoy provides those capabilities. And so the dependency was removed and replaced with a standard Istio Ingress Gateway.

    The Istio installation includes the Ingress Gateway component. You should be able to see the deployment in the istio-system namespace with:

    kubectl get deploy -n istio-system\n

    Ingress is configured with Istio in two parts: the gateway configuration proper, and the configuration to route requests to backing services.

    "},{"location":"ingress/#configure-the-gateway","title":"Configure the Gateway","text":"

    The configuration below creates a listener on the ingress gateway for HTTP traffic on port 80.

    gateway.yaml
    ---\napiVersion: networking.istio.io/v1beta1\nkind: Gateway\nmetadata:\n  name: main-gateway\nspec:\n  selector:\n    istio: ingressgateway\n  servers:\n    - port:\n        number: 80\n        name: http\n        protocol: HTTP\n      hosts:\n      - \"*\"\n

    Apply the gateway configuration to your cluster:

    kubectl apply -f manifests/ingress/gateway.yaml\n

    Since no routing has been configured yet for the gateway, a request to the gateway should return an HTTP 404 response:

    curl -v http://$LB_IP/\n
    "},{"location":"ingress/#configure-routing","title":"Configure routing","text":"

    The original Spring Cloud Gateway routing rules were replaced and are now captured with a standard Istio VirtualService in manifests/ingress/routes.yaml:

    routes.yaml configures routing for the Istio ingress gateway (which replaces Spring Cloud Gateway) to the application's API endpoints.

    It exposes endpoints for each of the services and, in addition, routes requests with the /api/gateway prefix to the petclinic-frontend application. In the original version, the petclinic-frontend application and the gateway \"proper\" were bundled together as a single microservice.

    routes.yaml
    ---\napiVersion: networking.istio.io/v1beta1\nkind: VirtualService\nmetadata:\n  name: petclinic-routes\nspec:\n  hosts:\n  - \"*\"\n  gateways:\n  - main-gateway\n  http:\n    - match:\n      - uri:\n          prefix: \"/api/customer/\"\n      rewrite:\n        uri: \"/\"\n      route:\n      - destination:\n          host: customers-service.default.svc.cluster.local\n          port:\n            number: 8080\n    - match:\n      - uri:\n          prefix: \"/api/visit/\"\n      rewrite:\n        uri: \"/\"\n      route:\n      - destination:\n          host: visits-service.default.svc.cluster.local\n          port:\n            number: 8080\n      timeout: 4s\n    - match:\n      - uri:\n          prefix: \"/api/vet/\"\n      rewrite:\n        uri: \"/\"\n      route:\n      - destination:\n          host: vets-service.default.svc.cluster.local\n          port:\n            number: 8080\n    - match:\n        - uri:\n            prefix: \"/api/gateway\"\n      route:\n        - destination:\n            host: petclinic-frontend.default.svc.cluster.local\n            port:\n              number: 8080\n    - route:\n        - destination:\n            host: petclinic-frontend.default.svc.cluster.local\n            port:\n              number: 8080\n

    Apply the routing rules for the gateway:

    kubectl apply -f manifests/ingress/routes.yaml\n
    "},{"location":"ingress/#visit-the-app","title":"Visit the app","text":"

    With the application deployed and ingress configured, we can finally view the application's user interface.

    To see the running PetClinic application, open a browser tab and visit http://$LB_IP/.

    You should see a home page. Navigate to the Vets page, then the Pet Owners page, and finally, drill down to a specific pet owner, and otherwise get acquainted with the UI.

    "},{"location":"ingress/#analysis","title":"Analysis","text":"

    Prior to Istio, the common solution in the Spring ecosystem to issues of service discovery, resilience, and load balancing was Spring Cloud. Spring Cloud consists of multiple projects that provide dependencies developers add to their applications to handle client-side load balancing, retries, circuit breaking, service discovery, and so on.

    In spring-petclinic-istio, those dependencies have been removed. The dependencies that remain inside each service are what you'd expect to find:

    "},{"location":"observability/","title":"Observability","text":""},{"location":"observability/#distributed-tracing","title":"Distributed Tracing","text":"

    The Istio documentation dedicates a page to guide users on how to propagate trace headers in calls between microservices, in order to support distributed tracing.

    In this version of PetClinic, all Spring Boot microservices have been configured to propagate trace headers using micrometer-tracing.

    Micrometer tracing is an elegant solution, in that we do not have to couple the trace header propagation with the application logic. Instead, it becomes a simple matter of static configuration.

    See the application.yaml resource files and the property management.tracing.baggage.remote-fields, which configures the fields to propagate.
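
    As a rough sketch only (the exact field list is an assumption; consult the repository's application.yaml files for the authoritative values), such a configuration might look like:

    management:\n  tracing:\n    baggage:\n      remote-fields:\n        - x-request-id   # hypothetical example field\n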

    To make testing this easier, Istio is configured with 100% trace sampling.

    "},{"location":"observability/#observe-distributed-traces","title":"Observe distributed traces","text":"

    In its samples directory, Istio provides sample deployment manifests for various observability tools, including Zipkin and Jaeger.

    Deploy Jaeger to your Kubernetes cluster:

    1. Navigate to the base directory of your Istio distribution:

      cd istio-1.20.2\n
    2. Deploy jaeger:

      kubectl apply -f samples/addons/jaeger.yaml\n
    3. Wait for the Jaeger pod to be ready:

      kubectl get pod -n istio-system\n

    Next, let us turn our attention to calling an endpoint that will generate a trace capture, and observe it in the Jaeger dashboard:

    1. Call the petclinic-frontend endpoint that calls both the customers and visits services. Feel free to make multiple requests to generate multiple traces.

      curl -s http://$LB_IP/api/gateway/owners/6 | jq\n
    2. Launch the jaeger dashboard:

      istioctl dashboard jaeger\n
    3. In Jaeger, search for traces involving the services petclinic-frontend, customers, and visits.

    You should see one or more traces, each with six spans. Click on any one of them to display the full end-to-end request-response flow across all three services.

    Close the Jaeger dashboard.

    "},{"location":"observability/#exposing-metrics","title":"Exposing metrics","text":"

    Istio has built-in support for Prometheus as a mechanism for metrics collection.

    Each Spring Boot application is configured with a micrometer dependency to expose a scrape endpoint for Prometheus to collect metrics.

    Call the scrape endpoint and inspect the metrics exposed directly by the Spring Boot application:

    kubectl exec deploy/customers-v1 -c istio-proxy -- curl -s localhost:8080/actuator/prometheus\n

    Separately, Envoy collects a variety of metrics, often referred to as RED metrics: Requests, Errors, and Durations.

    Inspect the metrics collected and exposed by the Envoy sidecar:

    kubectl exec deploy/customers-v1 -c istio-proxy -- curl -s localhost:15090/stats/prometheus\n

    One common metric to note is the counter istio_requests_total:

    kubectl exec deploy/customers-v1 -c istio-proxy -- \\\n  curl -s localhost:15090/stats/prometheus | grep istio_requests_total\n

    Both the application metrics and envoy's metrics are aggregated (merged) and exposed on port 15020:

    kubectl exec deploy/customers-v1 -c istio-proxy -- curl -s localhost:15020/stats/prometheus\n

    Istio is able to aggregate both scrape endpoints thanks to annotations placed in the pod template specification for each application, which communicate the URL of the Prometheus scrape endpoint.

    For example, here are the prometheus annotations for the customers service.
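
    A sketch of what such annotations typically look like in a pod template (the port and path below are assumptions based on the Spring Boot actuator scrape endpoint shown above):

    template:\n  metadata:\n    annotations:\n      prometheus.io/scrape: \"true\"\n      prometheus.io/port: \"8080\"\n      prometheus.io/path: \"/actuator/prometheus\"\n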

    For more information on metrics merging and Prometheus, see the Istio documentation.

    "},{"location":"observability/#send-requests-to-the-application","title":"Send requests to the application","text":"

    To send a steady stream of requests through the petclinic-frontend application, we use siege. Feel free to use other tools, or maybe a simple bash while loop.
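
    For example, a minimal while-loop alternative (assuming LB_IP is set as in the earlier sections) might look like:

    while true; do\n  curl -s -o /dev/null http://$LB_IP/api/gateway/owners/6\n  sleep 1\ndone\n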

    Run the following siege command to send requests to various endpoints in our application:

    siege --concurrent=6 --delay=2 --file=./urls.txt\n

    Leave the siege command running.

    Open a separate terminal in which to run subsequent commands.

    "},{"location":"observability/#the-prometheus-dashboard","title":"The Prometheus dashboard","text":"

    Deploy Prometheus to your Kubernetes cluster:

    kubectl apply -f samples/addons/prometheus.yaml\n

    The latest version of Spring Boot (3.2) takes advantage of a relatively recent feature of Prometheus known as \"exemplars.\" The current version of Istio uses an older version of Prometheus (2.41) that does not yet support exemplars.

    After deploying Prometheus, patch the Prometheus deployment to use the latest version of the image:

    kubectl patch deploy -n istio-system prometheus --patch-file=manifests/config/prom-patch.yaml\n

    Launch the Prometheus dashboard:

    istioctl dash prometheus\n

    Here are some PromQL queries you can try out, that will fetch metrics from Prometheus' metrics store:

    1. The number of requests made by petclinic-frontend to the customers service:

      istio_requests_total{source_app=\"petclinic-frontend\",destination_app=\"customers-service\",reporter=\"source\"}\n
    2. A business metric exposed by the application proper: the number of calls to the findPet method:

      petclinic_pet_seconds_count{method=\"findPet\"}\n
    "},{"location":"observability/#istios-grafana-metrics-dashboards","title":"Istio's Grafana metrics dashboards","text":"

    Istio provides standard service mesh dashboards, based on the standard metrics collected by Envoy and sent to Prometheus.

    Deploy Grafana:

    kubectl apply -f samples/addons/grafana.yaml\n

    Launch the Grafana dashboard:

    istioctl dash grafana\n

    Navigate to the dashboards section; you will see an Istio folder.

    Select the Istio service dashboard.

    Review the Istio Service Dashboards for the services petclinic-frontend, vets, customers, and visits.

    The dashboard exposes metrics such as the client request volume, client success rate, and client request durations:

    "},{"location":"observability/#petclinic-custom-grafana-dashboard","title":"PetClinic custom Grafana dashboard","text":"

    The version of PetClinic from which this version derives already contained a custom Grafana dashboard.

    To import the dashboard into Grafana:

    1. Navigate to \"Dashboards\"
    2. Click the \"New\" pulldown button, and select \"Import\"
    3. Select \"Upload dashboard JSON file\", and select the file grafana-petclinic-dashboard.json from the repository's base directory.
    4. Select \"Prometheus\" as the data source
    5. Finally, click \"Import\"

    The top two panels, showing request latencies and request volumes, are now technically redundant: both are subsumed by the standard Istio dashboards.

    Below those panels are custom application metrics, such as the number of owners, pets, and visits created or updated.

    Create a new Owner, give an existing owner a new pet, or add a visit for a pet, and watch those counters increment in Grafana.

    "},{"location":"observability/#kiali","title":"Kiali","text":"

    Kiali is a bespoke \"console\" for Istio Service Mesh. One of Kiali's standout features is its visualization of requests making their way through the call graph.

    1. Cancel the currently-running siege command. Relaunch siege, but with a different set of target endpoints:

      siege --concurrent=6 --delay=2 --file=./frontend-urls.txt\n
    2. Deploy Kiali:

      kubectl apply -f samples/addons/kiali.yaml\n
    3. Launch the Kiali dashboard:

      istioctl dashboard kiali\n

      Select the Graph view and the default namespace.

    The flow of requests through the application's call graph will be rendered.

    "},{"location":"resilience/","title":"Resilience","text":"

    The original Spring Cloud version of PetClinic used Resilience4j to configure calls to the visits service with a timeout of 4 seconds, and a fallback to return an empty list of visits in the event that the request to get visits timed out.

    In this version of the application, the Spring Cloud dependencies were removed. We can replace this configuration with an Istio Custom Resource.

    The file timeouts.yaml configures the equivalent 4s timeout on requests to the visits service, replacing the previous Resilience4j-based implementation.

    timeouts.yaml
    ---\napiVersion: networking.istio.io/v1beta1\nkind: VirtualService\nmetadata:\n  name: visits\nspec:\n  hosts:\n  - visits-service.default.svc.cluster.local\n  http:\n  - route:\n      - destination:\n          host: visits-service.default.svc.cluster.local\n    timeout: 4s\n

    Apply the timeout configuration to your cluster:

    kubectl apply -f manifests/config/timeouts.yaml\n

    The fallback logic in PetClinicController.getOwnerDetails was retrofitted to detect the Gateway Timeout (504) response code instead of using a Resilience4j API.

    To test this feature, the environment variable DELAY_MILLIS was introduced into the visits service to insert a delay when fetching visits.
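
    In the deployment manifest this is an ordinary container environment variable; a minimal sketch of the relevant fragment might be:

    env:\n  - name: DELAY_MILLIS\n    value: \"0\"   # raise to e.g. \"5000\" to inject a 5-second delay\n

    Alternatively, kubectl set env deploy/visits-v1 DELAY_MILLIS=5000 achieves the same effect as the manual edit described below.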

    Here is how to test the behavior:

    1. Call visits-service directly:

      bash shellfish shell
      kubectl exec deploy/sleep -- curl -s visits-service:8080/pets/visits?petId=8 | jq\n
      kubectl exec deploy/sleep -- curl -s visits-service:8080/pets/visits\\?petId=8 | jq\n

      Observe the call succeed and return a list of visits for this particular pet.

    2. Call the petclinic-frontend endpoint, and note that for each pet, we see a list of visits:

      kubectl exec deploy/sleep -- curl -s petclinic-frontend:8080/api/gateway/owners/6 | jq\n
    3. Edit the deployment manifest for the visits-service so that the environment variable DELAY_MILLIS is set to the value \"5000\" (which is 5 seconds). One way to do this is with the following command (then save and exit):

      kubectl edit deploy visits-v1\n

      Wait until the new pod has rolled out and become ready.

    4. Once the new visits-service pod reaches Ready status, make the same call again:

      bash shellfish shell
      kubectl exec deploy/sleep -- curl -v visits-service:8080/pets/visits?petId=8 | jq\n
      kubectl exec deploy/sleep -- curl -v visits-service:8080/pets/visits\\?petId=8 | jq\n

      Observe the 504 (Gateway timeout) response this time around (because it exceeds the 4-second timeout).

    5. Call the petclinic-frontend endpoint once more, and note that for each pet, the list of visits is empty:

      kubectl exec deploy/sleep -- curl -s petclinic-frontend:8080/api/gateway/owners/6 | jq\n

      That is, the call succeeds, the timeout is caught, and the fallback empty list of visits is returned in its place.

    6. Tail the logs of petclinic-frontend and observe a log message indicating the fallback was triggered.

      kubectl logs --follow svc/petclinic-frontend\n

    Restore the original behavior with no delay: edit the visits-v1 deployment again and set the environment variable value to \"0\".

    Let us next turn our attention to security-related configuration.

    "},{"location":"security/","title":"Security","text":""},{"location":"security/#leverage-workload-identity","title":"Leverage workload identity","text":"

    Workloads in Istio are assigned a SPIFFE identity.

    Authorization policies can be applied that allow or deny access to a service as a function of that identity.

    For example, we can restrict access to each database exclusively to its corresponding service, i.e.:

    The above policy is specified in the file authorization-policies.yaml:

    authorization-policies.yaml
    ---\napiVersion: security.istio.io/v1beta1\nkind: AuthorizationPolicy\nmetadata:\n  name: vets-db-allow-vets-service\nspec:\n  selector:\n    matchLabels:\n      app.kubernetes.io/instance: vets-db-mysql\n  action: ALLOW\n  rules:\n  - from:\n    - source:\n        principals: [\"cluster.local/ns/default/sa/vets-service\"]\n    to:\n    - operation:\n        ports: [\"3306\"]\n---\napiVersion: security.istio.io/v1beta1\nkind: AuthorizationPolicy\nmetadata:\n  name: customers-db-allow-customers-service\nspec:\n  selector:\n    matchLabels:\n      app.kubernetes.io/instance: customers-db-mysql\n  action: ALLOW\n  rules:\n    - from:\n        - source:\n            principals: [\"cluster.local/ns/default/sa/customers-service\"]\n      to:\n        - operation:\n            ports: [\"3306\"]\n---\napiVersion: security.istio.io/v1beta1\nkind: AuthorizationPolicy\nmetadata:\n  name: visits-db-allow-visits-service\nspec:\n  selector:\n    matchLabels:\n      app.kubernetes.io/instance: visits-db-mysql\n  action: ALLOW\n  rules:\n    - from:\n        - source:\n            principals: [\"cluster.local/ns/default/sa/visits-service\"]\n      to:\n        - operation:\n            ports: [\"3306\"]\n

    The main aspects of each authorization policy are:

    1. The selector identifies the workload to apply the policy to
    2. The action in this case is to Allow requests that match the given rules
    3. The rules section specifies the source principal, i.e., the workload identity.
    4. The to section applies the policy to requests on port 3306, the port that mysqld listens on.
    "},{"location":"security/#exercise","title":"Exercise","text":"
    1. Use the previous \"Test database connectivity\" instructions to create a client pod and use it to connect to the \"vets\" database. This operation should succeed: you should be able to see the \"service_instance_db\" database, list its tables, and query them.

    2. Apply the authorization policies:

      kubectl apply -f manifests/config/authorization-policies.yaml\n
    3. Attempt once more to create a client pod to connect to the \"vets\" database. This time the operation will fail. That's because only the vets service is now allowed to connect to the database.

    4. Verify that the application itself continues to function, because all database queries are performed by each database's associated service.

    "},{"location":"security/#summary","title":"Summary","text":"

    One problem in the enterprise is enforcing access to data via microservices. Giving another team direct access to data is a well-known anti-pattern, as it couples multiple applications to a specific storage technology, a specific database schema, one that cannot be allowed to evolve without impacting everyone.

    With the aid of Istio and workload identity, we can make sure that the manner in which data is stored by a microservice is an entirely internal concern, one that can be modified at a later time, perhaps to use a different storage backend, or perhaps simply to allow for the evolution of the schema without \"breaking all the clients\".

    After traffic management, resilience, and security, it is time to discuss the other important facet that service meshes help with: Observability.

    "},{"location":"setup/","title":"Setup","text":"

    Begin by cloning a local copy of the Spring PetClinic Istio repository from GitHub.

    "},{"location":"setup/#kubernetes","title":"Kubernetes","text":"

    Select whether you wish to provision Kubernetes locally or remotely using a cloud provider.

    Local SetupRemote Setup

    On a Mac running Docker Desktop or Rancher Desktop, make sure to give your VM plenty of CPU and memory. 16GB of memory and 6 CPUs seems to work for me.

    Deploy a local K3D Kubernetes cluster with a local registry:

    k3d cluster create my-istio-cluster \\\n  --api-port 6443 \\\n  --k3s-arg \"--disable=traefik@server:0\" \\\n  --port 80:80@loadbalancer \\\n  --registry-create my-cluster-registry:0.0.0.0:5010\n

    Above, we:

    Provision a k8s cluster in the cloud of your choice. For example, on GCP:

    gcloud container clusters create my-istio-cluster \\\n  --cluster-version latest \\\n  --machine-type \"e2-standard-2\" \\\n  --num-nodes \"3\" \\\n  --network \"default\"\n
    "},{"location":"setup/#environment-variables","title":"Environment variables","text":"

    Use envrc-template.sh as the basis for configuring environment variables.

    Be sure to:

    1. Set the local variable local_setup to either \"true\" or \"false\", depending on your choice of a local or remote cluster.
    2. If using a remote setup, set the value of PUSH_IMAGE_REGISTRY to the value of your image registry URL.

    I highly recommend using direnv, a convenient way to associate environment variables with a specific directory.

    If you choose to use direnv, then the variables can be automatically set by renaming the file to .envrc and running the command direnv allow.
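
    For example, from the repository's base directory (assuming you have already filled in the template's values):

    mv envrc-template.sh .envrc\ndirenv allow\n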

    "},{"location":"setup/#istio","title":"Istio","text":"
    1. Follow the Istio documentation's instructions to download Istio.

    2. After you have added the istioctl CLI to your PATH, run the following installation command:

      istioctl install -f manifests/istio-install-manifest.yaml\n

    The above-referenced configuration manifest configures certain facets of the mesh, namely:

    1. Setting trace sampling at 100%, for ease of obtaining distributed traces
    2. Deploying sidecars (envoy proxies) not only alongside workloads, but also in front of mysql databases.
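
    As a rough sketch of the first item only (the real file is manifests/istio-install-manifest.yaml; the structure below is an assumption), the trace-sampling portion of such an IstioOperator manifest might look like:

    apiVersion: install.istio.io/v1alpha1\nkind: IstioOperator\nspec:\n  meshConfig:\n    defaultConfig:\n      tracing:\n        sampling: 100.0\n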

    Once Istio is installed, feel free to verify the installation with:

    istioctl verify-install\n

    In the next section, you will work on deploying the microservices to the default namespace.

    As a final step, label the default namespace for sidecar injection with:

    kubectl label ns default istio-injection=enabled\n
    "}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Introduction","text":"

    A great deal of \"cruft\" accumulates inside many files in spring-petclinic-cloud: configuration for service discovery, load balancing, routing, retries, resilience, and so on.

    When you move to Istio, you get separation of concerns. It's ironic that the Spring framework's raison d'\u00eatre was separation of concerns, but its focus is inside a monolithic application, not between microservices. When you move to cloud-native applications, you end up with a tangle of concerns that Istio helps you untangle.

    And, little by little, our apps become sane again. It reminds me of one of Antoine de Saint-Exup\u00e9ry's famous quotes:

    Perfection is finally attained not when there is no longer anything to add, but when there is no longer anything to take away

    The following instructions will walk you through deploying spring-petclinic-istio either using a local Kubernetes cluster or a remote, cloud-based cluster.

    After the application is deployed, I walk you through some aspects of the application and additional benefits gained from running on the Istio platform: orthogonal configuration of traffic management and resilience concerns, stronger security and workload identity, and observability.

    Let's get started.

    "},{"location":"api/","title":"API Endpoints","text":"

    Below, we demonstrate calling endpoints on the application in either of two ways:

    1. Internally from within the Kubernetes cluster
    2. Through the \"front door\", via the ingress gateway

      The environment variable LB_IP captures the public IP address of the load balancer fronting the ingress gateway. We can access the service endpoints through that IP address.
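
      If LB_IP is not already set in your shell (the setup section's envrc-template.sh normally takes care of this), here is a minimal sketch of how it might be captured, assuming the default istio-ingressgateway service in the istio-system namespace:

      export LB_IP=$(kubectl get svc -n istio-system istio-ingressgateway \\\n  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')\necho \"$LB_IP\"\n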

    "},{"location":"api/#deploy-the-sleep-client","title":"Deploy the sleep client","text":"

    We make use of Istio's sleep sample application to facilitate the task of making calls to workloads from inside the cluster.

    The sleep deployment is a blank client Pod that can be used to send direct calls to specific microservices from within the Kubernetes cluster.

    Deploy sleep to your cluster:

    kubectl apply -f manifests/sleep.yaml\n

    Wait for the sleep pod to be ready (2/2 containers).
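
    One way to wait is sketched below; it assumes the pod carries the app=sleep label used by the Istio sample:

    kubectl wait --for=condition=Ready pod -l app=sleep --timeout=120s\n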

    "},{"location":"api/#test-individual-service-endpoints","title":"Test individual service endpoints","text":"

    We assume that you have the excellent jq utility already installed.

    "},{"location":"api/#call-the-vets-controller-endpoint","title":"Call the \"Vets\" controller endpoint","text":"InternalExternal
    kubectl exec deploy/sleep -- curl -s vets-service:8080/vets | jq\n
    curl -s http://$LB_IP/api/vet/vets | jq\n
    "},{"location":"api/#customers-service-endpoints","title":"Customers service endpoints","text":"

    Here are a couple of customers-service endpoints to test:

    InternalExternal
    kubectl exec deploy/sleep -- curl -s customers-service:8080/owners | jq\n
    kubectl exec deploy/sleep -- curl -s customers-service:8080/owners/1/pets/1 | jq\n
    curl -s http://$LB_IP/api/customer/owners | jq\n
    curl -s http://$LB_IP/api/customer/owners/1/pets/1 | jq\n

    Give the owner George Franklin a new pet, Sir Hiss (a snake):

    InternalExternal
    kubectl exec deploy/sleep -- curl -s -v \\\n  -X POST -H 'Content-Type: application/json' \\\n  customers-service:8080/owners/1/pets \\\n  -d '{ \"name\": \"Sir Hiss\", \"typeId\": 4, \"birthDate\": \"2020-01-01\" }'\n
    curl -v -X POST -H 'Content-Type: application/json' \\\n  http://$LB_IP/api/customer/owners/1/pets \\\n  -d '{ \"name\": \"Sir Hiss\", \"typeId\": 4, \"birthDate\": \"2020-01-01\" }'\n

    This can also be performed directly from the UI.

    "},{"location":"api/#the-visits-service","title":"The Visits service","text":"

    Test one of the visits-service endpoints:

    InternalExternal
    kubectl exec deploy/sleep -- curl -s visits-service:8080/pets/visits?petId=8 | jq\n
    curl -s http://$LB_IP/api/visit/pets/visits?petId=8 | jq\n
    "},{"location":"api/#petclinic-frontend","title":"PetClinic Frontend","text":"

    Call the petclinic-frontend endpoint that calls both the customers and visits services:

    InternalExternal
    kubectl exec deploy/sleep -- curl -s petclinic-frontend:8080/api/gateway/owners/6 | jq\n
    curl -s http://$LB_IP/api/gateway/owners/6 | jq\n
    "},{"location":"api/#summary","title":"Summary","text":"

    Now that we have some familiarity with the API endpoints that make up this application, let's turn our attention to configuring a small aspect of resilience: timeouts.

    "},{"location":"deploy/","title":"Build and Deploy PetClinic","text":""},{"location":"deploy/#deploy-each-microservices-backing-database","title":"Deploy each microservice's backing database","text":"

    Deployment decisions:

    "},{"location":"deploy/#preparatory-steps","title":"Preparatory steps","text":"

    We assume you already have helm installed.

    1. Add the helm repository:

      helm repo add bitnami https://charts.bitnami.com/bitnami\n
    2. Update it:

      helm repo update\n
    "},{"location":"deploy/#deploy-the-databases","title":"Deploy the databases","text":"

    Deploy the databases with a helm install command, one for each app/service:

    1. Vets:

      helm install vets-db-mysql bitnami/mysql --set auth.database=service_instance_db\n
    2. Visits:

      helm install visits-db-mysql bitnami/mysql --set auth.database=service_instance_db\n
    3. Customers:

      helm install customers-db-mysql bitnami/mysql --set auth.database=service_instance_db\n

    The databases should be up after ~ 1-2 minutes.

    Wait for the pods to be ready (2/2 containers).

    "},{"location":"deploy/#build-the-apps-create-the-docker-images-push-them-to-the-local-registry","title":"Build the apps, create the docker images, push them to the local registry","text":"

    We assume you already have maven installed locally.

    1. Compile the apps and run the tests:

      mvn clean package\n
    2. Build the images

      mvn spring-boot:build-image\n
    3. Publish the images

      ./push-images.sh\n
    "},{"location":"deploy/#deploy-the-apps","title":"Deploy the apps","text":"

    The deployment manifests are located in manifests/deploy.

    The services are vets, visits, customers, and petclinic-frontend. For each service we create a Kubernetes ServiceAccount, a Deployment, and a ClusterIP service.

    Apply the deployment manifests:

    cat manifests/deploy/*.yaml | envsubst | kubectl apply -f -\n

    The manifests reference the image registry environment variable, and so are passed through envsubst for resolution before being applied to the Kubernetes cluster.
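
    To illustrate the mechanic only (the variable name and image below are hypothetical; the real names are defined in envrc-template.sh and in the manifests themselves):

    export IMAGE_REGISTRY=registry.example.com:5000   # hypothetical value\necho 'image: ${IMAGE_REGISTRY}/customers-service:latest' | envsubst\n# prints: image: registry.example.com:5000/customers-service:latest\n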

    Wait for the pods to be ready (2/2 containers).

    Here is a simple diagnostic command that tails the logs of the customers service pod, showing that the Spring Boot application has come up and is listening on port 8080.

    kubectl logs --follow svc/customers-service\n
    "},{"location":"deploy/#test-database-connectivity","title":"Test database connectivity","text":"

    The instructions below are taken from the output of the prior helm install command.

    Connect directly to the vets-db-mysql database:

    1. Obtain the root password from the Kubernetes secret:

      bash shellfish shell
      MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default \\\n  vets-db-mysql -o jsonpath=\"{.data.mysql-root-password}\" | base64 -d)\n
      set MYSQL_ROOT_PASSWORD $(kubectl get secret --namespace default \\\n  vets-db-mysql -o jsonpath=\"{.data.mysql-root-password}\" | base64 -d)\n
    2. Create, and shell into a mysql client pod:

      kubectl run vets-db-mysql-client \\\n  --rm --tty -i --restart='Never' \\\n  --image docker.io/bitnami/mysql:8.0.36-debian-11-r2 \\\n  --namespace default \\\n  --env MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD \\\n  --command -- bash\n
    3. Use the mysql client to connect to the database:

      mysql -h vets-db-mysql.default.svc.cluster.local -uroot -p\"$MYSQL_ROOT_PASSWORD\"\n

    At the mysql prompt, select the database, list the tables, and query vet records:

    use service_instance_db;\n
    show tables;\n
    select * from vets;\n

    Exit the mysql prompt with \\q, then exit the pod with exit.

    One can similarly connect to and inspect the customers-db-mysql and visits-db-mysql databases.

    "},{"location":"deploy/#summary","title":"Summary","text":"

    At this point you should have all applications deployed and running, connected to their respective databases.

    But we cannot access the application's UI until we configure ingress, which is our next topic.

    "},{"location":"ingress/","title":"Configure Ingress","text":"

    The original project made use of the Spring Cloud Gateway project to configure ingress and routing.

    Ingress is Istio's bread and butter. Envoy provides those capabilities. And so the dependency was removed and replaced with a standard Istio Ingress Gateway.

    The Istio installation includes the Ingress Gateway component. You should be able to see the deployment in the istio-system namespace with:

    kubectl get deploy -n istio-system\n

    Ingress is configured with Istio in two parts: the gateway configuration proper, and the configuration to route requests to backing services.

    "},{"location":"ingress/#configure-the-gateway","title":"Configure the Gateway","text":"

    The configuration below creates a listener on the ingress gateway for HTTP traffic on port 80.

    gateway.yaml
    ---\napiVersion: networking.istio.io/v1beta1\nkind: Gateway\nmetadata:\n  name: main-gateway\nspec:\n  selector:\n    istio: ingressgateway\n  servers:\n    - port:\n        number: 80\n        name: http\n        protocol: HTTP\n      hosts:\n      - \"*\"\n

    Apply the gateway configuration to your cluster:

    kubectl apply -f manifests/ingress/gateway.yaml\n

    Since no routing has been configured yet for the gateway, a request to the gateway should return an HTTP 404 response:

    curl -v http://$LB_IP/\n
    "},{"location":"ingress/#configure-routing","title":"Configure routing","text":"

    The original Spring Cloud Gateway routing rules were replaced and are now captured with a standard Istio VirtualService in manifests/ingress/routes.yaml:

    routes.yaml configures routing for the Istio ingress gateway (which replaces Spring Cloud Gateway) to the application's API endpoints.

    It exposes endpoints for each of the services and, in addition, routes requests with the /api/gateway prefix to the petclinic-frontend application. In the original version, the petclinic-frontend application and the gateway \"proper\" were bundled together as a single microservice.

    routes.yaml
    ---\napiVersion: networking.istio.io/v1beta1\nkind: VirtualService\nmetadata:\n  name: petclinic-routes\nspec:\n  hosts:\n  - \"*\"\n  gateways:\n  - main-gateway\n  http:\n    - match:\n      - uri:\n          prefix: \"/api/customer/\"\n      rewrite:\n        uri: \"/\"\n      route:\n      - destination:\n          host: customers-service.default.svc.cluster.local\n          port:\n            number: 8080\n    - match:\n      - uri:\n          prefix: \"/api/visit/\"\n      rewrite:\n        uri: \"/\"\n      route:\n      - destination:\n          host: visits-service.default.svc.cluster.local\n          port:\n            number: 8080\n      timeout: 4s\n    - match:\n      - uri:\n          prefix: \"/api/vet/\"\n      rewrite:\n        uri: \"/\"\n      route:\n      - destination:\n          host: vets-service.default.svc.cluster.local\n          port:\n            number: 8080\n    - match:\n        - uri:\n            prefix: \"/api/gateway\"\n      route:\n        - destination:\n            host: petclinic-frontend.default.svc.cluster.local\n            port:\n              number: 8080\n    - route:\n        - destination:\n            host: petclinic-frontend.default.svc.cluster.local\n            port:\n              number: 8080\n

    Apply the routing rules for the gateway:

    kubectl apply -f manifests/ingress/routes.yaml\n
    "},{"location":"ingress/#visit-the-app","title":"Visit the app","text":"

    With the application deployed and ingress configured, we can finally view the application's user interface.

    To see the running PetClinic application, open a browser tab and visit http://$LB_IP/.

    You should see a home page. Navigate to the Vets page, then the Pet Owners page, and finally, drill down to a specific pet owner, and otherwise get acquainted with the UI.

    Next, let us explore some of the API endpoints exposed by the PetClinic application.

    "},{"location":"observability/","title":"Observability","text":""},{"location":"observability/#distributed-tracing","title":"Distributed Tracing","text":"

    The Istio documentation dedicates a page to guide users on how to propagate trace headers in calls between microservices, in order to support distributed tracing.

    In this version of PetClinic, all Spring Boot microservices have been configured to propagate trace headers using micrometer-tracing.

    Micrometer tracing is an elegant solution, in that we do not have to couple the trace header propagation with the application logic. Instead, it becomes a simple matter of static configuration.

    See the application.yaml resource files and the property management.tracing.baggage.remote-fields, which configures the fields to propagate.
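
    As a rough sketch only (the exact field list is an assumption; consult the repository's application.yaml files for the authoritative values), such a configuration might look like:

    management:\n  tracing:\n    baggage:\n      remote-fields:\n        - x-request-id   # hypothetical example field\n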

    To make testing this easier, Istio is configured with 100% trace sampling.

    "},{"location":"observability/#observe-distributed-traces","title":"Observe distributed traces","text":"

    In its samples directory, Istio provides sample deployment manifests for various observability tools, including Zipkin and Jaeger.

    Deploy Jaeger to your Kubernetes cluster:

    1. Navigate to the base directory of your Istio distribution:

      cd istio-1.20.2\n
    2. Deploy jaeger:

      kubectl apply -f samples/addons/jaeger.yaml\n
    3. Wait for the Jaeger pod to be ready:

      kubectl get pod -n istio-system\n

    Next, let us turn our attention to calling an endpoint that will generate a trace capture, and observe it in the Jaeger dashboard:

    1. Call the petclinic-frontend endpoint that calls both the customers and visits services. Feel free to make multiple requests to generate multiple traces.

      curl -s http://$LB_IP/api/gateway/owners/6 | jq\n
    2. Launch the jaeger dashboard:

      istioctl dashboard jaeger\n
    3. In Jaeger, search for traces involving the services petclinic-frontend, customers, and visits.

    You should see one or more traces, each with six spans. Click on any one of them to display the full end-to-end request-response flow across all three services.

    Close the Jaeger dashboard.

    "},{"location":"observability/#exposing-metrics","title":"Exposing metrics","text":"

    Istio has built-in support for Prometheus as a mechanism for metrics collection.

    Each Spring Boot application is configured with a micrometer dependency to expose a scrape endpoint for Prometheus to collect metrics.

    Call the scrape endpoint and inspect the metrics exposed directly by the Spring Boot application:

    kubectl exec deploy/customers-v1 -c istio-proxy -- curl -s localhost:8080/actuator/prometheus\n

    Separately, Envoy collects a variety of metrics, often referred to as RED metrics: Requests, Errors, and Durations.

    Inspect the metrics collected and exposed by the Envoy sidecar:

    kubectl exec deploy/customers-v1 -c istio-proxy -- curl -s localhost:15090/stats/prometheus\n

    One common metric to note is the counter istio_requests_total:

    kubectl exec deploy/customers-v1 -c istio-proxy -- \\\n  curl -s localhost:15090/stats/prometheus | grep istio_requests_total\n

    Both the application metrics and envoy's metrics are aggregated (merged) and exposed on port 15020:

    kubectl exec deploy/customers-v1 -c istio-proxy -- curl -s localhost:15020/stats/prometheus\n

    Istio is able to aggregate both scrape endpoints thanks to annotations placed in the pod template specification for each application, which communicate the URL of the Prometheus scrape endpoint.

    For example, here are the prometheus annotations for the customers service.
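
    A sketch of what such annotations typically look like in a pod template (the port and path below are assumptions based on the Spring Boot actuator scrape endpoint shown above):

    template:\n  metadata:\n    annotations:\n      prometheus.io/scrape: \"true\"\n      prometheus.io/port: \"8080\"\n      prometheus.io/path: \"/actuator/prometheus\"\n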

    For more information on metrics merging and Prometheus, see the Istio documentation.

    "},{"location":"observability/#send-requests-to-the-application","title":"Send requests to the application","text":"

    To send a steady stream of requests through the petclinic-frontend application, we use siege. Feel free to use other tools, or maybe a simple bash while loop.
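
    For example, a minimal while-loop alternative (assuming LB_IP is set as in the earlier sections) might look like:

    while true; do\n  curl -s -o /dev/null http://$LB_IP/api/gateway/owners/6\n  sleep 1\ndone\n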

    Run the following siege command to send requests to various endpoints in our application:

    siege --concurrent=6 --delay=2 --file=./urls.txt\n

    Leave the siege command running.

    Open a separate terminal in which to run subsequent commands.

    "},{"location":"observability/#the-prometheus-dashboard","title":"The Prometheus dashboard","text":"

    Deploy Prometheus to your Kubernetes cluster:

    kubectl apply -f samples/addons/prometheus.yaml\n

    The latest version of Spring Boot (3.2) takes advantage of a relatively recent feature of Prometheus known as \"exemplars.\" The current version of Istio uses an older version of Prometheus (2.41) that does not yet support exemplars.

    After deploying Prometheus, patch the Prometheus deployment to use the latest version of the image:

    kubectl patch deploy -n istio-system prometheus --patch-file=manifests/config/prom-patch.yaml\n

    Launch the Prometheus dashboard:

    istioctl dash prometheus\n

    Here are some PromQL queries you can try out, that will fetch metrics from Prometheus' metrics store:

    1. The number of requests made by petclinic-frontend to the customers service:

      istio_requests_total{source_app=\"petclinic-frontend\",destination_app=\"customers-service\",reporter=\"source\"}\n
    2. A business metric exposed by the application proper: the number of calls to the findPet method:

      petclinic_pet_seconds_count{method=\"findPet\"}\n
    "},{"location":"observability/#istios-grafana-metrics-dashboards","title":"Istio's Grafana metrics dashboards","text":"

    Istio provides standard service mesh dashboards, based on the standard metrics collected by Envoy and sent to Prometheus.

    Deploy Grafana:

    kubectl apply -f samples/addons/grafana.yaml\n

    Launch the Grafana dashboard:

    istioctl dash grafana\n

    Navigate to the dashboards section; you will see an Istio folder.

    Select the Istio service dashboard.

    Review the Istio Service Dashboards for the services petclinic-frontend, vets, customers, and visits.

    The dashboard exposes metrics such as the client request volume, client success rate, and client request durations:

    "},{"location":"observability/#petclinic-custom-grafana-dashboard","title":"PetClinic custom Grafana dashboard","text":"

    The version of PetClinic from which this version derives already contained a custom Grafana dashboard.

    To import the dashboard into Grafana:

    1. Navigate to \"Dashboards\"
    2. Click the \"New\" pulldown button, and select \"Import\"
    3. Select \"Upload dashboard JSON file\", and select the file grafana-petclinic-dashboard.json from the repository's base directory.
    4. Select \"Prometheus\" as the data source
    5. Finally, click \"Import\"

    The top two panels, showing request latencies and request volumes, are now technically redundant: both are subsumed by the standard Istio dashboards.

    Below those panels are custom application metrics, such as the number of owners, pets, and visits created or updated.

    Create a new Owner, give an existing owner a new pet, or add a visit for a pet, and watch those counters increment in Grafana.

    "},{"location":"observability/#kiali","title":"Kiali","text":"

    Kiali is a bespoke \"console\" for Istio Service Mesh. One of Kiali's standout features is its visualization of requests making their way through the call graph.

    1. Cancel the currently-running siege command. Relaunch siege, but with a different set of target endpoints:

      siege --concurrent=6 --delay=2 --file=./frontend-urls.txt\n
    2. Deploy Kiali:

      kubectl apply -f samples/addons/kiali.yaml\n
    3. Launch the Kiali dashboard:

      istioctl dashboard kiali\n

      Select the Graph view and the default namespace.

    The flow of requests through the application's call graph will be rendered.

    "},{"location":"resilience/","title":"Resilience","text":"

    The original Spring Cloud version of PetClinic used Resilience4j to configure calls to the visits service with a timeout of 4 seconds, and a fallback to return an empty list of visits in the event that the request to get visits timed out.

    In this version of the application, the Spring Cloud dependencies were removed. We can replace this configuration with an Istio Custom Resource.

    The file timeouts.yaml configures the equivalent 4s timeout on requests to the visits service, replacing the previous Resilience4j-based implementation.

    timeouts.yaml
    ---\napiVersion: networking.istio.io/v1beta1\nkind: VirtualService\nmetadata:\n  name: visits\nspec:\n  hosts:\n  - visits-service.default.svc.cluster.local\n  http:\n  - route:\n      - destination:\n          host: visits-service.default.svc.cluster.local\n    timeout: 4s\n

    Apply the timeout configuration to your cluster:

    kubectl apply -f manifests/config/timeouts.yaml\n

    The fallback logic in PetClinicController.getOwnerDetails was retrofitted to detect the Gateway Timeout (504) response code instead of using a Resilience4j API.

    To test this feature, the environment variable DELAY_MILLIS was introduced into the visits service to insert a delay when fetching visits.
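
    In the deployment manifest this is an ordinary container environment variable; a minimal sketch of the relevant fragment might be:

    env:\n  - name: DELAY_MILLIS\n    value: \"0\"   # raise to e.g. \"5000\" to inject a 5-second delay\n

    Alternatively, kubectl set env deploy/visits-v1 DELAY_MILLIS=5000 achieves the same effect as the manual edit described below.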

    Here is how to test the behavior:

    1. Call visits-service directly:

      bash shellfish shell
      kubectl exec deploy/sleep -- curl -s visits-service:8080/pets/visits?petId=8 | jq\n
      kubectl exec deploy/sleep -- curl -s visits-service:8080/pets/visits\\?petId=8 | jq\n

      Observe the call succeed and return a list of visits for this particular pet.

    2. Call the petclinic-frontend endpoint, and note that for each pet, we see a list of visits:

      kubectl exec deploy/sleep -- curl -s petclinic-frontend:8080/api/gateway/owners/6 | jq\n
    3. Edit the deployment manifest for the visits-service so that the environment variable DELAY_MILLIS is set to the value \"5000\" (which is 5 seconds). One way to do this is with the following command (then save and exit):

      kubectl edit deploy visits-v1\n

      Wait until the new pod has rolled out and become ready.

    4. Once the new visits-service pod reaches Ready status, make the same call again:

      bash shell
      kubectl exec deploy/sleep -- curl -v visits-service:8080/pets/visits?petId=8 | jq\n
      fish shell
      kubectl exec deploy/sleep -- curl -v visits-service:8080/pets/visits\?petId=8 | jq\n

      Observe the 504 (Gateway Timeout) response this time around: the injected 5-second delay exceeds the 4-second timeout.

    5. Call the petclinic-frontend endpoint once more, and note that for each pet, the list of visits is empty:

      kubectl exec deploy/sleep -- curl -s petclinic-frontend:8080/api/gateway/owners/6 | jq\n

      That is, the call succeeds, the timeout is caught, and the fallback empty list of visits is returned in its place.

    6. Tail the logs of petclinic-frontend and observe a log message indicating the fallback was triggered.

      kubectl logs --follow svc/petclinic-frontend\n

    Restore the original behavior with no delay: edit the visits-v1 deployment again and set the environment variable value to \"0\".

    Let us next turn our attention to security-related configuration.

    "},{"location":"security/","title":"Security","text":""},{"location":"security/#leverage-workload-identity","title":"Leverage workload identity","text":"

    Workloads in Istio are assigned a SPIFFE identity, derived from the Kubernetes service account under which each workload runs.

    Authorization policies can be applied that allow or deny access to a service as a function of that identity.

    For example, we can restrict access to each database exclusively to its corresponding service: the \"vets\" database to the vets-service, the \"customers\" database to the customers-service, and the \"visits\" database to the visits-service.

    The above policy is specified in the file authorization-policies.yaml:

    authorization-policies.yaml
    ---\napiVersion: security.istio.io/v1beta1\nkind: AuthorizationPolicy\nmetadata:\n  name: vets-db-allow-vets-service\nspec:\n  selector:\n    matchLabels:\n      app.kubernetes.io/instance: vets-db-mysql\n  action: ALLOW\n  rules:\n  - from:\n    - source:\n        principals: [\"cluster.local/ns/default/sa/vets-service\"]\n    to:\n    - operation:\n        ports: [\"3306\"]\n---\napiVersion: security.istio.io/v1beta1\nkind: AuthorizationPolicy\nmetadata:\n  name: customers-db-allow-customers-service\nspec:\n  selector:\n    matchLabels:\n      app.kubernetes.io/instance: customers-db-mysql\n  action: ALLOW\n  rules:\n    - from:\n        - source:\n            principals: [\"cluster.local/ns/default/sa/customers-service\"]\n      to:\n        - operation:\n            ports: [\"3306\"]\n---\napiVersion: security.istio.io/v1beta1\nkind: AuthorizationPolicy\nmetadata:\n  name: visits-db-allow-visits-service\nspec:\n  selector:\n    matchLabels:\n      app.kubernetes.io/instance: visits-db-mysql\n  action: ALLOW\n  rules:\n    - from:\n        - source:\n            principals: [\"cluster.local/ns/default/sa/visits-service\"]\n      to:\n        - operation:\n            ports: [\"3306\"]\n

    The main aspects of each authorization policy are:

    1. The selector identifies the workload the policy applies to.
    2. The action in this case is to ALLOW requests that match the given rules.
    3. The rules section specifies the allowed source principal, i.e. the workload identity of the caller.
    4. The to section restricts the policy to requests on port 3306, the port that mysqld listens on.
    "},{"location":"security/#exercise","title":"Exercise","text":"
    1. Use the previous \"Test database connectivity\" instructions to create a client pod and use it to connect to the \"vets\" database. This operation should succeed: you should be able to see the \"service_instance_db\" database, list its tables, and query them.

    2. Apply the authorization policies:

      kubectl apply -f manifests/config/authorization-policies.yaml\n
    3. Attempt once more to create a client pod to connect to the \"vets\" database. This time the operation will fail. That's because only the vets service is now allowed to connect to the database.

    4. Verify that the application itself continues to function, since each database is queried only through its associated service.

    "},{"location":"security/#summary","title":"Summary","text":"

    One recurring problem in the enterprise is enforcing that data is accessed only through the microservice that owns it. Giving another team direct access to your data is a well-known anti-pattern: it couples multiple applications to a specific storage technology and a specific database schema, one that can no longer evolve without impacting everyone.

    With the aid of Istio and workload identity, we can make sure that the manner in which data is stored by a microservice is an entirely internal concern, one that can be modified at a later time, perhaps to use a different storage backend, or perhaps simply to allow for the evolution of the schema without \"breaking all the clients\".

    After traffic management, resilience, and security, it is time to discuss the other important facet that service meshes help with: Observability.

    "},{"location":"setup/","title":"Setup","text":"

    Begin by cloning a local copy of the Spring PetClinic Istio repository from GitHub.

    "},{"location":"setup/#kubernetes","title":"Kubernetes","text":"

    Select whether you wish to provision Kubernetes locally or remotely using a cloud provider.

    Local Setup

    On a Mac running Docker Desktop or Rancher Desktop, make sure to give your VM plenty of CPU and memory. 16GB of memory and 6 CPUs seems to work for me.

    Deploy a local K3D Kubernetes cluster with a local registry:

    k3d cluster create my-istio-cluster \\\n  --api-port 6443 \\\n  --k3s-arg \"--disable=traefik@server:0\" \\\n  --port 80:80@loadbalancer \\\n  --registry-create my-cluster-registry:0.0.0.0:5010\n

    Above, we disable the bundled Traefik ingress controller, expose port 80 through the load balancer, and create a local image registry listening on port 5010.

    Remote Setup

    Provision a k8s cluster in the cloud of your choice. For example, on GCP:

    gcloud container clusters create my-istio-cluster \\\n  --cluster-version latest \\\n  --machine-type \"e2-standard-2\" \\\n  --num-nodes \"3\" \\\n  --network \"default\"\n
    "},{"location":"setup/#environment-variables","title":"Environment variables","text":"

    Use envrc-template.sh as the basis for configuring environment variables.

    Be sure to:

    1. Set the local variable local_setup to either \"true\" or \"false\", depending on your choice of a local or remote cluster.
    2. If using a remote setup, set the value of PUSH_IMAGE_REGISTRY to the value of your image registry URL.

    I highly recommend using direnv, a convenient way of automatically setting environment variables when you enter a specific directory.

    If you choose to use direnv, then the variables can be automatically set by renaming the file to .envrc and running the command direnv allow.

    "},{"location":"setup/#istio","title":"Istio","text":"
    1. Follow the Istio documentation's instructions to download Istio.

    2. After you have added the istioctl CLI to your PATH, run the following installation command:

      istioctl install -f manifests/istio-install-manifest.yaml\n

    The above-referenced configuration manifest configures certain facets of the mesh, namely:

    1. Setting trace sampling to 100%, for ease of obtaining distributed traces
    2. Deploying sidecars (Envoy proxies) not only alongside workloads, but also in front of the MySQL databases.

    Once Istio is installed, feel free to verify the installation with:

    istioctl verify-install\n

    In the next section, you will work on deploying the microservices to the default namespace.

    As a final step, label the default namespace for sidecar injection with:

    kubectl label ns default istio-injection=enabled\n
    "},{"location":"summary/","title":"Summary","text":"

    Prior to Istio, the common solution in the Spring ecosystem to issues of service discovery, resilience, and load balancing was Spring Cloud. Spring Cloud consists of multiple projects that provide dependencies developers add to their applications to deal with client-side load balancing, retries, circuit breaking, service discovery, and so on.

    In spring-petclinic-istio, those dependencies have been removed. What remains as dependencies inside each service is what you'd expect to find:

    "}]} \ No newline at end of file diff --git a/security/index.html b/security/index.html index 6199243..3ecb5e7 100644 --- a/security/index.html +++ b/security/index.html @@ -473,6 +473,26 @@ + + + + + + +