
Single VPC Lattice Service with multiple listeners (each explicitly linked to a different backendRef) #644

Open
HannesBBR opened this issue May 29, 2024 · 6 comments
Labels
enhancement New feature or request

Comments

@HannesBBR

Hi,

One of our applications running in k8s accepts both GRPC and HTTP traffic (its pods run separate containers, each listening on its own port). To handle traffic for this application, we have 2 k8s services:

  • service-rest => forwards traffic to the port of the rest-container
  • service-grpc => forwards traffic to the port of the grpc-container

Our current ingress into this k8s application is an ALB with two listeners that forward traffic to the respective k8s service, based on the port:

  • 443 listener => forwards to target group tg-rest, which sends traffic to the service-rest in k8s (targets are managed by the AWS load-balancer controller)
  • 50051 listener => forwards to target group tg-grpc, which sends traffic to the service-grpc in k8s (targets are managed by the AWS load-balancer controller)

The domain name used by the clients is the same for both REST and GRPC traffic, and they choose the respective ALB listener port depending on whether they want to talk REST or GRPC.
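
For reference, the two k8s Services look roughly like this (a minimal sketch; the pod label and container ports are made-up placeholders, not our exact manifests):

apiVersion: v1
kind: Service
metadata:
  name: service-rest
spec:
  selector:
    app: test-app        # placeholder pod label
  ports:
    - port: 80
      targetPort: 8080   # placeholder port of the rest-container
---
apiVersion: v1
kind: Service
metadata:
  name: service-grpc
spec:
  selector:
    app: test-app        # placeholder pod label
  ports:
    - port: 50051
      targetPort: 50051  # placeholder port of the grpc-container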

As we are onboarding services into VPC Lattice, we'd now like to achieve the same kind of setup with VPC Lattice for this application:

  • A single VPC Lattice service with a custom domain name
  • 2 listeners in this VPC Lattice service
    • 443 listener => forward all traffic on this port to target group vpc-lattice-tg-rest
    • 50051 listener => forward all traffic on this port to target group vpc-lattice-tg-grpc

I have tried a few things to achieve this setup using the Gateway API controller, but didn't find a way to make it work:

  • Two different HTTPRoute resources with the same name, but a different sectionName in the parentRefs property and a different backendRef service:
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: test-application
spec:
  hostnames:
    - test-app.test.com
  parentRefs:
    - name: test-service-network
      sectionName: rest
      namespace: aws-application-networking-system
  rules:
    - backendRefs:
        - name: testapp-service-rest
          kind: Service
          port: 80
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: test-application
spec:
  hostnames:
    - test-app.test.com
  parentRefs:
    - name: test-service-network
      sectionName: grpc
      namespace: aws-application-networking-system
  rules:
    - backendRefs:
      - name: testapp-service-grpc
         kind: Service
         port: 50051

=> this creates a single VPC Lattice service with one listener and two target groups. However, it causes the controller to periodically flip-flop the (single) listener of the service between the two ports related to the two sectionNames, instead of adding a second listener to the service that would forward traffic to its respective target group.

  • A single HTTPRoute resource, with two parentRefs and two backendRefs:
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: test-application
spec:
  hostnames:
    - test-app.test.com
  parentRefs:
    - name: test-service-network
      sectionName: rest
      namespace: aws-application-networking-system
    - name: test-service-network
      sectionName: grpc
      namespace: aws-application-networking-system
  rules:
    - backendRefs:
        - name: testapp-service-rest
          kind: Service
          port: 80
        - name: testapp-service-grpc
          kind: Service
          port: 50051

=> the result is a single VPC Lattice service with two listeners and two target groups. However, the rules in each of the two listeners split traffic evenly between the two created target groups, whereas we'd like all traffic on a listener to be forwarded only to the 'correct' target group.

Ideally, I would hope something like this would be possible, where you explicitly define which backendRef should be used by which listener/parent:

apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: test-application
spec:
  hostnames:
    - test-app.test.com
  parentRefs:
    - name: test-service-network
      sectionName: rest
      namespace: aws-application-networking-system
    - name: test-service-network
      sectionName: grpc
      namespace: aws-application-networking-system
  rules:
    - backendRefs:
        - name: testapp-service-rest
          kind: Service
          port: 80
          parent: rest    # some kind of ref to a parent above
        - name: testapp-service-grpc
          kind: Service
          port: 50051
          parent: grpc    # some kind of ref to a parent above

But perhaps there already is another way of achieving the situation described above with the existing controller?
Many thanks!

@zijun726911
Contributor

zijun726911 commented May 29, 2024

Could you try the following and let me know whether it works for you? (If it doesn't work, could you paste some controller error logs?) Thank you so much! Maybe you can create one HTTPRoute and one GRPCRoute:

apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: test-application
spec:
  hostnames:
    - test-app.test.com
  parentRefs:
    - name: test-service-network
      sectionName: rest
      namespace: aws-application-networking-system
  rules:
    - backendRefs:
        - name: testapp-service-rest
          kind: Service
          port: 80
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: GRPCRoute
metadata:
  name: test-application
spec:
  hostnames:
    - test-app.test.com
  parentRefs:
    - name: test-service-network
      sectionName: grpc
      namespace: aws-application-networking-system
  rules:
    - backendRefs:
        - name: testapp-service-grpc
          kind: Service
          port: 50051

https://github.com/aws/aws-application-networking-k8s/blob/55532132df0bda0d10e64192ff5c8a6a88860e4a/docs/api-types/grpc-route.md

@HannesBBR
Author

HannesBBR commented May 30, 2024

Hi, thanks for your response! I forgot to mention that I had already tried this option as well, but what then seems to happen is:

  • 1 VPC Lattice Service is created
  • 2 Target groups are created (one for grpc, one for rest)
  • 1 Listener (GRPC) is created in the service (forwarding traffic to the GRPC target group)

When describing the HTTPRoute resource, the message says:

Message:  error during service synthesis failed ServiceManager.Upsert test-app-test-app due to service test-app/test-app had a conflict: Found existing resource with conflicting service name: arn:aws:vpc-lattice:eu-central-1:123456789:service/svc-12345789

So it looks like the controller tries to create a new Lattice service for the HTTPRoute resource, and stops because there already is a service with that name. Ideally for my use case, it would instead acknowledge that a service with that name already exists and add a new listener to it, based on the backendRef and parentRef configured in the HTTPRoute. It's hard, though, to assess what kind of impact such a change would have on the whole flow.

@zijun726911 zijun726911 added bug Something isn't working enhancement New feature or request and removed bug Something isn't working labels May 30, 2024
@zijun726911
Contributor

zijun726911 commented May 31, 2024

Hi @HannesBBR, thanks for your reply. The current controller version doesn't support your use case of translating k8s resources into 2 listeners in the same Lattice service, with each listener routing traffic to a different target group. However, VPC Lattice itself does support that setup, for example:
[screenshot: VPC Lattice console showing one service with two listeners, each forwarding to a different target group]

And as for your suggestion (parent: grpc --> some kind of ref to a parent above): I don't think it's in the Gateway API HTTPRoute spec, and the Gateway API HTTPRoute doesn't seem to have a proper way to represent it.

To immediately unblock your use case, you can probably try this workaround: use k8s ServiceExport and k8s TargetGroupPolicy only. Don't create any k8s Gateway, HTTPRoute, or ServiceImport, and don't use the controller to manage the VPC Lattice service. Instead, manage the VPC Lattice service outside of k8s, i.e., use the AWS console, CloudFormation, or Terraform to create the VPC Lattice service network and service.

For example, follow these steps:
0. Make sure the aws gateway api controller, testapp-service-rest, and testapp-service-grpc are running in your k8s cluster.

  1. Create the k8s ServiceExport and TargetGroupPolicy resources:
apiVersion: application-networking.k8s.aws/v1alpha1
kind: ServiceExport
metadata:
  name: testapp-service-rest
  annotations:
    application-networking.k8s.aws/federation: "amazon-vpc-lattice"
    application-networking.k8s.aws/port: "80"
---
apiVersion: application-networking.k8s.aws/v1alpha1
kind: TargetGroupPolicy
metadata:
  # this TargetGroupPolicy is optional if you just use protocol: HTTP and protocolVersion: HTTP1
  name: tg-policy-for-testapp-service-rest
spec:
  targetRef:
    group: application-networking.k8s.aws
    kind: ServiceExport
    name: testapp-service-rest
  protocol: HTTP
  protocolVersion: HTTP1
---
apiVersion: application-networking.k8s.aws/v1alpha1
kind: ServiceExport
metadata:
  name: testapp-service-grpc
  annotations:
    application-networking.k8s.aws/federation: "amazon-vpc-lattice"
    application-networking.k8s.aws/port: "50051"
---
apiVersion: application-networking.k8s.aws/v1alpha1
kind: TargetGroupPolicy
metadata:
  name: tg-policy-for-testapp-service-grpc
spec:
  targetRef:
    group: application-networking.k8s.aws
    kind: ServiceExport
    name: testapp-service-grpc
  protocol: HTTP
  protocolVersion: GRPC
  2. Go to the AWS VPC Lattice target group console; you should see that Target Groups named k8s-<namespace>-testapp-service-rest-<randomstring> and k8s-<namespace>-testapp-service-grpc-<randomstring> have been created.
  3. In the console (or CloudFormation, Terraform), manually create the VPC Lattice service and 2 listeners (see the CloudFormation sketch below):
  • One HTTPS:443 listener with a rule routing to the k8s-<namespace>-testapp-service-rest-<randomstring> Target Group
  • And another HTTP(S):50051 listener with a rule routing to the k8s-<namespace>-testapp-service-grpc-<randomstring> Target Group
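
If you go the CloudFormation route, a minimal (untested) sketch of step 3 could look like the following; the resource names and the rest-tg-id / grpc-tg-id target group identifiers are placeholders you'd replace with the IDs of the target groups from step 2, and the service network association is omitted:

Resources:
  TestAppService:
    Type: AWS::VpcLattice::Service
    Properties:
      Name: test-app
      CustomDomainName: test-app.test.com  # HTTPS custom domains also need a CertificateArn
  RestListener:
    Type: AWS::VpcLattice::Listener
    Properties:
      ServiceIdentifier: !Ref TestAppService
      Protocol: HTTPS
      Port: 443
      DefaultAction:
        Forward:
          TargetGroups:
            - TargetGroupIdentifier: rest-tg-id  # placeholder: the k8s-...-testapp-service-rest-... target group
              Weight: 100
  GrpcListener:
    Type: AWS::VpcLattice::Listener
    Properties:
      ServiceIdentifier: !Ref TestAppService
      Protocol: HTTPS
      Port: 50051
      DefaultAction:
        Forward:
          TargetGroups:
            - TargetGroupIdentifier: grpc-tg-id  # placeholder: the k8s-...-testapp-service-grpc-... target group
              Weight: 100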

And we are open to discussing how to represent this Lattice setup in k8s resources in the long term.

(My personal suggestion is that we go with the approach in #644 (comment) but fix the controller issue: Found existing resource with conflicting service name.)

@HannesBBR
Author

Thanks a lot for the example! I didn't know that having only the ServiceExport (and no HTTPRoute, for example) would still create the target groups, but that seems to be the case. Deploying your suggestion indeed creates the two target groups, which can then be used as targets for a service and listeners created in e.g. CDK 🙇

For reference, one thing to keep in mind is not to apply the 'application-networking.k8s.aws/pod-readiness-gate-inject': 'enabled' label to the namespace yet. There will be a point where the target groups already exist (created through the ServiceExport and TargetGroupPolicy resources) but are not yet used by a service listener, since the service is created asynchronously in the second step. That causes the readiness gate to be 'False', making the pods temporarily unavailable until the service is created and linked to the target groups. So it's best to first fully onboard the application, and only afterwards set the readiness gate label on the namespace.
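
Once everything is linked up, enabling the readiness gate injection is just a label on the namespace, something like this (the namespace name is a placeholder):

apiVersion: v1
kind: Namespace
metadata:
  name: test-app  # placeholder namespace
  labels:
    application-networking.k8s.aws/pod-readiness-gate-inject: enabled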

@zijun726911
Contributor

zijun726911 commented May 31, 2024

That causes the readiness gate to be 'False', making the pods temporarily unavailable

The controller does set the application-networking.k8s.aws/pod-readiness-gate condition to false for Lattice targets in UNUSED status.

newCond := corev1.PodCondition{
    Type:   LatticeReadinessGateConditionType,
    Status: corev1.ConditionFalse,
}

So best to first fully onboard the application, and only afterwards set the readiness gate label on the namespace.

Yes, you need to set up your resources that way. I think this user experience is fine, but changing it to:

case vpclattice.TargetStatusUnused:
    newCond.Status = corev1.ConditionTrue
    newCond.Reason = ReadinessReasonUnused

would also make sense to me.
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-readiness-status
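
For reference, the injected readiness gate would appear in the pod spec roughly like this (a sketch; the pod and container names are placeholders, and the condition type matches what the controller sets above):

apiVersion: v1
kind: Pod
metadata:
  name: test-app-pod  # placeholder
spec:
  readinessGates:
    - conditionType: application-networking.k8s.aws/pod-readiness-gate
  containers:
    - name: rest-container        # placeholder
      image: example/rest:latest  # placeholder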

@HannesBBR
Author

Yeah I think it's fine like this as well, just wanted to mention it for others, as it might not be obvious. Thanks again for your help!
