diff --git a/v1.1.0/guides/deploy/index.html b/v1.1.0/guides/deploy/index.html index 4155caee..b8671e01 100644 --- a/v1.1.0/guides/deploy/index.html +++ b/v1.1.0/guides/deploy/index.html @@ -1462,14 +1462,14 @@

Install the Controller# Run helm with either install or upgrade helm install gateway-api-controller \ oci://public.ecr.aws/aws-application-networking-k8s/aws-gateway-controller-chart \ - --version=v1.0.7 \ + --version=v1.1.0 \ --set=serviceAccount.create=false \ --namespace aws-application-networking-system \ --set=log.level=info # use "debug" for debug level logs
-
kubectl apply -f https://raw.githubusercontent.com/aws/aws-application-networking-k8s/main/files/controller-installation/deploy-v1.0.7.yaml
+
kubectl apply -f https://raw.githubusercontent.com/aws/aws-application-networking-k8s/main/files/controller-installation/deploy-v1.1.0.yaml
 
diff --git a/v1.1.0/search/search_index.json b/v1.1.0/search/search_index.json index 9f0c4b7a..a9622bae 100644 --- a/v1.1.0/search/search_index.json +++ b/v1.1.0/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"api-reference/","title":"API Specification","text":"

This page contains the API field specification for Gateway API.

Packages:

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1","title":"application-networking.k8s.aws/v1alpha1","text":"

Resource Types:

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.AccessLogPolicy","title":"AccessLogPolicy","text":"Field Description apiVersion string application-networking.k8s.aws/v1alpha1 kind string AccessLogPolicy metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec AccessLogPolicySpec destinationArn string

The Amazon Resource Name (ARN) of the destination that will store access logs. Supported values are S3 Bucket, CloudWatch Log Group, and Firehose Delivery Stream ARNs.

Changes to this value result in replacement of the VPC Lattice Access Log Subscription.

targetRef sigs.k8s.io/gateway-api/apis/v1alpha2.NamespacedPolicyTargetReference

TargetRef points to the Kubernetes Gateway, HTTPRoute, or GRPCRoute resource that will have this policy attached.

This field follows the guidelines of Kubernetes Gateway API policy attachment.

status AccessLogPolicyStatus

Status defines the current state of AccessLogPolicy.

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.IAMAuthPolicy","title":"IAMAuthPolicy","text":"Field Description apiVersion string application-networking.k8s.aws/v1alpha1 kind string IAMAuthPolicy metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec IAMAuthPolicySpec policy string

IAM auth policy content. It is a JSON string that uses the same syntax as AWS IAM policies. Please refer to the VPC Lattice documentation for the common elements of an auth policy.

targetRef sigs.k8s.io/gateway-api/apis/v1alpha2.NamespacedPolicyTargetReference

TargetRef points to the Kubernetes Gateway, HTTPRoute, or GRPCRoute resource that will have this policy attached.

This field follows the guidelines of Kubernetes Gateway API policy attachment.

status IAMAuthPolicyStatus

Status defines the current state of IAMAuthPolicy.

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.ServiceExport","title":"ServiceExport","text":"

ServiceExport declares that the Service with the same name and namespace as this export should be consumable from other clusters.

Field Description apiVersion string application-networking.k8s.aws/v1alpha1 kind string ServiceExport metadata Kubernetes meta/v1.ObjectMeta (Optional) Refer to the Kubernetes API documentation for the fields of the metadata field. status ServiceExportStatus (Optional)

status describes the current state of an exported service. Service configuration comes from the Service that had the same name and namespace as this ServiceExport. Populated by the multi-cluster service implementation\u2019s controller.

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.ServiceImport","title":"ServiceImport","text":"

ServiceImport describes a service imported from clusters in a ClusterSet.

Field Description apiVersion string application-networking.k8s.aws/v1alpha1 kind string ServiceImport metadata Kubernetes meta/v1.ObjectMeta (Optional) Refer to the Kubernetes API documentation for the fields of the metadata field. spec ServiceImportSpec (Optional)

spec defines the behavior of a ServiceImport.

ports []ServicePort ips []string (Optional)

ips will be used as the VIPs for this service when type is ClusterSetIP.

type ServiceImportType

type defines the type of this service. Must be ClusterSetIP or Headless.

sessionAffinity Kubernetes core/v1.ServiceAffinity (Optional)

Supports \u201cClientIP\u201d and \u201cNone\u201d. Used to maintain client IP based session affinity. Must be ClientIP or None. Defaults to None. Ignored when type is Headless. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies

sessionAffinityConfig Kubernetes core/v1.SessionAffinityConfig (Optional)

sessionAffinityConfig contains session affinity configuration.

status ServiceImportStatus (Optional)

status contains information about the exported services that form the multi-cluster service referenced by this ServiceImport.
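
For illustration, a minimal ServiceImport combining the spec fields above might look like the following. This is a hedged sketch assembled purely from the field list; the controller's own examples later in this guide use an empty spec with annotations, so treat this only as a shape reference.

apiVersion: application-networking.k8s.aws/v1alpha1\nkind: ServiceImport\nmetadata:\n  name: service-1\nspec:\n  type: ClusterSetIP\n  ports:\n    - name: http\n      protocol: TCP\n      port: 80\n  sessionAffinity: ClientIP\n  sessionAffinityConfig:\n    clientIP:\n      timeoutSeconds: 10800\n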

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.TargetGroupPolicy","title":"TargetGroupPolicy","text":"Field Description apiVersion string application-networking.k8s.aws/v1alpha1 kind string TargetGroupPolicy metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec TargetGroupPolicySpec protocol string (Optional)

The protocol to use for routing traffic to the targets. Supported values are HTTP (default) and HTTPS.

Changes to this value result in replacement of the VPC Lattice target group.

protocolVersion string (Optional)

The protocol version to use. Supported values are HTTP1 (default) and HTTP2. When a policy is attached behind a GRPCRoute, this field value is ignored, as gRPC is only supported over HTTP/2.

Changes to this value result in replacement of the VPC Lattice target group.

targetRef sigs.k8s.io/gateway-api/apis/v1alpha2.NamespacedPolicyTargetReference

TargetRef points to the Kubernetes Service resource that will have this policy attached.

This field follows the guidelines of Kubernetes Gateway API policy attachment.

healthCheck HealthCheckConfig (Optional)

The health check configuration.

Changes to this value update the VPC Lattice resource in place.

status TargetGroupPolicyStatus"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.VpcAssociationPolicy","title":"VpcAssociationPolicy","text":"Field Description apiVersion string application-networking.k8s.aws/v1alpha1 kind string VpcAssociationPolicy metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec VpcAssociationPolicySpec securityGroupIds []SecurityGroupId (Optional)

SecurityGroupIds defines the security groups enforced on the VpcServiceNetworkAssociation. Security groups do not take effect if AssociateWithVpc is set to false.

For more details, please check the VPC Lattice documentation https://docs.aws.amazon.com/vpc-lattice/latest/ug/security-groups.html

associateWithVpc bool (Optional)

AssociateWithVpc indicates whether the VpcServiceNetworkAssociation should be created for the current VPC of the k8s cluster.

This value defaults to true.

targetRef sigs.k8s.io/gateway-api/apis/v1alpha2.NamespacedPolicyTargetReference

TargetRef points to the Kubernetes Gateway resource that will have this policy attached.

This field follows the guidelines of Kubernetes Gateway API policy attachment.

status VpcAssociationPolicyStatus"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.AccessLogPolicySpec","title":"AccessLogPolicySpec","text":"

(Appears on:AccessLogPolicy)

AccessLogPolicySpec defines the desired state of AccessLogPolicy.

Field Description destinationArn string

The Amazon Resource Name (ARN) of the destination that will store access logs. Supported values are S3 Bucket, CloudWatch Log Group, and Firehose Delivery Stream ARNs.

Changes to this value result in replacement of the VPC Lattice Access Log Subscription.

targetRef sigs.k8s.io/gateway-api/apis/v1alpha2.NamespacedPolicyTargetReference

TargetRef points to the Kubernetes Gateway, HTTPRoute, or GRPCRoute resource that will have this policy attached.

This field follows the guidelines of Kubernetes Gateway API policy attachment.

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.AccessLogPolicyStatus","title":"AccessLogPolicyStatus","text":"

(Appears on:AccessLogPolicy)

AccessLogPolicyStatus defines the observed state of AccessLogPolicy.

Field Description conditions []Kubernetes meta/v1.Condition (Optional)

Conditions describe the current conditions of the AccessLogPolicy.

Implementations should prefer to express Policy conditions using the PolicyConditionType and PolicyConditionReason constants so that operators and tools can converge on a common vocabulary to describe AccessLogPolicy state.

Known condition types are:

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.ClusterStatus","title":"ClusterStatus","text":"

(Appears on:ServiceImportStatus)

ClusterStatus contains service configuration mapped to a specific source cluster.

Field Description cluster string

cluster is the name of the exporting cluster. Must be a valid RFC-1123 DNS label.

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.HealthCheckConfig","title":"HealthCheckConfig","text":"

(Appears on:TargetGroupPolicySpec)

HealthCheckConfig defines the health check configuration for a given VPC Lattice target group. For a detailed explanation and supported values, please refer to the VPC Lattice documentation on health checks.

Field Description enabled bool (Optional)

Indicates whether health checking is enabled.

intervalSeconds int64 (Optional)

The approximate amount of time, in seconds, between health checks of an individual target.

timeoutSeconds int64 (Optional)

The amount of time, in seconds, to wait before reporting a target as unhealthy.

healthyThresholdCount int64 (Optional)

The number of consecutive successful health checks required before considering an unhealthy target healthy.

unhealthyThresholdCount int64 (Optional)

The number of consecutive failed health checks required before considering a target unhealthy.

statusMatch string (Optional)

A regular expression to match HTTP status codes when checking for successful response from a target.

path string (Optional)

The destination for health checks on the targets.

port int64

The port used when performing health checks on targets. If not specified, the health check defaults to the port that a target receives traffic on.

protocol HealthCheckProtocol (Optional)

The protocol used when performing health checks on targets.

protocolVersion HealthCheckProtocolVersion (Optional)

The protocol version used when performing health checks on targets. Defaults to HTTP/1.
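
For example, these fields might be combined in a TargetGroupPolicy healthCheck block as follows; a sketch assembled from the field list above, with illustrative values:

healthCheck:\n  enabled: true\n  intervalSeconds: 10\n  timeoutSeconds: 1\n  healthyThresholdCount: 3\n  unhealthyThresholdCount: 2\n  statusMatch: \"200\"\n  path: \"/health\"\n  port: 8080\n  protocol: HTTP\n  protocolVersion: HTTP1\n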

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.HealthCheckProtocol","title":"HealthCheckProtocol (string alias)","text":"

(Appears on:HealthCheckConfig)

Value Description

\"HTTP\"

\"HTTPS\"

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.HealthCheckProtocolVersion","title":"HealthCheckProtocolVersion (string alias)","text":"

(Appears on:HealthCheckConfig)

Value Description

\"HTTP1\"

\"HTTP2\"

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.IAMAuthPolicySpec","title":"IAMAuthPolicySpec","text":"

(Appears on:IAMAuthPolicy)

IAMAuthPolicySpec defines the desired state of IAMAuthPolicy. When the controller handles IAMAuthPolicy creation, if the targetRef's Kubernetes and VPC Lattice resources exist, the controller changes the auth_type of that VPC Lattice resource to AWS_IAM and attaches this policy. When the controller handles IAMAuthPolicy deletion, if the targetRef's Kubernetes and VPC Lattice resources exist, the controller changes the auth_type of that VPC Lattice resource to NONE and detaches this policy.

Field Description policy string

IAM auth policy content. It is a JSON string that uses the same syntax as AWS IAM policies. Please refer to the VPC Lattice documentation for the common elements of an auth policy.

targetRef sigs.k8s.io/gateway-api/apis/v1alpha2.NamespacedPolicyTargetReference

TargetRef points to the Kubernetes Gateway, HTTPRoute, or GRPCRoute resource that will have this policy attached.

This field follows the guidelines of Kubernetes Gateway API policy attachment.

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.IAMAuthPolicyStatus","title":"IAMAuthPolicyStatus","text":"

(Appears on:IAMAuthPolicy)

IAMAuthPolicyStatus defines the observed state of IAMAuthPolicy.

Field Description conditions []Kubernetes meta/v1.Condition (Optional)

Conditions describe the current conditions of the IAMAuthPolicy.

Implementations should prefer to express Policy conditions using the PolicyConditionType and PolicyConditionReason constants so that operators and tools can converge on a common vocabulary to describe IAMAuthPolicy state.

Known condition types are:

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.SecurityGroupId","title":"SecurityGroupId (string alias)","text":"

(Appears on:VpcAssociationPolicySpec)

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.ServiceExportCondition","title":"ServiceExportCondition","text":"

(Appears on:ServiceExportStatus)

ServiceExportCondition contains details for the current condition of this service export.

Once KEP-1623 is implemented, this will be replaced by metav1.Condition.

Field Description type ServiceExportConditionType status Kubernetes core/v1.ConditionStatus

Status is one of {\u201cTrue\u201d, \u201cFalse\u201d, \u201cUnknown\u201d}

lastTransitionTime Kubernetes meta/v1.Time (Optional) reason string (Optional) message string (Optional)"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.ServiceExportConditionType","title":"ServiceExportConditionType (string alias)","text":"

(Appears on:ServiceExportCondition)

ServiceExportConditionType identifies a specific condition.

Value Description

\"Conflict\"

ServiceExportConflict means that there is a conflict between two exports for the same Service. When \u201cTrue\u201d, the condition message should contain enough information to diagnose the conflict: field(s) under contention, which cluster won, and why. Users should not expect detailed per-cluster information in the conflict message.

\"Valid\"

ServiceExportValid means that the service referenced by this service export has been recognized as valid by a controller. This will be false if the service is found to be unexportable (ExternalName, not found).
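
For illustration, a ServiceExport status carrying these conditions might look like the following; a hedged sketch in which the timestamp is a placeholder:

status:\n  conditions:\n    - type: Valid\n      status: \"True\"\n      lastTransitionTime: \"2024-01-01T00:00:00Z\"\n    - type: Conflict\n      status: \"False\"\n      lastTransitionTime: \"2024-01-01T00:00:00Z\"\n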

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.ServiceExportStatus","title":"ServiceExportStatus","text":"

(Appears on:ServiceExport)

ServiceExportStatus contains the current status of an export.

Field Description conditions []ServiceExportCondition (Optional)"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.ServiceImportSpec","title":"ServiceImportSpec","text":"

(Appears on:ServiceImport)

ServiceImportSpec describes an imported service and the information necessary to consume it.

Field Description ports []ServicePort ips []string (Optional)

ips will be used as the VIPs for this service when type is ClusterSetIP.

type ServiceImportType

type defines the type of this service. Must be ClusterSetIP or Headless.

sessionAffinity Kubernetes core/v1.ServiceAffinity (Optional)

Supports \u201cClientIP\u201d and \u201cNone\u201d. Used to maintain client IP based session affinity. Must be ClientIP or None. Defaults to None. Ignored when type is Headless. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies

sessionAffinityConfig Kubernetes core/v1.SessionAffinityConfig (Optional)

sessionAffinityConfig contains session affinity configuration.

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.ServiceImportStatus","title":"ServiceImportStatus","text":"

(Appears on:ServiceImport)

ServiceImportStatus describes derived state of an imported service.

Field Description clusters []ClusterStatus (Optional)

clusters is the list of exporting clusters from which this service was derived.

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.ServiceImportType","title":"ServiceImportType (string alias)","text":"

(Appears on:ServiceImportSpec)

ServiceImportType designates the type of a ServiceImport.

Value Description

\"ClusterSetIP\"

ClusterSetIP services are only accessible via the ClusterSet IP.

\"Headless\"

Headless services allow backend pods to be addressed directly.

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.ServicePort","title":"ServicePort","text":"

(Appears on:ServiceImportSpec)

ServicePort represents the port on which the service is exposed.

Field Description name string (Optional)

The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the \u2018name\u2019 field in the EndpointPort. Optional if only one ServicePort is defined on this service.

protocol Kubernetes core/v1.Protocol (Optional)

The IP protocol for this port. Supports \u201cTCP\u201d, \u201cUDP\u201d, and \u201cSCTP\u201d. Default is TCP.

appProtocol string (Optional)

The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol. Field can be enabled with ServiceAppProtocol feature gate.

port int32

The port that will be exposed by this service.

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.TargetGroupPolicySpec","title":"TargetGroupPolicySpec","text":"

(Appears on:TargetGroupPolicy)

TargetGroupPolicySpec defines the desired state of TargetGroupPolicy.

Field Description protocol string (Optional)

The protocol to use for routing traffic to the targets. Supported values are HTTP (default) and HTTPS.

Changes to this value result in replacement of the VPC Lattice target group.

protocolVersion string (Optional)

The protocol version to use. Supported values are HTTP1 (default) and HTTP2. When a policy is attached behind a GRPCRoute, this field value is ignored, as gRPC is only supported over HTTP/2.

Changes to this value result in replacement of the VPC Lattice target group.

targetRef sigs.k8s.io/gateway-api/apis/v1alpha2.NamespacedPolicyTargetReference

TargetRef points to the Kubernetes Service resource that will have this policy attached.

This field follows the guidelines of Kubernetes Gateway API policy attachment.

healthCheck HealthCheckConfig (Optional)

The health check configuration.

Changes to this value update the VPC Lattice resource in place.

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.TargetGroupPolicyStatus","title":"TargetGroupPolicyStatus","text":"

(Appears on:TargetGroupPolicy)

TargetGroupPolicyStatus defines the observed state of TargetGroupPolicy.

Field Description conditions []Kubernetes meta/v1.Condition (Optional)

Conditions describe the current conditions of the TargetGroupPolicy.

Implementations should prefer to express Policy conditions using the PolicyConditionType and PolicyConditionReason constants so that operators and tools can converge on a common vocabulary to describe TargetGroupPolicy state.

Known condition types are:

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.VpcAssociationPolicySpec","title":"VpcAssociationPolicySpec","text":"

(Appears on:VpcAssociationPolicy)

VpcAssociationPolicySpec defines the desired state of VpcAssociationPolicy.

Field Description securityGroupIds []SecurityGroupId (Optional)

SecurityGroupIds defines the security groups enforced on the VpcServiceNetworkAssociation. Security groups do not take effect if AssociateWithVpc is set to false.

For more details, please check the VPC Lattice documentation https://docs.aws.amazon.com/vpc-lattice/latest/ug/security-groups.html

associateWithVpc bool (Optional)

AssociateWithVpc indicates whether the VpcServiceNetworkAssociation should be created for the current VPC of the k8s cluster.

This value defaults to true.

targetRef sigs.k8s.io/gateway-api/apis/v1alpha2.NamespacedPolicyTargetReference

TargetRef points to the Kubernetes Gateway resource that will have this policy attached.

This field follows the guidelines of Kubernetes Gateway API policy attachment.

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.VpcAssociationPolicyStatus","title":"VpcAssociationPolicyStatus","text":"

(Appears on:VpcAssociationPolicy)

VpcAssociationPolicyStatus defines the observed state of VpcAssociationPolicy.

Field Description conditions []Kubernetes meta/v1.Condition (Optional)

Conditions describe the current conditions of the VpcAssociationPolicy.

Implementations should prefer to express Policy conditions using the PolicyConditionType and PolicyConditionReason constants so that operators and tools can converge on a common vocabulary to describe VpcAssociationPolicy state.

Known condition types are:

Generated with gen-crd-api-reference-docs on git commit 5de8f32.

"},{"location":"conformance-test/","title":"Report on Gateway API Conformance Testing","text":"

Kubernetes Gateway API Conformance

"},{"location":"conformance-test/#summary-of-test-result","title":"Summary of Test Result","text":"Category Test Cases Status Notes GatewayClass GatewayClassObservedGenerationBump ok Gateway GatewayObservedGenerationBump ok GatewayInvalidRouteKind ok GatewayWithAttachedRoutes ok GatewaySecretInvalidReferenceGrants N/A VPC Lattice supports ACM certs GatewaySecretMissingReferenceGrant N/A VPC Lattice supports ACM certs GatewaySecretReferenceGrantAllInNamespace N/A VPC Lattice supports ACM Certs GatewaySecretReferenceGrantSpecific N/A VPC Lattice supports ACM certs HTTPRoute HTTPRouteCrossNamespace ok HTTPExactPathMatching ok HTTPRouteHeaderMatching fail Test data exceeds Lattice limit on # of rules HTTPRouteSimpleSameNamespace ok HTTPRouteListenerHostnameMatching N/A Listener hostname not supported HTTPRouteMatchingAcrossRoutes N/A Custom domain name conflict not allowed HTTPRouteMatching fail Route precedence HTTPRouteObservedGenerationBump ok HTTPRoutePathMatchOrder fail Test data exceeds Lattice limit on # of rules HTTPRouteReferenceGrant N/A HTTPRouteDisallowedKind N/A Only HTTPRoute is supported HTTPRouteInvalidNonExistentBackendRef fail #277 HTTPRouteInvalidBackendRefUnknownKind fail #277 HTTPRouteInvalidCrossNamespaceBackendRef fail #277 HTTPRouteInvalidCrossNamespaceParentRef fail #277 HTTPRouteInvalidParentRefNotMatchingListenerPort fail #277 HTTPRouteInvalidParentRefNotMatchingSectionName fail #277 HTTPRouteMethodMatching fail not supported in controller yet. #123 HTTPRouteHostnameIntersection N/A VPC lattice only supports one custom domain HTTPRouteQueryParamMatching N/A Not supported by lattice HTTPRouteRedirectHostAndStatus N/A Not supported by lattice HTTPRouteRedirectPath N/A Not supported by lattice HTTPRouteRedirectPort N/A Not supported by lattice HTTPRouteRedirectScheme N/A Not supported by lattice HTTPRouteRequestHeaderModifier N/A Not supported by lattice HTTPRouteResponseHeaderModifier N/A Not supported by lattice HTTPRouteRewriteHost N/A Not supported by lattice HTTPRouteRewritePath N/A Not supported by lattice"},{"location":"conformance-test/#running-gateway-api-conformance","title":"Running Gateway API Conformance","text":""},{"location":"conformance-test/#running-controller-from-cloud-desktop","title":"Running controller from cloud desktop","text":"
# run the controller in the following mode\n\nREGION=us-west-2 DEFAULT_SERVICE_NETWORK=my-cluster-default ENABLE_SERVICE_NETWORK_OVERRIDE=true \\\nmake run\n
"},{"location":"conformance-test/#run-individual-conformance-test","title":"Run individual conformance test","text":"

Conformance tests send traffic directly, so they should run inside the VPC that the cluster is operating in.

go test ./conformance/ --run \"TestConformance/HTTPRouteCrossNamespace$\" -v -args -gateway-class amazon-vpc-lattice \\\n-supported-features Gateway,HTTPRoute,GatewayClassObservedGenerationBump\n
"},{"location":"faq/","title":"Frequently Asked Questions (FAQ)","text":"

How can I get involved with AWS Gateway API Controller?

We welcome general feedback, questions, feature requests, and bug reports by creating a GitHub issue.

Where can I find AWS Gateway API Controller releases?

AWS Gateway API Controller releases are tags of the GitHub repository. The GitHub releases page lists all releases.

Which EKS CNI versions are supported?

Your AWS VPC CNI must be v1.8.0 or later to work with VPC Lattice.

Which versions of Gateway API are supported?

AWS Gateway API Controller supports Gateway API CRD bundle versions between v0.6.1 and v1.0.0. The controller does not reject other versions, but provides only \"best effort\" support for them. Not all features of the Gateway API are supported - for detailed features and limitations, please refer to the individual API references. By default, the Gateway API v0.6.1 CRD bundle is included in the Helm chart.

"},{"location":"api-types/access-log-policy/","title":"AccessLogPolicy API Reference","text":""},{"location":"api-types/access-log-policy/#introduction","title":"Introduction","text":"

The AccessLogPolicy custom resource allows you to define access logging configurations on Gateways, HTTPRoutes, and GRPCRoutes by specifying a destination for the access logs to be published to.

"},{"location":"api-types/access-log-policy/#features","title":"Features","text":""},{"location":"api-types/access-log-policy/#example-configurations","title":"Example Configurations","text":""},{"location":"api-types/access-log-policy/#example-1","title":"Example 1","text":"

This configuration results in access logs being published to the S3 Bucket, my-bucket, when traffic is sent to any HTTPRoute or GRPCRoute that is a child of Gateway my-hotel.

apiVersion: application-networking.k8s.aws/v1alpha1\nkind: AccessLogPolicy\nmetadata:\n  name: my-access-log-policy\nspec:\n  destinationArn: \"arn:aws:s3:::my-bucket\"\n  targetRef:\n    group: gateway.networking.k8s.io\n    kind: Gateway\n    name: my-hotel\n
"},{"location":"api-types/access-log-policy/#example-2","title":"Example 2","text":"

This configuration results in access logs being published to the CloudWatch Log Group, myloggroup, when traffic is sent to the HTTPRoute inventory.

apiVersion: application-networking.k8s.aws/v1alpha1\nkind: AccessLogPolicy\nmetadata:\n  name: my-access-log-policy\nspec:\n  destinationArn: \"arn:aws:logs:us-west-2:123456789012:log-group:myloggroup:*\"\n  targetRef:\n    group: gateway.networking.k8s.io\n    kind: HTTPRoute\n    name: inventory\n
"},{"location":"api-types/access-log-policy/#aws-permissions-required","title":"AWS Permissions Required","text":"

Per the VPC Lattice documentation, IAM permissions are required to enable access logs:

{\n    \"Version\": \"2012-10-17\",\n    \"Statement\": [\n        {\n            \"Effect\": \"Allow\",\n            \"Sid\": \"ManageVPCLatticeAccessLogSetup\",\n            \"Action\": [\n                \"logs:CreateLogDelivery\",\n                \"logs:GetLogDelivery\",\n                \"logs:UpdateLogDelivery\",\n                \"logs:DeleteLogDelivery\",\n                \"logs:ListLogDeliveries\",\n                \"vpc-lattice:CreateAccessLogSubscription\",\n                \"vpc-lattice:GetAccessLogSubscription\",\n                \"vpc-lattice:UpdateAccessLogSubscription\",\n                \"vpc-lattice:DeleteAccessLogSubscription\",\n                \"vpc-lattice:ListAccessLogSubscriptions\"\n            ],\n            \"Resource\": [\n                \"*\"\n            ]\n        }\n    ]\n}\n
"},{"location":"api-types/access-log-policy/#statuses","title":"Statuses","text":"

AccessLogPolicies fit under the definition of Gateway API Policy Objects. As a result, status conditions are applied on every modification of an AccessLogPolicy, and can be viewed by describing it.
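
For illustration, the conditions shown when describing an AccessLogPolicy might look like the following; a hedged sketch whose reason value follows the Status Condition Reasons listed below, while the exact message text is an assumption:

status:\n  conditions:\n    - type: Accepted\n      status: \"True\"\n      reason: Accepted\n      message: \"Policy is accepted\"\n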

"},{"location":"api-types/access-log-policy/#status-condition-reasons","title":"Status Condition Reasons","text":""},{"location":"api-types/access-log-policy/#accepted","title":"Accepted","text":"

The spec of the AccessLogPolicy is valid and has been accepted for reconciliation by the controller.

"},{"location":"api-types/access-log-policy/#conflicted","title":"Conflicted","text":"

The target already has an AccessLogPolicy for the same destination type (i.e. a target can have 1 AccessLogPolicy for an S3 Bucket, 1 for a CloudWatch Log Group, and 1 for a Firehose Delivery Stream at a time).

"},{"location":"api-types/access-log-policy/#invalid","title":"Invalid","text":"

Any of the following: - The target's Group is not gateway.networking.k8s.io - The target's Kind is not Gateway, HTTPRoute, or GRPCRoute - The target's namespace does not match the AccessLogPolicy's namespace

"},{"location":"api-types/access-log-policy/#targetnotfound","title":"TargetNotFound","text":"

The target does not exist.

"},{"location":"api-types/access-log-policy/#annotations","title":"Annotations","text":"

Upon successful creation or modification of an AccessLogPolicy, the controller may add or update an annotation in the AccessLogPolicy. The annotation applied by the controller has the key application-networking.k8s.aws/accessLogSubscription, and its value is the corresponding VPC Lattice Access Log Subscription's ARN.

When an AccessLogPolicy's destinationArn is changed such that the resource type changes (e.g. from S3 Bucket to CloudWatch Log Group), or the AccessLogPolicy's targetRef is changed, the annotation's value will be updated because a new Access Log Subscription will be created to replace the previous one.

When creation of an AccessLogPolicy fails, no annotation is added to the AccessLogPolicy because no corresponding Access Log Subscription exists.

When modification or deletion of an AccessLogPolicy fails, the previous value of the annotation is left unchanged because the corresponding Access Log Subscription is also left unchanged.
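
For illustration, the annotation might appear on an AccessLogPolicy as follows; a hedged sketch in which the subscription ARN is a made-up placeholder:

metadata:\n  name: my-access-log-policy\n  annotations:\n    application-networking.k8s.aws/accessLogSubscription: \"arn:aws:vpc-lattice:us-west-2:123456789012:accesslogsubscription/als-EXAMPLE12345\"\n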

"},{"location":"api-types/gateway/","title":"Gateway API Reference","text":""},{"location":"api-types/gateway/#introduction","title":"Introduction","text":"

Gateway allows you to configure network traffic through AWS Gateway API Controller. When a Gateway is defined with the amazon-vpc-lattice GatewayClass, the controller will watch for the gateway and the resources under it, creating the required resources in Amazon VPC Lattice.

Internally, a Gateway points to a VPC Lattice service network. Service networks are identified by Gateway name (without namespace) - for example, a Gateway named my-gateway will point to a VPC Lattice service network my-gateway. If multiple Gateways share the same name, all of them will point to the same service network.

VPC Lattice service networks must be managed separately, as they are a broader concept that can cover resources outside the Kubernetes cluster. To create and manage a service network, you can either:

Gateways with the amazon-vpc-lattice GatewayClass do not create a single entrypoint that binds Listeners and Routes under them. Instead, each Route will have its own domain name assigned. To see an example of how domain names are assigned, please refer to our Getting Started Guide.

"},{"location":"api-types/gateway/#supported-gatewayclass","title":"Supported GatewayClass","text":""},{"location":"api-types/gateway/#limitations","title":"Limitations","text":""},{"location":"api-types/gateway/#example-configuration","title":"Example Configuration","text":"

Here is a sample configuration that demonstrates how to set up a Gateway:

apiVersion: gateway.networking.k8s.io/v1\nkind: Gateway\nmetadata:\n  name: my-hotel\nspec:\n  gatewayClassName: amazon-vpc-lattice\n  listeners:\n    - name: http\n      protocol: HTTP\n      port: 80\n    - name: https\n      protocol: HTTPS\n      port: 443\n      tls:\n        mode: Terminate\n        certificateRefs:\n          - name: unused\n        options:\n          application-networking.k8s.aws/certificate-arn: <certificate-arn>\n

The created Gateway will point to a VPC Lattice service network named my-hotel. Routes under this Gateway can have either the http or https listener as a parent, based on the protocol they intend to use, as sketched below.
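
For example, a route might attach to the https listener of this Gateway via parentRefs; a minimal sketch in which the route name is illustrative:

apiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n  name: inventory\nspec:\n  parentRefs:\n    - name: my-hotel\n      sectionName: https\n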

This Gateway documentation provides a detailed introduction, feature set, and a basic example of how to configure and use the resource within the AWS Gateway API Controller project. For in-depth details and specifications, you can refer to the official Gateway API documentation.

"},{"location":"api-types/grpc-route/","title":"GRPCRoute API Reference","text":""},{"location":"api-types/grpc-route/#introduction","title":"Introduction","text":"

With the integration of the Gateway API, AWS Gateway API Controller supports GRPCRoute. This allows you to define and manage the routing of gRPC traffic within your Kubernetes cluster.

"},{"location":"api-types/grpc-route/#grpcroute-key-features-limitations","title":"GRPCRoute Key Features & Limitations","text":"

Features:

Limitations:

"},{"location":"api-types/grpc-route/#annotations","title":"Annotations","text":""},{"location":"api-types/grpc-route/#example-configuration","title":"Example Configuration","text":"

Here is a sample configuration that demonstrates how to set up a GRPCRoute for a HelloWorld gRPC service:

apiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n  name: greeter-grpc-route\nspec:\n  parentRefs:\n    - name: my-hotel\n      sectionName: https\n  rules:\n    - matches:\n        - headers:\n            - name: testKey1\n              value: testValue1\n      backendRefs:\n        - name: greeter-grpc-server\n          kind: Service\n          port: 50051\n          weight: 10\n    - matches:\n        - method:\n            service: helloworld.Greeter\n            method: SayHello\n      backendRefs:\n        - name: greeter-grpc-server\n          kind: Service\n          port: 443\n

In this example:

This GRPCRoute documentation provides a detailed introduction, feature set, and a basic example of how to configure and use the resource within the AWS Gateway API Controller project. For in-depth details and specifications, you can refer to the official Gateway API documentation.

"},{"location":"api-types/http-route/","title":"HTTPRoute API Reference","text":""},{"location":"api-types/http-route/#introduction","title":"Introduction","text":"

With the integration of the Gateway API, AWS Gateway API Controller supports HTTPRoute. This allows you to define and manage the routing of HTTP and HTTPS traffic within your Kubernetes cluster.

"},{"location":"api-types/http-route/#httproute-key-features-limitations","title":"HTTPRoute Key Features & Limitations","text":"

Features:

Limitations:

"},{"location":"api-types/http-route/#annotations","title":"Annotations","text":""},{"location":"api-types/http-route/#example-configuration","title":"Example Configuration","text":""},{"location":"api-types/http-route/#example-1","title":"Example 1","text":"

Here is a sample configuration that demonstrates how to set up an HTTPRoute that forwards HTTP traffic to a Service and ServiceImport, using rules to determine which backendRef to route traffic to.

apiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n  name: inventory\nspec:\n  parentRefs:\n    - name: my-hotel\n      sectionName: http\n  rules:\n    - backendRefs:\n        - name: inventory-ver1\n          kind: Service\n          port: 80\n      matches:\n        - path:\n            type: PathPrefix\n            value: /ver1\n    - backendRefs:\n        - name: inventory-ver2\n          kind: ServiceImport\n          port: 80\n      matches:\n        - path:\n            type: PathPrefix\n            value: /ver2\n

In this example:

"},{"location":"api-types/http-route/#example-2","title":"Example 2","text":"

Here is a sample configuration that demonstrates how to set up an HTTPRoute that forwards HTTP and HTTPS traffic to a Service and ServiceImport, using weighted rules to route more traffic to one backendRef than the other. Weighted rules simplify the process of creating blue/green deployments by shifting rule weight from one backendRef to another.

apiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n  name: inventory\nspec:\n  parentRefs:\n    - name: my-hotel\n      sectionName: http\n    - name: my-hotel\n      sectionName: https\n  rules:\n    - backendRefs:\n        - name: inventory-ver1\n          kind: Service\n          port: 80\n          weight: 10\n        - name: inventory-ver2\n          kind: ServiceImport\n          port: 80\n          weight: 90\n

In this example:

This HTTPRoute documentation provides a detailed introduction, feature set, and a basic example of how to configure and use the resource within the AWS Gateway API Controller project. For in-depth details and specifications, you can refer to the official Gateway API documentation.

"},{"location":"api-types/iam-auth-policy/","title":"IAMAuthPolicy API Reference","text":""},{"location":"api-types/iam-auth-policy/#introduction","title":"Introduction","text":"

VPC Lattice Auth Policies are IAM policy documents that are attached to VPC Lattice Service Networks or Services to control principals' access to the attached Service Network's Services, or to the specific attached Service.

IAMAuthPolicy implements Direct Policy Attachment from the Gateway API's GEP-713: Metaresources and Policy Attachment. An IAMAuthPolicy can be attached to a Gateway, HTTPRoute, or GRPCRoute.

Please visit the VPC Lattice Auth Policy documentation page for more details about Auth Policies.

"},{"location":"api-types/iam-auth-policy/#features","title":"Features","text":"

Note: IAMAuthPolicy can only authorize traffic that travels through Gateways, HTTPRoutes, and GRPCRoutes. The authorization will not take effect if the client sends traffic directly to the k8s service DNS.

This article is also a good reference on how to set up VPC Lattice Auth Policies in Kubernetes.

"},{"location":"api-types/iam-auth-policy/#example-configuration","title":"Example Configuration","text":""},{"location":"api-types/iam-auth-policy/#example-1","title":"Example 1","text":"

This configuration attaches a policy to the Gateway, default/my-hotel. The policy only allows traffic with the header, header1=value1, through the Gateway. This means, for every child HTTPRoute and GRPCRoute of the Gateway, only traffic with the specified header will be authorized to access it.

apiVersion: application-networking.k8s.aws/v1alpha1\nkind: IAMAuthPolicy\nmetadata:\n    name: test-iam-auth-policy\nspec:\n    targetRef:\n        group: \"gateway.networking.k8s.io\"\n        kind: Gateway\n        name: my-hotel\n    policy: |\n        {\n            \"Version\": \"2012-10-17\",\n            \"Statement\": [\n                {\n                    \"Effect\": \"Allow\",\n                    \"Principal\": \"*\",\n                    \"Action\": \"vpc-lattice-svcs:Invoke\",\n                    \"Resource\": \"*\",\n                    \"Condition\": {\n                        \"StringEquals\": {\n                            \"vpc-lattice-svcs:RequestHeader/header1\": \"value1\"\n                        }\n                    }\n                }\n            ]\n        }\n
"},{"location":"api-types/iam-auth-policy/#example-2","title":"Example 2","text":"

This configuration attaches a policy to the HTTPRoute, examplens/my-route. The policy only allows traffic from the principal, 123456789012, to the HTTPRoute. Note that the traffic from the specified principal must be SigV4-signed to be authorized.

apiVersion: application-networking.k8s.aws/v1alpha1\nkind: IAMAuthPolicy\nmetadata:\n    name: test-iam-auth-policy\nspec:\n    targetRef:\n        group: \"gateway.networking.k8s.io\"\n        kind: HTTPRoute\n        namespace: examplens\n        name: my-route\n    policy: |\n        {\n            \"Version\": \"2012-10-17\",\n            \"Statement\": [\n                {\n                    \"Effect\": \"Allow\",\n                    \"Principal\": \"123456789012\",\n                    \"Action\": \"vpc-lattice-svcs:Invoke\",\n                    \"Resource\": \"*\"\n                }\n            ]\n        }\n
"},{"location":"api-types/service-export/","title":"ServiceExport API Reference","text":""},{"location":"api-types/service-export/#introduction","title":"Introduction","text":"

In AWS Gateway API Controller, ServiceExport enables a Service for multi-cluster traffic setup. Clusters can import the exported service with the ServiceImport resource.

Internally, creating a ServiceExport creates a standalone VPC Lattice target group. Even without ServiceImports, creating ServiceExports can be useful if you only need the target groups created - for example, to use the target groups in a VPC Lattice setup outside Kubernetes.

Note that ServiceExport is not an implementation of the Kubernetes Multicluster Service APIs; instead, AWS Gateway API Controller uses its own version of the resource for the purpose of Gateway API integration.

"},{"location":"api-types/service-export/#limitations","title":"Limitations","text":""},{"location":"api-types/service-export/#annotations","title":"Annotations","text":""},{"location":"api-types/service-export/#example-configuration","title":"Example Configuration","text":"

The following YAML creates a ServiceExport for a Service named service-1:

apiVersion: application-networking.k8s.aws/v1alpha1\nkind: ServiceExport\nmetadata:\n  name: service-1\n  annotations:\n    application-networking.k8s.aws/port: \"9200\"\nspec: {}\n

"},{"location":"api-types/service-import/","title":"ServiceImport API Reference","text":""},{"location":"api-types/service-import/#introduction","title":"Introduction","text":"

ServiceImport is a resource referring to a Service outside the cluster, paired with a ServiceExport resource defined in other clusters.

Just like Services, ServiceImports can be a backend reference of HTTPRoutes. Along with the cluster's own Services (and ServiceImports from even more clusters), you can distribute the traffic across multiple VPCs and clusters.

Note that ServiceImport is not an implementation of the Kubernetes Multicluster Service APIs; instead, AWS Gateway API Controller uses its own version of the resource for the purpose of Gateway API integration.

"},{"location":"api-types/service-import/#limitations","title":"Limitations","text":""},{"location":"api-types/service-import/#annotations","title":"Annotations","text":""},{"location":"api-types/service-import/#example-configuration","title":"Example Configuration","text":"

The following YAML imports service-1 exported from the designated cluster.

apiVersion: application-networking.k8s.aws/v1alpha1\nkind: ServiceImport\nmetadata:\n  name: service-1\n  annotations:\n    application-networking.k8s.aws/aws-eks-cluster-name: \"service-1-owner-cluster\"\n    application-networking.k8s.aws/aws-vpc: \"service-1-owner-vpc-id\"\nspec: {}\n

The following example HTTPRoute directs traffic to the above ServiceImport.

apiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n  name: my-route\nspec:\n  parentRefs:\n    - name: my-gateway\n      sectionName: http\n  rules:\n    - backendRefs:\n        - name: service-1\n          kind: ServiceImport\n

"},{"location":"api-types/service/","title":"Service API Reference","text":""},{"location":"api-types/service/#introduction","title":"Introduction","text":"

Kubernetes Services define a logical set of Pods and a policy by which to access them, often referred to as a microservice. The set of Pods targeted by a Service is determined by a selector.

"},{"location":"api-types/service/#service-key-features-limitations","title":"Service Key Features & Limitations","text":"

Features:

Limitations:

"},{"location":"api-types/service/#example-configuration","title":"Example Configuration:","text":""},{"location":"api-types/service/#example-1","title":"Example 1","text":"

Here's a basic example of a Service that routes traffic to Pods with the label app=MyApp:

apiVersion: v1\nkind: Service\nmetadata:\n  name: my-service\nspec:\n  selector:\n    app: MyApp\n  ports:\n    - protocol: TCP\n      port: 80\n      targetPort: 8080\n

In this example:

This Service documentation provides an overview of its key features, limitations, and basic examples of configuration within Kubernetes. For detailed specifications and advanced configurations, refer to the official Kubernetes Service documentation.

"},{"location":"api-types/target-group-policy/","title":"TargetGroupPolicy API Reference","text":""},{"location":"api-types/target-group-policy/#introduction","title":"Introduction","text":"

By default, AWS Gateway API Controller assumes plaintext HTTP/1 traffic for backend Kubernetes resources. TargetGroupPolicy is a CRD that can be attached to a Service or ServiceExport, allowing users to define the protocol, protocol version, and health check configuration of those backend resources.

When attaching a policy to a resource, the following restrictions apply:

The policy will not take effect if: - The resource does not exist - The resource is not referenced by any route - The resource is referenced by a route of unsupported type - The ProtocolVersion is non-empty while the TargetGroupPolicy protocol is TCP

Please check the TargetGroupPolicy API Reference for more details.

These restrictions are not enforced; for example, users may create a policy that targets a service that has not been created yet. However, the policy will not take effect unless the target is valid.

"},{"location":"api-types/target-group-policy/#limitations-and-considerations","title":"Limitations and Considerations","text":""},{"location":"api-types/target-group-policy/#example-configuration","title":"Example Configuration","text":"

This will enable HTTPS traffic between the gateway and the Kubernetes service, with a customized health check configuration.

apiVersion: application-networking.k8s.aws/v1alpha1\nkind: TargetGroupPolicy\nmetadata:\n    name: test-policy\nspec:\n    targetRef:\n        group: \"\"\n        kind: Service\n        name: my-parking-service\n    protocol: HTTPS\n    protocolVersion: HTTP1\n    healthCheck:\n        enabled: true\n        intervalSeconds: 5\n        timeoutSeconds: 1\n        healthyThresholdCount: 3\n        unhealthyThresholdCount: 2\n        path: \"/healthcheck\"\n        port: 80\n        protocol: HTTP\n        protocolVersion: HTTP1\n        statusMatch: \"200\"\n
"},{"location":"api-types/tls-route/","title":"TLSRoute API Reference","text":""},{"location":"api-types/tls-route/#introduction","title":"Introduction","text":"

With the integration of the Gateway API, AWS Gateway API Controller supports TLSRoute. This allows you to define and manage the routing of end-to-end TLS encrypted traffic to your Kubernetes clusters.

"},{"location":"api-types/tls-route/#considerations","title":"Considerations","text":""},{"location":"api-types/tls-route/#example-configuration","title":"Example Configuration","text":"

Here is a sample configuration that demonstrates how to set up a TLSRoute resource to route end-to-end TLS encrypted traffic to an nginx service:

apiVersion: gateway.networking.k8s.io/v1alpha2\nkind: TLSRoute\nmetadata:\n  name: nginx-tls-route\nspec:\n  hostnames:\n    - nginx-test.my-test.com\n  parentRefs:\n    - name: my-hotel-tls-passthrough\n      sectionName: tls\n  rules:\n    - backendRefs:\n        - name: nginx-tls\n          kind: Service\n          port: 443\n

In this example:

For the detailed TLS passthrough traffic connectivity setup, please refer to the user guide here.

For the detailed Gateway API TLSRoute resource specifications, you can refer to the official Kubernetes documentation.

For VPC Lattice TLS passthrough listener configuration details, you can refer to the VPC Lattice documentation.

"},{"location":"api-types/vpc-association-policy/","title":"VpcAssociationPolicy API Reference","text":""},{"location":"api-types/vpc-association-policy/#introduction","title":"Introduction","text":"

VpcAssociationPolicy is a Custom Resource Definition (CRD) that can be attached to a Gateway to define the configuration of the ServiceNetworkVpcAssociation between the Gateway's associated VPC Lattice Service Network and the cluster VPC.

"},{"location":"api-types/vpc-association-policy/#recommended-security-group-inbound-rules","title":"Recommended Security Group Inbound Rules","text":"Source Protocol Port Range Comment Kubernetes cluster VPC CIDR or security group reference Protocols defined in the gateway's listener section Ports defined in the gateway's listener section Allow inbound traffic from current cluster vpc to gateway"},{"location":"api-types/vpc-association-policy/#limitations-and-considerations","title":"Limitations and Considerations","text":"

When attaching a VpcAssociationPolicy to a resource, the following restrictions apply:

The security groups will not take effect if:

"},{"location":"api-types/vpc-association-policy/#removing-security-groups","title":"Removing Security Groups","text":"

The VPC Lattice UpdateServiceNetworkVpcAssociation API cannot be used to remove all security groups. If you have a VpcAssociationPolicy attached to a gateway that already has security groups applied, updating the VpcAssociationPolicy with empty security group ids or deleting the VpcAssociationPolicy will NOT remove the security groups from the gateway.

To remove security groups, you should instead delete the VPC association and re-create a new VPC association without security group ids, by following these steps: 1. Update the VpcAssociationPolicy by setting associateWithVpc to false and empty security group ids. 2. Update the VpcAssociationPolicy by setting associateWithVpc to true and empty security group ids. Note: Setting associateWithVpc to false will disable traffic from the current cluster workloads to the gateway.
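
A hedged sketch of what step 1 might look like, reusing the policy name and target from the example below; step 2 is identical except that associateWithVpc is set back to true:

apiVersion: application-networking.k8s.aws/v1alpha1\nkind: VpcAssociationPolicy\nmetadata:\n    name: test-vpc-association-policy\nspec:\n    targetRef:\n        group: \"gateway.networking.k8s.io\"\n        kind: Gateway\n        name: my-hotel\n    # step 1: drop security group ids and disassociate; step 2 flips this back to true\n    associateWithVpc: false\n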

"},{"location":"api-types/vpc-association-policy/#example-configuration","title":"Example Configuration","text":"

This configuration attaches a policy to the Gateway, default/my-hotel. The ServiceNetworkVpcAssociation between the Gateway's corresponding VPC Lattice Service Network and the cluster VPC is updated based on the policy contents.

If the expected ServiceNetworkVpcAssociation does not exist, it is created since associateWithVpc is set to true. This allows traffic from clients in the cluster VPC to VPC Lattice Services in the associated Service Network. Additionally, two security groups (sg-1234567890 and sg-0987654321) are attached to the ServiceNetworkVpcAssociation.

apiVersion: application-networking.k8s.aws/v1alpha1\nkind: VpcAssociationPolicy\nmetadata:\n    name: test-vpc-association-policy\nspec:\n    targetRef:\n        group: \"gateway.networking.k8s.io\"\n        kind: Gateway\n        name: my-hotel\n    securityGroupIds:\n        - sg-1234567890\n        - sg-0987654321\n    associateWithVpc: true\n
"},{"location":"concepts/concepts/","title":"AWS Gateway API Controller User Guide","text":"

As part of the VPC Lattice launch, AWS introduced the AWS Gateway API Controller, an implementation of the Kubernetes Gateway API. The Gateway API is an open-source standard interface that enables Kubernetes application networking through expressive, extensible, and role-oriented interfaces. AWS Gateway API Controller extends the custom resources defined by the Gateway API, allowing you to create VPC Lattice resources using Kubernetes APIs.

When installed in your cluster, the controller watches for the creation of Gateway API resources such as gateways and routes and provisions corresponding Amazon VPC Lattice objects. This enables users to configure VPC Lattice Services, VPC Lattice service networks and Target Groups using Kubernetes APIs, without needing to write custom code or manage sidecar proxies. The AWS Gateway API Controller is an open-source project and fully supported by Amazon.

AWS Gateway API Controller integrates with Amazon VPC Lattice and allows you to:

This documentation describes how to set up AWS Gateway API Controller and provides example use cases, development concepts, and API references. AWS Gateway API Controller gives developers the ability to publish services running on Kubernetes clusters and other compute platforms on AWS, such as AWS Lambda or Amazon EC2. Once the AWS Gateway API Controller is deployed and running, you will be able to manage services for multiple Kubernetes clusters and other compute targets on AWS through the following:

Integrating with the Kubernetes Gateway API provides a Kubernetes-native experience for developers to create services and manage network routing and traffic behavior without the heavy lifting of managing the underlying networking infrastructure. This lets you work with Kubernetes service-related resources using Kubernetes APIs and custom resource definitions (CRDs) defined by the Kubernetes networking.k8s.io specification.

For more information on this technology, see Kubernetes Gateway API.

"},{"location":"concepts/overview/","title":"Understanding the Gateway API Controller","text":"

For medium and large-scale customers, applications often spread across multiple areas of a cloud. For example, information pertaining to a company\u2019s authentication, billing, and inventory may each be served by services running in different VPCs in AWS. Someone wanting to run an application that is spread out in this way might find themselves having to work with multiple ways to configure:

This is not a new problem. A common approach to interconnecting services that span multiple VPCs is to use service meshes. However, these require sidecars, which can introduce scaling problems and present their own management challenges, such as dealing with control plane and data plane at scale.

If you just want to run an application, you should be shielded from the details needed to find assets across multiple VPCs and multiple clusters. You should also have consistent ways of working with assets across your VPCs, even if those assets include different combinations of instances, clusters, containers, and serverless. And while running multi-VPC applications becomes simpler for users, administrators still need the tools to control and audit their resources to suit their company\u2019s compliance needs.

"},{"location":"concepts/overview/#service-directory-networks-policies-and-gateways","title":"Service Directory, Networks, Policies and Gateways","text":"

The goal of VPC Lattice is to provide a way to have a single, overarching services view of all services across multiple VPCs. You should also have consistent ways of working with assets across your VPCs, even if those assets include different combinations of instances, clusters, containers, and serverless. The components making up that view include:

Service

An independently deployable unit of software that delivers a specific task or function. A service can run on EC2 instances or ECS containers, or as Lambda functions, within an account or a virtual private cloud (VPC).

A VPC Lattice service has the following components: target groups, listeners, and rules.

Service Network

A logical boundary for a collection of services. A client is any resource deployed in a VPC that is associated with the service network. Clients and services that are associated with the same service network can communicate with each other if they are authorized to do so.

In the following figure, the clients can communicate with both services, because the VPC and services are associated with the same service network.

Service Directory A central registry of all VPC Lattice services that you own or are shared with your account through AWS Resource Access Manager (AWS RAM).

Auth Policies

Fine-grained authorization policies that can be used to define access to services. You can attach separate authorization policies to individual services or to the service network. For example, you can create a policy for how a payment service running on an Auto Scaling group of EC2 instances should interact with a billing service running in AWS Lambda.
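As a sketch, an auth policy document can be attached to a service or service network with the AWS CLI; the resource identifier and policy file below are placeholders:

aws vpc-lattice put-auth-policy \\\n    --resource-identifier <service-or-service-network-id> \\\n    --policy file://auth-policy.json\n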

"},{"location":"concepts/overview/#use-cases","title":"Use-cases","text":"

In the context of Kubernetes, Amazon VPC Lattice helps to simplify the following:

With VPC Lattice you can also avoid some of these common problems:

"},{"location":"concepts/overview/#relationship-between-vpc-lattice-and-kubernetes","title":"Relationship between VPC Lattice and Kubernetes","text":"

As a Kubernetes user, you can have a very Kubernetes-native experience using the VPC Lattice APIs. The following figure illustrates how VPC Lattice objects connect to Kubernetes Gateway API objects:

As shown in the figure, there are different personas associated with different levels of control in VPC Lattice. Notice that the Kubernetes Gateway API syntax is used to create the gateway, HTTPRoute and services, but Kubernetes gets the details of those items from VPC Lattice:

This is all done by checking the related VPC Lattice Services (and related policies), Target Groups, and Targets.

Keep in mind that you can have different Target Groups spread across different clusters/VPCs receiving traffic from the same VPC Lattice Service (HTTPRoute).

"},{"location":"contributing/developer-cheat-sheet/","title":"Developer Cheat Sheet","text":""},{"location":"contributing/developer-cheat-sheet/#startup","title":"Startup","text":"

The program flow is roughly as follows. On startup, cmd/aws-application-networking-k8s/main.go#main() runs. This initializes all the key components and registers various controllers (in controllers/) with the Kubernetes control plane. These controllers include event handlers (in controllers/eventhandlers/) whose basic function is to convert object notifications to more specific types and enqueue them for processing by the various controllers.

"},{"location":"contributing/developer-cheat-sheet/#build-and-deploy","title":"Build and Deploy","text":"

Processing takes place in a controller's Reconcile() method, which will check if the object has been changed/created or deleted. Sometimes this invokes different paths, but most involve a buildAndDeployModel() step.

"},{"location":"contributing/developer-cheat-sheet/#build","title":"Build","text":"

In the \"build\" step, the controller invokes a \"model builder\" with the changed Kubernetes object. Model builders (in pkg/gateway/) convert the Kubernetes object state into an intermediate VPC Lattice object called a \"model type\". These model types (in pkg/model/lattice) are basic structs which contain the important information about the source Kubernetes object and fields containing Lattice-related values. These models are built only using information from Kubernetes and are intended to contain all details needed to make updates against the VPC Lattice API. Once created, model ojbects are stored in a \"stack\" (/pkg/model/core/stack.go). The stack contains basic methods for storing and retrieving objects based on model type.

"},{"location":"contributing/developer-cheat-sheet/#deploy","title":"Deploy","text":"

Once the \"stack\" is populated, it is serialized for logging then passed to the \"deploy\" step. Here, a \"deployer\" (/pkg/deployer/stack_deployer.go) will invoke a number of \"synthesizers\" (in /pkg/deploy/lattice/) to read model objects from the stack and convert these to VPC Lattice API calls (via Synthesize()). Most synthesizer interaction with the Lattice APIs is deferred to various \"managers\" (also /pkg/deploy/lattice). These managers each have their own interface definition with CRUD methods that bridge model types and Lattice API types. Managers, in turn, use /pkg/aws/services/vpclattice.go for convenience methods against the Lattice API only using Lattice API types.

"},{"location":"contributing/developer-cheat-sheet/#other-notes","title":"Other notes","text":"

Model Status Structs

On each model type is a <Type>Status struct which contains information about the object from the Lattice API, such as ARNs or IDs. These are populated either on creation against the Lattice API or in cases where the API object already exists but needs updating via the model.

Resource Cleanup

A typical part of model synthesis (though not unique to it) is checking all resources in the VPC Lattice API to see if they belong to deleted or missing Kubernetes resources. This helps ensure Lattice resources are not leaked due to bugs in the reconciliation process.

"},{"location":"contributing/developer-cheat-sheet/#code-consistency","title":"Code Consistency","text":"

To help reduce cognitive load while working through the code, it is critical to be consistent. For now, this applies most directly to imports and variable naming.

"},{"location":"contributing/developer-cheat-sheet/#imports","title":"Imports","text":"

Common imports should all use the same alias (or lack of alias) so that when we see aws. or vpclattice. or sdk. in the code we know what they refer to without having to double-check. Here are the conventions:

import (\n  \"github.com/aws/aws-sdk-go/service/vpclattice\" // no alias\n  \"github.com/aws/aws-sdk-go/aws\" // no alias\n\n  corev1 \"k8s.io/api/core/v1\"\n  gwv1 \"sigs.k8s.io/gateway-api/apis/v1\"\n  ctrl \"sigs.k8s.io/controller-runtime\"\n\n  pkg_aws \"github.com/aws/aws-application-networking-k8s/pkg/aws\"\n  model \"github.com/aws/aws-application-networking-k8s/pkg/model/lattice\"\n  anv1alpha1 \"github.com/aws/aws-application-networking-k8s/pkg/apis/applicationnetworking/v1alpha1\"\n)\n

For unit tests, this changes slightly for readability when the imports are used strictly for mocking:

import (\n  mocks_aws \"github.com/aws/aws-application-networking-k8s/pkg/aws\"\n  mocks \"github.com/aws/aws-application-networking-k8s/pkg/aws/services\"\n  mock_client \"github.com/aws/aws-application-networking-k8s/mocks/controller-runtime/client\"\n)\n

"},{"location":"contributing/developer-cheat-sheet/#service-service-or-service","title":"Service, Service, or Service?","text":"

Similarly, because there is overlap between the Gateway API spec, model types, and Lattice API nouns, it is important to use differentiating names between the types in components where the name could be ambiguous. The safest approach is to use a prefix:

An example in practice would be:

  var k8sSvc *corev1.Service\n  var modelSvc *model.Service\n  var latticeSvc *vpclattice.ServiceSummary\n

There are some objects which interact unambiguously with an underlying type; for example, in vpclattice.go the types are always VPC Lattice API types, so disambiguating is less important there.

"},{"location":"contributing/developer/","title":"Developer Guide","text":""},{"location":"contributing/developer/#prerequisites","title":"Prerequisites","text":"

Tools

Install these tools before proceeding:

  1. AWS CLI,
  2. kubectl - the Kubernetes CLI,
  3. helm - the package manager for Kubernetes,
  4. eksctl - the CLI for Amazon EKS,
  5. go v1.20.x - language,
  6. yq - CLI to manipulate yaml files,
  7. jq - CLI to manipulate json files,
  8. make - build automation tool.

Cluster creation and setup

Before proceeding to the next sections, you need to:

  1. Create and set up a cluster named dev-cluster with the controller installed, following the AWS Gateway API Controller installation guide on Amazon EKS.

    Note

    You can either install the Controller and CRDs by following the steps in the installation guide, or use the steps below if you prefer to create the individual CRDs.

  2. Clone the AWS Gateway API Controller repository.

    git clone git@github.com:aws/aws-application-networking-k8s.git\ncd aws-application-networking-k8s\n

  3. Install dependencies with the toolchain.sh script:
    make toolchain\n
"},{"location":"contributing/developer/#setup","title":"Setup","text":"

Once the cluster is ready, we need to apply the CRDs for gateway-api resources. First, install the core gateway-api CRDs:

v1.2 CRDs

Install the latest v1 CRDs:

kubectl kustomize \"github.com/kubernetes-sigs/gateway-api/config/crd?ref=v1.2.0\" | kubectl apply -f -\n

Note

Note that the v1 CRDs are not included in the deploy-*.yaml manifests or the Helm chart.

Then install the additional CRDs for the controller:

kubectl apply -f config/crds/bases/externaldns.k8s.io_dnsendpoints.yaml\nkubectl apply -f config/crds/bases/gateway.networking.k8s.io_tlsroutes.yaml\nkubectl apply -f config/crds/bases/application-networking.k8s.aws_serviceexports.yaml\nkubectl apply -f config/crds/bases/application-networking.k8s.aws_serviceimports.yaml\nkubectl apply -f config/crds/bases/application-networking.k8s.aws_targetgrouppolicies.yaml\nkubectl apply -f config/crds/bases/application-networking.k8s.aws_vpcassociationpolicies.yaml\nkubectl apply -f config/crds/bases/application-networking.k8s.aws_accesslogpolicies.yaml\nkubectl apply -f config/crds/bases/application-networking.k8s.aws_iamauthpolicies.yaml\n

If e2e tests are terminated during execution, the clean-up stage may be skipped and resources will leak. To delete dangling resources manually, use the cleanup script:

make e2e-clean\n
"},{"location":"contributing/developer/#local-development","title":"Local Development","text":"

A minimal test of changes can be done with make presubmit. This command also runs on every PR.

make presubmit\n

Start the controller in development mode; it will point to the cluster configured above (see Setup).

# should be the region of the cluster\nREGION=us-west-2 make run\n

You can explore a collection of different yaml configurations in the examples folder that can be applied to the cluster.

To run against a specific VPC Lattice service endpoint:

LATTICE_ENDPOINT=https://vpc-lattice.us-west-2.amazonaws.com/ make run\n

To load environment variables more easily when running the controller locally from the GoLand IDE, you can run ./scripts/load_env_variables.sh and use the \"EnvFile\" GoLand plugin to read the env variables from the generated .env file.

"},{"location":"contributing/developer/#end-to-end-testing","title":"End-to-End Testing","text":"

For larger changes it's recommended to run e2e suites on your local cluster. E2E tests require a service network named test-gateway, associated with the cluster VPC, in order to run. You can either set up the service network manually (a sketch follows the command below) or use the DEFAULT_SERVICE_NETWORK option when running the controller locally. (e.g. DEFAULT_SERVICE_NETWORK=test-gateway make run)

REGION=us-west-2 make e2e-test\n
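If you prefer to create the test-gateway service network manually, a minimal sketch with the AWS CLI, mirroring the association commands from the Getting Started guide (assumes $CLUSTER_NAME is set):

aws vpc-lattice create-service-network --name test-gateway\nSERVICE_NETWORK_ID=$(aws vpc-lattice list-service-networks --query \"items[?name==\"\\'test-gateway\\'\"].id\" | jq -r '.[]')\nCLUSTER_VPC_ID=$(aws eks describe-cluster --name $CLUSTER_NAME | jq -r .cluster.resourcesVpcConfig.vpcId)\naws vpc-lattice create-service-network-vpc-association --service-network-identifier $SERVICE_NETWORK_ID --vpc-identifier $CLUSTER_VPC_ID\n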

For the RAM Share test suite, which runs cross-account e2e tests, you will need a secondary AWS Account with a role that can be assumed by the primary account during test execution. You can create an IAM Role, with a Trust Policy allowing the primary account to assume it, via the AWS IAM Console.

export SECONDARY_ACCOUNT_TEST_ROLE_ARN=arn:aws:iam::000000000000:role/MyRole\nexport FOCUS=\"RAM Share\"\nREGION=us-west-2 make e2e-test\n

You can use the FOCUS environment variable to run specific test cases based on a filter condition. Assign the string from a Describe(\"xxxxxx\") or It(\"xxxxxx\") block to the FOCUS environment variable to run those test cases.

var _ = Describe(\"HTTPRoute path matches\", func() {\n    It(\"HTTPRoute should support multiple path matches\", func() {\n        // test case body\n    })\n})\n

For example, to run the test case \"HTTPRoute should support multiple path matches\", you could run the following command:

export FOCUS=\"HTTPRoute should support multiple path matches\"\nexport REGION=us-west-2\nmake e2e-test\n

Conversely, you can use the SKIP environment variable to skip specific test cases.

For example, to skip the same test as above, you would run the following command:

export SKIP=\"HTTPRoute should support multiple path matches\"\n

For more detail on Ginkgo filter conditions, see https://onsi.github.io/ginkgo/#focused-specs and https://onsi.github.io/ginkgo/#description-based-filtering

After all test cases finish running, the AfterSuite() function will clean up the Kubernetes and VPC Lattice resources created by the test run.

"},{"location":"contributing/developer/#documentations","title":"Documentations","text":"

The controller documentation is managed in the docs/ directory and built with mkdocs. It uses mike to manage versioning. To build and verify your changes locally:

pip install -r requirements.txt\nmake docs\n
The website will be located in the site/ directory. You can also run a local dev server with mike serve or mkdocs serve.

"},{"location":"contributing/developer/#contributing","title":"Contributing","text":"

Before sending a Pull Request, you should run unit tests:

make presubmit\n

For larger, functional changes, run e2e tests:

make e2e-test\n

"},{"location":"contributing/developer/#make-docker-image","title":"Make Docker Image","text":"
make docker-build\n
"},{"location":"contributing/developer/#deploy-controller-inside-a-kubernetes-cluster","title":"Deploy Controller inside a Kubernetes Cluster","text":"

Generate deploy.yaml

make build-deploy\n

Then follow Deploying the AWS Gateway API Controller to configure and deploy the docker image.

"},{"location":"guides/advanced-configurations/","title":"Advanced configurations","text":"

The sections below cover advanced configuration techniques for installing and using the AWS Gateway API Controller, such as running the controller on a self-managed cluster on AWS or using an IPv6 EKS cluster.

"},{"location":"guides/advanced-configurations/#using-a-self-managed-kubernetes-cluster","title":"Using a self-managed Kubernetes cluster","text":"

You can install the AWS Gateway API Controller on a self-managed Kubernetes cluster in AWS.

However, the controller utilizes IMDS to get necessary information from instance metadata, such as AWS account ID and VPC ID. So:
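If IMDS is not available on your cluster, a sketch of supplying the required values explicitly through the environment variables documented in the Configuration guide (all values below are placeholders):

export CLUSTER_NAME=my-cluster\nexport CLUSTER_VPC_ID=vpc-0123456789abcdef0\nexport AWS_ACCOUNT_ID=111122223333\nexport REGION=us-west-2\n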

"},{"location":"guides/advanced-configurations/#ipv6-support","title":"IPv6 support","text":"

IPv6 address type is automatically used for your services and pods if your cluster is configured to use IPv6 addresses.

If your cluster is configured to be dual-stack, you can set the IP address type of your service using the ipFamilies field. For example:

parking_service.yaml
apiVersion: v1\nkind: Service\nmetadata:\n  name: ipv4-target-in-dual-stack-cluster\nspec:\n  ipFamilies:\n    - \"IPv4\"\n  selector:\n    app: parking\n  ports:\n    - protocol: TCP\n      port: 80\n      targetPort: 8090\n
"},{"location":"guides/custom-domain-name/","title":"Configure a Custom Domain Name for HTTPRoute","text":"

When you create an HTTPRoute under the amazon-vpc-lattice GatewayClass, the controller creates an AWS VPC Lattice service during reconciliation. VPC Lattice generates a unique Fully Qualified Domain Name (FQDN) for you; however, this auto-generated domain name is not easy to remember.

If you'd prefer to use a custom domain name for an HTTPRoute, you can specify it in the hostnames field of the HTTPRoute. Here is one example:

custom-domain-route.yaml
apiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n  name: review\nspec:\n  hostnames:\n  - review.my-test.com  # this is the custom domain name\n  parentRefs:\n  - name: my-hotel\n    sectionName: http\n  rules:    \n  - backendRefs:\n    - name: review2\n      kind: Service\n      port: 8090\n    matches:\n    - path:\n        type: PathPrefix\n        value: /review2\n
"},{"location":"guides/custom-domain-name/#managing-dns-records-using-externaldns","title":"Managing DNS records using ExternalDNS","text":"

To register custom domain names with your DNS provider, we recommend using ExternalDNS. AWS Gateway API Controller supports ExternalDNS integration through the CRD source - the controller will manage the DNSEndpoint resource for you.

To use ExternalDNS with the AWS Gateway API Controller, you need to:

  1. Install the DNSEndpoint CRD. This is bundled with both the Gateway API Controller Helm chart and the files/controller-installation/deploy-*.yaml manifests, but can also be installed manually with the following command:

    kubectl apply -f config/crds/bases/externaldns.k8s.io_dnsendpoints.yaml\n

    Note

    If the DNSEndpoint CRD does not exist, the DNSEndpoint resource will not be created or managed by the controller.

  2. Restart the controller if it is already running.

  3. Run the ExternalDNS controller watching the crd source. The following example command runs ExternalDNS compiled from source, using the AWS Route53 provider:
    build/external-dns --source crd --crd-source-apiversion externaldns.k8s.io/v1alpha1 \\\n--crd-source-kind DNSEndpoint --provider aws --txt-prefix \"prefix.\"\n
  4. Create HTTPRoutes and Services. The controller should create a DNSEndpoint resource owned by the HTTPRoute you created.
  5. ExternalDNS will watch the changes and create DNS records on the configured DNS provider. You can verify the managed endpoints as sketched below.
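As a quick check, a sketch for listing the DNSEndpoint resources managed by the controller (assuming the CRD's resource name is dnsendpoints):

kubectl get dnsendpoints -A\n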
"},{"location":"guides/custom-domain-name/#notes","title":"Notes","text":""},{"location":"guides/deploy/","title":"Deploy the AWS Gateway API Controller on Amazon EKS","text":"

This Deployment Guide provides an end-to-end procedure to install the AWS Gateway API Controller with Amazon Elastic Kubernetes Service.

Amazon EKS is a simple, recommended way of preparing a cluster for running services with the AWS Gateway API Controller; however, the controller can be used on any Kubernetes cluster on AWS. Check out the Advanced Configurations section below for instructions on how to install and run the controller on self-managed Kubernetes clusters on AWS.

"},{"location":"guides/deploy/#prerequisites","title":"Prerequisites","text":"

Install these tools before proceeding:

  1. AWS CLI,
  2. kubectl - the Kubernetes CLI,
  3. helm - the package manager for Kubernetes,
  4. eksctl - the CLI for Amazon EKS,
  5. jq - CLI to manipulate json files.
"},{"location":"guides/deploy/#setup","title":"Setup","text":"

Set your AWS Region and Cluster Name as environment variables. See the Amazon VPC Lattice FAQs for a list of supported regions.

export AWS_REGION=<cluster_region>\nexport CLUSTER_NAME=<cluster_name>\n

Create a cluster (optional)

You can easily create a cluster with eksctl, the CLI for Amazon EKS:

eksctl create cluster --name $CLUSTER_NAME --region $AWS_REGION\n

Allow traffic from Amazon VPC Lattice

You must set up security groups so that all Pods communicating with VPC Lattice allow traffic from the VPC Lattice managed prefix lists. See Control traffic to resources using security groups for details. Lattice has both IPv4 and IPv6 prefix lists available.

  1. Configure the EKS nodes' security group to receive traffic from the VPC Lattice network.

    CLUSTER_SG=<your_node_security_group>\n

    Note

    If you created the cluster with the eksctl create cluster --name $CLUSTER_NAME --region $AWS_REGION command, you can use this command to export the security group ID:

    CLUSTER_SG=$(aws eks describe-cluster --name $CLUSTER_NAME --output json| jq -r '.cluster.resourcesVpcConfig.clusterSecurityGroupId')\n
    PREFIX_LIST_ID=$(aws ec2 describe-managed-prefix-lists --query \"PrefixLists[?PrefixListName==\"\\'com.amazonaws.$AWS_REGION.vpc-lattice\\'\"].PrefixListId\" | jq -r '.[]')\naws ec2 authorize-security-group-ingress --group-id $CLUSTER_SG --ip-permissions \"PrefixListIds=[{PrefixListId=${PREFIX_LIST_ID}}],IpProtocol=-1\"\nPREFIX_LIST_ID_IPV6=$(aws ec2 describe-managed-prefix-lists --query \"PrefixLists[?PrefixListName==\"\\'com.amazonaws.$AWS_REGION.ipv6.vpc-lattice\\'\"].PrefixListId\" | jq -r '.[]')\naws ec2 authorize-security-group-ingress --group-id $CLUSTER_SG --ip-permissions \"PrefixListIds=[{PrefixListId=${PREFIX_LIST_ID_IPV6}}],IpProtocol=-1\"\n

Set up IAM permissions

The AWS Gateway API Controller needs the necessary permissions to operate.

  1. Create a policy (recommended-inline-policy.json) in IAM with the following content, which grants permission to invoke the VPC Lattice APIs, and copy the policy ARN for later use:

    curl https://raw.githubusercontent.com/aws/aws-application-networking-k8s/main/files/controller-installation/recommended-inline-policy.json  -o recommended-inline-policy.json\n\naws iam create-policy \\\n    --policy-name VPCLatticeControllerIAMPolicy \\\n    --policy-document file://recommended-inline-policy.json\n\nexport VPCLatticeControllerIAMPolicyArn=$(aws iam list-policies --query 'Policies[?PolicyName==`VPCLatticeControllerIAMPolicy`].Arn' --output text)\n
  2. Create the aws-application-networking-system namespace:

    kubectl apply -f https://raw.githubusercontent.com/aws/aws-application-networking-k8s/main/files/controller-installation/deploy-namesystem.yaml\n

You can choose between Pod Identities (recommended) and IAM Roles for Service Accounts (IRSA) to set up controller permissions.

Pod Identities (recommended)IRSA

Set up the Pod Identities Agent

To use Pod Identities, we need to set up the Pod Identity Agent and configure the controller's Kubernetes service account to assume the necessary permissions with EKS Pod Identity.

Read if you are using a custom node role

The node role needs to have permissions for the Pod Identity Agent to do the AssumeRoleForPodIdentity action in the EKS Auth API. Follow the documentation if you are not using the AWS managed policy AmazonEKSWorkerNodePolicy.

  1. Run the following AWS CLI command to create the Pod Identity addon.
    aws eks create-addon --cluster-name $CLUSTER_NAME --addon-name eks-pod-identity-agent --addon-version v1.0.0-eksbuild.1\n
    Verify that the Pod Identity Agent pods are running:

    kubectl get pods -n kube-system | grep 'eks-pod-identity-agent'\n

Assign role to Service Account

Create an IAM role and associate it with a Kubernetes service account.

  1. Create a Service Account.

    cat >gateway-api-controller-service-account.yaml <<EOF\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n    name: gateway-api-controller\n    namespace: aws-application-networking-system\nEOF\nkubectl apply -f gateway-api-controller-service-account.yaml\n
  2. Create a trust policy file for the IAM role.

    cat >trust-relationship.json <<EOF\n{\n    \"Version\": \"2012-10-17\",\n    \"Statement\": [\n        {\n            \"Sid\": \"AllowEksAuthToAssumeRoleForPodIdentity\",\n            \"Effect\": \"Allow\",\n            \"Principal\": {\n                \"Service\": \"pods.eks.amazonaws.com\"\n            },\n            \"Action\": [\n                \"sts:AssumeRole\",\n                \"sts:TagSession\"\n            ]\n        }\n    ]\n}\nEOF\n
  3. Create the role.

    aws iam create-role --role-name VPCLatticeControllerIAMRole --assume-role-policy-document file://trust-relationship.json --description \"IAM Role for AWS Gateway API Controller for VPC Lattice\"\naws iam attach-role-policy --role-name VPCLatticeControllerIAMRole --policy-arn=$VPCLatticeControllerIAMPolicyArn\nexport VPCLatticeControllerIAMRoleArn=$(aws iam list-roles --query 'Roles[?RoleName==`VPCLatticeControllerIAMRole`].Arn' --output text)\n
  4. Create the association:

    aws eks create-pod-identity-association --cluster-name $CLUSTER_NAME --role-arn $VPCLatticeControllerIAMRoleArn --namespace aws-application-networking-system --service-account gateway-api-controller\n

You can use AWS IAM Roles for Service Accounts (IRSA) to assign the Controller the necessary permissions via a ServiceAccount.

  1. Create an IAM OIDC provider: See Creating an IAM OIDC provider for your cluster for details.

    eksctl utils associate-iam-oidc-provider --cluster $CLUSTER_NAME --approve --region $AWS_REGION\n

  2. Create an iamserviceaccount for pod-level permissions:

    eksctl create iamserviceaccount \\\n    --cluster=$CLUSTER_NAME \\\n    --namespace=aws-application-networking-system \\\n    --name=gateway-api-controller \\\n    --attach-policy-arn=$VPCLatticeControllerIAMPolicyArn \\\n    --override-existing-serviceaccounts \\\n    --region $AWS_REGION \\\n    --approve\n
"},{"location":"guides/deploy/#install-the-controller","title":"Install the Controller","text":"
  1. Run either kubectl or helm to deploy the controller. Check Environment Variables for a detailed explanation of each configuration option.

    HelmKubectl
    # login to ECR\naws ecr-public get-login-password --region us-east-1 | helm registry login --username AWS --password-stdin public.ecr.aws\n# Run helm with either install or upgrade\nhelm install gateway-api-controller \\\n    oci://public.ecr.aws/aws-application-networking-k8s/aws-gateway-controller-chart \\\n    --version=v1.1.0 \\\n    --set=serviceAccount.create=false \\\n    --namespace aws-application-networking-system \\\n    --set=log.level=info # use \"debug\" for debug level logs\n
    kubectl apply -f https://raw.githubusercontent.com/aws/aws-application-networking-k8s/main/files/controller-installation/deploy-v1.1.0.yaml\n
  2. Create the amazon-vpc-lattice GatewayClass:

    kubectl apply -f https://raw.githubusercontent.com/aws/aws-application-networking-k8s/main/files/controller-installation/gatewayclass.yaml\n

"},{"location":"guides/environment/","title":"Configuration","text":""},{"location":"guides/environment/#environment-variables","title":"Environment Variables","text":"

AWS Gateway API Controller for VPC Lattice supports a number of configuration options, which are set through environment variables. The following environment variables are available, and all of them are optional.

"},{"location":"guides/environment/#cluster_name","title":"CLUSTER_NAME","text":"

Type: string

Default: Inferred from IMDS metadata

A unique name to identify a cluster. This will be used in AWS resource tags to record ownership. This variable is required except on EKS clusters, and must be specified if IMDS is not available.

"},{"location":"guides/environment/#cluster_vpc_id","title":"CLUSTER_VPC_ID","text":"

Type: string

Default: Inferred from IMDS metadata

When running AWS Gateway API Controller outside the Kubernetes Cluster, this specifies the VPC of the cluster. This needs to be specified if IMDS is not available.

"},{"location":"guides/environment/#aws_account_id","title":"AWS_ACCOUNT_ID","text":"

Type: string

Default: Inferred from IMDS metadata

When running AWS Gateway API Controller outside the Kubernetes Cluster, this specifies the AWS account. This needs to be specified if IMDS is not available.

"},{"location":"guides/environment/#region","title":"REGION","text":"

Type: string

Default: Inferred from IMDS metadata

When running AWS Gateway API Controller outside the Kubernetes Cluster, this specifies the AWS Region of VPC Lattice Service endpoint. This needs to be specified if IMDS is not available.

"},{"location":"guides/environment/#log_level","title":"LOG_LEVEL","text":"

Type: string

Default: \"info\"

When set as \"debug\", the AWS Gateway API Controller will emit debug level logs.

"},{"location":"guides/environment/#default_service_network","title":"DEFAULT_SERVICE_NETWORK","text":"

Type: string

Default: \"\"

When set to a non-empty value, the controller creates a service network with that name. The created service network will also be associated with the cluster VPC.

"},{"location":"guides/environment/#enable_service_network_override","title":"ENABLE_SERVICE_NETWORK_OVERRIDE","text":"

Type: string

Default: \"\"

When set as \"true\", the controller will run in \"single service network\" mode that will override all gateways to point to default service network, instead of searching for service network with the same name. Can be used for small setups and conformance tests.

"},{"location":"guides/environment/#webhook_enabled","title":"WEBHOOK_ENABLED","text":"

Type: string

Default: \"\"

When set as \"true\", the controller will start the webhook listener responsible for pod readiness gate injection (see pod-readiness-gates.md). This is disabled by default for deploy.yaml because the controller will not start successfully without the TLS certificate for the webhook in place. While this can be fixed by running scripts/gen-webhook-cert.sh, it requires manual action. The webhook is enabled by default for the Helm install as the Helm install will also generate the necessary certificate.

"},{"location":"guides/environment/#disable_tagging_service_api","title":"DISABLE_TAGGING_SERVICE_API","text":"

Type: string

Default: \"\"

When set as \"true\", the controller will not use the AWS Resource Groups Tagging API.

The Resource Groups Tagging API is only available on the public internet, so customers using private clusters will need to enable this option. When enabled, the controller uses VPC Lattice APIs to look up tags, which is less performant and requires more API calls.

The Helm chart sets this value to \"false\" by default.

"},{"location":"guides/environment/#route_max_concurrent_reconciles","title":"ROUTE_MAX_CONCURRENT_RECONCILES","text":"

Type: int

Default: 1

Maximum number of concurrently running reconcile loops per route type (HTTP, GRPC, TLS).

"},{"location":"guides/getstarted/","title":"Getting Started with AWS Gateway API Controller","text":"

This guide helps you get started using the controller.

Following this guide, you will:

Using these examples as a foundation, see the Concepts section for ways to further configure service-to-service communications.

"},{"location":"guides/getstarted/#prerequisites","title":"Prerequisites","text":"

Before proceeding to the next sections, you need to:

"},{"location":"guides/getstarted/#single-cluster","title":"Single cluster","text":"

This example creates a single cluster in a single VPC, then configures two HTTPRoutes (rates and inventory) and three Kubernetes services (parking, review, and inventory-1). The following figure illustrates this setup:

Set up in-cluster service-to-service communications

  1. AWS Gateway API Controller needs a VPC Lattice service network to operate.

    HelmAWS CLI

    If you installed the controller with helm, you can update chart configurations by specifying the defaultServiceNetwork variable:

    aws ecr-public get-login-password --region us-east-1 | helm registry login --username AWS --password-stdin public.ecr.aws\nhelm upgrade gateway-api-controller \\\noci://public.ecr.aws/aws-application-networking-k8s/aws-gateway-controller-chart \\\n--version=v1.1.0 \\\n--reuse-values \\\n--namespace aws-application-networking-system \\\n--set=defaultServiceNetwork=my-hotel\n

    You can use the AWS CLI to manually create the my-hotel service network and associate it with the cluster VPC:

    aws vpc-lattice create-service-network --name my-hotel\nSERVICE_NETWORK_ID=$(aws vpc-lattice list-service-networks --query \"items[?name==\"\\'my-hotel\\'\"].id\" | jq -r '.[]')\nCLUSTER_VPC_ID=$(aws eks describe-cluster --name $CLUSTER_NAME | jq -r .cluster.resourcesVpcConfig.vpcId)\naws vpc-lattice create-service-network-vpc-association --service-network-identifier $SERVICE_NETWORK_ID --vpc-identifier $CLUSTER_VPC_ID\n

    Ensure the service network created above is ready to accept traffic from the cluster VPC by checking whether the VPC association status is ACTIVE:

    aws vpc-lattice list-service-network-vpc-associations --vpc-id $CLUSTER_VPC_ID\n
    {\n    \"items\": [\n        {\n            ...\n            \"status\": \"ACTIVE\",\n            ...\n        }\n    ]\n}\n

  2. Create the Kubernetes Gateway my-hotel:

    kubectl apply -f files/examples/my-hotel-gateway.yaml\n

    Verify that the my-hotel Gateway is created with a PROGRAMMED status equal to True:

    kubectl get gateway\n
    NAME       CLASS                ADDRESS   PROGRAMMED   AGE\nmy-hotel   amazon-vpc-lattice               True      7d12h\n

  3. Create the Kubernetes HTTPRoute rates, which has path matches routing to the parking and review services:

    kubectl apply -f files/examples/parking.yaml\nkubectl apply -f files/examples/review.yaml\nkubectl apply -f files/examples/rate-route-path.yaml\n

  4. Create another Kubernetes HTTPRoute inventory:
    kubectl apply -f files/examples/inventory-ver1.yaml\nkubectl apply -f files/examples/inventory-route.yaml\n
  5. Find out the HTTPRoute's DNS name from the HTTPRoute status:

    kubectl get httproute\n
    NAME        HOSTNAMES   AGE\ninventory               51s\nrates                   6m11s\n

  6. Check the VPC Lattice generated DNS address for the inventory and rates HTTPRoutes (this could take up to one minute to populate):

    kubectl get httproute inventory -o yaml \n
        apiVersion: gateway.networking.k8s.io/v1\n    kind: HTTPRoute\n    metadata:\n        annotations:\n        application-networking.k8s.aws/lattice-assigned-domain-name: inventory-default-xxxxxx.xxxxx.vpc-lattice-svcs.us-west-2.on.aws\n    ...\n

    kubectl get httproute rates -o yaml\n
        apiVersion: gateway.networking.k8s.io/v1\n    kind: HTTPRoute\n    metadata:\n        annotations:\n        application-networking.k8s.aws/lattice-assigned-domain-name: rates-default-xxxxxx.xxxxxxxxx.vpc-lattice-svcs.us-west-2.on.aws\n    ...\n

  7. If the previous step returns the expected response, store the VPC Lattice assigned DNS names in variables.

    ratesFQDN=$(kubectl get httproute rates -o json | jq -r '.metadata.annotations.\"application-networking.k8s.aws/lattice-assigned-domain-name\"')\ninventoryFQDN=$(kubectl get httproute inventory -o json | jq -r '.metadata.annotations.\"application-networking.k8s.aws/lattice-assigned-domain-name\"')\n

    Confirm that the URLs are stored correctly:

    echo \"$ratesFQDN \\n$inventoryFQDN\"\n
    rates-default-xxxxxx.xxxxxxxxx.vpc-lattice-svcs.us-west-2.on.aws \ninventory-default-xxxxxx.xxxxx.vpc-lattice-svcs.us-west-2.on.aws\n

Verify service-to-service communications

  1. Check connectivity from the inventory-ver1 service to parking and review services:

    kubectl exec deploy/inventory-ver1 -- curl -s $ratesFQDN/parking $ratesFQDN/review\n
    Requesting to Pod(parking-xxxxx): parking handler pod\nRequesting to Pod(review-xxxxxx): review handler pod\n
  2. Check connectivity from the parking service to the inventory-ver1 service:

    kubectl exec deploy/parking -- curl -s $inventoryFQDN\n
    Requesting to Pod(inventory-xxx): Inventory-ver1 handler pod\n
    This confirms that service-to-service communication within one cluster is working as expected.

"},{"location":"guides/getstarted/#multi-cluster","title":"Multi-cluster","text":"

This section builds on the previous one. We will be migrating the Kubernetes inventory service from the EKS cluster we previously created to a new cluster in a different VPC, located in the same AWS account.

Set up the inventory-ver2 service and ServiceExport in the second cluster

Warning

VPC Lattice is a regional service, so you will need to create this second cluster in the same AWS Region as gw-api-controller-demo. Keep this in mind when setting the AWS_REGION variable in the following steps.

export AWS_REGION=<clusters_region>\n

  1. Create a second Kubernetes cluster gw-api-controller-demo-2 with the Controller installed (using the same instructions used to create the first cluster and install the controller on Amazon EKS).

  2. For the sake of simplicity, let's set aliases for our clusters in the Kubernetes config file.

    aws eks update-kubeconfig --name gw-api-controller-demo --region $AWS_REGION --alias gw-api-controller-demo\naws eks update-kubeconfig --name gw-api-controller-demo-2 --region $AWS_REGION --alias gw-api-controller-demo-2\nkubectl config get-contexts\n
  3. Ensure you're using the second cluster's kubectl context.

    kubectl config current-context\n
    If your context is set to the first cluster, switch it to the second cluster:
    kubectl config use-context gw-api-controller-demo-2\n

  4. Create the service network association.

    HelmAWS CLI

    If you installed the controller with helm, you can update chart configurations by specifying the defaultServiceNetwork variable:

    aws ecr-public get-login-password --region us-east-1 | helm registry login --username AWS --password-stdin public.ecr.aws\nhelm upgrade gateway-api-controller \\\noci://public.ecr.aws/aws-application-networking-k8s/aws-gateway-controller-chart \\\n--version=v1.1.0 \\\n--reuse-values \\\n--namespace aws-application-networking-system \\\n--set=defaultServiceNetwork=my-hotel\n

    You can use the AWS CLI to manually create a VPC Lattice service network association with the previously created my-hotel service network:

    SERVICE_NETWORK_ID=$(aws vpc-lattice list-service-networks --query \"items[?name==\"\\'my-hotel\\'\"].id\" | jq -r '.[]')\nCLUSTER_VPC_ID=$(aws eks describe-cluster --name $CLUSTER_NAME | jq -r .cluster.resourcesVpcConfig.vpcId)\naws vpc-lattice create-service-network-vpc-association --service-network-identifier $SERVICE_NETWORK_ID --vpc-identifier $CLUSTER_VPC_ID\n

    Ensure the service network created above is ready to accept traffic from the new VPC by checking whether the VPC association status is ACTIVE:

    aws vpc-lattice list-service-network-vpc-associations --vpc-id $CLUSTER_VPC_ID\n
    {\n    \"items\": [\n        {\n            ...\n            \"status\": \"ACTIVE\",\n            ...\n        }\n    ]\n}\n

  5. Create the Kubernetes Gateway my-hotel:

    kubectl apply -f files/examples/my-hotel-gateway.yaml\n

    Verify that the my-hotel Gateway is created with a PROGRAMMED status equal to True:

    kubectl get gateway\n
    NAME       CLASS                ADDRESS   PROGRAMMED   AGE\nmy-hotel   amazon-vpc-lattice               True      7d12h\n

  6. Create a Kubernetes inventory-ver2 service in the second cluster:

    kubectl apply -f files/examples/inventory-ver2.yaml\n

  7. Export the Kubernetes inventory-ver2 service from the second cluster, so that it can be referenced by an HTTPRoute in the first cluster:
    kubectl apply -f files/examples/inventory-ver2-export.yaml\n

Switch back to the first cluster

  1. Switch context back to the first cluster
    kubectl config use-context gw-api-controller-demo\n
  2. Create the Kubernetes ServiceImport inventory-ver2 in the first cluster:
    kubectl apply -f files/examples/inventory-ver2-import.yaml\n
  3. Update the HTTPRoute inventory rules to route 10% traffic to the first cluster and 90% traffic to the second cluster:
    kubectl apply -f files/examples/inventory-route-bluegreen.yaml\n
  4. Check the service-to-service connectivity from parking (in the first cluster) to inventory-ver1 (in the first cluster) and inventory-ver2 (in the second cluster):

    inventoryFQDN=$(kubectl get httproute inventory -o json | jq -r '.metadata.annotations.\"application-networking.k8s.aws/lattice-assigned-domain-name\"')\nkubectl exec deploy/parking -- sh -c 'for ((i=1; i<=30; i++)); do curl -s \"$0\"; done' \"$inventoryFQDN\"\n

    Requesting to Pod(inventory-ver2-6dc74b45d8-rlnlt): Inventory-ver2 handler pod <----> in 2nd cluster\nRequesting to Pod(inventory-ver2-6dc74b45d8-rlnlt): Inventory-ver2 handler pod\nRequesting to Pod(inventory-ver2-6dc74b45d8-rlnlt): Inventory-ver2 handler pod\nRequesting to Pod(inventory-ver2-6dc74b45d8-rlnlt): Inventory-ver2 handler pod\nRequesting to Pod(inventory-ver1-74fc59977-wg8br): Inventory-ver1 handler pod <----> in 1st cluster\nRequesting to Pod(inventory-ver2-6dc74b45d8-rlnlt): Inventory-ver2 handler pod\nRequesting to Pod(inventory-ver2-6dc74b45d8-95rsr): Inventory-ver2 handler pod\nRequesting to Pod(inventory-ver2-6dc74b45d8-95rsr): Inventory-ver2 handler pod\nRequesting to Pod(inventory-ver1-74fc59977-wg8br): Inventory-ver1 handler pod....\n

    You can see that the traffic is distributed between inventory-ver1 and inventory-ver2 as expected.

"},{"location":"guides/getstarted/#cleanup","title":"Cleanup","text":"

To avoid additional charges, remove the demo infrastructure from your AWS account.

Multi-cluster

Delete the resources created in the Multi-cluster walkthrough.

Warning

Remember that you need to have the AWS Region set.

export AWS_REGION=<cluster_region>\n

  1. Clean up the VPC Lattice service and the service import in the gw-api-controller-demo cluster:

    kubectl config use-context gw-api-controller-demo\nkubectl delete -f files/examples/inventory-route-bluegreen.yaml\nkubectl delete -f files/examples/inventory-ver2-import.yaml\n

  2. Delete the service export and applications in the gw-api-controller-demo-2 cluster:

    kubectl config use-context gw-api-controller-demo-2\nkubectl delete -f files/examples/inventory-ver2-export.yaml\nkubectl delete -f files/examples/inventory-ver2.yaml\n

  3. Delete the service network association (this could take up to one minute):

    CLUSTER_NAME=gw-api-controller-demo-2\nCLUSTER_VPC_ID=$(aws eks describe-cluster --name $CLUSTER_NAME | jq -r .cluster.resourcesVpcConfig.vpcId)\nSERVICE_NETWORK_ASSOCIATION_IDENTIFIER=$(aws vpc-lattice list-service-network-vpc-associations --vpc-id $CLUSTER_VPC_ID --query \"items[?serviceNetworkName==\"\\'my-hotel\\'\"].id\" | jq -r '.[]')\naws vpc-lattice delete-service-network-vpc-association  --service-network-vpc-association-identifier $SERVICE_NETWORK_ASSOCIATION_IDENTIFIER\n

Single cluster

Delete the resources created in the Single cluster walkthrough.

Warning

Remember that you need to have the AWS Region set.

export AWS_REGION=<cluster_region>\n

  1. Delete the VPC Lattice services and applications in the gw-api-controller-demo cluster:

    kubectl config use-context gw-api-controller-demo\nkubectl delete -f files/examples/inventory-route.yaml\nkubectl delete -f files/examples/inventory-ver1.yaml\nkubectl delete -f files/examples/rate-route-path.yaml\nkubectl delete -f files/examples/parking.yaml\nkubectl delete -f files/examples/review.yaml\nkubectl delete -f files/examples/my-hotel-gateway.yaml\n

  2. Delete the service network association (this could take up to one minute):

    CLUSTER_NAME=gw-api-controller-demo\nCLUSTER_VPC_ID=$(aws eks describe-cluster --name $CLUSTER_NAME | jq -r .cluster.resourcesVpcConfig.vpcId)\nSERVICE_NETWORK_ASSOCIATION_IDENTIFIER=$(aws vpc-lattice list-service-network-vpc-associations --vpc-id $CLUSTER_VPC_ID --query \"items[?serviceNetworkName==\"\\'my-hotel\\'\"].id\" | jq -r '.[]')\naws vpc-lattice delete-service-network-vpc-association  --service-network-vpc-association-identifier $SERVICE_NETWORK_ASSOCIATION_IDENTIFIER\n

Cleanup VPC Lattice Resources

  1. Clean up the controllers in the gw-api-controller-demo and gw-api-controller-demo-2 clusters:

    HelmKubectl
    kubectl config use-context gw-api-controller-demo\nCLUSTER_NAME=gw-api-controller-demo\naws ecr-public get-login-password --region us-east-1 | helm registry login --username AWS --password-stdin public.ecr.aws\nhelm uninstall gateway-api-controller --namespace aws-application-networking-system \nkubectl config use-context gw-api-controller-demo-2\nCLUSTER_NAME=gw-api-controller-demo-2\naws ecr-public get-login-password --region us-east-1 | helm registry login --username AWS --password-stdin public.ecr.aws\nhelm uninstall gateway-api-controller --namespace aws-application-networking-system \n
    kubectl config use-context gw-api-controller-demo\nkubectl delete -f https://raw.githubusercontent.com/aws/aws-application-networking-k8s/main/files/controller-installation/deploy-v1.1.0.yaml\nkubectl config use-context gw-api-controller-demo-2\nkubectl delete -f https://raw.githubusercontent.com/aws/aws-application-networking-k8s/main/files/controller-installation/deploy-v1.1.0.yaml\n
  2. Delete the service network:

    1. Ensure the service network associations have been deleted (do not move forward if the deletion is still IN PROGRESS):
      SN_IDENTIFIER=$(aws vpc-lattice list-service-networks --query \"items[?name==\"\\'my-hotel\\'\"].id\" | jq -r '.[]')\naws vpc-lattice list-service-network-vpc-associations --service-network-identifier $SN_IDENTIFIER\n
    2. Delete my-hotel service network:
      aws vpc-lattice delete-service-network --service-network-identifier $SN_IDENTIFIER\n
    3. Ensure the service network my-hotel is deleted:
      aws vpc-lattice list-service-networks\n

Cleanup the clusters

Finally, remember to delete the clusters you created for this walkthrough:

  1. Delete gw-api-controller-demo cluster:

    eksctl delete cluster --name=gw-api-controller-demo\nkubectl config delete-context gw-api-controller-demo\n

  2. Delete gw-api-controller-demo-2 cluster:

    eksctl delete cluster --name=gw-api-controller-demo-2\nkubectl config delete-context gw-api-controller-demo-2\n

"},{"location":"guides/grpc/","title":"GRPCRoute Support","text":""},{"location":"guides/grpc/#what-is-grpcroute","title":"What is GRPCRoute?","text":"

The GRPCRoute is a custom resource defined in the Gateway API that specifies how gRPC traffic should be routed. It allows you to set up routing rules based on various match criteria, such as service names and methods. With GRPCRoute, you can ensure that your gRPC traffic is directed to the appropriate backend services in a Kubernetes environment.

For a detailed reference on GRPCRoute from the Gateway API, please check the official Gateway API documentation.
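As an illustrative sketch only (this is not the greeter-grpc-route.yaml applied below; the parent listener name, backend Service name, and port are assumptions), a GRPCRoute matching a fully-qualified gRPC service and method might look like:

kubectl apply -f - <<EOF\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n  name: greeter\nspec:\n  parentRefs:\n  - name: my-hotel\n    sectionName: https\n  rules:\n  - matches:\n    - method:\n        service: helloworld.Greeter  # fully-qualified gRPC service name\n        method: SayHello             # gRPC method\n    backendRefs:\n    - name: greeter-grpc-server     # assumed backend Service name\n      kind: Service\n      port: 50051                   # assumed gRPC port\nEOF\n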

"},{"location":"guides/grpc/#setting-up-a-helloworld-grpc-server","title":"Setting up a HelloWorld gRPC Server","text":"

In this section, we'll walk you through deploying a simple \"HelloWorld\" gRPC server and setting up the required routing rules using the Gateway API.

Deploying the Necessary Resources

  1. Apply the Gateway Configuration: This YAML file contains the definition for a gateway with an HTTPS listener.

    kubectl apply -f files/examples/my-hotel-gateway-multi-listeners.yaml\n

  2. Deploy the gRPC Server: Deploy the example gRPC server which will respond to the SayHello gRPC request.

    kubectl apply -f files/examples/greeter-grpc-server.yaml\n

  3. Set Up the gRPC Route: This YAML file contains the GRPCRoute resource, which directs the gRPC traffic to our example server.

    kubectl apply -f files/examples/greeter-grpc-route.yaml\n

  4. Verify the Deployment: Check to make sure that our gRPC server pod is running and get its name.

    kubectl get pods -A\n

Testing the gRPC Server

  1. Access the gRPC Server Pod: Copy the name of the pod running the greeter-grpc-server and use it to access the pod's shell.

    kubectl exec -it <name-of-grpc-server-pod> -- bash\n

  2. Prepare the Test Client: Inside the pod shell, create a test client by pasting the provided Go code.

    cat << EOF > test.go\npackage main\n\nimport (\n   \"crypto/tls\"\n   \"log\"\n   \"os\"\n\n   \"golang.org/x/net/context\"\n   \"google.golang.org/grpc\"\n   \"google.golang.org/grpc/credentials\"\n   pb \"google.golang.org/grpc/examples/helloworld/helloworld\"\n)\n\nfunc main() {\n   if len(os.Args) < 3 {\n   log.Fatalf(\"Usage: %s <address> <port>\", os.Args[0])\n   }\n\n   address := os.Args[1] + \":\" + os.Args[2]\n\n   // Create a connection with insecure TLS (no server verification).\n   creds := credentials.NewTLS(&tls.Config{\n       InsecureSkipVerify: true,\n   })\n   conn, err := grpc.Dial(address, grpc.WithTransportCredentials(creds))\n   if err != nil {\n       log.Fatalf(\"did not connect: %v\", err)\n   }\n   defer conn.Close()\n   c := pb.NewGreeterClient(conn)\n\n   // Contact the server and print out its response.\n   name := \"world\"\n   if len(os.Args) > 3 {\n       name = os.Args[3]\n   }\n   r, err := c.SayHello(context.Background(), &pb.HelloRequest{Name: name})\n   if err != nil {\n       log.Fatalf(\"could not greet: %v\", err)\n   }\n   log.Printf(\"Greeting: %s\", r.Message)\n}\nEOF\n

  3. Run the Test Client: Execute the test client, making sure to replace <SERVICE DNS> with the VPC Lattice service DNS and <PORT> with the port your Lattice listener uses (in this example, we use 443).

    go run test.go <SERVICE DNS> <PORT>\n

Expected Output

If everything is set up correctly, you should see the following output:

Greeting: Hello world\n

This confirms that our gRPC request was successfully routed through VPC Lattice and processed by our greeter-grpc-server.

"},{"location":"guides/https/","title":"HTTPS","text":""},{"location":"guides/https/#configure-https-connections","title":"Configure HTTPs connections","text":"

The Getting Started guide uses HTTP communications by default. Using the examples here, you can change that to HTTPS. If you choose, you can further customize your HTTPS connections by adding custom domain names and certificates, as described below.

NOTE: You can get the yaml files used on this page by cloning the AWS Gateway API Controller for VPC Lattice repository. The files are in the files/examples/ directory.

"},{"location":"guides/https/#securing-traffic-using-https","title":"Securing Traffic using HTTPs","text":"

By adding an https listener to the amazon-vpc-lattice gateway, you can tell the listener to use HTTPS communications. The following modifications to the files/examples/my-hotel-gateway.yaml file add HTTPS communications:

my-hotel-gateway.yaml
apiVersion: gateway.networking.k8s.io/v1\nkind: Gateway\nmetadata:\n  name: my-hotel\nspec:\n  gatewayClassName: amazon-vpc-lattice\n  listeners:\n  - name: http\n    protocol: HTTP\n    port: 80\n  - name: https         # Specify https listener\n    protocol: HTTPS     # Specify HTTPS protocol\n    port: 443           # Specify communication on port 443\n...\n

Next, the following modifications to the files/examples/rate-route-path.yaml file tell the rates HTTPRoute to use HTTPs for communications:

rate-route-path.yaml
apiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n  name: rates\nspec:\n  parentRefs:\n  - name: my-hotel\n    sectionName: http \n  - name: my-hotel      # Specify the parentRefs name\n    sectionName: https  # Specify all traffic MUST use HTTPs\n  rules:\n...\n

In this case, the VPC Lattice service automatically generates a managed ACM certificate and uses it for encrypting client-to-service traffic.

"},{"location":"guides/https/#bring-your-own-certificate-byoc","title":"Bring Your Own Certificate (BYOC)","text":"

If you want to use a custom domain name along with its own certificate, you can:

The following shows modifications to files/examples/my-hotel-gateway.yaml to add a custom certificate:

my-hotel-gateway.yaml

apiVersion: gateway.networking.k8s.io/v1\nkind: Gateway\nmetadata:\n  name: my-hotel\n  annotations:\n    application-networking.k8s.aws/lattice-vpc-association: \"true\"\nspec:\n  gatewayClassName: amazon-vpc-lattice\n  listeners:\n  - name: http\n    protocol: HTTP\n    port: 80\n  - name: https\n    protocol: HTTPS      # This is required\n    port: 443\n    tls:\n      mode: Terminate    # This is required\n      certificateRefs:   # This is required per API spec, but currently not used by the controller\n      - name: unused\n      options:           # Instead, we specify ACM certificate ARN under this section\n        application-networking.k8s.aws/certificate-arn: arn:aws:acm:us-west-2:<account>:certificate/<certificate-id>\n
Note that only Terminate mode is supported (Passthrough is not supported).

Next, associate the HTTPRoute with the listener you just configured:

rate-route-path.yaml
apiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n  name: rates\nspec:\n  hostnames:\n    - review.my-test.com               # MUST match the DNS in the certificate\n  parentRefs:\n  - name: my-hotel\n    sectionName: http \n  - name: my-hotel                     # Use the listener defined above as parentRef\n    sectionName: https\n...\n
"},{"location":"guides/https/#enabling-tls-connection-on-the-backend","title":"Enabling TLS connection on the backend","text":"

Currently, TLS Passthrough mode is not supported in the controller. However, the controller does support TLS re-encryption for backends that only accept TLS connections. To handle this use case, you need to configure your service to receive HTTPS traffic instead:

target-group.yaml
apiVersion: application-networking.k8s.aws/v1alpha1\nkind: TargetGroupPolicy\nmetadata:\n    name: test-policy\nspec:\n    targetRef:\n        group: \"\"\n        kind: Service\n        name: my-parking-service # Put service name here\n    protocol: HTTPS\n    protocolVersion: HTTP1\n

This will create a VPC Lattice target group with the HTTPS protocol option, which can receive TLS traffic. Note that certificate validation is not supported.
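To confirm the protocol on the resulting target group, a sketch using the AWS CLI:

aws vpc-lattice list-target-groups\n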

For more details, please refer to TargetGroupPolicy API reference.

"},{"location":"guides/pod-readiness-gates/","title":"Pod readiness gate","text":"

AWS Gateway API Controller supports \"Pod readiness gates\" to indicate that a pod is registered to VPC Lattice and healthy to receive traffic. The controller automatically injects the necessary readiness gate configuration into the pod spec via a mutating webhook during pod creation.

For the readiness gate configuration to be injected into the pod spec, you need to apply the label application-networking.k8s.aws/pod-readiness-gate-inject: enabled to the pod namespace.

The pod readiness gate is needed under certain circumstances to achieve full zero downtime rolling deployments. Consider the following example:

In order to avoid this situation, the AWS Gateway API controller can set the readiness condition on the pods that constitute your ingress or service backend. The condition status on a pod will be set to True only when the corresponding target in the VPC Lattice target group shows a health state of \"Healthy\". This prevents the rolling update of a deployment from terminating old pods until the newly created pods are \"Healthy\" in the VPC Lattice target group and ready to take traffic.

"},{"location":"guides/pod-readiness-gates/#setup","title":"Setup","text":"

Pod readiness gates rely on \"admission webhooks\", where the Kubernetes API server makes calls to the AWS Gateway API controller as part of pod creation. This call is made using TLS, so the controller must present a TLS certificate. This certificate is stored as a standard Kubernetes secret. If you are using Helm, the certificate will automatically be configured as part of the Helm install.

If you are manually deploying the controller using the deploy.yaml file, you will need to either patch the deploy.yaml file (see scripts/patch-deploy-yaml.sh) or generate the secret following installation (see scripts/gen-webhook-secret.sh) and manually enable the webhook via the WEBHOOK_ENABLED environment variable.
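For example, a sketch of the manual path (the Deployment name and namespace below match the deploy.yaml excerpt shown later in this section):

./scripts/gen-webhook-secret.sh\nkubectl -n aws-application-networking-system set env deployment/gateway-api-controller WEBHOOK_ENABLED=true\n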

Note that, without the secret in place, the controller cannot start successfully, and you will see an error message like the following:

{\"level\":\"error\",\"ts\":\"...\",\"logger\":\"setup\",\"caller\":\"workspace/main.go:240\",\"msg\":\"tls: failed to find any PEM data in certificate inputproblem running manager\"}\n
For this reason, the webhook is DISABLED by default in the controller for the non-Helm install. You can enable the webhook by setting the WEBHOOK_ENABLED environment variable to \"true\" in the deploy.yaml file.
apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: gateway-api-controller\n  namespace: aws-application-networking-system\n  labels:\n    control-plane: gateway-api-controller\nspec:\n  ...\n  template:\n    metadata:\n      annotations:\n        kubectl.kubernetes.io/default-container: manager\n      labels:\n        control-plane: gateway-api-controller\n    spec:\n      securityContext:\n        runAsNonRoot: true\n      containers:\n      - command:\n        ...\n        name: manager\n        ...\n        env:\n          - name: WEBHOOK_ENABLED\n            value: \"true\"   # <-- value of \"true\" enables the webhook in the controller\n
If you run scripts/patch-deploy-yaml.sh prior to installing deploy.yaml, the script will create the necessary TLS certificates and configuration and will enable the webhook in the controller. Note that, even with the webhook enabled, the webhook will only run for namespaces labeled with application-networking.k8s.aws/pod-readiness-gate-inject: enabled.

"},{"location":"guides/pod-readiness-gates/#enabling-the-readiness-gate","title":"Enabling the readiness gate","text":"

After a Helm install, or after manually configuring and enabling the webhook, you are ready to begin using pod readiness gates. Apply a label to each namespace in which you would like to use this feature. You can create and label a namespace as follows:

$ kubectl create namespace example-ns\nnamespace/example-ns created\n\n$ kubectl label namespace example-ns application-networking.k8s.aws/pod-readiness-gate-inject=enabled\nnamespace/example-ns labeled\n\n$ kubectl describe namespace example-ns\nName:         example-ns\nLabels:       application-networking.k8s.aws/pod-readiness-gate-inject=enabled\n              kubernetes.io/metadata.name=example-ns\nAnnotations:  <none>\nStatus:       Active\n

Once labeled, the controller will add the pod readiness gates to all subsequently created pods in the namespace.

The readiness gates have the condition type application-networking.k8s.aws/pod-readiness-gate, and the controller injects the configuration into the pod spec only during pod creation.
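
You can confirm the injection on a newly created pod; a quick sketch, assuming a pod named my-pod in the labeled namespace:

kubectl get pod my-pod -n example-ns -o jsonpath='{.spec.readinessGates}'\n# expected shape of the output:\n# [{\"conditionType\":\"application-networking.k8s.aws/pod-readiness-gate\"}]\n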

"},{"location":"guides/pod-readiness-gates/#object-selector","title":"Object Selector","text":"

The default webhook configuration matches all pods in namespaces containing the label application-networking.k8s.aws/pod-readiness-gate-inject=enabled. You can further restrict the webhook to specific pods within a labeled namespace by specifying an objectSelector. For example, to select ONLY pods with the application-networking.k8s.aws/pod-readiness-gate-inject: enabled label, instead of all pods in the labeled namespace, add the following objectSelector to the webhook:

  objectSelector:\n    matchLabels:\n      application-networking.k8s.aws/pod-readiness-gate-inject: enabled\n
To edit the webhook configuration:
$ kubectl edit mutatingwebhookconfigurations aws-appnet-gwc-mutating-webhook\n  ...\n  name: mpod.gwc.k8s.aws\n  namespaceSelector:\n    matchExpressions:\n    - key: application-networking.k8s.aws/pod-readiness-gate-inject\n      operator: In\n      values:\n      - enabled\n  objectSelector:\n    matchLabels:\n      application-networking.k8s.aws/pod-readiness-gate-inject: enabled\n  ...\n
When you specify multiple selectors, only pods matching all of the conditions will get mutated.
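
With the objectSelector in place, pods must carry the label themselves. A sketch of adding it to a deployment's pod template, assuming a deployment named nginx-test:

kubectl patch deployment nginx-test -n example-ns --type merge \\\n  -p '{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"application-networking.k8s.aws/pod-readiness-gate-inject\":\"enabled\"}}}}}'\n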

"},{"location":"guides/pod-readiness-gates/#checking-the-pod-condition-status","title":"Checking the pod condition status","text":"

The status of the readiness gates can be verified with kubectl get pod -o wide:

NAME                          READY   STATUS    RESTARTS   AGE   IP         NODE                       READINESS GATES\nnginx-test-5744b9ff84-7ftl9   1/1     Running   0          81s   10.1.2.3   ip-10-1-2-3.ec2.internal   0/1\n

When the target is registered and healthy in the VPC Lattice target group, the output will look like:

NAME                          READY   STATUS    RESTARTS   AGE   IP         NODE                       READINESS GATES\nnginx-test-5744b9ff84-7ftl9   1/1     Running   0          81s   10.1.2.3   ip-10-1-2-3.ec2.internal   1/1\n

If a readiness gate does not become ready, you can check the reason via:

$ kubectl get pod nginx-test-545d8f4d89-l7rcl -o yaml | grep -B7 'type: application-networking.k8s.aws/pod-readiness-gate'\nstatus:\n  conditions:\n  - lastProbeTime: null\n    lastTransitionTime: null\n    reason: HEALTHY\n    status: \"True\"\n    type: application-networking.k8s.aws/pod-readiness-gate\n
"},{"location":"guides/ram-sharing/","title":"Share Kubernetes Gateway (VPC Lattice Service Network) between different AWS accounts","text":"

AWS Resource Access Manager (AWS RAM) helps you share your resources across AWS accounts, within your AWS Organization or Organizational Units (OUs). AWS RAM supports two types of VPC Lattice resource sharing: VPC Lattice services and service networks.

Let's build an example where Account A (sharer account) shares its service network with Account B (sharee account), and Account B can access all Kubernetes services (VPC Lattice Target Groups) and Kubernetes HTTPRoutes (VPC Lattice services) within this sharer account's service network.

Create VPC Lattice Resources

In Account A, set up a cluster with the Controller and an example application installed. You can follow the Getting Started guide up to the \"Single Cluster\" section.

Share the VPC Lattice Service Network

Now that we have a VPC Lattice service network and service in Account A, share this service network with Account B.

  1. Retrieve the my-hotel service network identifier:

    aws vpc-lattice list-service-networks --query \"items[?name==\"\\'my-hotel\\'\"].id\" | jq -r '.[]'\n

  2. Share the my-hotel service network, using the identifier retrieved in the previous step. The console steps follow; an equivalent AWS CLI sketch appears after this list.

    1. Open the AWS RAM console in Account A and create a resource share.
    2. Select VPC Lattice service network resource sharing type.
    3. Select the my-hotel service network identifier retrieved in the previous step.
    4. Associate AWS Managed Permissions.
    5. Set Account B as principal.
    6. Review and create the resource share.
  3. Open Account B's AWS RAM console and accept Account A's service network sharing invitation in the \"Shared with me\" section.

  4. Switch back to Account A and retrieve the service network ID.

    SERVICE_NETWORK_ID=$(aws vpc-lattice list-service-networks --query \"items[?name==\"\\'my-hotel\\'\"].id\" | jq -r '.[]')\necho $SERVICE_NETWORK_ID\n
  5. Switch to Account B and verify that the my-hotel service network resource is available in Account B (referring to the SERVICE_NETWORK_ID retrieved in the previous step).

  6. Now choose an Amazon VPC in Account B to attach to the my-hotel service network.

    VPC_ID=<your_vpc_id>\naws vpc-lattice create-service-network-vpc-association --service-network-identifier $SERVICE_NETWORK_ID --vpc-identifier $VPC_ID\n

    Warning

    VPC Lattice is a regional service, therefore the VPC must be in the same AWS Region as the service network you created in Account A.
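
As referenced in step 2, here is a hedged AWS CLI alternative to the console-based sharing flow; the resource share name is an assumption, and the ARNs and account ID are placeholders:

# In Account A: share the service network (hypothetical name, placeholder ARN)\naws ram create-resource-share --name my-hotel-share \\\n  --resource-arns <my-hotel-service-network-arn> \\\n  --principals <account-b-id>\n\n# In Account B: list pending invitations, then accept by ARN\naws ram get-resource-share-invitations\naws ram accept-resource-share-invitation \\\n  --resource-share-invitation-arn <invitation-arn>\n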

Test cross-account connectivity

You can verify that the parking and review microservices in Account A can be consumed from resources in the associated VPC in Account B.

  1. To simplify, let's create and connect to a Cloud9 environment in the VPC you previously attached to the my-hotel service network.

  2. In Account A, retrieve the VPC Lattice service URLs.

    ratesFQDN=$(aws vpc-lattice list-services --query \"items[?name==\"\\'rates-default\\'\"].dnsEntry\" | jq -r '.[].domainName')\ninventoryFQDN=$(aws vpc-lattice list-services --query \"items[?name==\"\\'inventory-default\\'\"].dnsEntry\" | jq -r '.[].domainName')\necho \"$ratesFQDN \\n$inventoryFQDN\"\n

  3. In the Cloud9 instance in Account B, install curl and query the parking and review microservices:

    sudo apt-get install curl\ncurl $ratesFQDN/parking $ratesFQDN/review\n

Cleanup

To avoid additional charges, remove the demo infrastructure from your AWS Accounts.

  1. Delete the service network association you created in Account B. In Account A:

    VPC_ID=<accountB_vpc_id>\nSERVICE_NETWORK_ASSOCIATION_IDENTIFIER=$(aws vpc-lattice list-service-network-vpc-associations --vpc-id $VPC_ID --query \"items[?serviceNetworkName==\"\\'my-hotel\\'\"].id\" | jq -r '.[]')\naws vpc-lattice delete-service-network-vpc-association  --service-network-vpc-association-identifier $SERVICE_NETWORK_ASSOCIATION_IDENTIFIER\n

    Ensure the service network association is deleted:

    aws vpc-lattice list-service-network-vpc-associations --vpc-id $VPC_ID\n

  2. Delete the service network resource share in the AWS RAM console.

  3. Follow the cleanup section of the Getting Started guide to delete the cluster and service network resources in Account A.

  4. Delete the Cloud9 Environment in Account B.

"},{"location":"guides/tls-passthrough/","title":"TLS Passthrough Support","text":"

The Kubernetes Gateway API lays out general guidelines on how to configure TLS passthrough. Here are examples of how to use it with the AWS Gateway API Controller and VPC Lattice.

"},{"location":"guides/tls-passthrough/#install-gateway-api-tlsroute-crd","title":"Install Gateway API TLSRoute CRD","text":"

The TLSRoute CRD is already included in the Helm chart and deployment.yaml; if you install the controller using either of these methods, no extra steps are needed. If you want to install the TLSRoute CRD manually:

# Install CRD\nkubectl apply -f config/crds/bases/gateway.networking.k8s.io_tlsroutes.yaml\n# Verify TLSRoute CRD\nkubectl get crd tlsroutes.gateway.networking.k8s.io\nNAME                                  CREATED AT\ntlsroutes.gateway.networking.k8s.io   2024-03-07T23:16:22Z\n

"},{"location":"guides/tls-passthrough/#setup-tls-passthrough-connectivity-in-a-single-cluster","title":"Setup TLS Passthrough Connectivity in a single cluster","text":""},{"location":"guides/tls-passthrough/#1-configure-tls-passthrough-listener-on-gateway","title":"1. Configure TLS Passthrough Listener on Gateway","text":"
kubectl apply -f files/examples/my-gateway-tls-passthrough.yaml\n
# tls listener config snips:\napiVersion: gateway.networking.k8s.io/v1\nkind: Gateway\nmetadata:\n  name: my-hotel-tls-passthrough\nspec:\n  gatewayClassName: amazon-vpc-lattice\n  listeners:\n  ...\n  - name: tls\n    protocol: TLS \n    port: 443\n    tls:\n      mode: Passthrough \n  ...\n
"},{"location":"guides/tls-passthrough/#2-configure-tlsroute","title":"2. Configure TLSRoute","text":"
# Suppose in the below example, we use the \"parking\" service as the client pod to test the TLS passthrough traffic.\nkubectl apply -f files/examples/parking.yaml\n\n# Configure nginx backend service (this nginx image includes a self-signed certificate)\nkubectl apply -f files/examples/nginx-server-tls-passthrough.yaml\n\n# configure nginx tls route\nkubectl apply -f files/examples/tlsroute-nginx.yaml\n
"},{"location":"guides/tls-passthrough/#3-verify-the-controller-has-reconciled-nginx-tls-route","title":"3. Verify the controller has reconciled nginx-tls route","text":"

Make sure the TLSRoute has the application-networking.k8s.aws/lattice-assigned-domain-name annotation and status Accepted: True

kubectl get tlsroute nginx-tls -o yaml\napiVersion: gateway.networking.k8s.io/v1alpha2\nkind: TLSRoute\nmetadata:\n  annotations:\n    application-networking.k8s.aws/lattice-assigned-domain-name: nginx-tls-default-0af995120af2711bc.7d67968.vpc-lattice-svcs.us-west-2.on.aws\n    ...\n  name: nginx-tls\n  namespace: default\n ...\n\nstatus:\n  parents:\n  - conditions:\n    - lastTransitionTime: .....\n      message: \"\"\n      observedGeneration: 1\n      reason: Accepted\n      status: \"True\"\n      type: Accepted\n    - lastTransitionTime: .....\n      message: \"\"\n      observedGeneration: 1\n      reason: ResolvedRefs\n      status: \"True\"\n      type: ResolvedRefs\n    controllerName: application-networking.k8s.aws/gateway-api-controller\n

"},{"location":"guides/tls-passthrough/#4-verify-tls-passthrough-traffic","title":"4. Verify TLS Passthrough Traffic","text":"
kubectl get deployment nginx-tls \nNAME        READY   UP-TO-DATE   AVAILABLE   AGE\nnginx-tls   2/2     2            2           1d\n\n# Use the specified TLSRoute hostname to send traffic to the backend nginx service\nkubectl exec deployments/parking  -- curl -kv  https://nginx-test.my-test.com  --resolve nginx-test.my-test.com:443:169.254.171.0\n\n* Trying 169.254.171.0:443...\n* Connected to nginx-test.my-test.com (169.254.171.0) port 443 (#0)\n....\n* TLSv1.2 (OUT), TLS header, Certificate Status (22):\n* TLSv1.2 (OUT), TLS handshake, Client hello (1):\n* TLSv1.2 (IN), TLS handshake, Server hello (2):\n* TLSv1.2 (IN), TLS handshake, Certificate (11):\n* TLSv1.2 (IN), TLS handshake, Server key exchange (12):\n* TLSv1.2 (IN), TLS handshake, Server finished (14):\n* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):\n* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):\n* TLSv1.2 (OUT), TLS handshake, Finished (20):    \n* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):\n* TLSv1.2 (IN), TLS handshake, Finished (20):   <---------- TLS handshake from the client pod to the backend `nginx-tls` pod succeeded; no TLS termination in the middle\n* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384\n* ALPN, server accepted to use h2\n....\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n....\n
"},{"location":"guides/tls-passthrough/#setup-tls-passthrough-connectivity-spanning-multiple-clusters","title":"Setup TLS Passthrough Connectivity spanning multiple clusters","text":""},{"location":"guides/tls-passthrough/#1-in-this-example-we-still-use-the-parking-kubernetes-service-as-the-client-pod-to-test-the-cross-cluster-tls-passthrough-traffic","title":"1. In this example we still use the \"parking\" Kubernetes service as the client pod to test the cross cluster TLS passthrough traffic.","text":"
kubectl apply -f files/examples/parking.yaml\n
"},{"location":"guides/tls-passthrough/#2-in-cluster-1-create-tls-rate1-kubernetes-service","title":"2. In cluster-1, create tls-rate1 Kubernetes Service:","text":"
kubectl apply -f files/examples/tls-rate1.yaml\n
"},{"location":"guides/tls-passthrough/#3-configure-serviceexport-with-targetgrouppolicy-protocoltcp-in-cluster-2","title":"3. Configure ServiceExport with TargetGroupPolicy protocol:TCP in cluster-2","text":"
# Create tls-rate2 Kubernetes Service in cluster-2\nkubectl apply -f files/examples/tls-rate2.yaml\n# Create serviceexport in cluster-2\nkubectl apply -f files/examples/tls-rate2-export.yaml\n# Create targetgroup policy to configure TCP protocol for tls-rate2 in cluster-2\nkubectl apply -f files/examples/tls-rate2-targetgrouppolicy.yaml\n
# Snips of serviceexport config\napiVersion: application-networking.k8s.aws/v1alpha1\nkind: ServiceExport\nmetadata:\n  name: tls-rate2\n  annotations:\n    application-networking.k8s.aws/federation: \"amazon-vpc-lattice\"\n# Snips of targetgroup policy config\napiVersion: application-networking.k8s.aws/v1alpha1\nkind: TargetGroupPolicy\nmetadata:\n    name: tls-rate2\nspec:\n    targetRef:\n        group: \"application-networking.k8s.aws\"\n        kind: ServiceExport\n        name: tls-rate2\n    protocol: TCP\n
"},{"location":"guides/tls-passthrough/#4-configure-serviceimport-in-cluster1","title":"4. Configure ServiceImport in cluster1","text":"
kubectl apply -f files/examples/tls-rate2-import.yaml\n
"},{"location":"guides/tls-passthrough/#5-configure-tlsroute-for-bluegreen-deployment","title":"5. Configure TLSRoute for blue/green deployment","text":"
kubectl apply -f files/examples/rate-tlsroute-bluegreen.yaml\n\n# snips of a TLSRoute spanning multiple Kubernetes clusters\napiVersion: gateway.networking.k8s.io/v1alpha2\nkind: TLSRoute\nmetadata:\n  name: tls-rate\nspec:\n  hostnames:\n  - tls-rate.my-test.com\n  parentRefs:\n  - name: my-hotel-tls\n    sectionName: tls\n  rules:\n  - backendRefs:\n    - name: tls-rate1 <---------- to Kubernetes Cluster-1\n      kind: Service\n      port: 443\n      weight: 10\n    - name: tls-rate2 <---------- to Kubernetes Cluster-2\n      kind: ServiceImport\n      port: 443\n      weight: 90  \n
"},{"location":"guides/tls-passthrough/#6-verify-cross-cluster-tls-passthrough-traffic","title":"6. Verify cross-cluster TLS passthrough traffic","text":"

If you curl tls-rate.my-test.com from the client pod multiple times, you should see the traffic weighted across the tls-rate1 (10%) and tls-rate2 (90%) services:

kubectl exec deploy/parking -- sh -c 'for ((i=1; i<=30; i++)); do curl -k https://tls-rate.my-test.com --resolve tls-rate.my-test.com:443:169.254.171.0 2>/dev/null; done'\n\nRequsting to TLS Pod(tls-rate2-7f8b9cc97b-fgqk6): tls-rate2 handler pod <---->  k8s service in cluster-2\nRequsting to TLS Pod(tls-rate2-7f8b9cc97b-fgqk6): tls-rate2 handler pod\nRequsting to TLS Pod(tls-rate2-7f8b9cc97b-fgqk6): tls-rate2 handler pod\nRequsting to TLS Pod(tls-rate2-7f8b9cc97b-fgqk6): tls-rate2 handler pod\nRequsting to TLS Pod(tls-rate1-98cc7fd87a-642zw): tls-rate1 handler pod <----> k8s service in cluster-1\nRequsting to TLS Pod(tls-rate2-7f8b9cc97b-fgqk6): tls-rate2 handler pod\nRequsting to TLS Pod(tls-rate2-7f8b9cc97b-fgqk6): tls-rate2 handler pod\nRequsting to TLS Pod(tls-rate2-7f8b9cc97b-fgqk6): tls-rate2 handler pod\nRequsting to TLS Pod(tls-rate1-98cc7fd87a-642zw): tls-rate1 handler pod\n

"},{"location":"guides/upgrading-v1-0-x-to-v1-1-y/","title":"Update the AWS Gateway API Controller from v1.0.x to v1.1.y","text":"

Release v1.1.0 of the AWS Gateway API Controller is built against v1.2 of the Gateway API spec, but the controller is also compatible with the v1.1 Gateway API. It is not compatible with the v1.0 Gateway API.

Previous v1.0.x builds of the controller were built against v1.0 of the Gateway API spec. This guide outlines the controller upgrade process from v1.0.x to v1.1.y.

"},{"location":"guides/upgrading-v1-0-x-to-v1-1-y/#basic-upgrade-process","title":"Basic Upgrade Process","text":"
  1. Back up configuration, in particular GRPCRoute objects
  2. Disable v1.0.x controller (e.g. scale to zero)
  3. Update Gateway API CRDs to v1.1.0
  4. Deploy and launch v1.1.y controller version

With the basic upgrade process, previously created GRPCRoutes on v1alpha2 will automatically update to v1 once they are reconciled by the controller. Alternatively, you can manually update your GRPCRoute versions (for example, export to YAML, update the version number, and apply the updates). Creation of new GRPCRoute objects using v1alpha2 will be rejected.
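
A sketch of the basic process in shell form; the controller deployment name and namespace match the deploy.yaml shown earlier, and the CRD bundle URL assumes the upstream standard channel:

# 1. Back up GRPCRoute objects\nkubectl get grpcroutes -A -o yaml > grpcroutes-backup.yaml\n# 2. Scale the v1.0.x controller to zero\nkubectl scale deployment gateway-api-controller \\\n  -n aws-application-networking-system --replicas=0\n# 3. Update Gateway API CRDs to v1.1.0\nkubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.1.0/standard-install.yaml\n# 4. Deploy the v1.1.y controller following the deploy guide (Helm or deploy.yaml)\n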

"},{"location":"guides/upgrading-v1-0-x-to-v1-1-y/#upgrading-to-gateway-api-v12","title":"Upgrading to Gateway API v1.2","text":"

Moving to Gateway API v1.2 can require an additional step, as v1alpha2 GRPCRoute objects have been removed. If your GRPCRoute objects are not already on v1, you will need to follow the steps outlined in the v1.2.0 release notes; a sketch of the manual version bump appears after this list.

  1. Back up configuration, in particular GRPCRoute objects
  2. Disable v1.0.x controller (e.g. scale to zero)
  3. Update Gateway API CRDs to v1.1.0
  4. Deploy and launch v1.1.y controller version
  5. Take upgrade steps outlined in v1.2.0 Gateway API release notes
  6. Update Gateway API CRDs to v1.2.0
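
A hedged sketch of the manual GRPCRoute version bump referenced above; the sed rewrite is a blunt assumption, so review the file before applying:

kubectl get grpcroutes -A -o yaml > grpcroutes.yaml\n# rewrite v1alpha2 GRPCRoutes to v1 (review the result before applying)\nsed -i 's|gateway.networking.k8s.io/v1alpha2|gateway.networking.k8s.io/v1|' grpcroutes.yaml\nkubectl apply -f grpcroutes.yaml\n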
"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"api-reference/","title":"API Specification","text":"

This page contains the API field specification for Gateway API.

Packages:

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1","title":"application-networking.k8s.aws/v1alpha1","text":"

Resource Types:

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.AccessLogPolicy","title":"AccessLogPolicy","text":"Field Description apiVersion string application-networking.k8s.aws/v1alpha1 kind string AccessLogPolicy metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec AccessLogPolicySpec destinationArn string

The Amazon Resource Name (ARN) of the destination that will store access logs. Supported values are S3 Bucket, CloudWatch Log Group, and Firehose Delivery Stream ARNs.

Changes to this value result in replacement of the VPC Lattice Access Log Subscription.

targetRef sigs.k8s.io/gateway-api/apis/v1alpha2.NamespacedPolicyTargetReference

TargetRef points to the Kubernetes Gateway, HTTPRoute, or GRPCRoute resource that will have this policy attached.

This field is following the guidelines of Kubernetes Gateway API policy attachment.

status AccessLogPolicyStatus

Status defines the current state of AccessLogPolicy.

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.IAMAuthPolicy","title":"IAMAuthPolicy","text":"Field Description apiVersion string application-networking.k8s.aws/v1alpha1 kind string IAMAuthPolicy metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec IAMAuthPolicySpec policy string

IAM auth policy content. It is a JSON string that uses the same syntax as AWS IAM policies. Please check the VPC Lattice documentation for the common elements of an auth policy.

targetRef sigs.k8s.io/gateway-api/apis/v1alpha2.NamespacedPolicyTargetReference

TargetRef points to the Kubernetes Gateway, HTTPRoute, or GRPCRoute resource that will have this policy attached.

This field is following the guidelines of Kubernetes Gateway API policy attachment.

status IAMAuthPolicyStatus

Status defines the current state of IAMAuthPolicy.

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.ServiceExport","title":"ServiceExport","text":"

ServiceExport declares that the Service with the same name and namespace as this export should be consumable from other clusters.

Field Description apiVersion string application-networking.k8s.aws/v1alpha1 kind string ServiceExport metadata Kubernetes meta/v1.ObjectMeta (Optional) Refer to the Kubernetes API documentation for the fields of the metadata field. status ServiceExportStatus (Optional)

status describes the current state of an exported service. Service configuration comes from the Service that had the same name and namespace as this ServiceExport. Populated by the multi-cluster service implementation\u2019s controller.

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.ServiceImport","title":"ServiceImport","text":"

ServiceImport describes a service imported from clusters in a ClusterSet.

Field Description apiVersion string application-networking.k8s.aws/v1alpha1 kind string ServiceImport metadata Kubernetes meta/v1.ObjectMeta (Optional) Refer to the Kubernetes API documentation for the fields of the metadata field. spec ServiceImportSpec (Optional)

spec defines the behavior of a ServiceImport.

ports []ServicePort ips []string (Optional)

ip will be used as the VIP for this service when type is ClusterSetIP.

type ServiceImportType

type defines the type of this service. Must be ClusterSetIP or Headless.

sessionAffinity Kubernetes core/v1.ServiceAffinity (Optional)

Supports \u201cClientIP\u201d and \u201cNone\u201d. Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. Ignored when type is Headless. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies

sessionAffinityConfig Kubernetes core/v1.SessionAffinityConfig (Optional)

sessionAffinityConfig contains session affinity configuration.

status ServiceImportStatus (Optional)

status contains information about the exported services that form the multi-cluster service referenced by this ServiceImport.

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.TargetGroupPolicy","title":"TargetGroupPolicy","text":"Field Description apiVersion string application-networking.k8s.aws/v1alpha1 kind string TargetGroupPolicy metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec TargetGroupPolicySpec protocol string (Optional)

The protocol to use for routing traffic to the targets. Supported values are HTTP (default) and HTTPS.

Changes to this value result in a replacement of the VPC Lattice target group.

protocolVersion string (Optional)

The protocol version to use. Supported values are HTTP1 (default) and HTTP2. When a policy is behind GRPCRoute, this field value will be ignored as GRPC is only supported through HTTP/2.

Changes to this value result in a replacement of the VPC Lattice target group.

targetRef sigs.k8s.io/gateway-api/apis/v1alpha2.NamespacedPolicyTargetReference

TargetRef points to the kubernetes Service resource that will have this policy attached.

This field is following the guidelines of Kubernetes Gateway API policy attachment.

healthCheck HealthCheckConfig (Optional)

The health check configuration.

Changes to this value will update the VPC Lattice resource in place.

status TargetGroupPolicyStatus"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.VpcAssociationPolicy","title":"VpcAssociationPolicy","text":"Field Description apiVersion string application-networking.k8s.aws/v1alpha1 kind string VpcAssociationPolicy metadata Kubernetes meta/v1.ObjectMeta Refer to the Kubernetes API documentation for the fields of the metadata field. spec VpcAssociationPolicySpec securityGroupIds []SecurityGroupId (Optional)

SecurityGroupIds defines the security groups enforced on the VpcServiceNetworkAssociation. Security groups do not take effect if AssociateWithVpc is set to false.

For more details, please check the VPC Lattice documentation https://docs.aws.amazon.com/vpc-lattice/latest/ug/security-groups.html

associateWithVpc bool (Optional)

AssociateWithVpc indicates whether the VpcServiceNetworkAssociation should be created for the current VPC of the k8s cluster.

This value will be considered true by default.

targetRef sigs.k8s.io/gateway-api/apis/v1alpha2.NamespacedPolicyTargetReference

TargetRef points to the kubernetes Gateway resource that will have this policy attached.

This field is following the guidelines of Kubernetes Gateway API policy attachment.

status VpcAssociationPolicyStatus"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.AccessLogPolicySpec","title":"AccessLogPolicySpec","text":"

(Appears on:AccessLogPolicy)

AccessLogPolicySpec defines the desired state of AccessLogPolicy.

Field Description destinationArn string

The Amazon Resource Name (ARN) of the destination that will store access logs. Supported values are S3 Bucket, CloudWatch Log Group, and Firehose Delivery Stream ARNs.

Changes to this value result in replacement of the VPC Lattice Access Log Subscription.

targetRef sigs.k8s.io/gateway-api/apis/v1alpha2.NamespacedPolicyTargetReference

TargetRef points to the Kubernetes Gateway, HTTPRoute, or GRPCRoute resource that will have this policy attached.

This field is following the guidelines of Kubernetes Gateway API policy attachment.

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.AccessLogPolicyStatus","title":"AccessLogPolicyStatus","text":"

(Appears on:AccessLogPolicy)

AccessLogPolicyStatus defines the observed state of AccessLogPolicy.

Field Description conditions []Kubernetes meta/v1.Condition (Optional)

Conditions describe the current conditions of the AccessLogPolicy.

Implementations should prefer to express Policy conditions using the PolicyConditionType and PolicyConditionReason constants so that operators and tools can converge on a common vocabulary to describe AccessLogPolicy state.

Known condition types are:

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.ClusterStatus","title":"ClusterStatus","text":"

(Appears on:ServiceImportStatus)

ClusterStatus contains service configuration mapped to a specific source cluster

Field Description cluster string

cluster is the name of the exporting cluster. Must be a valid RFC-1123 DNS label.

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.HealthCheckConfig","title":"HealthCheckConfig","text":"

(Appears on:TargetGroupPolicySpec)

HealthCheckConfig defines the health check configuration for a given VPC Lattice target group. For a detailed explanation and the supported values, please refer to the VPC Lattice documentation on health checks. A combined example appears after the field list.

Field Description enabled bool (Optional)

Indicates whether health checking is enabled.

intervalSeconds int64 (Optional)

The approximate amount of time, in seconds, between health checks of an individual target.

timeoutSeconds int64 (Optional)

The amount of time, in seconds, to wait before reporting a target as unhealthy.

healthyThresholdCount int64 (Optional)

The number of consecutive successful health checks required before considering an unhealthy target healthy.

unhealthyThresholdCount int64 (Optional)

The number of consecutive failed health checks required before considering a target unhealthy.

statusMatch string (Optional)

A regular expression to match HTTP status codes when checking for successful response from a target.

path string (Optional)

The destination for health checks on the targets.

port int64

The port used when performing health checks on targets. If not specified, health check defaults to the port that a target receives traffic on.

protocol HealthCheckProtocol (Optional)

The protocol used when performing health checks on targets.

protocolVersion HealthCheckProtocolVersion (Optional)

The protocol version used when performing health checks on targets. Defaults to HTTP/1.
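
A sketch of a TargetGroupPolicy combining the health check fields above; the policy and service names are hypothetical, and the field values are illustrative only:

kubectl apply -f - <<EOF\napiVersion: application-networking.k8s.aws/v1alpha1\nkind: TargetGroupPolicy\nmetadata:\n  name: my-target-group-policy\nspec:\n  targetRef:\n    group: \"\"   # core API group for Service\n    kind: Service\n    name: my-service\n  protocol: HTTP\n  protocolVersion: HTTP1\n  healthCheck:\n    enabled: true\n    intervalSeconds: 10\n    timeoutSeconds: 5\n    healthyThresholdCount: 3\n    unhealthyThresholdCount: 2\n    path: /healthz\n    port: 80\n    protocol: HTTP\n    protocolVersion: HTTP1\n    statusMatch: \"200\"\nEOF\n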

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.HealthCheckProtocol","title":"HealthCheckProtocol (string alias)","text":"

(Appears on:HealthCheckConfig)

Value Description

\"HTTP\"

\"HTTPS\"

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.HealthCheckProtocolVersion","title":"HealthCheckProtocolVersion (string alias)","text":"

(Appears on:HealthCheckConfig)

Value Description

\"HTTP1\"

\"HTTP2\"

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.IAMAuthPolicySpec","title":"IAMAuthPolicySpec","text":"

(Appears on:IAMAuthPolicy)

IAMAuthPolicySpec defines the desired state of IAMAuthPolicy. When the controller handles IAMAuthPolicy creation, if the targetRef k8s and VPC Lattice resource exists, the controller will change the auth_type of that VPC Lattice resource to AWS_IAM and attach this policy. When the controller handles IAMAuthPolicy deletion, if the targetRef k8s and VPC Lattice resource exists, the controller will change the auth_type of that VPC Lattice resource to NONE and detach this policy.

Field Description policy string

IAM auth policy content. It is a JSON string that uses the same syntax as AWS IAM policies. Please check the VPC Lattice documentation for the common elements of an auth policy.

targetRef sigs.k8s.io/gateway-api/apis/v1alpha2.NamespacedPolicyTargetReference

TargetRef points to the Kubernetes Gateway, HTTPRoute, or GRPCRoute resource that will have this policy attached.

This field is following the guidelines of Kubernetes Gateway API policy attachment.

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.IAMAuthPolicyStatus","title":"IAMAuthPolicyStatus","text":"

(Appears on:IAMAuthPolicy)

IAMAuthPolicyStatus defines the observed state of IAMAuthPolicy.

Field Description conditions []Kubernetes meta/v1.Condition (Optional)

Conditions describe the current conditions of the IAMAuthPolicy.

Implementations should prefer to express Policy conditions using the PolicyConditionType and PolicyConditionReason constants so that operators and tools can converge on a common vocabulary to describe IAMAuthPolicy state.

Known condition types are:

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.SecurityGroupId","title":"SecurityGroupId (string alias)","text":"

(Appears on:VpcAssociationPolicySpec)

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.ServiceExportCondition","title":"ServiceExportCondition","text":"

(Appears on:ServiceExportStatus)

ServiceExportCondition contains details for the current condition of this service export.

Once KEP-1623 is implemented, this will be replaced by metav1.Condition.

Field Description type ServiceExportConditionType status Kubernetes core/v1.ConditionStatus

Status is one of {\u201cTrue\u201d, \u201cFalse\u201d, \u201cUnknown\u201d}

lastTransitionTime Kubernetes meta/v1.Time (Optional) reason string (Optional) message string (Optional)"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.ServiceExportConditionType","title":"ServiceExportConditionType (string alias)","text":"

(Appears on:ServiceExportCondition)

ServiceExportConditionType identifies a specific condition.

Value Description

\"Conflict\"

ServiceExportConflict means that there is a conflict between two exports for the same Service. When \u201cTrue\u201d, the condition message should contain enough information to diagnose the conflict: field(s) under contention, which cluster won, and why. Users should not expect detailed per-cluster information in the conflict message.

\"Valid\"

ServiceExportValid means that the service referenced by this service export has been recognized as valid by a controller. This will be false if the service is found to be unexportable (ExternalName, not found).

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.ServiceExportStatus","title":"ServiceExportStatus","text":"

(Appears on:ServiceExport)

ServiceExportStatus contains the current status of an export.

Field Description conditions []ServiceExportCondition (Optional)"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.ServiceImportSpec","title":"ServiceImportSpec","text":"

(Appears on:ServiceImport)

ServiceImportSpec describes an imported service and the information necessary to consume it.

Field Description ports []ServicePort ips []string (Optional)

ip will be used as the VIP for this service when type is ClusterSetIP.

type ServiceImportType

type defines the type of this service. Must be ClusterSetIP or Headless.

sessionAffinity Kubernetes core/v1.ServiceAffinity (Optional)

Supports \u201cClientIP\u201d and \u201cNone\u201d. Used to maintain session affinity. Enable client IP based session affinity. Must be ClientIP or None. Defaults to None. Ignored when type is Headless. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies

sessionAffinityConfig Kubernetes core/v1.SessionAffinityConfig (Optional)

sessionAffinityConfig contains session affinity configuration.

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.ServiceImportStatus","title":"ServiceImportStatus","text":"

(Appears on:ServiceImport)

ServiceImportStatus describes derived state of an imported service.

Field Description clusters []ClusterStatus (Optional)

clusters is the list of exporting clusters from which this service was derived.

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.ServiceImportType","title":"ServiceImportType (string alias)","text":"

(Appears on:ServiceImportSpec)

ServiceImportType designates the type of a ServiceImport

Value Description

\"ClusterSetIP\"

ClusterSetIP services are only accessible via the ClusterSet IP.

\"Headless\"

Headless services allow backend pods to be addressed directly.

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.ServicePort","title":"ServicePort","text":"

(Appears on:ServiceImportSpec)

ServicePort represents the port on which the service is exposed

Field Description name string (Optional)

The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the \u2018name\u2019 field in the EndpointPort. Optional if only one ServicePort is defined on this service.

protocol Kubernetes core/v1.Protocol (Optional)

The IP protocol for this port. Supports \u201cTCP\u201d, \u201cUDP\u201d, and \u201cSCTP\u201d. Default is TCP.

appProtocol string (Optional)

The application protocol for this port. This field follows standard Kubernetes label syntax. Un-prefixed names are reserved for IANA standard service names (as per RFC-6335 and http://www.iana.org/assignments/service-names). Non-standard protocols should use prefixed names such as mycompany.com/my-custom-protocol. Field can be enabled with ServiceAppProtocol feature gate.

port int32

The port that will be exposed by this service.

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.TargetGroupPolicySpec","title":"TargetGroupPolicySpec","text":"

(Appears on:TargetGroupPolicy)

TargetGroupPolicySpec defines the desired state of TargetGroupPolicy.

Field Description protocol string (Optional)

The protocol to use for routing traffic to the targets. Supported values are HTTP (default) and HTTPS.

Changes to this value result in a replacement of the VPC Lattice target group.

protocolVersion string (Optional)

The protocol version to use. Supported values are HTTP1 (default) and HTTP2. When a policy is behind GRPCRoute, this field value will be ignored as GRPC is only supported through HTTP/2.

Changes to this value result in a replacement of the VPC Lattice target group.

targetRef sigs.k8s.io/gateway-api/apis/v1alpha2.NamespacedPolicyTargetReference

TargetRef points to the kubernetes Service resource that will have this policy attached.

This field is following the guidelines of Kubernetes Gateway API policy attachment.

healthCheck HealthCheckConfig (Optional)

The health check configuration.

Changes to this value will update the VPC Lattice resource in place.

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.TargetGroupPolicyStatus","title":"TargetGroupPolicyStatus","text":"

(Appears on:TargetGroupPolicy)

TargetGroupPolicyStatus defines the observed state of TargetGroupPolicy.

Field Description conditions []Kubernetes meta/v1.Condition (Optional)

Conditions describe the current conditions of the TargetGroupPolicy.

Implementations should prefer to express Policy conditions using the PolicyConditionType and PolicyConditionReason constants so that operators and tools can converge on a common vocabulary to describe TargetGroupPolicy state.

Known condition types are:

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.VpcAssociationPolicySpec","title":"VpcAssociationPolicySpec","text":"

(Appears on:VpcAssociationPolicy)

VpcAssociationPolicySpec defines the desired state of VpcAssociationPolicy.

Field Description securityGroupIds []SecurityGroupId (Optional)

SecurityGroupIds defines the security groups enforced on the VpcServiceNetworkAssociation. Security groups do not take effect if AssociateWithVpc is set to false.

For more details, please check the VPC Lattice documentation https://docs.aws.amazon.com/vpc-lattice/latest/ug/security-groups.html

associateWithVpc bool (Optional)

AssociateWithVpc indicates whether the VpcServiceNetworkAssociation should be created for the current VPC of the k8s cluster.

This value will be considered true by default.

targetRef sigs.k8s.io/gateway-api/apis/v1alpha2.NamespacedPolicyTargetReference

TargetRef points to the kubernetes Gateway resource that will have this policy attached.

This field is following the guidelines of Kubernetes Gateway API policy attachment.
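
A sketch of a VpcAssociationPolicy combining the fields above; the gateway name and security group ID are hypothetical:

kubectl apply -f - <<EOF\napiVersion: application-networking.k8s.aws/v1alpha1\nkind: VpcAssociationPolicy\nmetadata:\n  name: my-vpc-association-policy\nspec:\n  targetRef:\n    group: gateway.networking.k8s.io\n    kind: Gateway\n    name: my-hotel\n  securityGroupIds:\n    - sg-0123456789abcdef0\n  associateWithVpc: true\nEOF\n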

"},{"location":"api-reference/#application-networking.k8s.aws/v1alpha1.VpcAssociationPolicyStatus","title":"VpcAssociationPolicyStatus","text":"

(Appears on:VpcAssociationPolicy)

VpcAssociationPolicyStatus defines the observed state of VpcAssociationPolicy.

Field Description conditions []Kubernetes meta/v1.Condition (Optional)

Conditions describe the current conditions of the VpcAssociationPolicy.

Implementations should prefer to express Policy conditions using the PolicyConditionType and PolicyConditionReason constants so that operators and tools can converge on a common vocabulary to describe VpcAssociationPolicy state.

Known condition types are:

Generated with gen-crd-api-reference-docs on git commit 5de8f32.

"},{"location":"conformance-test/","title":"Report on Gateway API Conformance Testing","text":"

Kubernetes Gateway API Conformance

"},{"location":"conformance-test/#summary-of-test-result","title":"Summary of Test Result","text":"Category Test Cases Status Notes GatewayClass GatewayClassObservedGenerationBump ok Gateway GatewayObservedGenerationBump ok GatewayInvalidRouteKind ok GatewayWithAttachedRoutes ok GatewaySecretInvalidReferenceGrants N/A VPC Lattice supports ACM certs GatewaySecretMissingReferenceGrant N/A VPC Lattice supports ACM certs GatewaySecretReferenceGrantAllInNamespace N/A VPC Lattice supports ACM Certs GatewaySecretReferenceGrantSpecific N/A VPC Lattice supports ACM certs HTTPRoute HTTPRouteCrossNamespace ok HTTPExactPathMatching ok HTTPRouteHeaderMatching fail Test data exceeds Lattice limit on # of rules HTTPRouteSimpleSameNamespace ok HTTPRouteListenerHostnameMatching N/A Listener hostname not supported HTTPRouteMatchingAcrossRoutes N/A Custom domain name conflict not allowed HTTPRouteMatching fail Route precedence HTTPRouteObservedGenerationBump ok HTTPRoutePathMatchOrder fail Test data exceeds Lattice limit on # of rules HTTPRouteReferenceGrant N/A HTTPRouteDisallowedKind N/A Only HTTPRoute is supported HTTPRouteInvalidNonExistentBackendRef fail #277 HTTPRouteInvalidBackendRefUnknownKind fail #277 HTTPRouteInvalidCrossNamespaceBackendRef fail #277 HTTPRouteInvalidCrossNamespaceParentRef fail #277 HTTPRouteInvalidParentRefNotMatchingListenerPort fail #277 HTTPRouteInvalidParentRefNotMatchingSectionName fail #277 HTTPRouteMethodMatching fail not supported in controller yet. #123 HTTPRouteHostnameIntersection N/A VPC lattice only supports one custom domain HTTPRouteQueryParamMatching N/A Not supported by lattice HTTPRouteRedirectHostAndStatus N/A Not supported by lattice HTTPRouteRedirectPath N/A Not supported by lattice HTTPRouteRedirectPort N/A Not supported by lattice HTTPRouteRedirectScheme N/A Not supported by lattice HTTPRouteRequestHeaderModifier N/A Not supported by lattice HTTPRouteResponseHeaderModifier N/A Not supported by lattice HTTPRouteRewriteHost N/A Not supported by lattice HTTPRouteRewritePath N/A Not supported by lattice"},{"location":"conformance-test/#running-gateway-api-conformance","title":"Running Gateway API Conformance","text":""},{"location":"conformance-test/#running-controller-from-cloud-desktop","title":"Running controller from cloud desktop","text":"
# run controller in following mode\n\nREGION=us-west-2 DEFAULT_SERVICE_NETWORK=my-cluster-default ENABLE_SERVICE_NETWORK_OVERRIDE=true \\\nmake run\n
"},{"location":"conformance-test/#run-individual-conformance-test","title":"Run individual conformance test","text":"

Conformance tests directly send traffic, so they should run inside the VPC that the cluster is operating on.

go test ./conformance/ --run \"TestConformance/HTTPRouteCrossNamespace$\" -v -args -gateway-class amazon-vpc-lattice \\\n-supported-features Gateway,HTTPRoute,GatewayClassObservedGenerationBump\n
"},{"location":"faq/","title":"Frequently Asked Questions (FAQ)","text":"

How can I get involved with AWS Gateway API Controller?

We welcome general feedback, questions, feature requests, or bug reports by creating a Github issue.

Where can I find AWS Gateway API Controller releases?

AWS Gateway API Controller releases are tags of the Github repository. The Github releases page shows all the releases.

Which EKS CNI versions are supported?

Your AWS VPC CNI must be v1.8.0 or later to work with VPC Lattice.

Which versions of Gateway API are supported?

AWS Gateway API Controller supports Gateway API CRD bundle versions between v0.6.1 and v1.0.0. The controller does not reject other versions, but provides only \"best effort\" support for them. Not all features of the Gateway API are supported - for detailed features and limitations, please refer to the individual API references. By default, the Gateway API v0.6.1 CRD bundle is included in the Helm chart.

"},{"location":"api-types/access-log-policy/","title":"AccessLogPolicy API Reference","text":""},{"location":"api-types/access-log-policy/#introduction","title":"Introduction","text":"

The AccessLogPolicy custom resource allows you to define access logging configurations on Gateways, HTTPRoutes, and GRPCRoutes by specifying a destination for the access logs to be published to.

"},{"location":"api-types/access-log-policy/#features","title":"Features","text":""},{"location":"api-types/access-log-policy/#example-configurations","title":"Example Configurations","text":""},{"location":"api-types/access-log-policy/#example-1","title":"Example 1","text":"

This configuration results in access logs being published to the S3 Bucket, my-bucket, when traffic is sent to any HTTPRoute or GRPCRoute that is a child of Gateway my-hotel.

apiVersion: application-networking.k8s.aws/v1alpha1\nkind: AccessLogPolicy\nmetadata:\n  name: my-access-log-policy\nspec:\n  destinationArn: \"arn:aws:s3:::my-bucket\"\n  targetRef:\n    group: gateway.networking.k8s.io\n    kind: Gateway\n    name: my-hotel\n
"},{"location":"api-types/access-log-policy/#example-2","title":"Example 2","text":"

This configuration results in access logs being published to the CloudWatch Log Group, myloggroup, when traffic is sent to HTTPRoute inventory.

apiVersion: application-networking.k8s.aws/v1alpha1\nkind: AccessLogPolicy\nmetadata:\n  name: my-access-log-policy\nspec:\n  destinationArn: \"arn:aws:logs:us-west-2:123456789012:log-group:myloggroup:*\"\n  targetRef:\n    group: gateway.networking.k8s.io\n    kind: HTTPRoute\n    name: inventory\n
"},{"location":"api-types/access-log-policy/#aws-permissions-required","title":"AWS Permissions Required","text":"

Per the VPC Lattice documentation, IAM permissions are required to enable access logs:

{\n    \"Version\": \"2012-10-17\",\n    \"Statement\": [\n        {\n            \"Effect\": \"Allow\",\n            \"Sid\": \"ManageVPCLatticeAccessLogSetup\",\n            \"Action\": [\n                \"logs:CreateLogDelivery\",\n                \"logs:GetLogDelivery\",\n                \"logs:UpdateLogDelivery\",\n                \"logs:DeleteLogDelivery\",\n                \"logs:ListLogDeliveries\",\n                \"vpc-lattice:CreateAccessLogSubscription\",\n                \"vpc-lattice:GetAccessLogSubscription\",\n                \"vpc-lattice:UpdateAccessLogSubscription\",\n                \"vpc-lattice:DeleteAccessLogSubscription\",\n                \"vpc-lattice:ListAccessLogSubscriptions\"\n            ],\n            \"Resource\": [\n                \"*\"\n            ]\n        }\n    ]\n}\n
"},{"location":"api-types/access-log-policy/#statuses","title":"Statuses","text":"

AccessLogPolicies fit under the definition of Gateway API Policy Objects. As a result, status conditions are applied on every modification of an AccessLogPolicy, and can be viewed by describing it.

"},{"location":"api-types/access-log-policy/#status-condition-reasons","title":"Status Condition Reasons","text":""},{"location":"api-types/access-log-policy/#accepted","title":"Accepted","text":"

The spec of the AccessLogPolicy is valid and has been accepted for reconciliation by the controller.

"},{"location":"api-types/access-log-policy/#conflicted","title":"Conflicted","text":"

The target already has an AccessLogPolicy for the same destination type (i.e., a target can have one AccessLogPolicy for an S3 Bucket, one for a CloudWatch Log Group, and one for a Firehose Delivery Stream at a time).

"},{"location":"api-types/access-log-policy/#invalid","title":"Invalid","text":"

Any of the following: - The target's Group is not gateway.networking.k8s.io - The target's Kind is not Gateway, HTTPRoute, or GRPCRoute - The target's namespace does not match the AccessLogPolicy's namespace

"},{"location":"api-types/access-log-policy/#targetnotfound","title":"TargetNotFound","text":"

The target does not exist.

"},{"location":"api-types/access-log-policy/#annotations","title":"Annotations","text":"

Upon successful creation or modification of an AccessLogPolicy, the controller may add or update an annotation in the AccessLogPolicy. The annotation applied by the controller has the key application-networking.k8s.aws/accessLogSubscription, and its value is the corresponding VPC Lattice Access Log Subscription's ARN.

When an AccessLogPolicy's destinationArn is changed such that the resource type changes (e.g. from S3 Bucket to CloudWatch Log Group), or the AccessLogPolicy's targetRef is changed, the annotation's value will be updated because a new Access Log Subscription will be created to replace the previous one.

When creation of an AccessLogPolicy fails, no annotation is added to the AccessLogPolicy because no corresponding Access Log Subscription exists.

When modification or deletion of an AccessLogPolicy fails, the previous value of the annotation is left unchanged because the corresponding Access Log Subscription is also left unchanged.
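
To check the resulting annotation on a policy, a quick sketch using the policy name from Example 1:

kubectl describe accesslogpolicy my-access-log-policy | grep accessLogSubscription\n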

"},{"location":"api-types/gateway/","title":"Gateway API Reference","text":""},{"location":"api-types/gateway/#introduction","title":"Introduction","text":"

Gateway allows you to configure network traffic through AWS Gateway API Controller. When a Gateway is defined with amazon-vpc-lattice GatewayClass, the controller will watch for the gateway and the resources under them, creating required resources under Amazon VPC Lattice.

Internally, a Gateway points to a VPC Lattice service network. Service networks are identified by Gateway name (without namespace) - for example, a Gateway named my-gateway will point to a VPC Lattice service network my-gateway. If multiple Gateways share the same name, all of them will point to the same service network.

VPC Lattice service networks must be managed separately, as it is a broader concept that can cover resources outside the Kubernetes cluster. To create and manage a service network, you can either:

Gateways with amazon-vpc-lattice GatewayClass do not create a single entrypoint to bind Listeners and Routes under them. Instead, each Route will have its own domain name assigned. To see an example of how domain names are assigned, please refer to our Getting Started Guide.

"},{"location":"api-types/gateway/#supported-gatewayclass","title":"Supported GatewayClass","text":""},{"location":"api-types/gateway/#limitations","title":"Limitations","text":""},{"location":"api-types/gateway/#example-configuration","title":"Example Configuration","text":"

Here is a sample configuration that demonstrates how to set up a Gateway:

apiVersion: gateway.networking.k8s.io/v1\nkind: Gateway\nmetadata:\n  name: my-hotel\nspec:\n  gatewayClassName: amazon-vpc-lattice\n  listeners:\n    - name: http\n      protocol: HTTP\n      port: 80\n    - name: https\n      protocol: HTTPS\n      port: 443\n      tls:\n        mode: Terminate\n        certificateRefs:\n          - name: unused\n        options:\n          application-networking.k8s.aws/certificate-arn: <certificate-arn>\n

The created Gateway will point to a VPC Lattice service network named my-hotel. Routes under this Gateway can have either the http or https listener as a parent, based on the protocol they want to use.

This Gateway documentation provides a detailed introduction, feature set, and a basic example of how to configure and use the resource within AWS Gateway API Controller project. For in-depth details and specifications, you can refer to the official Gateway API documentation.

"},{"location":"api-types/grpc-route/","title":"GRPCRoute API Reference","text":""},{"location":"api-types/grpc-route/#introduction","title":"Introduction","text":"

With integration of the Gateway API, AWS Gateway API Controller supports GRPCRoute. This allows you to define and manage the routing of gRPC traffic within your Kubernetes cluster.

"},{"location":"api-types/grpc-route/#grpcroute-key-features-limitations","title":"GRPCRoute Key Features & Limitations","text":"

Features:

Limitations:

"},{"location":"api-types/grpc-route/#annotations","title":"Annotations","text":""},{"location":"api-types/grpc-route/#example-configuration","title":"Example Configuration","text":"

Here is a sample configuration that demonstrates how to set up a GRPCRoute for a HelloWorld gRPC service:

apiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n  name: greeter-grpc-route\nspec:\n  parentRefs:\n    - name: my-hotel\n      sectionName: https\n  rules:\n    - matches:\n        - headers:\n            - name: testKey1\n              value: testValue1\n      backendRefs:\n        - name: greeter-grpc-server\n          kind: Service\n          port: 50051\n          weight: 10\n    - matches:\n        - method:\n            service: helloworld.Greeter\n            method: SayHello\n      backendRefs:\n        - name: greeter-grpc-server\n          kind: Service\n          port: 443\n

In this example:

This GRPCRoute documentation provides a detailed introduction, feature set, and a basic example of how to configure and use the resource within AWS Gateway API Controller project. For in-depth details and specifications, you can refer to the official Gateway API documentation.

"},{"location":"api-types/http-route/","title":"HTTPRoute API Reference","text":""},{"location":"api-types/http-route/#introduction","title":"Introduction","text":"

With integration of the Gateway API, AWS Gateway API Controller supports HTTPRoute. This allows you to define and manage the routing of HTTP and HTTPS traffic within your Kubernetes cluster.

"},{"location":"api-types/http-route/#httproute-key-features-limitations","title":"HTTPRoute Key Features & Limitations","text":"

Features:

Limitations:

"},{"location":"api-types/http-route/#annotations","title":"Annotations","text":""},{"location":"api-types/http-route/#example-configuration","title":"Example Configuration","text":""},{"location":"api-types/http-route/#example-1","title":"Example 1","text":"

Here is a sample configuration that demonstrates how to set up an HTTPRoute that forwards HTTP traffic to a Service and ServiceImport, using rules to determine which backendRef to route traffic to.

apiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n  name: inventory\nspec:\n  parentRefs:\n    - name: my-hotel\n      sectionName: http\n  rules:\n    - backendRefs:\n        - name: inventory-ver1\n          kind: Service\n          port: 80\n      matches:\n        - path:\n            type: PathPrefix\n            value: /ver1\n    - backendRefs:\n        - name: inventory-ver2\n          kind: ServiceImport\n          port: 80\n      matches:\n        - path:\n            type: PathPrefix\n            value: /ver2\n

In this example:

"},{"location":"api-types/http-route/#example-2","title":"Example 2","text":"

Here is a sample configuration that demonstrates how to set up a HTTPRoute that forwards HTTP and HTTPS traffic to a Service and ServiceImport, using weighted rules to route more traffic to one backendRef than the other. Weighted rules simplify the process of creating blue/green deployments by shifting rule weight from one backendRef to another.

apiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n  name: inventory\nspec:\n  parentRefs:\n    - name: my-hotel\n      sectionName: http\n    - name: my-hotel\n      sectionName: https\n  rules:\n    - backendRefs:\n        - name: inventory-ver1\n          kind: Service\n          port: 80\n          weight: 10\n        - name: inventory-ver2\n          kind: ServiceImport\n          port: 80\n          weight: 90\n

In this example:

This HTTPRoute documentation provides a detailed introduction, feature set, and a basic example of how to configure and use the resource within AWS Gateway API Controller project. For in-depth details and specifications, you can refer to the official Gateway API documentation.

"},{"location":"api-types/iam-auth-policy/","title":"IAMAuthPolicy API Reference","text":""},{"location":"api-types/iam-auth-policy/#introduction","title":"Introduction","text":"

VPC Lattice Auth Policies are IAM policy documents that are attached to VPC Lattice Service Networks or Services to control principals' access to the attached Service Network's Services, or to the specific attached Service.

IAMAuthPolicy implements Direct Policy Attachment of Gateway APIs GEP-713: Metaresources and Policy Attachment. An IAMAuthPolicy can be attached to a Gateway, HTTPRoute, or GRPCRoute.

Please visit the VPC Lattice Auth Policy documentation page for more details about Auth Policies.

"},{"location":"api-types/iam-auth-policy/#features","title":"Features","text":"

Note: IAMAuthPolicy can only do authorization for traffic that travels through Gateways, HTTPRoutes, and GRPCRoutes. The authorization will not take effect if the client directly sends traffic to the k8s service DNS.

This article is also a good reference on how to set up VPC Lattice Auth Policies in Kubernetes.

"},{"location":"api-types/iam-auth-policy/#example-configuration","title":"Example Configuration","text":""},{"location":"api-types/iam-auth-policy/#example-1","title":"Example 1","text":"

This configuration attaches a policy to the Gateway, default/my-hotel. The policy only allows traffic with the header, header1=value1, through the Gateway. This means, for every child HTTPRoute and GRPCRoute of the Gateway, only traffic with the specified header will be authorized to access it.

apiVersion: application-networking.k8s.aws/v1alpha1\nkind: IAMAuthPolicy\nmetadata:\n    name: test-iam-auth-policy\nspec:\n    targetRef:\n        group: \"gateway.networking.k8s.io\"\n        kind: Gateway\n        name: my-hotel\n    policy: |\n        {\n            \"Version\": \"2012-10-17\",\n            \"Statement\": [\n                {\n                    \"Effect\": \"Allow\",\n                    \"Principal\": \"*\",\n                    \"Action\": \"vpc-lattice-svcs:Invoke\",\n                    \"Resource\": \"*\",\n                    \"Condition\": {\n                        \"StringEquals\": {\n                            \"vpc-lattice-svcs:RequestHeader/header1\": \"value1\"\n                        }\n                    }\n                }\n            ]\n        }\n
"},{"location":"api-types/iam-auth-policy/#example-2","title":"Example 2","text":"

This configuration attaches a policy to the HTTPRoute, examplens/my-route. The policy only allows traffic from the principal 123456789012 to the HTTPRoute. Note that traffic from the specified principal must be SigV4-signed to be authorized.

apiVersion: application-networking.k8s.aws/v1alpha1\nkind: IAMAuthPolicy\nmetadata:\n    name: test-iam-auth-policy\nspec:\n    targetRef:\n        group: \"gateway.networking.k8s.io\"\n        kind: HTTPRoute\n        namespace: examplens\n        name: my-route\n    policy: |\n        {\n            \"Version\": \"2012-10-17\",\n            \"Statement\": [\n                {\n                    \"Effect\": \"Allow\",\n                    \"Principal\": \"123456789012\",\n                    \"Action\": \"vpc-lattice-svcs:Invoke\",\n                    \"Resource\": \"*\"\n                }\n            ]\n        }\n
"},{"location":"api-types/service-export/","title":"ServiceExport API Reference","text":""},{"location":"api-types/service-export/#introduction","title":"Introduction","text":"

In AWS Gateway API Controller, ServiceExport enables a Service for multi-cluster traffic setup. Clusters can import the exported service with the ServiceImport resource.

Internally, creating a ServiceExport creates a standalone VPC Lattice target group. Even without ServiceImports, creating ServiceExports can be useful if you only need the target groups created, for example to use the target groups in a VPC Lattice setup outside Kubernetes.

Note that ServiceExport is not an implementation of the Kubernetes Multicluster Service APIs; instead, AWS Gateway API Controller uses its own version of the resource for the purpose of Gateway API integration.

"},{"location":"api-types/service-export/#limitations","title":"Limitations","text":""},{"location":"api-types/service-export/#annotations","title":"Annotations","text":""},{"location":"api-types/service-export/#example-configuration","title":"Example Configuration","text":"

The following YAML will create a ServiceExport for a Service named service-1:

apiVersion: application-networking.k8s.aws/v1alpha1\nkind: ServiceExport\nmetadata:\n  name: service-1\n  annotations:\n    application-networking.k8s.aws/port: \"9200\"\nspec: {}\n

"},{"location":"api-types/service-import/","title":"ServiceImport API Reference","text":""},{"location":"api-types/service-import/#introduction","title":"Introduction","text":"

ServiceImport is a resource referring to a Service outside the cluster, paired with a ServiceExport resource defined in another cluster.

Just like Services, ServiceImports can be backend references of HTTPRoutes. Together with the cluster's own Services (and ServiceImports from even more clusters), you can distribute traffic across multiple VPCs and clusters.

Note that ServiceImport is not an implementation of the Kubernetes Multicluster Service APIs; instead, AWS Gateway API Controller uses its own version of the resource for the purpose of Gateway API integration.

"},{"location":"api-types/service-import/#limitations","title":"Limitations","text":""},{"location":"api-types/service-import/#annotations","title":"Annotations","text":""},{"location":"api-types/service-import/#example-configuration","title":"Example Configuration","text":"

The following YAML imports service-1 exported from the designated cluster.

apiVersion: application-networking.k8s.aws/v1alpha1\nkind: ServiceImport\nmetadata:\n  name: service-1\n  annotations:\n    application-networking.k8s.aws/aws-eks-cluster-name: \"service-1-owner-cluster\"\n    application-networking.k8s.aws/aws-vpc: \"service-1-owner-vpc-id\"\nspec: {}\n

The following example HTTPRoute directs traffic to the above ServiceImport.

apiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n  name: my-route\nspec:\n  parentRefs:\n    - name: my-gateway\n      sectionName: http\n  rules:\n    - backendRefs:\n        - name: service-1\n          kind: ServiceImport\n

"},{"location":"api-types/service/","title":"Service API Reference","text":""},{"location":"api-types/service/#introduction","title":"Introduction","text":"

Kubernetes Services define a logical set of Pods and a policy by which to access them, often referred to as a microservice. The set of Pods targeted by a Service is determined by a selector.

"},{"location":"api-types/service/#service-key-features-limitations","title":"Service Key Features & Limitations","text":"

Features:

Limitations:

"},{"location":"api-types/service/#example-configuration","title":"Example Configuration:","text":""},{"location":"api-types/service/#example-1","title":"Example 1","text":"

Here's a basic example of a Service that routes traffic to Pods with the label app=MyApp:

apiVersion: v1\nkind: Service\nmetadata:\n  name: my-service\nspec:\n  selector:\n    app: MyApp\n  ports:\n    - protocol: TCP\n      port: 80\n      targetPort: 8080\n

In this example:

This Service documentation provides an overview of its key features, limitations, and basic examples of configuration within Kubernetes. For detailed specifications and advanced configurations, refer to the official Kubernetes Service documentation.

"},{"location":"api-types/target-group-policy/","title":"TargetGroupPolicy API Reference","text":""},{"location":"api-types/target-group-policy/#introduction","title":"Introduction","text":"

By default, AWS Gateway API Controller assumes plaintext HTTP/1 traffic for backend Kubernetes resources. TargetGroupPolicy is a CRD that can be attached to a Service or ServiceExport, allowing users to define the protocol, protocol version, and health check configuration of those backend resources.

When attaching a policy to a resource, the following restrictions apply:

The policy will not take effect if:

  - The resource does not exist
  - The resource is not referenced by any route
  - The resource is referenced by a route of an unsupported type
  - The protocolVersion is non-empty while the TargetGroupPolicy protocol is TCP

Please check the TargetGroupPolicy API Reference for more details.

These restrictions are not enforced; for example, users may create a policy that targets a Service that is not created yet. However, the policy will not take effect unless the target is valid.

"},{"location":"api-types/target-group-policy/#limitations-and-considerations","title":"Limitations and Considerations","text":""},{"location":"api-types/target-group-policy/#example-configuration","title":"Example Configuration","text":"

This example enables HTTPS traffic between the gateway and the Kubernetes service, with a customized health check configuration.

apiVersion: application-networking.k8s.aws/v1alpha1\nkind: TargetGroupPolicy\nmetadata:\n    name: test-policy\nspec:\n    targetRef:\n        group: \"\"\n        kind: Service\n        name: my-parking-service\n    protocol: HTTPS\n    protocolVersion: HTTP1\n    healthCheck:\n        enabled: true\n        intervalSeconds: 5\n        timeoutSeconds: 1\n        healthyThresholdCount: 3\n        unhealthyThresholdCount: 2\n        path: \"/healthcheck\"\n        port: 80\n        protocol: HTTP\n        protocolVersion: HTTP1\n        statusMatch: \"200\"\n
"},{"location":"api-types/tls-route/","title":"TLSRoute API Reference","text":""},{"location":"api-types/tls-route/#introduction","title":"Introduction","text":"

Through its integration with the Gateway API, AWS Gateway API Controller supports TLSRoute. This allows you to define and manage end-to-end TLS-encrypted traffic routing to your Kubernetes clusters.

"},{"location":"api-types/tls-route/#considerations","title":"Considerations","text":""},{"location":"api-types/tls-route/#example-configuration","title":"Example Configuration","text":"

Here is a sample configuration that demonstrates how to set up a TLSRoute resource to route end-to-end TLS-encrypted traffic to an nginx service:

apiVersion: gateway.networking.k8s.io/v1alpha2\nkind: TLSRoute\nmetadata:\n  name: nginx-tls-route\nspec:\n  hostnames:\n    - nginx-test.my-test.com\n  parentRefs:\n    - name: my-hotel-tls-passthrough\n      sectionName: tls\n  rules:\n    - backendRefs:\n        - name: nginx-tls\n          kind: Service\n          port: 443\n

In this example:

For the detailed TLS passthrough traffic connectivity setup, please refer to the user guide here.

For the detailed Gateway API TLSRoute resource specification, refer to the official Kubernetes documentation.

For VPC Lattice TLS passthrough listener configuration details, refer to the VPC Lattice documentation.

"},{"location":"api-types/vpc-association-policy/","title":"VpcAssociationPolicy API Reference","text":""},{"location":"api-types/vpc-association-policy/#introduction","title":"Introduction","text":"

VpcAssociationPolicy is a Custom Resource Definition (CRD) that can be attached to a Gateway to define the configuration of the ServiceNetworkVpcAssociation between the Gateway's associated VPC Lattice Service Network and the cluster VPC.

"},{"location":"api-types/vpc-association-policy/#recommended-security-group-inbound-rules","title":"Recommended Security Group Inbound Rules","text":"Source Protocol Port Range Comment Kubernetes cluster VPC CIDR or security group reference Protocols defined in the gateway's listener section Ports defined in the gateway's listener section Allow inbound traffic from current cluster vpc to gateway"},{"location":"api-types/vpc-association-policy/#limitations-and-considerations","title":"Limitations and Considerations","text":"

When attaching a VpcAssociationPolicy to a resource, the following restrictions apply:

The security group will not take effect if:

"},{"location":"api-types/vpc-association-policy/#removing-security-groups","title":"Removing Security Groups","text":"

The VPC Lattice UpdateServiceNetworkVpcAssociation API cannot be used to remove all security groups. If you have a VpcAssociationPolicy attached to a gateway that already has security groups applied, updating the VpcAssociationPolicy with empty security group ids or deleting the VpcAssociationPolicy will NOT remove the security groups from the gateway.

To remove security groups, you should instead delete the VPC Association and re-create a new VPC Association without security group ids, by following these steps:

  1. Update the VpcAssociationPolicy by setting associateWithVpc to false and empty security group ids.
  2. Update the VpcAssociationPolicy by setting associateWithVpc to true and empty security group ids.

Note: Setting associateWithVpc to false will disable traffic from the current cluster workloads to the gateway.
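As a sketch of step 1, based on the example policy shown in the next section (step 2 is identical except associateWithVpc is set back to true):

apiVersion: application-networking.k8s.aws/v1alpha1\nkind: VpcAssociationPolicy\nmetadata:\n    name: test-vpc-association-policy\nspec:\n    targetRef:\n        group: \"gateway.networking.k8s.io\"\n        kind: Gateway\n        name: my-hotel\n    associateWithVpc: false\n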

"},{"location":"api-types/vpc-association-policy/#example-configuration","title":"Example Configuration","text":"

This configuration attaches a policy to the Gateway, default/my-hotel. The ServiceNetworkVpcAssociation between the Gateway's corresponding VPC Lattice Service Network and the cluster VPC is updated based on the policy contents.

If the expected ServiceNetworkVpcAssociation does not exist, it is created since associateWithVpc is set to true. This allows traffic from clients in the cluster VPC to VPC Lattice Services in the associated Service Network. Additionally, two security groups (sg-1234567890 and sg-0987654321) are attached to the ServiceNetworkVpcAssociation.

apiVersion: application-networking.k8s.aws/v1alpha1\nkind: VpcAssociationPolicy\nmetadata:\n    name: test-vpc-association-policy\nspec:\n    targetRef:\n        group: \"gateway.networking.k8s.io\"\n        kind: Gateway\n        name: my-hotel\n    securityGroupIds:\n        - sg-1234567890\n        - sg-0987654321\n    associateWithVpc: true\n
"},{"location":"concepts/concepts/","title":"AWS Gateway API Controller User Guide","text":"

As part of the VPC Lattice launch, AWS introduced the AWS Gateway API Controller, an implementation of the Kubernetes Gateway API. The Gateway API is an open-source standard interface that enables Kubernetes application networking through expressive, extensible, and role-oriented interfaces. AWS Gateway API Controller extends the custom resources defined by the Gateway API, allowing you to create VPC Lattice resources using Kubernetes APIs.

When installed in your cluster, the controller watches for the creation of Gateway API resources such as gateways and routes and provisions corresponding Amazon VPC Lattice objects. This enables users to configure VPC Lattice Services, VPC Lattice service networks, and target groups using Kubernetes APIs, without needing to write custom code or manage sidecar proxies. The AWS Gateway API Controller is an open-source project and is fully supported by Amazon.

AWS Gateway API Controller integrates with Amazon VPC Lattice and allows you to:

This documentation describes how to set up the AWS Gateway API Controller and provides example use cases, development concepts, and API references. The AWS Gateway API Controller gives developers the ability to publish services running on a Kubernetes cluster and on other AWS compute platforms, such as AWS Lambda or Amazon EC2. Once the AWS Gateway API Controller is deployed and running, you will be able to manage services for multiple Kubernetes clusters and other compute targets on AWS through the following:

Integrating with the Kubernetes Gateway API provides a Kubernetes-native experience for developers to create services and manage network routing and traffic behavior without the heavy lifting of managing the underlying networking infrastructure. This lets you work with Kubernetes service-related resources using Kubernetes APIs and custom resource definitions (CRDs) defined by the Kubernetes networking.k8s.io specification.

For more information on this technology, see Kubernetes Gateway API.

"},{"location":"concepts/overview/","title":"Understanding the Gateway API Controller","text":"

For medium and large-scale customers, applications can often spread across multiple areas of a cloud. For example, information pertaining to a company\u2019s authentication, billing, and inventory may each be served by services running on different VPCs in AWS. Someone wanting to run an application that is spread out in this way might find themselves having to work with multiple ways to configure:

This is not a new problem. A common approach to interconnecting services that span multiple VPCs is to use service meshes. However, these require sidecars, which can introduce scaling problems and present their own management challenges, such as dealing with control plane and data plane at scale.

If you just want to run an application, you should be shielded from the details needed to find assets across multiple VPCs and multiple clusters. You should also have consistent ways of working with assets across your VPCs, even if those assets include different combinations of instances, clusters, containers, and serverless. And while multi-VPC applications become simpler for users to run, administrators still need the tools to control and audit their resources to suit their company\u2019s compliance needs.

"},{"location":"concepts/overview/#service-directory-networks-policies-and-gateways","title":"Service Directory, Networks, Policies and Gateways","text":"

The goal of VPC Lattice is to provide a way to have a single, overarching services view of all services across multiple VPCs. The components making up that view include:

Service

An independently deployable unit of software that delivers a specific task or function. A service can run on EC2 instances or ECS containers, or as Lambda functions, within an account or a virtual private cloud (VPC).

A VPC Lattice service has the following components: target groups, listeners, and rules.

Service Network

A logical boundary for a collection of services. A client is any resource deployed in a VPC that is associated with the service network. Clients and services that are associated with the same service network can communicate with each other if they are authorized to do so.

In the following figure, the clients can communicate with both services, because the VPC and services are associated with the same service network.

Service Directory

A central registry of all VPC Lattice services that you own or that are shared with your account through AWS Resource Access Manager (AWS RAM).

Auth Policies

Fine-grained authorization policies that can be used to define access to services. You can attach separate authorization policies to individual services or to the service network. For example, you can create a policy for how a payment service running on an auto scaling group of EC2 instances should interact with a billing service running in AWS Lambda.

"},{"location":"concepts/overview/#use-cases","title":"Use-cases","text":"

In the context of Kubernetes, Amazon VPC Lattice helps to simplify the following:

With VPC Lattice you can also avoid some of these common problems:

"},{"location":"concepts/overview/#relationship-between-vpc-lattice-and-kubernetes","title":"Relationship between VPC Lattice and Kubernetes","text":"

As a Kubernetes user, you can have a very Kubernetes-native experience using the VPC Lattice APIs. The following figure illustrates how VPC Lattice objects connect to Kubernetes Gateway API objects:

As shown in the figure, there are different personas associated with different levels of control in VPC Lattice. Notice that the Kubernetes Gateway API syntax is used to create the gateway, HTTPRoute and services, but Kubernetes gets the details of those items from VPC Lattice:

This is all done by checking the related VPC Lattice Services (and related policies), Target Groups, and Targets.

Keep in mind that you can have different Target Groups spread across different clusters/VPCs receiving traffic from the same VPC Lattice Service (HTTPRoute).

"},{"location":"contributing/developer-cheat-sheet/","title":"Developer Cheat Sheet","text":""},{"location":"contributing/developer-cheat-sheet/#startup","title":"Startup","text":"

The program flow is roughly as follows. On startup, cmd/aws-application-networking-k8s/main.go#main() runs. This initializes all the key components and registers various controllers (in controllers/) with the Kubernetes control plane. These controllers include event handlers (in controllers/eventhandlers/) whose basic function is to convert object notifications to more specific types and enqueue them for processing by the various controllers.

"},{"location":"contributing/developer-cheat-sheet/#build-and-deploy","title":"Build and Deploy","text":"

Processing takes place in a controller's Reconcile() method, which will check if the object has been changed/created or deleted. Sometimes this invokes different paths, but most involve a buildAndDeployModel() step.
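As an illustrative sketch only (the receiver type, its client field, and the cleanup and buildAndDeployModel helpers here are hypothetical names, not the controller's actual code), a route controller's Reconcile() method has roughly this shape:

import (\n  \"context\"\n\n  ctrl \"sigs.k8s.io/controller-runtime\"\n  \"sigs.k8s.io/controller-runtime/pkg/client\"\n  gwv1 \"sigs.k8s.io/gateway-api/apis/v1\"\n)\n\n// Sketch of the reconcile flow described above; names are illustrative.\nfunc (r *routeReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {\n  route := &gwv1.HTTPRoute{}\n  if err := r.client.Get(ctx, req.NamespacedName, route); err != nil {\n    return ctrl.Result{}, client.IgnoreNotFound(err)\n  }\n  if !route.DeletionTimestamp.IsZero() {\n    // deletion path: clean up the corresponding VPC Lattice resources\n    return ctrl.Result{}, r.cleanup(ctx, route)\n  }\n  // changed/created path: build the model stack, then deploy it\n  return ctrl.Result{}, r.buildAndDeployModel(ctx, route)\n}\n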

"},{"location":"contributing/developer-cheat-sheet/#build","title":"Build","text":"

In the \"build\" step, the controller invokes a \"model builder\" with the changed Kubernetes object. Model builders (in pkg/gateway/) convert the Kubernetes object state into an intermediate VPC Lattice object called a \"model type\". These model types (in pkg/model/lattice) are basic structs which contain the important information about the source Kubernetes object and fields containing Lattice-related values. These models are built only using information from Kubernetes and are intended to contain all details needed to make updates against the VPC Lattice API. Once created, model ojbects are stored in a \"stack\" (/pkg/model/core/stack.go). The stack contains basic methods for storing and retrieving objects based on model type.

"},{"location":"contributing/developer-cheat-sheet/#deploy","title":"Deploy","text":"

Once the \"stack\" is populated, it is serialized for logging then passed to the \"deploy\" step. Here, a \"deployer\" (/pkg/deployer/stack_deployer.go) will invoke a number of \"synthesizers\" (in /pkg/deploy/lattice/) to read model objects from the stack and convert these to VPC Lattice API calls (via Synthesize()). Most synthesizer interaction with the Lattice APIs is deferred to various \"managers\" (also /pkg/deploy/lattice). These managers each have their own interface definition with CRUD methods that bridge model types and Lattice API types. Managers, in turn, use /pkg/aws/services/vpclattice.go for convenience methods against the Lattice API only using Lattice API types.

"},{"location":"contributing/developer-cheat-sheet/#other-notes","title":"Other notes","text":"

Model Status Structs

On each model type is a <Type>Status struct which contains information about the object from the Lattice API, such as ARNs or IDs. These are populated either on creation against the Lattice API or in cases where the API object already exists but needs updating via the model.
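A minimal sketch of this shape (the field and type names are illustrative, not the actual definitions in pkg/model/lattice):

// Illustrative only: a model type pairs desired state built from Kubernetes\n// with a status struct filled in from the VPC Lattice API.\ntype ServiceSpec struct {\n  Name string // hypothetical field\n}\n\ntype ServiceStatus struct {\n  Arn string // assigned by the Lattice API on creation or lookup\n  Id  string\n}\n\ntype Service struct {\n  Spec   ServiceSpec\n  Status *ServiceStatus\n}\n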

Resource Cleanup

A typical part of model synthesis (though it is also present elsewhere) is checking all resources in the VPC Lattice API to see if they belong to deleted or missing Kubernetes resources. This helps ensure Lattice resources are not leaked due to bugs in the reconciliation process.

"},{"location":"contributing/developer-cheat-sheet/#code-consistency","title":"Code Consistency","text":"

To help reduce cognitive load while working through the code, it is critical to be consistent. For now, this applies most directly to imports and variable naming.

"},{"location":"contributing/developer-cheat-sheet/#imports","title":"Imports","text":"

Common imports should all use the same alias (or lack of alias) so that when we see aws. or vpclattice. or sdk. in the code we know what they refer to without having to double-check. Here are the conventions:

import (\n  \"github.com/aws/aws-sdk-go/service/vpclattice\" // no alias\n  \"github.com/aws/aws-sdk-go/aws\" // no alias\n\n  corev1 \"k8s.io/api/core/v1\"\n  gwv1 \"sigs.k8s.io/gateway-api/apis/v1\"\n  ctrl \"sigs.k8s.io/controller-runtime\"\n\n  pkg_aws \"github.com/aws/aws-application-networking-k8s/pkg/aws\"\n  model \"github.com/aws/aws-application-networking-k8s/pkg/model/lattice\"\n  anv1alpha1 \"github.com/aws/aws-application-networking-k8s/pkg/apis/applicationnetworking/v1alpha1\"\n)\n

For unit tests, this changes slightly for more readability when using the imports strictly for mocking:

import (\n  mocks_aws \"github.com/aws/aws-application-networking-k8s/pkg/aws\"\n  mocks \"github.com/aws/aws-application-networking-k8s/pkg/aws/services\"\n  mock_client \"github.com/aws/aws-application-networking-k8s/mocks/controller-runtime/client\"\n)\n

"},{"location":"contributing/developer-cheat-sheet/#service-service-or-service","title":"Service, Service, or Service?","text":"

Similarly, because there is overlap between the Gateway API spec, model types, and Lattice API nouns, it is important to use differentiating names between the types in components where the name could be ambiguous. The safest approach is to use a prefix:

An example in practice would be:

  var k8sSvc *corev1.Service\n  var modelSvc *model.Service\n  var latticeSvc *vpclattice.ServiceSummary\n

There are some objects which interact unambiguously with an underlying type, for example in vpclattice.go the types are always VPC Lattice API types, so disambiguating is less important there.

"},{"location":"contributing/developer/","title":"Developer Guide","text":""},{"location":"contributing/developer/#prerequisites","title":"Prerequisites","text":"

Tools

Install these tools before proceeding:

  1. AWS CLI,
  2. kubectl - the Kubernetes CLI,
  3. helm - the package manager for Kubernetes,
  4. eksctl - the CLI for Amazon EKS,
  5. go v1.20.x - language,
  6. yq - CLI to manipulate yaml files,
  7. jq - CLI to manipulate json files,
  8. make - build automation tool.

Cluster creation and setup

Before proceeding to the next sections, you need to:

  1. Create and set up a cluster dev-cluster with the controller, following the AWS Gateway API Controller installation guide on Amazon EKS.

    Note

You can either install the Controller and CRDs following the steps in the installation guide, or use the steps below if you prefer to create the individual CRDs.

  2. Clone the AWS Gateway API Controller repository.

    git clone git@github.com:aws/aws-application-networking-k8s.git\ncd aws-application-networking-k8s\n

  3. Install dependencies with the toolchain.sh script:
    make toolchain\n
"},{"location":"contributing/developer/#setup","title":"Setup","text":"

Once the cluster is ready, we need to apply CRDs for Gateway API resources. First, install the core Gateway API CRDs:

v1.2 CRDs

Install the latest v1 CRDs:

kubectl kustomize \"github.com/kubernetes-sigs/gateway-api/config/crd?ref=v1.2.0\" | kubectl apply -f -\n

Note

Note that the v1 CRDs are not included in the deploy-*.yaml manifests or the Helm chart.

And install additional CRDs for the controller:

kubectl apply -f config/crds/bases/externaldns.k8s.io_dnsendpoints.yaml\nkubectl apply -f config/crds/bases/gateway.networking.k8s.io_tlsroutes.yaml\nkubectl apply -f config/crds/bases/application-networking.k8s.aws_serviceexports.yaml\nkubectl apply -f config/crds/bases/application-networking.k8s.aws_serviceimports.yaml\nkubectl apply -f config/crds/bases/application-networking.k8s.aws_targetgrouppolicies.yaml\nkubectl apply -f config/crds/bases/application-networking.k8s.aws_vpcassociationpolicies.yaml\nkubectl apply -f config/crds/bases/application-networking.k8s.aws_accesslogpolicies.yaml\nkubectl apply -f config/crds/bases/application-networking.k8s.aws_iamauthpolicies.yaml\n

If e2e tests are terminated during execution, the clean-up stage may be skipped and resources will leak. To delete dangling resources manually, use the cleanup script:

make e2e-clean\n
"},{"location":"contributing/developer/#local-development","title":"Local Development","text":"

A minimal test of changes can be done with make presubmit. This command also runs on PRs.

make presubmit\n

Start the controller in development mode; it will point to the cluster (see setup above).

# should be the region of the cluster\nREGION=us-west-2 make run\n

You can explore a collection of different YAML configurations in the examples folder that can be applied to the cluster.

To run it against a specific VPC Lattice service endpoint:

LATTICE_ENDPOINT=https://vpc-lattice.us-west-2.amazonaws.com/ make run\n

To load environment variables more easily when running the controller locally from the GoLand IDE, you can run ./scripts/load_env_variables.sh and use the \"EnvFile\" GoLand plugin to read the env variables from the generated .env file.

"},{"location":"contributing/developer/#end-to-end-testing","title":"End-to-End Testing","text":"

For larger changes, it's recommended to run e2e suites on your local cluster. E2E tests require a service network named test-gateway with the cluster VPC associated. You can either set up the service network manually or use the DEFAULT_SERVICE_NETWORK option when running the controller locally (e.g. DEFAULT_SERVICE_NETWORK=test-gateway make run).

REGION=us-west-2 make e2e-test\n

For the RAM Share test suite, which runs cross-account e2e tests, you will need a secondary AWS Account with a role that can be assumed by the primary account during test execution. You can create an IAM Role, with a Trust Policy allowing the primary account to assume it, via the AWS IAM Console.

export SECONDARY_ACCOUNT_TEST_ROLE_ARN=arn:aws:iam::000000000000:role/MyRole\nexport FOCUS=\"RAM Share\"\nREGION=us-west-2 make e2e-test\n

You can use the FOCUS environment variable to run specific test cases based on a filter condition. Assign the string from a Describe(\"xxxxxx\") or It(\"xxxxxx\") block to the FOCUS environment variable to run those specific test cases.

var _ = Describe(\"HTTPRoute path matches\", func() {\n    It(\"HTTPRoute should support multiple path matches\", func() {\n        // test case body\n    })\n})\n

For example, to run the test case \"HTTPRoute should support multiple path matches\", you could run the following command:

export FOCUS=\"HTTPRoute should support multiple path matches\"\nexport REGION=us-west-2\nmake e2e-test\n

Conversely, you can use the SKIP environment variable to skip specific test cases.

For example, to skip the same test as above, you would run the following command:

export SKIP=\"HTTPRoute should support multiple path matches\"\n

For more detail on Ginkgo filter conditions, see https://onsi.github.io/ginkgo/#focused-specs and https://onsi.github.io/ginkgo/#description-based-filtering.

After all test cases finish running, the AfterSuite() function cleans up the Kubernetes and VPC Lattice resources created by the test run.

"},{"location":"contributing/developer/#documentations","title":"Documentations","text":"

The controller documentation is managed in the docs/ directory and built with mkdocs. It uses mike to manage versioning. To build and verify your changes locally:

pip install -r requirements.txt\nmake docs\n
The website will be located in the site/ directory. You can also run a local dev server by running mike serve or mkdocs serve.

"},{"location":"contributing/developer/#contributing","title":"Contributing","text":"

Before sending a Pull Request, you should run unit tests:

make presubmit\n

For larger, functional changes, run e2e tests:

make e2e-test\n

"},{"location":"contributing/developer/#make-docker-image","title":"Make Docker Image","text":"
make docker-build\n
"},{"location":"contributing/developer/#deploy-controller-inside-a-kubernetes-cluster","title":"Deploy Controller inside a Kubernetes Cluster","text":"

Generate deploy.yaml

make build-deploy\n

Then follow Deploying the AWS Gateway API Controller to configure and deploy the docker image.

"},{"location":"guides/advanced-configurations/","title":"Advanced configurations","text":"

This section covers advanced configuration techniques for installing and using the AWS Gateway API Controller, such as running the controller on a self-hosted cluster on AWS or using an IPv6 EKS cluster.

"},{"location":"guides/advanced-configurations/#using-a-self-managed-kubernetes-cluster","title":"Using a self-managed Kubernetes cluster","text":"

You can install AWS Gateway API Controller to a self-managed Kubernetes cluster in AWS.

However, the controller utilizes IMDS to get necessary information from instance metadata, such as the AWS account ID and VPC ID. If IMDS is not available on your cluster, you need to provide this configuration manually through environment variables such as CLUSTER_NAME, CLUSTER_VPC_ID, AWS_ACCOUNT_ID, and REGION (see Environment Variables).
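A minimal sketch of providing these settings, assuming the controller Deployment is named gateway-api-controller (as in the Helm chart) and using placeholder values:

kubectl set env deployment/gateway-api-controller -n aws-application-networking-system \\\n  CLUSTER_NAME=my-cluster \\\n  CLUSTER_VPC_ID=vpc-xxxxxxxx \\\n  AWS_ACCOUNT_ID=111122223333 \\\n  REGION=us-west-2\n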

"},{"location":"guides/advanced-configurations/#ipv6-support","title":"IPv6 support","text":"

IPv6 address type is automatically used for your services and pods if your cluster is configured to use IPv6 addresses.

If your cluster is configured to be dual-stack, you can set the IP address type of your service using the ipFamilies field. For example:

parking_service.yaml
apiVersion: v1\nkind: Service\nmetadata:\n  name: ipv4-target-in-dual-stack-cluster\nspec:\n  ipFamilies:\n    - \"IPv4\"\n  selector:\n    app: parking\n  ports:\n    - protocol: TCP\n      port: 80\n      targetPort: 8090\n
"},{"location":"guides/custom-domain-name/","title":"Configure a Custom Domain Name for HTTPRoute","text":"

When you create an HTTPRoute under the amazon-vpc-lattice gatewayclass, the controller creates an AWS VPC Lattice Service during reconciliation. VPC Lattice generates a unique Fully Qualified Domain Name (FQDN) for you; however, this auto-generated domain name is not easy to remember.

If you'd prefer to use a custom domain name for an HTTPRoute, you can specify it in the hostnames field of the HTTPRoute. Here is one example:

custom-domain-route.yaml
apiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n  name: review\nspec:\n  hostnames:\n  - review.my-test.com  # this is the custom domain name\n  parentRefs:\n  - name: my-hotel\n    sectionName: http\n  rules:    \n  - backendRefs:\n    - name: review2\n      kind: Service\n      port: 8090\n    matches:\n    - path:\n        type: PathPrefix\n        value: /review2\n
"},{"location":"guides/custom-domain-name/#managing-dns-records-using-externaldns","title":"Managing DNS records using ExternalDNS","text":"

To register custom domain names with your DNS provider, we recommend using ExternalDNS. AWS Gateway API Controller supports ExternalDNS integration through the CRD source - the controller will manage the DNSEndpoint resource for you.

To use ExternalDNS with the AWS Gateway API Controller, you need to:

  1. Install the DNSEndpoint CRD. It is bundled with both the Gateway API Controller Helm chart and the files/controller-installation/deploy-*.yaml manifest, but can also be installed manually with the following command:

    kubectl apply -f config/crds/bases/externaldns.k8s.io_dnsendpoints.yaml\n

    Note

If the DNSEndpoint CRD does not exist, the DNSEndpoint resource will not be created nor managed by the controller.

  2. Restart the controller if it is already running.

  3. Run the ExternalDNS controller watching the crd source. The following example command runs ExternalDNS compiled from source, using the AWS Route53 provider:
    build/external-dns --source crd --crd-source-apiversion externaldns.k8s.io/v1alpha1 \\\n--crd-source-kind DNSEndpoint --provider aws --txt-prefix \"prefix.\"\n
  4. Create HTTPRoutes and Services. The controller should create a DNSEndpoint resource owned by the HTTPRoute you created.
  5. ExternalDNS will watch the changes and create DNS records on the configured DNS provider (see the sketch below).
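For reference, a DNSEndpoint managed by the controller looks roughly like the following sketch; the hostname matches the custom domain example above, while the record type and target value are illustrative assumptions:

apiVersion: externaldns.k8s.io/v1alpha1\nkind: DNSEndpoint\nmetadata:\n  name: review\nspec:\n  endpoints:\n    - dnsName: review.my-test.com\n      recordType: CNAME\n      targets:\n        - <VPC Lattice assigned FQDN for the HTTPRoute>\n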
"},{"location":"guides/custom-domain-name/#notes","title":"Notes","text":""},{"location":"guides/deploy/","title":"Deploy the AWS Gateway API Controller on Amazon EKS","text":"

This Deployment Guide provides an end-to-end procedure to install the AWS Gateway API Controller with Amazon Elastic Kubernetes Service.

Amazon EKS is a simple, recommended way of preparing a cluster for running services with the AWS Gateway API Controller; however, the controller can be used on any Kubernetes cluster on AWS. Check out the Advanced Configurations section below for instructions on how to install and run the controller on self-hosted Kubernetes clusters on AWS.

"},{"location":"guides/deploy/#prerequisites","title":"Prerequisites","text":"

Install these tools before proceeding:

  1. AWS CLI,
  2. kubectl - the Kubernetes CLI,
  3. helm - the package manager for Kubernetes,
  4. eksctl - the CLI for Amazon EKS,
  5. jq - CLI to manipulate json files.
"},{"location":"guides/deploy/#setup","title":"Setup","text":"

Set your AWS Region and Cluster Name as environment variables. See the Amazon VPC Lattice FAQs for a list of supported regions.

export AWS_REGION=<cluster_region>\nexport CLUSTER_NAME=<cluster_name>\n

Create a cluster (optional)

You can easily create a cluster with eksctl, the CLI for Amazon EKS:

eksctl create cluster --name $CLUSTER_NAME --region $AWS_REGION\n

Allow traffic from Amazon VPC Lattice

You must set up security groups so that all Pods communicating with VPC Lattice allow inbound traffic from the VPC Lattice managed prefix lists. See Control traffic to resources using security groups for details. VPC Lattice has both IPv4 and IPv6 prefix lists available.

  1. Configure the EKS nodes' security group to receive traffic from the VPC Lattice network.

    CLUSTER_SG=<your_node_security_group>\n

    Note

If you have created the cluster with the eksctl create cluster --name $CLUSTER_NAME --region $AWS_REGION command, you can use this command to export the Security Group ID:

    CLUSTER_SG=$(aws eks describe-cluster --name $CLUSTER_NAME --output json| jq -r '.cluster.resourcesVpcConfig.clusterSecurityGroupId')\n
    PREFIX_LIST_ID=$(aws ec2 describe-managed-prefix-lists --query \"PrefixLists[?PrefixListName==\"\\'com.amazonaws.$AWS_REGION.vpc-lattice\\'\"].PrefixListId\" | jq -r '.[]')\naws ec2 authorize-security-group-ingress --group-id $CLUSTER_SG --ip-permissions \"PrefixListIds=[{PrefixListId=${PREFIX_LIST_ID}}],IpProtocol=-1\"\nPREFIX_LIST_ID_IPV6=$(aws ec2 describe-managed-prefix-lists --query \"PrefixLists[?PrefixListName==\"\\'com.amazonaws.$AWS_REGION.ipv6.vpc-lattice\\'\"].PrefixListId\" | jq -r '.[]')\naws ec2 authorize-security-group-ingress --group-id $CLUSTER_SG --ip-permissions \"PrefixListIds=[{PrefixListId=${PREFIX_LIST_ID_IPV6}}],IpProtocol=-1\"\n

Set up IAM permissions

The AWS Gateway API Controller needs the necessary permissions to operate.

  1. Create a policy (recommended-inline-policy.json) in IAM with the following content that can invoke the Gateway API, and copy the policy ARN for later use:

    curl https://raw.githubusercontent.com/aws/aws-application-networking-k8s/main/files/controller-installation/recommended-inline-policy.json  -o recommended-inline-policy.json\n\naws iam create-policy \\\n    --policy-name VPCLatticeControllerIAMPolicy \\\n    --policy-document file://recommended-inline-policy.json\n\nexport VPCLatticeControllerIAMPolicyArn=$(aws iam list-policies --query 'Policies[?PolicyName==`VPCLatticeControllerIAMPolicy`].Arn' --output text)\n
  2. Create the aws-application-networking-system namespace:

    kubectl apply -f https://raw.githubusercontent.com/aws/aws-application-networking-k8s/main/files/controller-installation/deploy-namesystem.yaml\n

You can choose between Pod Identities (recommended) and IAM Roles for Service Accounts (IRSA) to set up controller permissions.

Pod Identities (recommended) / IRSA

Set up the Pod Identities Agent

To use Pod Identities, we need to set up the Agent and configure the controller's Kubernetes Service Account to assume the necessary permissions with EKS Pod Identity.

Read if you are using a custom node role

The node role needs to have permissions for the Pod Identity Agent to do the AssumeRoleForPodIdentity action in the EKS Auth API. Follow the documentation if you are not using the AWS managed policy AmazonEKSWorkerNodePolicy.

  1. Run the following AWS CLI command to create the Pod Identity addon.
    aws eks create-addon --cluster-name $CLUSTER_NAME --addon-name eks-pod-identity-agent --addon-version v1.0.0-eksbuild.1\n
    Verify that the Pod Identity Agent pods are running:

    kubectl get pods -n kube-system | grep 'eks-pod-identity-agent'\n

Assign role to Service Account

Create an IAM role and associate it with a Kubernetes service account.

  1. Create a Service Account.

    cat >gateway-api-controller-service-account.yaml <<EOF\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n    name: gateway-api-controller\n    namespace: aws-application-networking-system\nEOF\nkubectl apply -f gateway-api-controller-service-account.yaml\n
  2. Create a trust policy file for the IAM role.

    cat >trust-relationship.json <<EOF\n{\n    \"Version\": \"2012-10-17\",\n    \"Statement\": [\n        {\n            \"Sid\": \"AllowEksAuthToAssumeRoleForPodIdentity\",\n            \"Effect\": \"Allow\",\n            \"Principal\": {\n                \"Service\": \"pods.eks.amazonaws.com\"\n            },\n            \"Action\": [\n                \"sts:AssumeRole\",\n                \"sts:TagSession\"\n            ]\n        }\n    ]\n}\nEOF\n
  3. Create the role.

    aws iam create-role --role-name VPCLatticeControllerIAMRole --assume-role-policy-document file://trust-relationship.json --description \"IAM Role for AWS Gateway API Controller for VPC Lattice\"\naws iam attach-role-policy --role-name VPCLatticeControllerIAMRole --policy-arn=$VPCLatticeControllerIAMPolicyArn\nexport VPCLatticeControllerIAMRoleArn=$(aws iam list-roles --query 'Roles[?RoleName==`VPCLatticeControllerIAMRole`].Arn' --output text)\n
  4. Create the association

    aws eks create-pod-identity-association --cluster-name $CLUSTER_NAME --role-arn $VPCLatticeControllerIAMRoleArn --namespace aws-application-networking-system --service-account gateway-api-controller\n

You can use AWS IAM Roles for Service Accounts (IRSA) to assign the Controller the necessary permissions via a ServiceAccount.

  1. Create an IAM OIDC provider: See Creating an IAM OIDC provider for your cluster for details.

    eksctl utils associate-iam-oidc-provider --cluster $CLUSTER_NAME --approve --region $AWS_REGION\n

  2. Create an iamserviceaccount for pod-level permissions:

    eksctl create iamserviceaccount \\\n    --cluster=$CLUSTER_NAME \\\n    --namespace=aws-application-networking-system \\\n    --name=gateway-api-controller \\\n    --attach-policy-arn=$VPCLatticeControllerIAMPolicyArn \\\n    --override-existing-serviceaccounts \\\n    --region $AWS_REGION \\\n    --approve\n
"},{"location":"guides/deploy/#install-the-controller","title":"Install the Controller","text":"
  1. Run either kubectl or helm to deploy the controller. Check Environment Variables for a detailed explanation of each configuration option.

    Helm / Kubectl
    # login to ECR\naws ecr-public get-login-password --region us-east-1 | helm registry login --username AWS --password-stdin public.ecr.aws\n# Run helm with either install or upgrade\nhelm install gateway-api-controller \\\n    oci://public.ecr.aws/aws-application-networking-k8s/aws-gateway-controller-chart \\\n    --version=v1.1.0 \\\n    --set=serviceAccount.create=false \\\n    --namespace aws-application-networking-system \\\n    --set=log.level=info # use \"debug\" for debug level logs\n
    kubectl apply -f https://raw.githubusercontent.com/aws/aws-application-networking-k8s/main/files/controller-installation/deploy-v1.1.0.yaml\n
  2. Create the amazon-vpc-lattice GatewayClass:

    kubectl apply -f https://raw.githubusercontent.com/aws/aws-application-networking-k8s/main/files/controller-installation/gatewayclass.yaml\n

"},{"location":"guides/environment/","title":"Configuration","text":""},{"location":"guides/environment/#environment-variables","title":"Environment Variables","text":"

AWS Gateway API Controller for VPC Lattice supports a number of configuration options, which are set through environment variables. The following environment variables are available, and all of them are optional.

"},{"location":"guides/environment/#cluster_name","title":"CLUSTER_NAME","text":"

Type: string

Default: Inferred from IMDS metadata

A unique name to identify a cluster. This will be used in AWS resource tags to record ownership. This variable is required except on EKS clusters; it needs to be specified if IMDS is not available.

"},{"location":"guides/environment/#cluster_vpc_id","title":"CLUSTER_VPC_ID","text":"

Type: string

Default: Inferred from IMDS metadata

When running AWS Gateway API Controller outside the Kubernetes Cluster, this specifies the VPC of the cluster. This needs to be specified if IMDS is not available.

"},{"location":"guides/environment/#aws_account_id","title":"AWS_ACCOUNT_ID","text":"

Type: string

Default: Inferred from IMDS metadata

When running AWS Gateway API Controller outside the Kubernetes Cluster, this specifies the AWS account. This needs to be specified if IMDS is not available.

"},{"location":"guides/environment/#region","title":"REGION","text":"

Type: string

Default: Inferred from IMDS metadata

When running AWS Gateway API Controller outside the Kubernetes Cluster, this specifies the AWS Region of VPC Lattice Service endpoint. This needs to be specified if IMDS is not available.

"},{"location":"guides/environment/#log_level","title":"LOG_LEVEL","text":"

Type: string

Default: \"info\"

When set as \"debug\", the AWS Gateway API Controller will emit debug level logs.

"},{"location":"guides/environment/#default_service_network","title":"DEFAULT_SERVICE_NETWORK","text":"

Type: string

Default: \"\"

When set to a non-empty value, the controller creates a service network with that name. The created service network will also be associated with the cluster VPC.
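For example, when running the controller locally (see the Developer Guide):

DEFAULT_SERVICE_NETWORK=my-hotel REGION=us-west-2 make run\n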

"},{"location":"guides/environment/#enable_service_network_override","title":"ENABLE_SERVICE_NETWORK_OVERRIDE","text":"

Type: string

Default: \"\"

When set as \"true\", the controller will run in \"single service network\" mode that will override all gateways to point to default service network, instead of searching for service network with the same name. Can be used for small setups and conformance tests.

"},{"location":"guides/environment/#webhook_enabled","title":"WEBHOOK_ENABLED","text":"

Type: string

Default: \"\"

When set as \"true\", the controller will start the webhook listener responsible for pod readiness gate injection (see pod-readiness-gates.md). This is disabled by default for deploy.yaml because the controller will not start successfully without the TLS certificate for the webhook in place. While this can be fixed by running scripts/gen-webhook-cert.sh, it requires manual action. The webhook is enabled by default for the Helm install as the Helm install will also generate the necessary certificate.

"},{"location":"guides/environment/#disable_tagging_service_api","title":"DISABLE_TAGGING_SERVICE_API","text":"

Type: string

Default: \"\"

When set as \"true\", the controller will not use the AWS Resource Groups Tagging API.

The Resource Groups Tagging API is only available on the public internet, so customers using private clusters will need to enable this option. When enabled, the controller will use VPC Lattice APIs to look up tags, which is less performant and requires more API calls.

The Helm chart sets this value to \"false\" by default.

"},{"location":"guides/environment/#route_max_concurrent_reconciles","title":"ROUTE_MAX_CONCURRENT_RECONCILES","text":"

Type: int

Default: 1

Maximum number of concurrently running reconcile loops per route type (HTTP, GRPC, TLS).

"},{"location":"guides/getstarted/","title":"Getting Started with AWS Gateway API Controller","text":"

This guide helps you get started using the controller.

Following this guide, you will:

Using these examples as a foundation, see the Concepts section for ways to further configure service-to-service communications.

"},{"location":"guides/getstarted/#prerequisites","title":"Prerequisites","text":"

Before proceeding to the next sections, you need to:

"},{"location":"guides/getstarted/#single-cluster","title":"Single cluster","text":"

This example creates a single cluster in a single VPC, then configures two HTTPRoutes (rates and inventory) and three Kubernetes services (parking, review, and inventory-1). The following figure illustrates this setup:

Setup in-cluster service-to-service communications

  1. AWS Gateway API Controller needs a VPC Lattice service network to operate.

    Helm / AWS CLI

If you installed the controller with Helm, you can update the chart configuration by specifying the defaultServiceNetwork variable:

aws ecr-public get-login-password --region us-east-1 | helm registry login --username AWS --password-stdin public.ecr.aws\nhelm upgrade gateway-api-controller \\\noci://public.ecr.aws/aws-application-networking-k8s/aws-gateway-controller-chart \\\n--version=v1.1.0 \\\n--reuse-values \\\n--namespace aws-application-networking-system \\\n--set=defaultServiceNetwork=my-hotel \n

Alternatively, you can use the AWS CLI to manually create the my-hotel service network and its VPC association:

    aws vpc-lattice create-service-network --name my-hotel\nSERVICE_NETWORK_ID=$(aws vpc-lattice list-service-networks --query \"items[?name==\"\\'my-hotel\\'\"].id\" | jq -r '.[]')\nCLUSTER_VPC_ID=$(aws eks describe-cluster --name $CLUSTER_NAME | jq -r .cluster.resourcesVpcConfig.vpcId)\naws vpc-lattice create-service-network-vpc-association --service-network-identifier $SERVICE_NETWORK_ID --vpc-identifier $CLUSTER_VPC_ID\n

    Ensure the service network created above is ready to accept traffic from the new VPC, by checking if the VPC association status is ACTIVE:

    aws vpc-lattice list-service-network-vpc-associations --vpc-id $CLUSTER_VPC_ID\n
    {\n    \"items\": [\n        {\n            ...\n            \"status\": \"ACTIVE\",\n            ...\n        }\n    ]\n}\n

  2. Create the Kubernetes Gateway my-hotel:

    kubectl apply -f files/examples/my-hotel-gateway.yaml\n

Verify that the my-hotel Gateway is created with PROGRAMMED status equal to True:

    kubectl get gateway\n
    NAME       CLASS                ADDRESS   PROGRAMMED   AGE\nmy-hotel   amazon-vpc-lattice               True      7d12h\n

  3. Create the Kubernetes HTTPRoute rates, which has path matches routing to the parking service and the review service:

    kubectl apply -f files/examples/parking.yaml\nkubectl apply -f files/examples/review.yaml\nkubectl apply -f files/examples/rate-route-path.yaml\n

  4. Create another Kubernetes HTTPRoute inventory:
    kubectl apply -f files/examples/inventory-ver1.yaml\nkubectl apply -f files/examples/inventory-route.yaml\n
  5. Find the HTTPRoute's DNS name from the HTTPRoute status:

    kubectl get httproute\n
    NAME        HOSTNAMES   AGE\ninventory               51s\nrates                   6m11s\n

  6. Check the VPC Lattice generated DNS address for the HTTPRoutes inventory and rates (this could take up to one minute to populate):

    kubectl get httproute inventory -o yaml \n
        apiVersion: gateway.networking.k8s.io/v1\n    kind: HTTPRoute\n    metadata:\n        annotations:\n        application-networking.k8s.aws/lattice-assigned-domain-name: inventory-default-xxxxxx.xxxxx.vpc-lattice-svcs.us-west-2.on.aws\n    ...\n

    kubectl get httproute rates -o yaml\n
        apiVersion: gateway.networking.k8s.io/v1\n    kind: HTTPRoute\n    metadata:\n        annotations:\n        application-networking.k8s.aws/lattice-assigned-domain-name: rates-default-xxxxxx.xxxxxxxxx.vpc-lattice-svcs.us-west-2.on.aws\n    ...\n

  7. If the previous step returns the expected response, store the VPC Lattice assigned DNS names in variables.

    ratesFQDN=$(kubectl get httproute rates -o json | jq -r '.metadata.annotations.\"application-networking.k8s.aws/lattice-assigned-domain-name\"')\ninventoryFQDN=$(kubectl get httproute inventory -o json | jq -r '.metadata.annotations.\"application-networking.k8s.aws/lattice-assigned-domain-name\"')\n

    Confirm that the URLs are stored correctly:

    echo \"$ratesFQDN \\n$inventoryFQDN\"\n
    rates-default-xxxxxx.xxxxxxxxx.vpc-lattice-svcs.us-west-2.on.aws \ninventory-default-xxxxxx.xxxxx.vpc-lattice-svcs.us-west-2.on.aws\n

Verify service-to-service communications

  1. Check connectivity from the inventory-ver1 service to parking and review services:

    kubectl exec deploy/inventory-ver1 -- curl -s $ratesFQDN/parking $ratesFQDN/review\n
    Requsting to Pod(parking-xxxxx): parking handler pod\nRequsting to Pod(review-xxxxxx): review handler pod\n
  2. Check connectivity from the parking service to the inventory-ver1 service:

    kubectl exec deploy/parking -- curl -s $inventoryFQDN\n
    Requsting to Pod(inventory-xxx): Inventory-ver1 handler pod\n
    Now you can confirm that service-to-service communication within one cluster is working as expected.

"},{"location":"guides/getstarted/#multi-cluster","title":"Multi-cluster","text":"

This section builds on the previous one. We will migrate the Kubernetes inventory service from the EKS cluster we previously created to a new cluster in a different VPC, located in the same AWS account.

Set up the inventory-ver2 service and ServiceExport in the second cluster

Warning

VPC Lattice is a regional service, so you will need to create this second cluster in the same AWS Region as gw-api-controller-demo. Keep this in mind when setting the AWS_REGION variable in the following steps.

export AWS_REGION=<clusters_region>\n

  1. Create a second Kubernetes cluster gw-api-controller-demo-2 with the Controller installed, using the same instructions for creating the cluster and installing the controller on Amazon EKS as for the first cluster.

  2. For the sake of simplicity, let's set aliases for our clusters in the Kubernetes config file.

    aws eks update-kubeconfig --name gw-api-controller-demo --region $AWS_REGION --alias gw-api-controller-demo\naws eks update-kubeconfig --name gw-api-controller-demo-2 --region $AWS_REGION --alias gw-api-controller-demo-2\nkubectl config get-contexts\n
  3. Ensure you're using the second cluster's kubectl context.

    kubectl config current-context\n
    If your context is set to the first cluster, switch it to use the second cluster:
    kubectl config use-context gw-api-controller-demo-2\n

  4. Create the service network association.

    Helm / AWS CLI

If you installed the controller with Helm, you can update the chart configuration by specifying the defaultServiceNetwork variable:

aws ecr-public get-login-password --region us-east-1 | helm registry login --username AWS --password-stdin public.ecr.aws\nhelm upgrade gateway-api-controller \\\noci://public.ecr.aws/aws-application-networking-k8s/aws-gateway-controller-chart \\\n--version=v1.1.0 \\\n--reuse-values \\\n--namespace aws-application-networking-system \\\n--set=defaultServiceNetwork=my-hotel \n

You can use the AWS CLI to manually create a VPC Lattice service network association to the previously created my-hotel service network:

    SERVICE_NETWORK_ID=$(aws vpc-lattice list-service-networks --query \"items[?name==\"\\'my-hotel\\'\"].id\" | jq -r '.[]')\nCLUSTER_VPC_ID=$(aws eks describe-cluster --name $CLUSTER_NAME | jq -r .cluster.resourcesVpcConfig.vpcId)\naws vpc-lattice create-service-network-vpc-association --service-network-identifier $SERVICE_NETWORK_ID --vpc-identifier $CLUSTER_VPC_ID\n

    Ensure the service network created above is ready to accept traffic from the new VPC, by checking if the VPC association status is ACTIVE:

    aws vpc-lattice list-service-network-vpc-associations --vpc-id $CLUSTER_VPC_ID\n
    {\n    \"items\": [\n        {\n            ...\n            \"status\": \"ACTIVE\",\n            ...\n        }\n    ]\n}\n

  5. Create the Kubernetes Gateway my-hotel:

    kubectl apply -f files/examples/my-hotel-gateway.yaml\n

Verify that the my-hotel Gateway is created with PROGRAMMED status equal to True:

    kubectl get gateway\n
    NAME       CLASS                ADDRESS   PROGRAMMED   AGE\nmy-hotel   amazon-vpc-lattice               True      7d12h\n

  6. Create a Kubernetes inventory-ver2 service in the second cluster:

    kubectl apply -f files/examples/inventory-ver2.yaml\n

  7. Export this Kubernetes service inventory-ver2 from the second cluster, so that it can be referenced by an HTTPRoute in the first cluster:
    kubectl apply -f files/examples/inventory-ver2-export.yaml\n

Switch back to the first cluster

  1. Switch context back to the first cluster
    kubectl config use-context gw-api-controller-demo\n
  2. Create Kubernetes service import inventory-ver2 in the first cluster:
    kubectl apply -f files/examples/inventory-ver2-import.yaml\n
  3. Update the HTTPRoute inventory rules to route 10% of traffic to the first cluster and 90% of traffic to the second cluster (a sketch of this manifest is shown after this walkthrough):
    kubectl apply -f files/examples/inventory-route-bluegreen.yaml\n
  4. Check the service-to-service connectivity from parking (in the first cluster) to inventory-ver1 (in the first cluster) and inventory-ver2 (in the second cluster):

    inventoryFQDN=$(kubectl get httproute inventory -o json | jq -r '.metadata.annotations.\"application-networking.k8s.aws/lattice-assigned-domain-name\"')\nkubectl exec deploy/parking -- sh -c 'for ((i=1; i<=30; i++)); do curl -s \"$0\"; done' \"$inventoryFQDN\"\n

    Requesting to Pod(inventory-ver2-6dc74b45d8-rlnlt): Inventory-ver2 handler pod <----> in 2nd cluster\nRequesting to Pod(inventory-ver2-6dc74b45d8-rlnlt): Inventory-ver2 handler pod\nRequesting to Pod(inventory-ver2-6dc74b45d8-rlnlt): Inventory-ver2 handler pod\nRequesting to Pod(inventory-ver2-6dc74b45d8-rlnlt): Inventory-ver2 handler pod\nRequesting to Pod(inventory-ver2-6dc74b45d8-95rsr): Inventory-ver1 handler pod <----> in 1st cluster\nRequesting to Pod(inventory-ver2-6dc74b45d8-rlnlt): Inventory-ver2 handler pod\nRequesting to Pod(inventory-ver2-6dc74b45d8-95rsr): Inventory-ver2 handler pod\nRequesting to Pod(inventory-ver2-6dc74b45d8-95rsr): Inventory-ver2 handler pod\nRequesting to Pod(inventory-ver1-74fc59977-wg8br): Inventory-ver1 handler pod....\n

    You can see that the traffic is distributed between inventory-ver1 and inventory-ver2 as expected.
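For reference, the manifest applied in step 3 follows the weighted-routing pattern from the HTTPRoute examples earlier in this documentation; a sketch of what inventory-route-bluegreen.yaml might contain (the port is an assumption):

apiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n  name: inventory\nspec:\n  parentRefs:\n    - name: my-hotel\n      sectionName: http\n  rules:\n    - backendRefs:\n        - name: inventory-ver1\n          kind: Service\n          port: 80\n          weight: 10\n        - name: inventory-ver2\n          kind: ServiceImport\n          port: 80\n          weight: 90\n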

"},{"location":"guides/getstarted/#cleanup","title":"Cleanup","text":"

To avoid additional charges, remove the demo infrastructure from your AWS account.

Multi-cluster

Delete resources in the Multi-cluster walkthrough.

Warning

Remember that you need to have the AWS Region set.

export AWS_REGION=<cluster_region>\n

  1. Clean up the VPC Lattice service and service import in the gw-api-controller-demo cluster:

    kubectl config use-context gw-api-controller-demo\nkubectl delete -f files/examples/inventory-route-bluegreen.yaml\nkubectl delete -f files/examples/inventory-ver2-import.yaml\n

  2. Delete service export and applications in gw-api-controller-demo-2 cluster:

    kubectl config use-context gw-api-controller-demo-2\nkubectl delete -f files/examples/inventory-ver2-export.yaml\nkubectl delete -f files/examples/inventory-ver2.yaml\n

  3. Delete the service network association (this could take up to one minute):

    CLUSTER_NAME=gw-api-controller-demo-2\nCLUSTER_VPC_ID=$(aws eks describe-cluster --name $CLUSTER_NAME | jq -r .cluster.resourcesVpcConfig.vpcId)\nSERVICE_NETWORK_ASSOCIATION_IDENTIFIER=$(aws vpc-lattice list-service-network-vpc-associations --vpc-id $CLUSTER_VPC_ID --query \"items[?serviceNetworkName==\"\\'my-hotel\\'\"].id\" | jq -r '.[]')\naws vpc-lattice delete-service-network-vpc-association  --service-network-vpc-association-identifier $SERVICE_NETWORK_ASSOCIATION_IDENTIFIER\n

Single cluster

Delete resources in the Single cluster walkthrough.

Warning

Remember that you need to have the AWS Region set.

export AWS_REGION=<cluster_region>\n

  1. Delete VPC Lattice services and applications in gw-api-controller-demo cluster:

    kubectl config use-context gw-api-controller-demo\nkubectl delete -f files/examples/inventory-route.yaml\nkubectl delete -f files/examples/inventory-ver1.yaml\nkubectl delete -f files/examples/rate-route-path.yaml\nkubectl delete -f files/examples/parking.yaml\nkubectl delete -f files/examples/review.yaml\nkubectl delete -f files/examples/my-hotel-gateway.yaml\n

  2. Delete the service network association (this could take up to one minute):

    CLUSTER_NAME=gw-api-controller-demo\nCLUSTER_VPC_ID=$(aws eks describe-cluster --name $CLUSTER_NAME | jq -r .cluster.resourcesVpcConfig.vpcId)\nSERVICE_NETWORK_ASSOCIATION_IDENTIFIER=$(aws vpc-lattice list-service-network-vpc-associations --vpc-id $CLUSTER_VPC_ID --query \"items[?serviceNetworkName==\"\\'my-hotel\\'\"].id\" | jq -r '.[]')\naws vpc-lattice delete-service-network-vpc-association  --service-network-vpc-association-identifier $SERVICE_NETWORK_ASSOCIATION_IDENTIFIER\n

Cleanup VPC Lattice Resources

  1. Cleanup controllers in gw-api-controller-demo and gw-api-controller-demo-2 clusters:

    Helm / Kubectl
    kubectl config use-context gw-api-controller-demo\nCLUSTER_NAME=gw-api-controller-demo\naws ecr-public get-login-password --region us-east-1 | helm registry login --username AWS --password-stdin public.ecr.aws\nhelm uninstall gateway-api-controller --namespace aws-application-networking-system \nkubectl config use-context gw-api-controller-demo-2\nCLUSTER_NAME=gw-api-controller-demo-2\naws ecr-public get-login-password --region us-east-1 | helm registry login --username AWS --password-stdin public.ecr.aws\nhelm uninstall gateway-api-controller --namespace aws-application-networking-system \n
    kubectl config use-context gw-api-controller-demo\nkubectl delete -f https://raw.githubusercontent.com/aws/aws-application-networking-k8s/main/files/controller-installation/deploy-v1.1.0.yaml\nkubectl config use-context gw-api-controller-demo-2\nkubectl delete -f https://raw.githubusercontent.com/aws/aws-application-networking-k8s/main/files/controller-installation/deploy-v1.1.0.yaml\n
  2. Delete the service network:

    1. Ensure the service network associations have been deleted (do not move forward if the deletion is still IN PROGRESS; a small polling sketch follows this list):
      SN_IDENTIFIER=$(aws vpc-lattice list-service-networks --query \"items[?name==\"\\'my-hotel\\'\"].id\" | jq -r '.[]')\naws vpc-lattice list-service-network-vpc-associations --service-network-identifier $SN_IDENTIFIER\n
    2. Delete my-hotel service network:
      aws vpc-lattice delete-service-network --service-network-identifier $SN_IDENTIFIER\n
    3. Ensure the service network my-hotel is deleted:
      aws vpc-lattice list-service-networks\n
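
    A small polling sketch for the wait in step 1, assuming SN_IDENTIFIER is set as above:
      until [ -z \"$(aws vpc-lattice list-service-network-vpc-associations --service-network-identifier $SN_IDENTIFIER --query 'items[]' --output text)\" ]; do echo \"waiting for association deletion...\"; sleep 10; done\n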

Cleanup the clusters

Finally, remember to delete the clusters you created for this walkthrough:

  1. Delete gw-api-controller-demo cluster:

    eksctl delete cluster --name=gw-api-controller-demo\nkubectl config delete-context gw-api-controller-demo\n

  2. Delete gw-api-controller-demo-2 cluster:

    eksctl delete cluster --name=gw-api-controller-demo-2\nkubectl config delete-context gw-api-controller-demo-2\n

"},{"location":"guides/grpc/","title":"GRPCRoute Support","text":""},{"location":"guides/grpc/#what-is-grpcroute","title":"What is GRPCRoute?","text":"

The GRPCRoute is a custom resource defined in the Gateway API that specifies how gRPC traffic should be routed. It allows you to set up routing rules based on various match criteria, such as service names and methods. With GRPCRoute, you can ensure that your gRPC traffic is directed to the appropriate backend services in a Kubernetes environment.
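
As a flavor of the API, here is a minimal sketch of a GRPCRoute that matches a single gRPC method; the gateway, backend, and port names are illustrative, and the walkthrough below applies a ready-made files/examples/greeter-grpc-route.yaml, so you do not need to apply this one:

cat <<EOF | kubectl apply -f -\napiVersion: gateway.networking.k8s.io/v1\nkind: GRPCRoute\nmetadata:\n  name: greeter\nspec:\n  parentRefs:\n  - name: my-hotel\n    sectionName: https\n  rules:\n  - matches:\n    - method:\n        service: helloworld.Greeter\n        method: SayHello\n    backendRefs:\n    - name: greeter-grpc-server\n      kind: Service\n      port: 50051\nEOF\n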

For a detailed reference on GRPCRoute from the Gateway API, please check the official Gateway API documentation.

"},{"location":"guides/grpc/#setting-up-a-helloworld-grpc-server","title":"Setting up a HelloWorld gRPC Server","text":"

In this section, we'll walk you through deploying a simple \"HelloWorld\" gRPC server and setting up the required routing rules using the Gateway API.

Deploying the Necessary Resources

  1. Apply the Gateway Configuration: This YAML file contains the definition for a gateway with an HTTPS listener.

    kubectl apply -f files/examples/my-hotel-gateway-multi-listeners.yaml\n

  2. Deploy the gRPC Server: Deploy the example gRPC server which will respond to the SayHello gRPC request.

    kubectl apply -f files/examples/greeter-grpc-server.yaml\n

  3. Set Up the gRPC Route: This YAML file contains the GRPCRoute resource which directs the gRPC traffic to our example server.

    kubectl apply -f files/examples/greeter-grpc-route.yaml\n

  4. Verify the Deployment: Check to make sure that our gRPC server pod is running and get its name.

    kubectl get pods -A\n

Testing the gRPC Server

  1. Access the gRPC Server Pod: Copy the name of the pod running the greeter-grpc-server and use it to access the pod's shell.

    kubectl exec -it <name-of-grpc-server-pod> -- bash\n

  2. Prepare the Test Client: Inside the pod shell, create a test client by pasting the provided Go code.

    cat << EOF > test.go\npackage main\n\nimport (\n   \"crypto/tls\"\n   \"log\"\n   \"os\"\n\n   \"golang.org/x/net/context\"\n   \"google.golang.org/grpc\"\n   \"google.golang.org/grpc/credentials\"\n   pb \"google.golang.org/grpc/examples/helloworld/helloworld\"\n)\n\nfunc main() {\n   if len(os.Args) < 3 {\n       log.Fatalf(\"Usage: %s <address> <port>\", os.Args[0])\n   }\n\n   address := os.Args[1] + \":\" + os.Args[2]\n\n   // Create a connection with insecure TLS (no server verification).\n   creds := credentials.NewTLS(&tls.Config{\n       InsecureSkipVerify: true,\n   })\n   conn, err := grpc.Dial(address, grpc.WithTransportCredentials(creds))\n   if err != nil {\n       log.Fatalf(\"did not connect: %v\", err)\n   }\n   defer conn.Close()\n   c := pb.NewGreeterClient(conn)\n\n   // Contact the server and print out its response.\n   name := \"world\"\n   if len(os.Args) > 3 {\n       name = os.Args[3]\n   }\n   r, err := c.SayHello(context.Background(), &pb.HelloRequest{Name: name})\n   if err != nil {\n       log.Fatalf(\"could not greet: %v\", err)\n   }\n   log.Printf(\"Greeting: %s\", r.Message)\n}\nEOF\n

  3. Run the Test Client: Execute the test client, making sure to replace <SERVICE DNS> with the VPC Lattice service DNS and <PORT> with the port your Lattice listener uses (in this example, we use 443).

    go run test.go <SERVICE DNS> <PORT>\n

Expected Output

If everything is set up correctly, you should see the following output:

Greeting: Hello world\n

This confirms that our gRPC request was successfully routed through VPC Lattice and processed by our greeter-grpc-server.

"},{"location":"guides/https/","title":"HTTPS","text":""},{"location":"guides/https/#configure-https-connections","title":"Configure HTTPs connections","text":"

The Getting Started guide uses HTTP communications by default. Using the examples here, you can change that to HTTPS. If you choose, you can further customize your HTTPS connections by adding custom domain names and certificates, as described below.

NOTE: You can get the yaml files used on this page by cloning the AWS Gateway API Controller for VPC Lattice site. The files are in the files/examples/ directory.

"},{"location":"guides/https/#securing-traffic-using-https","title":"Securing Traffic using HTTPs","text":"

By adding an https listener to the amazon-vpc-lattice gateway, you can tell the gateway to use HTTPS communications. The following modifications to the files/examples/my-hotel-gateway.yaml file add HTTPS communications:

my-hotel-gateway.yaml
apiVersion: gateway.networking.k8s.io/v1\nkind: Gateway\nmetadata:\n  name: my-hotel\nspec:\n  gatewayClassName: amazon-vpc-lattice\n  listeners:\n  - name: http\n    protocol: HTTP\n    port: 80\n  - name: https         # Specify https listener\n    protocol: HTTPS     # Specify HTTPS protocol\n    port: 443           # Specify communication on port 443\n...\n

Next, the following modifications to the files/examples/rate-route-path.yaml file tell the rates HTTPRoute to use HTTPS for communications:

rate-route-path.yaml
apiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n  name: rates\nspec:\n  parentRefs:\n  - name: my-hotel\n    sectionName: http \n  - name: my-hotel      # Specify the parentRefs name\n    sectionName: https  # Specify all traffic MUST use HTTPs\n  rules:\n...\n

In this case, the VPC Lattice service automatically generates a managed ACM certificate and uses it for encrypting client-to-service traffic.
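
You can verify the managed certificate end to end by curling the HTTPS endpoint; a quick sketch, assuming the rates service and the inventory-ver1 client deployment from the Getting Started guide:

ratesFQDN=$(aws vpc-lattice list-services --query \"items[?name==\"\\'rates-default\\'\"].dnsEntry\" | jq -r '.[].domainName')\nkubectl exec deploy/inventory-ver1 -- curl -s https://$ratesFQDN/parking\n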

"},{"location":"guides/https/#bring-your-own-certificate-byoc","title":"Bring Your Own Certificate (BYOC)","text":"

If you want to use a custom domain name along with its own certificate, you can bring your own ACM certificate (BYOC), as shown below.

The following shows modifications to files/examples/my-hotel-gateway.yaml to add a custom certificate:

my-hotel-gateway.yaml

apiVersion: gateway.networking.k8s.io/v1\nkind: Gateway\nmetadata:\n  name: my-hotel\n  annotations:\n    application-networking.k8s.aws/lattice-vpc-association: \"true\"\nspec:\n  gatewayClassName: amazon-vpc-lattice\n  listeners:\n  - name: http\n    protocol: HTTP\n    port: 80\n  - name: https\n    protocol: HTTPS      # This is required\n    port: 443\n    tls:\n      mode: Terminate    # This is required\n      certificateRefs:   # This is required per API spec, but currently not used by the controller\n      - name: unused\n      options:           # Instead, we specify ACM certificate ARN under this section\n        application-networking.k8s.aws/certificate-arn: arn:aws:acm:us-west-2:<account>:certificate/<certificate-id>\n
Note that only Terminate mode is supported (Passthrough is not supported).

Next, associate the HTTPRoute to the listener configuration you just configured:

rate-route-path.yaml
apiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n  name: rates\nspec:\n  hostnames:\n    - review.my-test.com               # MUST match the DNS in the certificate\n  parentRefs:\n  - name: my-hotel\n    sectionName: http \n  - name: my-hotel                     # Use the listener defined above as parentRef\n    sectionName: https\n...\n
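
Before DNS for the custom domain is set up, you can still verify the listener by pinning the connection to the Lattice-assigned domain; a sketch, assuming review.my-test.com is the configured hostname and the rates HTTPRoute from the Getting Started guide:

latticeFQDN=$(kubectl get httproute rates -o json | jq -r '.metadata.annotations.\"application-networking.k8s.aws/lattice-assigned-domain-name\"')\nkubectl exec deploy/inventory-ver1 -- curl -sv https://review.my-test.com/parking --connect-to review.my-test.com:443:$latticeFQDN:443\n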
"},{"location":"guides/https/#enabling-tls-connection-on-the-backend","title":"Enabling TLS connection on the backend","text":"

Currently, TLS Passthrough mode is not supported for HTTPS listeners (see the TLS Passthrough Support guide for TLSRoute-based passthrough), but TLS re-encryption is available to support backends that only accept TLS connections. To handle this use case, you need to configure your service to receive HTTPS traffic instead:

target-group.yaml
apiVersion: application-networking.k8s.aws/v1alpha1\nkind: TargetGroupPolicy\nmetadata:\n    name: test-policy\nspec:\n    targetRef:\n        group: \"\"\n        kind: Service\n        name: my-parking-service # Put service name here\n    protocol: HTTPS\n    protocolVersion: HTTP1\n

This will create a VPC Lattice target group with the HTTPS protocol option, which can receive TLS traffic. Note that certificate validation is not supported.
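
To confirm the result, you can inspect the generated target group's protocol with the AWS CLI; a sketch, assuming you pick the controller-created (auto-generated) target group name out of the list output:

aws vpc-lattice list-target-groups --query \"items[].{name:name,id:id}\"\naws vpc-lattice get-target-group --target-group-identifier <target_group_id> --query \"config.protocol\"\n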

For more details, please refer to TargetGroupPolicy API reference.

"},{"location":"guides/pod-readiness-gates/","title":"Pod readiness gate","text":"

AWS Gateway API controller supports \u00bbPod readiness gates\u00ab to indicate that a pod is registered to VPC Lattice and healthy to receive traffic. The controller automatically injects the necessary readiness gate configuration into the pod spec via a mutating webhook during pod creation.

For the readiness gate configuration to be injected into the pod spec, you need to apply the label application-networking.k8s.aws/pod-readiness-gate-inject: enabled to the pod's namespace.

The pod readiness gate is needed under certain circumstances to achieve full zero downtime rolling deployments. Consider the following example: during a rolling update, Kubernetes can mark a new pod Ready as soon as its local readiness checks pass, even though the corresponding VPC Lattice target may not yet be registered and healthy; if old pods are terminated at that point, requests routed through VPC Lattice can be dropped.

In order to avoid this situation, the AWS Gateway API controller can set the readiness condition on the pods that constitute your ingress or service backend. The condition status on a pod will be set to True only when the corresponding target in the VPC Lattice target group shows a health state of \u00bbHealthy\u00ab. This prevents the rolling update of a deployment from terminating old pods until the newly created pods are \u00bbHealthy\u00ab in the VPC Lattice target group and ready to take traffic.

"},{"location":"guides/pod-readiness-gates/#setup","title":"Setup","text":"

Pod readiness gates rely on \u00bbadmission webhooks\u00ab, where the Kubernetes API server makes calls to the AWS Gateway API controller as part of pod creation. This call is made using TLS, so the controller must present a TLS certificate. This certificate is stored as a standard Kubernetes secret. If you are using Helm, the certificate will automatically be configured as part of the Helm install.

If you are manually deploying the controller using the deploy.yaml file, you will need to either patch the deploy.yaml file (see scripts/patch-deploy-yaml.sh) or generate the secret following installation (see scripts/gen-webhook-secret.sh) and manually enable the webhook via the WEBHOOK_ENABLED environment variable.

Note that, without the secret in place, the controller cannot start successfully, and you will see an error message like the following:

{\"level\":\"error\",\"ts\":\"...\",\"logger\":\"setup\",\"caller\":\"workspace/main.go:240\",\"msg\":\"tls: failed to find any PEM data in certificate inputproblem running manager\"}\n
For this reason, the webhook is DISABLED by default in the controller for the non-Helm install. You can enable the webhook by setting the WEBHOOK_ENABLED environment variable to \"true\" in the deploy.yaml file.
apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: gateway-api-controller\n  namespace: aws-application-networking-system\n  labels:\n    control-plane: gateway-api-controller\nspec:\n  ...\n  template:\n    metadata:\n      annotations:\n        kubectl.kubernetes.io/default-container: manager\n      labels:\n        control-plane: gateway-api-controller\n    spec:\n      securityContext:\n        runAsNonRoot: true\n      containers:\n      - command:\n        ...\n        name: manager\n        ...\n        env:\n          - name: WEBHOOK_ENABLED\n            value: \"true\"   # <-- value of \"true\" enables the webhook in the controller\n
If you run scripts/patch-deploy-yaml.sh prior to installing deploy.yaml, the script will create the necessary TLS certificates and configuration and will enable the webhook in the controller. Note that, even with the webhook enabled, the webhook will only run for namespaces labeled with application-networking.k8s.aws/pod-readiness-gate-inject: enabled.

"},{"location":"guides/pod-readiness-gates/#enabling-the-readiness-gate","title":"Enabling the readiness gate","text":"

After a Helm install or manually configuring and enabling the webhook, you are ready to begin using pod readiness gates. Apply a label to each namespace in which you would like to use this feature. You can create and label a namespace as follows:

$ kubectl create namespace example-ns\nnamespace/example-ns created\n\n$ kubectl label namespace example-ns application-networking.k8s.aws/pod-readiness-gate-inject=enabled\nnamespace/example-ns labeled\n\n$ kubectl describe namespace example-ns\nName:         example-ns\nLabels:       application-networking.k8s.aws/pod-readiness-gate-inject=enabled\n              kubernetes.io/metadata.name=example-ns\nAnnotations:  <none>\nStatus:       Active\n

Once labelled, the controller will add the pod readiness gates to all subsequently created pods in the namespace.

The readiness gates have the condition type application-networking.k8s.aws/pod-readiness-gate and the controller injects the config to the pod spec only during pod creation.
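
You can confirm the injection on a pod created after the namespace was labeled; a quick check (the pod name is illustrative):

kubectl get pod <pod-name> -n example-ns -o jsonpath='{.spec.readinessGates}'\n# Expected output (roughly): [{\"conditionType\":\"application-networking.k8s.aws/pod-readiness-gate\"}]\n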

"},{"location":"guides/pod-readiness-gates/#object-selector","title":"Object Selector","text":"

The default webhook configuration matches all pods in the namespaces containing the label application-networking.k8s.aws/pod-readiness-gate-inject=enabled. You can modify the webhook configuration further to select specific pods from the labeled namespace by specifying the objectSelector. For example, in order to select ONLY pods with application-networking.k8s.aws/pod-readiness-gate-inject: enabled label instead of all pods in the labeled namespace, you can add the following objectSelector to the webhook:

  objectSelector:\n    matchLabels:\n      application-networking.k8s.aws/pod-readiness-gate-inject: enabled\n
To edit the webhook configuration:
$ kubectl edit mutatingwebhookconfigurations aws-appnet-gwc-mutating-webhook\n  ...\n  name: mpod.gwc.k8s.aws\n  namespaceSelector:\n    matchExpressions:\n    - key: application-networking.k8s.aws/pod-readiness-gate-inject\n      operator: In\n      values:\n      - enabled\n  objectSelector:\n    matchLabels:\n      application-networking.k8s.aws/pod-readiness-gate-inject: enabled\n  ...\n
When you specify multiple selectors, pods matching all the conditions will get mutated.

"},{"location":"guides/pod-readiness-gates/#checking-the-pod-condition-status","title":"Checking the pod condition status","text":"

The status of the readiness gates can be verified with kubectl get pod -o wide:

NAME                          READY   STATUS    RESTARTS   AGE   IP         NODE                       READINESS GATES\nnginx-test-5744b9ff84-7ftl9   1/1     Running   0          81s   10.1.2.3   ip-10-1-2-3.ec2.internal   0/1\n

When the target is registered and healthy in the VPC Lattice target group, the output will look like:

NAME                          READY   STATUS    RESTARTS   AGE   IP         NODE                       READINESS GATES\nnginx-test-5744b9ff84-7ftl9   1/1     Running   0          81s   10.1.2.3   ip-10-1-2-3.ec2.internal   1/1\n

If a readiness gate doesn't get ready, you can check the reason via:

$ kubectl get pod nginx-test-545d8f4d89-l7rcl -o yaml | grep -B7 'type: application-networking.k8s.aws/pod-readiness-gate'\nstatus:\n  conditions:\n  - lastProbeTime: null\n    lastTransitionTime: null\n    reason: HEALTHY\n    status: \"True\"\n    type: application-networking.k8s.aws/pod-readiness-gate\n
"},{"location":"guides/ram-sharing/","title":"Share Kubernetes Gateway (VPC Lattice Service Network) between different AWS accounts","text":"

AWS Resource Access Manager (AWS RAM) helps you share your resources across AWS accounts, within your AWS Organization or Organizational Units (OUs). RAM supports two types of VPC Lattice resource sharing: VPC Lattice services and service networks.

Let's build an example where Account A (sharer account) shares its service network with Account B (sharee account), and Account B can access all Kubernetes services (VPC Lattice Target Groups) and Kubernetes HTTPRoutes (VPC Lattice services) within this sharer account's service network.

Create VPC Lattice Resources

In Account A, set up a cluster with the Controller and an example application installed. You can follow the Getting Started guide up to the \"Single Cluster\" section.

Share the VPC Lattice Service Network

Now that we have a VPC Lattice service network and service in Account A, share this service network with Account B.

  1. Retrieve the my-hotel service network Identifier:

    aws vpc-lattice list-service-networks --query \"items[?name==\"\\'my-hotel\\'\"].id\" | jq -r '.[]'\n

  2. Share the my-hotel service network, using the identifier retrieved in the previous step (a CLI alternative is sketched after this list).

    1. Open the AWS RAM console in Account A and create a resource share.
    2. Select VPC Lattice service network resource sharing type.
    3. Select the my-hotel service network identifier retrieved in the previous step.
    4. Associate AWS Managed Permissions.
    5. Set Account B as principal.
    6. Review and create the resource share.
  3. Open Account B's AWS RAM console and accept Account A's service network sharing invitation in the \"Shared with me\" section.

  4. Switch back to Account A, retrieve the service network ID.

    SERVICE_NETWORK_ID=$(aws vpc-lattice list-service-networks --query \"items[?name==\"\\'my-hotel\\'\"].id\" | jq -r '.[]')\necho $SERVICE_NETWORK_ID\n
  5. Switch to Account B and verify that the my-hotel service network resource is available in Account B (referring to the SERVICE_NETWORK_ID retrieved in the previous step).

  6. Now choose an Amazon VPC in Account B to attach to the my-hotel service network.

    VPC_ID=<your_vpc_id>\naws vpc-lattice create-service-network-vpc-association --service-network-identifier $SERVICE_NETWORK_ID --vpc-identifier $VPC_ID\n

    Warning

    VPC Lattice is a regional service, so the VPC must be in the same AWS Region as the service network you created in Account A.
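
If you prefer the CLI to the console for the sharing steps above, here is a minimal sketch; the share name and <account_B_id> placeholder are illustrative. Run the first two commands in Account A and the last one in Account B:

SN_ARN=$(aws vpc-lattice list-service-networks --query \"items[?name==\"\\'my-hotel\\'\"].arn\" | jq -r '.[]')\naws ram create-resource-share --name my-hotel-share --resource-arns $SN_ARN --principals <account_B_id>\n# In Account B, after accepting the invitation, verify the shared network is visible:\naws vpc-lattice list-service-networks --query \"items[?name==\"\\'my-hotel\\'\"].id\"\n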

Test cross-account connectivity

You can verify that the parking and review microservices - in Account A - can be consumed from resources in the associated VPC in Account B.

  1. To simplify, let's create and connect to a Cloud9 environment in the VPC you previously attached to the my-hotel service network.

  2. In Account A, retrieve the VPC Lattice service URLs.

    ratesFQDN=$(aws vpc-lattice list-services --query \"items[?name==\"\\'rates-default\\'\"].dnsEntry\" | jq -r '.[].domainName')\ninventoryFQDN=$(aws vpc-lattice list-services --query \"items[?name==\"\\'inventory-default\\'\"].dnsEntry\" | jq -r '.[].domainName')\necho \"$ratesFQDN \\n$inventoryFQDN\"\n

  3. In the Cloud9 instance in Account B, install curl and query the parking and review microservices:

    sudo apt-get install curl\ncurl $ratesFQDN/parking $ratesFQDN/review\n

Cleanup

To avoid additional charges, remove the demo infrastructure from your AWS Accounts.

  1. Delete the service network association you created in Account B. In Account A:

    VPC_ID=<accountB_vpc_id>\nSERVICE_NETWORK_ASSOCIATION_IDENTIFIER=$(aws vpc-lattice list-service-network-vpc-associations --vpc-id $VPC_ID --query \"items[?serviceNetworkName==\"\\'my-hotel\\'\"].id\" | jq -r '.[]')\naws vpc-lattice delete-service-network-vpc-association  --service-network-vpc-association-identifier $SERVICE_NETWORK_ASSOCIATION_IDENTIFIER\n

    Ensure the service network association is deleted:

    aws vpc-lattice list-service-network-vpc-associations --vpc-id $VPC_ID\n

  2. Delete the service network RAM resource share in the AWS RAM console.

  3. Follow the cleanup section of the Getting Started guide to delete the cluster and service network resources in Account A.

  4. Delete the Cloud9 Environment in Account B.

"},{"location":"guides/tls-passthrough/","title":"TLS Passthrough Support","text":"

The Kubernetes Gateway API lays out general guidelines on how to configure TLS passthrough. Here are examples of how to use it with the AWS Gateway API Controller and VPC Lattice.

"},{"location":"guides/tls-passthrough/#install-gateway-api-tlsroute-crd","title":"Install Gateway API TLSRoute CRD","text":"

The TLSRoute CRD is already included in the Helm chart and deployment.yaml; if you use either of these two methods to install the controller, no extra steps are needed. If you want to install the TLSRoute CRD manually:

# Install the CRD\nkubectl apply -f config/crds/bases/gateway.networking.k8s.io_tlsroutes.yaml\n# Verify the TLSRoute CRD\nkubectl get crd tlsroutes.gateway.networking.k8s.io\nNAME                                  CREATED AT\ntlsroutes.gateway.networking.k8s.io   2024-03-07T23:16:22Z\n

"},{"location":"guides/tls-passthrough/#setup-tls-passthrough-connectivity-in-a-single-cluster","title":"Setup TLS Passthrough Connectivity in a single cluster","text":""},{"location":"guides/tls-passthrough/#1-configure-tls-passthrough-listener-on-gateway","title":"1. Configure TLS Passthrough Listener on Gateway","text":"
kubectl apply -f files/examples/my-gateway-tls-passthrough.yaml\n
# tls listener config snips:\napiVersion: gateway.networking.k8s.io/v1\nkind: Gateway\nmetadata:\n  name: my-hotel-tls-passthrough\nspec:\n  gatewayClassName: amazon-vpc-lattice\n  listeners:\n  ...\n  - name: tls\n    protocol: TLS \n    port: 443\n    tls:\n      mode: Passthrough \n  ...\n
"},{"location":"guides/tls-passthrough/#2-configure-tlsroute","title":"2. Configure TLSRoute","text":"
# In the example below, we use the \"parking\" service as the client pod to test TLS passthrough traffic.\nkubectl apply -f files/examples/parking.yaml\n\n# Configure the nginx backend service (this nginx image includes a self-signed certificate)\nkubectl apply -f files/examples/nginx-server-tls-passthrough.yaml\n\n# Configure the nginx TLS route\nkubectl apply -f files/examples/tlsroute-nginx.yaml\n
"},{"location":"guides/tls-passthrough/#3-verify-the-controller-has-reconciled-nginx-tls-route","title":"3. Verify the controller has reconciled nginx-tls route","text":"

Make sure the TLSRoute has the application-networking.k8s.aws/lattice-assigned-domain-name annotation and status Accepted: True

kubectl get tlsroute nginx-tls -o yaml\napiVersion: gateway.networking.k8s.io/v1alpha2\nkind: TLSRoute\nmetadata:\n  annotations:\n    application-networking.k8s.aws/lattice-assigned-domain-name: nginx-tls-default-0af995120af2711bc.7d67968.vpc-lattice-svcs.us-west-2.on.aws\n    ...\n  name: nginx-tls\n  namespace: default\n ...\n\nstatus:\n  parents:\n  - conditions:\n    - lastTransitionTime: .....\n      message: \"\"\n      observedGeneration: 1\n      reason: Accepted\n      status: \"True\"\n      type: Accepted\n    - lastTransitionTime: .....\n      message: \"\"\n      observedGeneration: 1\n      reason: ResolvedRefs\n      status: \"True\"\n      type: ResolvedRefs\n    controllerName: application-networking.k8s.aws/gateway-api-controller\n

"},{"location":"guides/tls-passthrough/#4-verify-tls-passthrough-traffic","title":"4. Verify TLS Passthrough Traffic","text":"
kubectl get deployment nginx-tls\nNAME        READY   UP-TO-DATE   AVAILABLE   AGE\nnginx-tls   2/2     2            2           1d\n\n# Use the specified TLSRoute hostname to send traffic to the backend nginx service\nkubectl exec deployments/parking -- curl -kv https://nginx-test.my-test.com --resolve nginx-test.my-test.com:443:169.254.171.0\n\n* Trying 169.254.171.0:443...\n* Connected to nginx-test.my-test.com (169.254.171.0) port 443 (#0)\n....\n* TLSv1.2 (OUT), TLS header, Certificate Status (22):\n* TLSv1.2 (OUT), TLS handshake, Client hello (1):\n* TLSv1.2 (IN), TLS handshake, Server hello (2):\n* TLSv1.2 (IN), TLS handshake, Certificate (11):\n* TLSv1.2 (IN), TLS handshake, Server key exchange (12):\n* TLSv1.2 (IN), TLS handshake, Server finished (14):\n* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):\n* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):\n* TLSv1.2 (OUT), TLS handshake, Finished (20):\n* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):\n* TLSv1.2 (IN), TLS handshake, Finished (20):   <---------- TLS handshake from the client pod to the backend `nginx-tls` pod succeeds; no TLS termination in the middle\n* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384\n* ALPN, server accepted to use h2\n....\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n....\n
"},{"location":"guides/tls-passthrough/#setup-tls-passthrough-connectivity-spanning-multiple-clusters","title":"Setup TLS Passthrough Connectivity spanning multiple clusters","text":""},{"location":"guides/tls-passthrough/#1-in-this-example-we-still-use-the-parking-kubernetes-service-as-the-client-pod-to-test-the-cross-cluster-tls-passthrough-traffic","title":"1. In this example we still use the \"parking\" Kubernetes service as the client pod to test the cross cluster TLS passthrough traffic.","text":"
kubectl apply -f files/examples/parking.yaml\n
"},{"location":"guides/tls-passthrough/#2-in-cluster-1-create-tls-rate1-kubernetes-service","title":"2. In cluster-1, create tls-rate1 Kubernetes Service:","text":"
kubectl apply -f files/examples/tls-rate1.yaml\n
"},{"location":"guides/tls-passthrough/#3-configure-serviceexport-with-targetgrouppolicy-protocoltcp-in-cluster-2","title":"3. Configure ServiceExport with TargetGroupPolicy protocol:TCP in cluster-2","text":"
# Create tls-rate2 Kubernetes Service in cluster-2\nkubectl apply -f files/examples/tls-rate2.yaml\n# Create serviceexport in cluster-2\nkubectl apply -f files/examples/tls-rate2-export.yaml\n# Create targetgroup policy to configure TCP protocol for tls-rate2 in cluster-2\nkubectl apply -f files/examples/tls-rate2-targetgrouppolicy.yaml\n
# Snips of serviceexport config\napiVersion: application-networking.k8s.aws/v1alpha1\nkind: ServiceExport\nmetadata:\n  name: tls-rate2\n  annotations:\n    application-networking.k8s.aws/federation: \"amazon-vpc-lattice\"\n# Snips of targetgroup policy config\napiVersion: application-networking.k8s.aws/v1alpha1\nkind: TargetGroupPolicy\nmetadata:\n    name: tls-rate2\nspec:\n    targetRef:\n        group: \"application-networking.k8s.aws\"\n        kind: ServiceExport\n        name: tls-rate2\n    protocol: TCP\n
"},{"location":"guides/tls-passthrough/#4-configure-serviceimport-in-cluster1","title":"4. Configure ServiceImport in cluster1","text":"
kubectl apply -f files/examples/tls-rate2-import.yaml\n
"},{"location":"guides/tls-passthrough/#5-configure-tlsroute-for-bluegreen-deployment","title":"5. Configure TLSRoute for blue/green deployment","text":"
kubectl apply -f files/examples/rate-tlsroute-bluegreen.yaml\n\n# snips of TLSRoute span multiple Kubernetes Clusters\napiVersion: gateway.networking.k8s.io/v1alpha2\nkind: TLSRoute\nmetadata:\n  name: tls-rate\nspec:\n  hostnames:\n  - tls-rate.my-test.com\n  parentRefs:\n  - name: my-hotel-tls\n    sectionName: tls\n  rules:\n  - backendRefs:\n    - name: tls-rate1 <---------- to Kubernetes Cluster-1\n      kind: Service\n      port: 443\n      weight: 10\n    - name: tls-rate2 <---------- to Kubernetes Cluster-2\n      kind: ServiceImport\n      port: 443\n      weight: 90  \n
"},{"location":"guides/tls-passthrough/#6-verify-cross-cluster-tls-passthrough-traffic","title":"6. Verify cross-cluster TLS passthrough traffic","text":"

If you curl tls-rate.my-test.com from the client pod multiple times, you should see the traffic split between the tls-rate1 service (10%) and the tls-rate2 service (90%):

kubectl exec deploy/parking -- sh -c 'for ((i=1; i<=30; i++)); do curl -k https://tls-rate.my-test.com --resolve tls-rate.my-test.com:443:169.254.171.0 2>/dev/null; done'\n\nRequesting to TLS Pod(tls-rate2-7f8b9cc97b-fgqk6): tls-rate2 handler pod <---->  k8s service in cluster-2\nRequesting to TLS Pod(tls-rate2-7f8b9cc97b-fgqk6): tls-rate2 handler pod\nRequesting to TLS Pod(tls-rate2-7f8b9cc97b-fgqk6): tls-rate2 handler pod\nRequesting to TLS Pod(tls-rate2-7f8b9cc97b-fgqk6): tls-rate2 handler pod\nRequesting to TLS Pod(tls-rate1-98cc7fd87a-642zw): tls-rate1 handler pod <----> k8s service in cluster-1\nRequesting to TLS Pod(tls-rate2-7f8b9cc97b-fgqk6): tls-rate2 handler pod\nRequesting to TLS Pod(tls-rate2-7f8b9cc97b-fgqk6): tls-rate2 handler pod\nRequesting to TLS Pod(tls-rate2-7f8b9cc97b-fgqk6): tls-rate2 handler pod\nRequesting to TLS Pod(tls-rate1-98cc7fd87a-642zw): tls-rate1 handler pod\n

"},{"location":"guides/upgrading-v1-0-x-to-v1-1-y/","title":"Update the AWS Gateway API Controller from v1.0.x to v1.1.y","text":"

Release v1.1.0 of the AWS Gateway API Controller is built against v1.2 of the Gateway API spec, but the controller is also compatible with the v1.1 Gateway API. It is not compatible with the v1.0 Gateway API.

Previous v1.0.x builds of the controller were built against v1.0 of the Gateway API spec. This guide outlines the controller upgrade process from v1.0.x to v1.1.y.

"},{"location":"guides/upgrading-v1-0-x-to-v1-1-y/#basic-upgrade-process","title":"Basic Upgrade Process","text":"
  1. Back up configuration, in particular GRPCRoute objects
  2. Disable v1.0.x controller (e.g. scale to zero; see the command sketch after this list)
  3. Update Gateway API CRDs to v1.1.0
  4. Deploy and launch v1.1.y controller version
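
A minimal command sketch for steps 2-4, assuming the controller Deployment and namespace names used elsewhere in this documentation and the upstream Gateway API standard channel:

# Step 2: scale the v1.0.x controller to zero\nkubectl scale deployment gateway-api-controller -n aws-application-networking-system --replicas=0\n# Step 3: update the Gateway API CRDs to v1.1.0\nkubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.1.0/standard-install.yaml\n# Step 4: deploy the v1.1.y controller (Helm upgrade or deploy.yaml, per the installation guide)\nkubectl apply -f https://raw.githubusercontent.com/aws/aws-application-networking-k8s/main/files/controller-installation/deploy-v1.1.0.yaml\n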

With the basic upgrade process, previously created GRPCRoutes on v1alpha2 will automatically update to v1 once they are reconciled by the controller. Alternatively, you can manually update your GRPCRoute versions (for example, export to YAML, update the version, and apply the updates), as sketched below. Creation of new GRPCRoute objects using v1alpha2 will be rejected.
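
For the manual path, a sketch of exporting a route, bumping the version, and re-applying it (the route name is illustrative):

kubectl get grpcroute greeter -o yaml > greeter-route.yaml\n# Change apiVersion from gateway.networking.k8s.io/v1alpha2 to gateway.networking.k8s.io/v1\nsed -i 's|gateway.networking.k8s.io/v1alpha2|gateway.networking.k8s.io/v1|' greeter-route.yaml\nkubectl apply -f greeter-route.yaml\n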

"},{"location":"guides/upgrading-v1-0-x-to-v1-1-y/#upgrading-to-gateway-api-v12","title":"Upgrading to Gateway API v1.2","text":"

Moving to Gateway API v1.2 can require an additional step, as v1alpha2 GRPCRoute objects have been removed. If your GRPCRoute objects are not already on v1, you will need to follow the steps outlined in the v1.2.0 release notes.

  1. Back up configuration, in particular GRPCRoute objects
  2. Disable v1.0.x controller (e.g. scale to zero)
  3. Update Gateway API CRDs to v1.1.0
  4. Deploy and launch v1.1.y controller version
  5. Take upgrade steps outlined in v1.2.0 Gateway API release notes
  6. Update Gateway API CRDs to v1.2.0
"}]} \ No newline at end of file