
Full Application Load Balancer support #598

Open
sesh-kebab opened this issue Apr 10, 2018 · 1 comment

sesh-kebab (Contributor) commented Apr 10, 2018

The initial ALB support is an opinionated and somewhat rudimentary implementation. It supports services that need to expose a single port behind a load balancer. This issue is a proposal for how to extend the existing functionality to surface additional ALB features.

New Features:

  • Allow a single ALB to be used by multiple services via Target Groups, by allowing a Service to specify a Target Group (or an abstraction of a TG).
  • Path-based routing via Target Group rules.

Service to Target Group

ECS services can only map to a single load balancer or target group. An application load balancer allows specifying a target group for each listener. This allows multiple services to sit behind a single ALB.

Services that are behind a classic load balancer specify the load balancer id during service creation.

Classic LB's Listener (80:80/tcp) -> Service

Services that are behind an application load balancer specify a target group ARN during service creation. The ALB then needs a listener that maps a port to an existing target group.

Application LB's Listener (80:target-group-1/http) -> Target Group's Target (80:PortOfService) -> Service 
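As a rough illustration of that chain in raw Terraform (these are the underlying AWS resources rather than layer0 entities; the resource names and the referenced aws_ecs_cluster.main, aws_ecs_task_definition.service_a, and var.vpc_id are assumptions made for the sketch):

resource "aws_lb_target_group" "service_a" {
  name     = "service-a"
  port     = 80
  protocol = "HTTP"
  vpc_id   = "${var.vpc_id}"
}

resource "aws_ecs_service" "service_a" {
  name            = "service-a"
  cluster         = "${aws_ecs_cluster.main.id}"
  task_definition = "${aws_ecs_task_definition.service_a.arn}"
  desired_count   = 1

  # For an ALB the service registers against a target group ARN;
  # a classic ELB would use elb_name in this block instead.
  load_balancer {
    target_group_arn = "${aws_lb_target_group.service_a.arn}"
    container_name   = "service-a"
    container_port   = 80
  }
}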

Enable re-use of a single ALB

The following describes the current ALB behavior and proposes how to extend it.

Current implementation
Creating an application type layer0 load balancer automatically creates:

  • one target group
  • a listener on the default port that forwards traffic to the target group

New implementation
For both the layer0 CLI and Terraform provider, allow specifying a target group for services.

CLI example:

l0 loadbalancer addport loadbalancer_name 81:81/http
l0 service create --target-port 80 --target-path "81:/service-a" environment service_name deploy
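A hypothetical Terraform-provider equivalent of the CLI example above; the target_port and target_path attributes do not exist today, the names simply mirror the proposed CLI flags:

resource "layer0_service" "service_a" {
  # existing layer0_service arguments stay as they are

  # proposed additions, mirroring --target-port and --target-path
  target_port = 80
  target_path = "81:/service-a"
}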

Port mapping
Currently, all listeners created on an ALB share a single default target group. This behavior will be changed so that each listener on an application-type layer0 load balancer gets its own target group.

81:81/http is currently used to map the load balancer port to the instance port for a classic LB. With ALBs, this will instead map the load balancer port to a target group: <load_balancer_port>:<target_group_port>/<protocol>.
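In raw AWS terms, and assuming the standard aws_lb_* Terraform resources (aws_lb.main and var.vpc_id are placeholders), the 81:81/http mapping on an ALB would boil down to a listener on port 81 forwarding to its own target group rather than to a shared default one:

resource "aws_lb_target_group" "port_81" {
  port     = 81
  protocol = "HTTP"
  vpc_id   = "${var.vpc_id}"
}

resource "aws_lb_listener" "port_81" {
  load_balancer_arn = "${aws_lb.main.arn}"
  port              = 81
  protocol          = "HTTP"

  # each listener forwards to its own target group instead of the
  # single default target group shared by all listeners
  default_action {
    type             = "forward"
    target_group_arn = "${aws_lb_target_group.port_81.arn}"
  }
}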

--target-port
This new flag abstracts target groups. It checks whether the load balancer has a target group for the specified listener port; if not, an error is returned. If a listener-to-target-group mapping exists, the service is linked to that target group instead of the ALB's default target group.

--target-path
This new flag will allow path-based routing by adding a rule to the target group specified via the --target-port flag.

81:/service-a, for example, will create a rule on the listener specified by --target-port 80 that maps all requests arriving on path /service-a to port 81.
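Under the covers, the rule --target-path describes would map to something like the standard aws_lb_listener_rule resource; a sketch, reusing the listener and target group names from the earlier sketch (the priority and path pattern are illustrative):

resource "aws_lb_listener_rule" "service_a" {
  listener_arn = "${aws_lb_listener.port_81.arn}"
  priority     = 10

  action {
    type             = "forward"
    target_group_arn = "${aws_lb_target_group.port_81.arn}"
  }

  # requests whose path matches /service-a are forwarded to the
  # target group behind port 81
  condition {
    field  = "path-pattern"
    values = ["/service-a*"]
  }
}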

Health checks

Since ALB health checks are defined per target group, I think cascading any health check change to all of the load balancer's target groups, rather than letting the user configure each target group's health check individually, is a reasonable compromise for now.
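For reference, the health check lives on the target group itself in AWS, which is why a single layer0-level change has to be fanned out; a minimal sketch using the standard aws_lb_target_group health_check block (values are illustrative):

resource "aws_lb_target_group" "port_81" {
  port     = 81
  protocol = "HTTP"
  vpc_id   = "${var.vpc_id}"

  # each target group carries its own health check, so a layer0-level
  # health check change would be applied to every target group
  health_check {
    path     = "/"
    interval = 30
    timeout  = 5
  }
}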

@sesh-kebab sesh-kebab changed the title Full Application Load Balancer support Multiple Services per Application Load Balancer support Apr 10, 2018
@sesh-kebab sesh-kebab changed the title Multiple Services per Application Load Balancer support Full Application Load Balancer support Apr 10, 2018
zpatrick (Contributor) commented Apr 11, 2018

I realize what I'm about to propose is a little radical, but I proposed this way back when we first created a ticket for using ALBs and it seems to have gotten lost over time:

The main purpose of Layer0, when we created it, was to provide abstractions over AWS that make developers' lives easier. Because we failed to invest adequate time in the initial design phase of Layer0, and because we failed to push back against requests that broke abstractions, we have leaky abstractions all over the place with deploys, services, and load balancers - and we are pretty much stuck with them.

I bring this up because I don't think we need to go that route with ALBs. We have an opportunity to design a new entity from scratch, and I feel strongly that our users would prefer a well-designed abstraction over a thin wrapper around the AWS API. I think it would be great if we created a new entity that looked something like this:

resource "layer0_application" "foo" {
  name           = "foo"
  environment_id = "${layer0_environment.foo.id}"
  deploy_id      = "${layer0_deploy.foo.id}"
  scale          = 1

  port {
    host_port           = 443
    container_port      = 80
    host_protocol       = "https"
    container_protocol  = "http"
    ssl_certificate_arn = "${...}"  # abstraction leak, but is preferable to having layer0 certificate entities IMO
  }

  port {
    host_port           = 22
    container_port      = 22
    host_protocol       = "tcp"
    container_protocol  = "tcp"
  }

  health_check {
    path     = "http:80"
    interval = 5
    timeout  = 5
  }
}

Under the covers, we would use ALB and Fargate as the drivers.
