Learnings about running Lambda functions locally, written in Go.

Local AWS Lambda environments (with Go)

Note

This document is a DRAFT, and is IN-PROGRESS. It will be updated over time as we learn more.

Overview

AWS has open-sourced their AWS Lambda runtimes as Docker images. However, they are currently in a broken state. So we've had to create a custom build of the AWS Lambda runtime environments to get them to work as people outside of Amazon expect.

Since we work with Go, we primarily care about the provided.al2023 runtime. So this is what we're patching and rebuilding. However, the runtime alone is not enough to make local Lambda functions work the way you expect.

There is also a way to do this with the AWS SAM CLI, however (a) I'm not a fan of AWS SAM, and (b) this can be done with normal Docker Desktop, which will help you understand the pieces better and not be locked into a single vendor's local tooling.

Intended audience

The intended audience of this documentation is a person who:

  • Is actively building things for AWS Lambda (or knows how to).
  • Wants to understand how the various pieces of the puzzle work.
  • Is working with Go.
    • …or can transpose these instructions to another language.
  • Is generally familiar with GitHub-isms.
    • Personal access tokens
    • GitHub releases can have downloadable assets
  • Is generally familiar with modern Docker-isms.
    • Multi-stage builds
    • Multi-platform images
    • Secure handling of secrets

Dockerfile

While you could do this in all sorts of different ways (e.g., Kubernetes, Podman, nerdctl, AWS SAM), I chose to solve the local Lambda runtime problem with Docker Compose running in Docker Desktop, since both are well suited to local development.

I started with a multi-stage Dockerfile.

In the first stage, we download the AWS Lambda Runtime Interface Emulator (RIE), compiled for Linux and our current CPU architecture. In the second stage, we will put this in front of our Lambda executable that we've written ourselves.

Downloading Runtime Interface Emulator

In our Dockerfile, our first stage leverages download-asset to download the correct GitHub release asset for our current CPU architecture.

Tip

download-asset requires a valid $GITHUB_TOKEN environment variable in order to raise the GitHub rate limit. Create a new Personal Access Token with no scopes (we only need to authenticate you). We pass it securely using Docker Secrets, instead of insecurely via --build-arg or plain-text secrets in source code. The Docker Compose definition is shown later in this document.

# syntax=docker/dockerfile:1
FROM golang:1-alpine AS go-installer

RUN go install github.com/northwood-labs/download-asset@latest
RUN --mount=type=secret,id=github_token \
    GITHUB_TOKEN="$(cat /run/secrets/github_token)" \
    download-asset get \
        --owner-repo aws/aws-lambda-runtime-interface-emulator \
        --tag latest \
        --intel64 x86_64 \
        --arm64 arm64 \
        --pattern 'aws-lambda-rie-{{.Arch}}' \
        --write-to-bin aws-lambda-rie \
    ;

# Rename `aws-lambda-rie-arm64` or `aws-lambda-rie-x86_64` to a platform-neutral `aws-lambda-rie`.
RUN mv /usr/local/bin/aws-lambda-rie* /usr/local/bin/aws-lambda-rie

Identify the SHA digest of the Docker image

It is more secure to refer to a remote Docker image by SHA digest than by tag. This is because a SHA digest is immutable, and cannot be changed after-the-fact like a Docker tag can.

It's a little more work for a lot more security.

We want to pull the SHA digest for the :latest tag on the ghcr.io/northwood-labs/lambda-provided-al2023 image. This is a multi-platform image that has an Intel64 version and an ARM64 version.

Tip

View the GitHub Actions workflow which constructs this multi-platform Docker image from AWS source code.

docker pull ghcr.io/northwood-labs/lambda-provided-al2023:latest
docker images --digests ghcr.io/northwood-labs/lambda-provided-al2023 --format '{{ .Digest }}'

This gave me a result of sha256:2b947c7c1e18392ce6b1b311ba1715a9b043a6fb5bb6572e914764e946321382, so we'll use this instead of :latest. Over time, as :latest is updated, it will point to a different SHA digest, but this command will always return the current value.

Downloading the AWS Lambda provided.al2023 image

# syntax=docker/dockerfile:1
FROM ghcr.io/northwood-labs/lambda-provided-al2023@sha256:2b947c7c1e18392ce6b1b311ba1715a9b043a6fb5bb6572e914764e946321382

# Copy a file from the previous stage
COPY --from=go-installer /usr/local/bin/aws-lambda-rie /usr/local/bin/aws-lambda-rie
COPY entrypoint.sh /entrypoint.sh

# Ensure that these are executable
RUN chmod 0755 /usr/local/bin/aws-lambda-rie /entrypoint.sh

ENTRYPOINT ["/entrypoint.sh"]

Contents of entrypoint.sh

We will be compiling our own Lambda function and will save it to /var/runtime/bootstrap.

These are the contents of entrypoint.sh. When running locally, nothing sets AWS_LAMBDA_RUNTIME_API, so we wrap our executable with the Runtime Interface Emulator (from the first stage). In the real Lambda environment, where that variable is set, we run the executable directly.

#!/bin/sh
if [ -z "${AWS_LAMBDA_RUNTIME_API}" ]; then
    exec /usr/local/bin/aws-lambda-rie /var/runtime/bootstrap
else
    exec /var/runtime/bootstrap
fi

Docker Compose

Here is our docker-compose.yml file. Modern Docker Compose no longer uses version: as a top-level YAML key.

---
services:
  lambda:
    # Name of the container when it is running.
    container_name: localdev-lambda

    # Instructions which tell BuildKit how to build the image, passing secrets
    # SECURELY to the Dockerfile.
    build:
      context: .
      dockerfile: ./Dockerfile
      secrets:
        - github_token # See below for definition

    # Set shared memory limit when using `docker compose`.
    shm_size: 128mb

    # Stay running. Restart on failure.
    restart: always

    # Basic Linux-y and permission stuff.
    privileged: false
    pid: host
    sysctls:
      net.core.somaxconn: 1024

    # Environment variables used by the running Docker environment.
    # https://github.com/aws/aws-lambda-runtime-interface-emulator
    environment:
      _LAMBDA_SERVER_PORT: 8080
      AWS_LAMBDA_FUNCTION_TIMEOUT: 30      # Web timeout
      AWS_LAMBDA_FUNCTION_MEMORY_SIZE: 128 # Lambda function memory limit (logged; not enforced)
      LOG_LEVEL: DEBUG                     # Logging for the Runtime Interface Emulator

    # Mount a local directory inside the running Docker container.
    volumes:
      - ./var-runtime:/var/runtime:ro

    # Inside, the container runs on port 8080. But we want to expose it on
    # port 9000 to our host machine.
    ports:
      - 9000:8080

    # Enable running containers to communicate with services on the host machine.
    # Only works in Docker Desktop for local development. Don't do this with
    # containers you don't trust.
    extra_hosts:
      - host.docker.internal:host-gateway

# Define a secret here to read from the builder's environment variables, and
# pass them SECURELY into Docker BuildKit so that the Dockerfile can access it.
secrets:
  github_token:
    name: GITHUB_TOKEN
    environment: GITHUB_TOKEN

# Configure a bridge network to connect this (and other containers inside this
# file) together.
networks:
  dst-network:
    driver: bridge

Compiling Go into a Lambda function

  1. When you deploy a Lambda function to the real AWS Lambda service, and you're deploying a compiled executable for a Lambda function, it gets stored inside /var/runtime.

  2. We know that later, we'll mount a local var-runtime directory (that is not a typo) into the running Docker environment at /var/runtime, so we'll compile our Lambda executable into our local var-runtime directory.

    volumes:
      - ./var-runtime:/var/runtime:ro
  3. Here, we'll compile our Lambda function into the correct local directory using the appropriate build flags. We do not specify a value for GOARCH because we want to build for whatever the current CPU architecture is, since we're running this locally.

    CGO_ENABLED=0 GOOS=linux go build \
        -a -trimpath \
        -ldflags="-s -w" \
        -tags lambda.norpc \
        -o localdev/var-runtime/bootstrap \
        . ;

Running Docker Compose

From the directory containing your docker-compose.yml file, run:

docker compose up

The Docker image will build (installing the things you need), and the Lambda RIE process will start.

In our docker-compose.yml file, we specified that the Lambda service inside Docker (port 8080) should be exposed to the host machine on port 9000.

The endpoint for the local Lambda environment (exposed by RIE) will be:

http://localhost:9000/2015-03-31/functions/function/invocations

And in this case, whatever you send to this endpoint will be sent to your Lambda function to process.
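If you'd rather exercise the endpoint from Go than from curl, here is a small stdlib-only sketch. The endpoint URL is the one RIE exposes above; the payload is whatever JSON your handler expects.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

// invoke POSTs a raw JSON payload to the RIE invocation endpoint and
// returns the Lambda function's response body.
func invoke(endpoint string, payload []byte) ([]byte, error) {
	resp, err := http.Post(endpoint, "application/json", bytes.NewReader(payload))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	out, err := invoke(
		"http://localhost:9000/2015-03-31/functions/function/invocations",
		[]byte(`{"hello":"world"}`),
	)
	if err != nil {
		// The local environment from `docker compose up` must be running.
		fmt.Println("invoke failed:", err)
		return
	}
	fmt.Printf("%s\n", out)
}
```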

But what if you have something sitting in front of your Lambda function, such as API Gateway? The payload that API Gateway sends to your Lambda function (when running in the cloud) is not the same bare-bones payload that RIE sends to your local Lambda function.

Simulating API Gateway

There is not, at the time of this writing, any out-of-the-box solution for simulating API Gateway. This GitHub issue entitled “Emulate API Gateway payload in event object” calls this out and has some helpful suggestions, but they all essentially involve setting up a reverse proxy to do what you want.

A reverse proxy sits between you and the service you're talking to, and (in this case) translates the shape of the request and response into/out-of API Gateway format.

Writing a reverse proxy

The aws/aws-lambda-go package simplifies writing Lambda functions in Go, and is a must-have library for this purpose. It provides the payload shapes for a variety of AWS services that invoke Lambda, including API Gateway. See the documentation.

The flow looks like this:

  1. Write a small web server (I used Gin) which simulates the endpoints you have with API Gateway sitting in front of Lambda. This is your reverse proxy.

  2. When you send a web request to your reverse proxy, the reverse proxy reads your HTTP method, query string parameters, request body, etc., and rewrites those inputs as an API Gateway payload.

  3. Your reverse proxy POSTs that API Gateway-shaped payload to the endpoint for Runtime Interface Emulator and your Lambda function running in a Docker environment.

    http://localhost:9000/2015-03-31/functions/function/invocations
    
  4. Runtime Interface Emulator receives the payload, and invokes your Lambda function. This is automatic if you followed all of the instructions above to wire things together appropriately.

  5. Your Lambda function, leveraging aws/aws-lambda-go, will receive that payload, and do whatever your Lambda function is supposed to do. It will respond with an events.APIGatewayProxyResponse struct.

  6. The reverse proxy receives the events.APIGatewayProxyResponse from the Lambda function, reads the status code and response body, then returns those to the caller.

    Using Gin, it looks something like this. Different frameworks for different languages will look different.

    c.JSON(result.StatusCode, body)

Note

An example implementation can be found in the devsec-tools project.

Debugging

I wrote a tool several years ago which simulates an API Gateway request body. When API Gateway sits in front of a Lambda function, this is the payload that gets sent to the Lambda function on each request.

  • The headers block is faked.
  • The requestContext block is faked.
  • httpMethod, path, resource, body, and queryStringParameters are all real.

GET example

curl -XGET "https://debug.ryanparman.com/json?abc=123&def=456"

POST example

curl -XPOST "https://debug.ryanparman.com/json" \
    --header 'Content-Type: application/x-www-form-urlencoded' \
    --data-urlencode "abc=123" \
    --data-urlencode "def=456" \
    ;
curl -XPOST "https://debug.ryanparman.com/json" \
    --header 'Content-Type: application/json; charset=utf-8' \
    --data $'{"abc": 123, "def": 456}' \
    ;

Format with go-spew

If you are using Go, you might find it helpful to replace /json with /dump. The "dump" format is produced by a tool called go-spew, and shows you the data types of the payload.

Debugging a running Lambda function in Go using Delve

TBD.
