route53-backup

Back up Route 53 DNS records to S3.

About

This is a fork of jacobfgrant/route53-lambda-backup. The primary goal is to make the script usable on larger Route 53 accounts, where AWS API rate limits push execution past the 15-minute limit imposed by AWS Lambda and prevent a full backup. To work around this, the script can run on AWS Elastic Container Service (ECS) or EKS. Other execution methods, including Lambda, remain available.

Setup

This script can run as a container on ECS or EKS, as a scheduled AWS Lambda function, or as a bare Python script or Docker container invoked by a cron job on EC2 or your own hardware.

In any environment, the recommended way to configure the script is through the following environment variables:

Variable             Default  Description
S3_BUCKET_NAME       None     (Required) Name of the S3 bucket for output
S3_BUCKET_REGION     None     (Required) AWS region of the bucket, e.g. "us-east-1" or "us-west-2"
S3_BUCKET_FOLDER     None     (Optional) Folder prefix for everything written to the S3 bucket (no trailing /)
S3_BUCKET_VERSIONED  0        Must be 0 or 1. Set to 1 to enable versioned mode (recommended)
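
For example, when running the script outside of ECS or Lambda, these can be exported in the shell before invocation; the values below are placeholders:

export S3_BUCKET_NAME="my-route53-backups"
export S3_BUCKET_REGION="us-east-1"
export S3_BUCKET_FOLDER="route53"
export S3_BUCKET_VERSIONED=1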

As a container

On ECS or EKS

Recommended for large Route 53 deployments

The easiest mechanism is to use the Docker image. You can load it into ECS by specifying the image jrokeach/route53-backup. Set the environment variables S3_BUCKET_NAME and S3_BUCKET_REGION according to your needs, and ensure that your task has permissions to write to your S3 bucket and read from Route 53.
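
As a rough sketch, a task role policy along the following lines should cover those permissions. The exact set of actions the script needs may differ, and the policy file name and bucket ARN are placeholders:

cat > route53-backup-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:ListResourceRecordSets",
        "route53:GetHostedZone"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:GetBucketVersioning"
      ],
      "Resource": [
        "arn:aws:s3:::<S3BUCKET>",
        "arn:aws:s3:::<S3BUCKET>/*"
      ]
    }
  ]
}
EOF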

Docker

Alternatively, to run the container locally, first pull the image:

docker pull jrokeach/route53-backup

To then run the container, run:

docker run \
  -e S3_BUCKET_NAME="<S3BUCKET>" \
  -e S3_BUCKET_REGION="<S3BUCKETREGION>" \
  -e S3_BUCKET_FOLDER="<S3BUCKETFOLDER>" \
  -e S3_BUCKET_VERSIONED=<0|1> \
  -e AWS_ACCESS_KEY_ID="<AWSACCESSKEYID>" \
  -e AWS_SECRET_ACCESS_KEY="<AWSSECRETACCESSKEY>" \
  jrokeach/route53-backup

The access key must belong to a user with permissions to write to your S3 bucket and read from Route 53.
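
If you prefer not to pass long-lived keys as environment variables, you can instead mount your local AWS credentials into the container. A hedged sketch, assuming the image runs as root so boto3 looks for credentials under /root/.aws:

docker run \
  -v ~/.aws:/root/.aws:ro \
  -e S3_BUCKET_NAME="<S3BUCKET>" \
  -e S3_BUCKET_REGION="<S3BUCKETREGION>" \
  jrokeach/route53-backup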

As a Lambda function

Recommended for small Route 53 deployments

Upload the script to AWS Lambda, set the S3_BUCKET_NAME and S3_BUCKET_REGION environment variables, set up an IAM role (with Route 53 read-only and S3 read-write permissions), and set a schedule.
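
One way to set the schedule is an EventBridge (CloudWatch Events) rule. A minimal sketch with the AWS CLI, where the function name, region, and account ID are placeholders:

aws events put-rule \
  --name route53-backup-daily \
  --schedule-expression "rate(1 day)"
aws events put-targets \
  --rule route53-backup-daily \
  --targets "Id"="1","Arn"="arn:aws:lambda:<REGION>:<ACCOUNTID>:function:route53-backup"
# The rule also needs permission to invoke the function, e.g. via
# aws lambda add-permission with principal events.amazonaws.com.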

As a Python script

Alternatively, you can run the script via cron on an EC2 instance or in your own computing environment with the appropriate IAM role permissions. When doing so, don't forget to add the proper Python 3 shebang and set the s3_bucket_name and s3_bucket_region variables in the file.
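
For instance, a crontab entry along these lines would run a backup daily at 02:00; the script path and log location here are hypothetical:

# m h dom mon dow  command
0 2 * * * /usr/bin/python3 /opt/route53-backup/route53_backup.py >> /var/log/route53-backup.log 2>&1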

Versioned Mode

When versioned mode is enabled, no timestamp is prepended to each zone's directory. In either mode, before uploading a file the script checks that the file does not already exist in the bucket, or that it has changed if it does; unchanged files are not re-uploaded. This allows the script to work with S3's native versioning functionality.

In addition, as a sanity check, the script will not run if it detects a mismatch between the script's configured versioning mode and the S3 bucket's versioning mode.
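
You can check a bucket's versioning state yourself before picking a mode, for example with the AWS CLI:

aws s3api get-bucket-versioning --bucket <S3BUCKET>
# Prints {"Status": "Enabled"} for a versioned bucket; the output is
# empty if versioning has never been enabled on the bucket.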

Restoring Backups

Backups generated by this script are uploaded as CSV and JSON files to the specified AWS S3 bucket. They can be restored using AWS-provided tools, including the AWS CLI, or using the route53-transfer module. The code and documentation for that module, including how to restore the Route 53 DNS record CSV backups, can be found in its repository.
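
As a sketch, restoring a single zone from a CSV backup with route53-transfer might look like the following. The object key and zone name are placeholders, and the exact key depends on your S3_BUCKET_FOLDER and versioning settings:

pip install route53-transfer
aws s3 cp "s3://<S3BUCKET>/<S3BUCKETFOLDER>/example.com.csv" .
route53-transfer load example.com example.com.csv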
