generated from silinternational/template-public
Merge pull request #1 from silinternational/develop
Release 0.1.0
Showing 7 changed files with 195 additions and 2 deletions.
**`.dockerignore`** (new file, +5 lines)

```
.git
.gitignore
LICENSE
local.env.dist
README.md
```
**Build and Publish workflow** (new file, +37 lines; lives under `.github/workflows/`, exact filename not shown)

```yaml
name: Build and Publish

on:
  push:

jobs:
  build-and-publish:
    name: Build and Publish
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ vars.DOCKER_ORG }}/${{ github.event.repository.name }}
          tags: |
            type=ref,event=branch
            type=ref,event=tag
            # set latest tag for the main branch
            type=raw,value=latest,enable=${{ github.ref == format('refs/heads/{0}', 'main') }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
```
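The tagging rules configured for `docker/metadata-action` above can be sketched as a small shell function. This is a hypothetical illustration, not part of the repository: it mirrors `type=ref,event=branch`, `type=ref,event=tag`, and the `latest`-on-`main` rule.

```shell
#!/bin/sh
# Hypothetical sketch of the tag rules above: map a pushed git ref to the
# image tags docker/metadata-action would generate for it.
tags_for_ref() {
    ref="$1"
    case "$ref" in
        refs/heads/main) echo "main latest" ;;         # branch tag, plus latest on main
        refs/heads/*)    echo "${ref#refs/heads/}" ;;  # type=ref,event=branch
        refs/tags/*)     echo "${ref#refs/tags/}" ;;   # type=ref,event=tag
    esac
}

tags_for_ref refs/heads/develop   # develop
tags_for_ref refs/tags/0.1.0      # 0.1.0
```

So the merge commit on a `develop` push produces a `develop`-tagged image, while tagging the release would publish `0.1.0`.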
**`.gitignore`** (adds a blank line and `local.env` after the existing entries)

```
node_modules/
vendor/
.terraform/

local.env
```
**`Dockerfile`** (new file, +10 lines)

```dockerfile
FROM alpine:3

RUN apk update \
    && apk add --no-cache bash curl unzip \
    && curl https://rclone.org/install.sh | bash

COPY sync-s3-to-b2.sh /data/
WORKDIR /data

CMD ["./sync-s3-to-b2.sh"]
```
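A typical local build-and-run of this image might look like the following sketch; the `sync-s3-to-b2:local` tag is illustrative, not the published image name.

```shell
# Build the image from the Dockerfile above, then run one sync,
# supplying the required variables from a local env file.
docker build -t sync-s3-to-b2:local .
docker run --rm --env-file local.env sync-s3-to-b2:local
```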
**`README.md`** (replaces the `template-public` placeholder heading and description)

```markdown
# sync-s3-to-b2
Service to synchronize an AWS S3 bucket path to a Backblaze B2 bucket path

This process uses `rclone sync` to copy changes from a specified directory in an AWS S3 bucket to a specified directory in a Backblaze B2 bucket. After configuring `rclone`, the script executes `rclone sync --checksum --create-empty-src-dirs --metadata --transfers 32 ${RCLONE_ARGUMENTS} s3bucket:${S3_BUCKET}/${S3_PATH} b2bucket:${B2_BUCKET}/${B2_PATH}`.

## How to use it
1. Create an AWS access key that allows read access to the source S3 bucket.
2. Create a Backblaze Application Key that allows read, write, and delete access to the destination B2 bucket.
3. Supply all appropriate environment variables.
4. Run the sync.
5. Verify the destination bucket is identical to the source bucket.

**Note:** Empty directories in the AWS S3 bucket may not be created on the Backblaze B2 bucket. See the `rclone` documentation for details.

### Environment variables
`AWS_ACCESS_KEY` - used to access the AWS S3 bucket

`AWS_SECRET_KEY` - used to access the AWS S3 bucket

`AWS_REGION` - AWS region the S3 bucket was created in

`B2_APPLICATION_KEY_ID` - used to access the Backblaze B2 bucket

`B2_APPLICATION_KEY` - used to access the Backblaze B2 bucket

`S3_BUCKET` - AWS S3 bucket name, e.g., _my-s3-storage-bucket_

`S3_PATH` - path within the AWS S3 bucket

`B2_BUCKET` - Backblaze B2 bucket name, e.g., _my-b2-copy-bucket_

`B2_PATH` - path within the Backblaze B2 bucket

`RCLONE_ARGUMENTS` - (optional) extra `rclone` arguments; for example, adding `--combined -` lists every file path prefixed by a symbol and a space that indicates what happened to it.

## Docker Hub
This image is built automatically on Docker Hub as [silintl/sync-s3-to-b2](https://hub.docker.com/r/silintl/sync-s3-to-b2/).
```
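How the bucket and path variables documented above compose into the `rclone sync` source and destination can be sketched as follows; the values are placeholders, and the command is echoed rather than executed.

```shell
#!/bin/sh
# Placeholder values; the real script reads these from the environment.
S3_BUCKET=my-s3-storage-bucket
S3_PATH=backups/daily
B2_BUCKET=my-b2-copy-bucket
B2_PATH=backups/daily

# Echo the composed command to show how remote:bucket/path arguments are built.
echo "rclone sync --checksum s3bucket:${S3_BUCKET}/${S3_PATH} b2bucket:${B2_BUCKET}/${B2_PATH}"
```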
**`local.env.dist`** (new file, +19 lines)

```shell
# AWS access variables
AWS_ACCESS_KEY=
AWS_SECRET_KEY=
AWS_REGION=

# Backblaze access variables
B2_APPLICATION_KEY_ID=
B2_APPLICATION_KEY=

# Extra rclone arguments
RCLONE_ARGUMENTS=

# AWS S3 bucket and path
S3_BUCKET=
S3_PATH=

# Backblaze B2 bucket and path
B2_BUCKET=
B2_PATH=
```
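Before running, it can help to confirm that nothing in a copied `local.env` was left blank. The helper below is hypothetical, not part of the repository; it assumes the `KEY=value` format used by `local.env.dist`.

```shell
#!/bin/sh
# Hypothetical helper: print the names of variables assigned no value
# in a KEY=value env file, such as a copy of local.env.dist.
unset_vars() {
    grep -E '^[A-Za-z_][A-Za-z0-9_]*=$' "$1" | cut -d= -f1
}

# Example env file with one value left blank.
cat > /tmp/example.env << 'EOF'
AWS_ACCESS_KEY=AKIAEXAMPLE
AWS_SECRET_KEY=
S3_BUCKET=my-s3-storage-bucket
EOF

unset_vars /tmp/example.env   # AWS_SECRET_KEY
```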
**`sync-s3-to-b2.sh`** (new file, +85 lines)

```shell
#!/usr/bin/env sh

MYNAME="sync-s3-to-b2"
STATUS=0

echo "${MYNAME}: Started"

if [ "${B2_APPLICATION_KEY_ID}" = "" ]; then
    echo "${MYNAME}: FATAL: environment variable B2_APPLICATION_KEY_ID is required."
    STATUS=1
fi

if [ "${B2_APPLICATION_KEY}" = "" ]; then
    echo "${MYNAME}: FATAL: environment variable B2_APPLICATION_KEY is required."
    STATUS=1
fi

if [ "${AWS_ACCESS_KEY}" = "" ]; then
    echo "${MYNAME}: FATAL: environment variable AWS_ACCESS_KEY is required."
    STATUS=1
fi

if [ "${AWS_SECRET_KEY}" = "" ]; then
    echo "${MYNAME}: FATAL: environment variable AWS_SECRET_KEY is required."
    STATUS=1
fi

if [ "${AWS_REGION}" = "" ]; then
    echo "${MYNAME}: FATAL: environment variable AWS_REGION is required."
    STATUS=1
fi

if [ "${B2_BUCKET}" = "" ]; then
    echo "${MYNAME}: FATAL: environment variable B2_BUCKET is required."
    STATUS=1
fi

if [ "${S3_BUCKET}" = "" ]; then
    echo "${MYNAME}: FATAL: environment variable S3_BUCKET is required."
    STATUS=1
fi

if [ $STATUS -ne 0 ]; then
    exit $STATUS
fi

echo "${MYNAME}: Configuring rclone"

mkdir -p ~/.config/rclone
cat > ~/.config/rclone/rclone.conf << EOF
# Created by sync-s3-to-b2.sh
[b2bucket]
type = b2
hard_delete = true
account = ${B2_APPLICATION_KEY_ID}
key = ${B2_APPLICATION_KEY}

[s3bucket]
type = s3
provider = AWS
access_key_id = ${AWS_ACCESS_KEY}
secret_access_key = ${AWS_SECRET_KEY}
region = ${AWS_REGION}
location_constraint = ${AWS_REGION}
EOF

# Adding "--combined -" to RCLONE_ARGUMENTS will list all file paths with a
# symbol and then a space and then the path to tell you what happened to it.

echo "${MYNAME}: Starting rclone sync"

start=$(date +%s)
rclone sync --checksum --create-empty-src-dirs --metadata --transfers 32 ${RCLONE_ARGUMENTS} s3bucket:${S3_BUCKET}/${S3_PATH} b2bucket:${B2_BUCKET}/${B2_PATH} || STATUS=$?
end=$(date +%s)

if [ $STATUS -ne 0 ]; then
    echo "${MYNAME}: FATAL: Sync of ${S3_BUCKET}/${S3_PATH} to ${B2_BUCKET}/${B2_PATH} returned non-zero status ($STATUS) in $(expr ${end} - ${start}) seconds."
    exit $STATUS
else
    echo "${MYNAME}: Sync of ${S3_BUCKET}/${S3_PATH} to ${B2_BUCKET}/${B2_PATH} completed in $(expr ${end} - ${start}) seconds."
fi

echo "${MYNAME}: Completed"
```