sqlite backup and terraform spaces #28

Merged · 1 commit · Aug 23, 2024
1 change: 1 addition & 0 deletions Gemfile
@@ -41,6 +41,7 @@ gem 'mini_magick'
gem 'sidekiq'
gem 'sidekiq-scheduler'
gem 'vite_rails'
gem 'aws-sdk-s3'

# Use Kredis to get higher-level data types in Redis [https://github.com/rails/kredis]
# gem "kredis"
18 changes: 18 additions & 0 deletions Gemfile.lock
@@ -77,6 +77,22 @@ GEM
tzinfo (~> 2.0)
addressable (2.8.7)
public_suffix (>= 2.0.2, < 7.0)
aws-eventstream (1.3.0)
aws-partitions (1.968.0)
aws-sdk-core (3.201.5)
aws-eventstream (~> 1, >= 1.3.0)
aws-partitions (~> 1, >= 1.651.0)
aws-sigv4 (~> 1.9)
jmespath (~> 1, >= 1.6.1)
aws-sdk-kms (1.88.0)
aws-sdk-core (~> 3, >= 3.201.0)
aws-sigv4 (~> 1.5)
aws-sdk-s3 (1.159.0)
aws-sdk-core (~> 3, >= 3.201.0)
aws-sdk-kms (~> 1)
aws-sigv4 (~> 1.5)
aws-sigv4 (1.9.1)
aws-eventstream (~> 1, >= 1.0.2)
base64 (0.2.0)
bcrypt (3.1.20)
bigdecimal (3.1.8)
@@ -150,6 +166,7 @@ GEM
jbuilder (2.12.0)
actionview (>= 5.0.0)
activesupport (>= 5.0.0)
jmespath (1.6.2)
logger (1.6.0)
loofah (2.22.0)
crass (~> 1.0.2)
@@ -358,6 +375,7 @@ PLATFORMS
x86_64-linux

DEPENDENCIES
aws-sdk-s3
bootsnap
capybara
chartkick
40 changes: 39 additions & 1 deletion README.md
@@ -162,9 +162,47 @@ Note: The Dockerfile uses a multi-stage build process to create a lean production

The project uses SQLite by default. For production, consider using PostgreSQL or MySQL.

### Backup and Restore Process

Linkarooie includes an automated backup system to ensure that your SQLite database is securely stored and easily recoverable. This process is managed using a combination of scheduled jobs and DigitalOcean Spaces for storage.

#### Automated Backups

The `BackupDatabaseJob` is scheduled to run daily at 2 AM, ensuring that your SQLite database is backed up regularly. The backup process involves the following steps:

1. **Database Dump**: The job creates a dump of the current SQLite database, storing it in the `db/backups` directory with a timestamp and environment identifier.
2. **Upload to DigitalOcean Spaces**: The backup file is then uploaded to a DigitalOcean Spaces bucket, where it is securely stored with versioning enabled. This ensures that previous versions of the backup are retained for a short period, allowing you to restore from a specific point in time if needed.
3. **Cleanup**: Optionally, the local backup file is deleted after it has been successfully uploaded to DigitalOcean Spaces.
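
To verify the whole pipeline before trusting the schedule, you can run the job by hand. A minimal sketch from a Rails console (uses the `BackupDatabaseJob` added in this PR):

```ruby
# bin/rails console
BackupDatabaseJob.perform_now    # runs inline, so failures surface immediately
BackupDatabaseJob.perform_later  # or enqueue through Sidekiq, as the scheduler does
```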

#### Restoring from a Backup

In the event that you need to restore your database from a backup, you can use the provided Rake task. This task allows you to specify the backup file you want to restore from and automatically loads it into the SQLite database.

**Restoration Steps:**

1. **Run the Restore Task**: Use the following command, specifying the path to your backup file (if the backup lives in Spaces, download it first; see the sketch after this list):

```bash
rake db:restore BACKUP_FILE=path/to/your_backup_file.sql
```

2. **Process Overview**:

* The task will first drop all existing tables in the database to ensure a clean restoration.
* It will then load the specified backup file into the database.
* Upon completion, your database will be restored to the state it was in when the backup was created.

3. **Error Handling**: If the backup file is not provided or if any errors occur during the restoration process, the task will output helpful messages to guide you in resolving the issue.
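
Since backups are uploaded to Spaces but the rake task reads a local file, you may first need to pull the dump down. A minimal sketch using the `S3_CLIENT` resource this PR configures (the `backups/` key prefix matches the job's upload path; running it via `bin/rails runner` is an assumption):

```ruby
# Download the most recent backup from Spaces into tmp/.
bucket = S3_CLIENT.bucket(ENV['SPACES_BUCKET_NAME'] || 'sqlite-backup-bucket')
latest = bucket.objects(prefix: 'backups/').max_by(&:last_modified)
abort "No backups found under backups/" unless latest

local_path = File.join('tmp', File.basename(latest.key))
latest.download_file(local_path)  # ObjectSummary#download_file writes the object to disk
puts "Downloaded #{latest.key} to #{local_path}"
```

Then point the restore task at it: `rake db:restore BACKUP_FILE=tmp/<file>`.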

#### Important Notes

* **Environment-Specific Backups**: Backups are created separately for each environment (development, production, test), and the backup files are named accordingly.
* **DigitalOcean Spaces Configuration**: Ensure that your DigitalOcean API credentials and bucket details are correctly configured in your environment variables (listed after these notes) for the backup and restore processes to function properly.
* **Testing Restores**: Regularly test the restore process in a development environment to ensure that your backups are reliable and that the restore process works as expected.
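
For reference, these are the environment variables read by the initializer and job in this PR (values below are placeholders):

```bash
export SPACES_REGION=syd1                       # falls back to syd1 if unset
export SPACES_BUCKET_NAME=sqlite-backup-bucket  # falls back to sqlite-backup-bucket if unset
export SPACES_ACCESS_KEY_ID=<your-access-key>
export SPACES_SECRET_ACCESS_KEY=<your-secret-key>
```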

### Geolocation

Currently, geolocation functionality is mandatory, but I plan to make it optional in future updates. To enable geolocation, you will need an [API key from ipapi](https://ipapi.com), which is free to obtain.

## Customization

38 changes: 38 additions & 0 deletions app/jobs/backup_database_job.rb
@@ -0,0 +1,38 @@
class BackupDatabaseJob < ApplicationJob
  queue_as :default

  def perform
    environment = Rails.env
    backup_file = "db/backups/#{environment}_backup_#{Time.now.strftime('%Y%m%d%H%M%S')}.sqlite3"

    begin
      # Ensure the backup directory exists
      FileUtils.mkdir_p("db/backups")

      # Dump the SQLite database for the current environment
      database_path = Rails.configuration.database_configuration[environment]["database"]
      `sqlite3 #{database_path} .dump > #{backup_file}`
      raise "sqlite3 dump failed" unless $?.success?

      # Upload to DigitalOcean Spaces
      upload_to_spaces(backup_file)

      # Optionally, delete the local backup file after upload
      File.delete(backup_file) if File.exist?(backup_file)

      Rails.logger.info "BackupDatabaseJob: Backup created and uploaded successfully: #{backup_file}"
    rescue => e
      Rails.logger.error "BackupDatabaseJob: Failed to create or upload backup: #{e.message}"
      raise
    end
  end

  private

  def upload_to_spaces(file_path)
    bucket_name = ENV['SPACES_BUCKET_NAME'] || 'sqlite-backup-bucket'
    file_name = File.basename(file_path)

    obj = S3_CLIENT.bucket(bucket_name).object("backups/#{file_name}")
    obj.upload_file(file_path)
  end
end
12 changes: 12 additions & 0 deletions config/initializers/aws_s3.rb
@@ -0,0 +1,12 @@
require 'aws-sdk-s3'

Aws.config.update({
  region: ENV['SPACES_REGION'] || 'syd1',
  credentials: Aws::Credentials.new(
    ENV['SPACES_ACCESS_KEY_ID'],
    ENV['SPACES_SECRET_ACCESS_KEY']
  ),
  endpoint: "https://#{ENV['SPACES_REGION'] || 'syd1'}.digitaloceanspaces.com"
})

S3_CLIENT = Aws::S3::Resource.new
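
To confirm the credentials and endpoint are wired correctly, a quick smoke test from a Rails console (assumes the bucket created by the Terraform config later in this PR):

```ruby
# bin/rails console
S3_CLIENT.buckets.map(&:name)  # should include "sqlite-backup-bucket"
```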
6 changes: 5 additions & 1 deletion config/sidekiq_scheduler.yml
@@ -1,3 +1,7 @@
aggregate_metrics:
  cron: '0 1 * * *' # Runs daily at 1 AM
  class: AggregateMetricsJob

backup_database:
  cron: '0 2 * * *' # Runs daily at 2 AM
  class: BackupDatabaseJob
27 changes: 27 additions & 0 deletions lib/tasks/restore.rake
@@ -0,0 +1,27 @@
namespace :db do
  desc "Restore the SQLite database from a SQL dump"
  task restore: :environment do
    backup_file = ENV['BACKUP_FILE']

    unless backup_file
      puts "ERROR: You must provide the path to the backup file."
      puts "Usage: rake db:restore BACKUP_FILE=path/to/your_backup_file.sql"
      exit 1
    end

    begin
      puts "Restoring database from #{backup_file}..."

      # Drop the current tables one at a time
      # (SQLite accepts only a single table name per DROP TABLE statement)
      ActiveRecord::Base.connection.tables.each do |table|
        ActiveRecord::Base.connection.execute("DROP TABLE IF EXISTS #{ActiveRecord::Base.connection.quote_table_name(table)}")
      end

      # Load the backup SQL file
      database_path = Rails.configuration.database_configuration[Rails.env]['database']
      system("sqlite3 #{database_path} < #{backup_file}") or raise "sqlite3 restore failed"

      puts "Database restored successfully."
    rescue => e
      puts "ERROR: Failed to restore the database: #{e.message}"
      exit 1
    end
  end
end
18 changes: 18 additions & 0 deletions terraform/README.md
@@ -9,10 +9,28 @@ terraform apply -var="do_token=YOUR_DIGITALOCEAN_TOKEN"
ssh root@<ip-address>
```

- Terraform for Spaces:

> Note: export `DO_TOKEN`, `SPACES_ACCESS_KEY_ID`, and `SPACES_SECRET_ACCESS_KEY` as environment variables before running this.

```bash
terraform apply -var="do_token=$DO_TOKEN" \
-var="spaces_access_id=$SPACES_ACCESS_KEY_ID" \
-var="spaces_secret_key=$SPACES_SECRET_ACCESS_KEY"
```

- Create the instance with Terraform
- Collect the droplet IP address
- Check for access: `ssh root@<ip-address>`

- The Spaces apply produces outputs like the following (an export sketch follows this block):

```text
spaces_bucket_domain_name = "sqlite-backup-bucket.syd1.digitaloceanspaces.com"
spaces_bucket_name = "sqlite-backup-bucket"
spaces_bucket_region = "syd1"
```
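
To feed those values back into the app's environment, something like the following works (`terraform output -raw` prints a single output without quotes):

```bash
export SPACES_BUCKET_NAME=$(terraform output -raw spaces_bucket_name)
export SPACES_REGION=$(terraform output -raw spaces_bucket_region)
```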

## GitHub Secrets

Ensure you have the following secrets set in your GitHub repository:
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
26 changes: 26 additions & 0 deletions terraform/spaces/.terraform.lock.hcl


69 changes: 69 additions & 0 deletions terraform/spaces/main.tf
@@ -0,0 +1,69 @@
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

provider "digitalocean" {
  token             = var.do_token
  spaces_access_id  = var.spaces_access_id
  spaces_secret_key = var.spaces_secret_key
}

variable "do_token" {
  description = "DigitalOcean API token"
}

variable "spaces_access_id" {
  description = "Access Key ID for DigitalOcean Spaces"
}

variable "spaces_secret_key" {
  description = "Secret Access Key for DigitalOcean Spaces"
}

variable "region" {
  description = "DigitalOcean region"
  default     = "syd1"
}

resource "digitalocean_spaces_bucket" "sqlite_backup" {
  name   = "sqlite-backup-bucket"
  region = var.region

  versioning {
    enabled = true
  }

  lifecycle_rule {
    id      = "cleanup-old-backups"
    enabled = true
    # Match the "backups/" key prefix the backup job uploads under
    prefix  = "backups/"
    expiration {
      days = 30
    }
    noncurrent_version_expiration {
      days = 7
    }
  }

  force_destroy = false
}

output "spaces_bucket_name" {
  value       = digitalocean_spaces_bucket.sqlite_backup.name
  description = "The name of the DigitalOcean Space bucket created."
}

output "spaces_bucket_region" {
  value       = digitalocean_spaces_bucket.sqlite_backup.region
  description = "The region of the DigitalOcean Space bucket created."
}

output "spaces_bucket_domain_name" {
  value       = digitalocean_spaces_bucket.sqlite_backup.bucket_domain_name
  description = "The full domain name of the DigitalOcean Space bucket."
}