This guide will run you through a deployment of the 1Password SCIM bridge on AWS Fargate using Terraform.
Note that due to the highly advanced and customizable nature of Amazon Web Services, this is only a suggested starting point. You may modify it as needed to fit within your existing infrastructure.
Before beginning, familiarize yourself with PREPARATION.md and complete the necessary steps there.
- Install Terraform
- Have your `scimsession` file and bearer token ready (as seen in PREPARATION.md)
Ensure you are authenticated with the `aws` tool in your local environment.
See Terraform AWS Authentication for more details.
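For example, a quick identity check can confirm that your local credentials are valid before you continue (this is just a sanity check, not a required step):

```sh
# verify that the AWS CLI can authenticate and show the active account and role
aws sts get-caller-identity
```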
Copy `terraform.tfvars.template` to `terraform.tfvars`:
cp terraform.tfvars.template terraform.tfvars
Optional: For customers using Google Workspace
Copy the `workspace-settings.json` template file to this Terraform code directory:
cp ../beta/workspace-settings.json ./workspace-settings.json
Edit this file and add the respective values for each variable (see our Google Workspace documentation).
Copy your `workspace-credentials.json` file to this Terraform code directory:
cp <path>/workspace-credentials.json ./workspace-credentials.json
Uncomment this line in `terraform.tfvars`:
using_google_workspace = true
Copy the `scimsession` file into the Terraform code directory:
cp <path>/scimsession ./
This will automatically create an AWS secret containing the contents of the `scimsession` file in your instance.
Note: If you skip this step or the installation of the `scimsession` file is not successful, you can create the required AWS secret manually. Ensure you `base64` encode the `scimsession` file and store it in a secret as plain text (not as JSON, and not wrapped in quotation marks):
# only required if the automatic installation of the 'scimsession' file is not successful
cat <path>/scimsession | base64
# copy the output to Secrets Manager
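If you prefer to do this from the command line, something like the following sketch can store the encoded file with the AWS CLI. The secret name here is only a placeholder; use the secret that your Terraform configuration actually references:

```sh
# store the base64-encoded scimsession file in Secrets Manager
# (the secret ID below is a placeholder for your deployment's secret)
aws secretsmanager put-secret-value \
  --secret-id <name_or_arn_of_scimsession_secret> \
  --secret-string "$(base64 < <path>/scimsession)"
```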
Set the `aws_region` variable in `terraform.tfvars` to the AWS region you're deploying in (the default is `us-east-1`).
This example uses AWS Certificate Manager to manage the required TLS certificate. Save the full domain name you want to use as `domain_name` in `terraform.tfvars`:
domain_name = "<scim.example.com>"
Optional: Configure additional features
If you would like to use an existing wildcard certificate in AWS Certificate Manager (`*.example.com`), uncomment this line in `terraform.tfvars`:
wildcard_cert = true
This deployment example uses Route 53 to create the required DNS record by default. If you are using another DNS provider, uncomment this line in `terraform.tfvars`:
using_route53 = false
Create a CNAME record pointing to the `loadbalancer-dns-name` output printed by `terraform apply`.
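If you need that value again later, or want to confirm that the record has propagated, the following sketch may help (the domain below is an example):

```sh
# print the load balancer DNS name from the Terraform outputs
terraform output loadbalancer-dns-name

# check that the CNAME record resolves (replace with your own domain)
dig +short scim.example.com CNAME
```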
This deployment example uses the default VPC for your AWS region. If you would like to specify another VPC to use instead, set the value of the `vpc_name` variable in `terraform.tfvars`:
vpc_name = "<name_of_VPC>"
If you would like to specify a common prefix for naming all supported AWS resources created by Terraform, set the value of the `name_prefix` variable in `terraform.tfvars`:
name_prefix = "<prefix>"
The deployment example retains logs indefinitely by default. If you would like to set a different retention period, specify a number of days in the `log_retention_days` variable in `terraform.tfvars`:
log_retention_days = <number_of_days>
To apply additional tags to all supported AWS resources created by Terraform, add keys and values to the `tags` variable in `terraform.tfvars`:
tags = {
  <key1> = "<some_value>"
  <key2> = "<some_value>"
  …
}
Run the following commands to create the necessary configuration settings:
terraform init
terraform plan -out=./op-scim.plan
You will now be asked to validate your configuration. Once you are sure it is correct, run the following:
terraform apply ./op-scim.plan
After a few minutes, once the DNS update has had time to take effect, go to the SCIM bridge URL you set. You should be able to enter your bearer token to verify that your SCIM bridge is up and running.
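As an additional check from the command line, you can query the bridge directly. This is only a sketch: the domain is an example, and it assumes the bearer token from PREPARATION.md:

```sh
# list users through the SCIM bridge; a JSON response means the bridge is up
# and the bearer token is valid (replace the domain and token placeholders)
curl --silent --show-error \
  --header "Authorization: Bearer <your_bearer_token>" \
  https://scim.example.com/Users
```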
Connect to your Identity Provider following the remainder of our setup guide.
The process for updating your infrastructure involves a few key steps. Luckily, most of the heavy lifting is done by the `terraform` CLI.
The update steps are generally as follows:
- Update any variables and task definitions as necessary
- Create a plan for Terraform to apply
- Apply the new plan to your infrastructure
Note that the `terraform` CLI will output the details of the plan in addition to saving it to an output file (`./op-scim.plan`). The plan will contain the steps necessary to bring your deployment in line with the latest configuration depending on the changes that are detected. Feel free to inspect the output to get a better idea of the steps that will be taken.
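For example, a saved plan can be reviewed at any time before it is applied:

```sh
# print the saved plan in human-readable form
terraform show ./op-scim.plan
```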
Below we go into detail about some common reasons that you would want to update your infrastructure.
To update your deployment to the latest version, open the `task-definitions/scim.json` file and edit the following line:
"image": "1password/scim:v2.x.x",
Change `v2.x.x` to the latest version seen here.
Then, reapply your Terraform settings:
terraform plan -out=./op-scim.plan
terraform apply ./op-scim.plan
There may be situations where you want to update your deployment with the latest configuration changes available in this repository even if you are already on the latest `1password/scim` tag. The steps are fairly similar to updating the tag, with a few minor differences.
Update steps:
- [Optional] Verify that your Terraform variables (`./terraform.tfvars`) are correct and up to date
- [Optional] Reconcile the state between what Terraform knows about and your deployed infrastructure:
terraform refresh
- Create an update plan to apply:
terraform plan -out=./op-scim.plan
- Apply the plan to your infrastructure:
terraform apply ./op-scim.plan
- Verify that there are no errors in the output as Terraform updates your infrastructure
The default resource recommendations for the SCIM bridge and Redis deployments are acceptable in most scenarios, but they fall short in high-volume deployments with a large number of users and/or groups.
Our current default resource requirements (defined in `scim.json`) are:
cpu: 128
memory: 512
Proposed recommendations for high volume deployments:
cpu: 512
memory: 1024
This proposal is 4x the CPU and 2x the memory of the default values.
Please reach out to our support team if you need help with the configuration or to tweak the values for your deployment.
As of April 2022 we have updated the Redis deployment to require a maximum of 512 MB of memory. This meant that we also had to bump required memory for the "op-scim-bridge" task definition to 1024 MB.
The Redis dataset maximum is set to 256 MB, and an eviction policy determines how keys are evicted as the maximum dataset size is approached.
This should prevent Redis from consuming large amounts of memory and eventually running out of available memory. The SCIM bridge is also restarted in instances where Redis runs into an out of memory error.
As of December 2021, the ALB health check path has changed. If you are updating from a version earlier than 2.3.0, edit your `terraform.tf` file to use `/app` instead of `/` for the health check before reapplying your Terraform settings.
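If you're unsure where that path is defined, a quick search of the Terraform configuration can point you to the health check settings (assuming you're in this Terraform code directory):

```sh
# locate the health check configuration that should use /app
grep -n "health_check" terraform.tf
```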
If you want to view the logs for your SCIM bridge within AWS, go to CloudWatch -> Log Groups, where you should see the log group that was printed out at the end of your `terraform apply`. Look for `op_scim_bridge` and `redis` for your logs in this section.
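If you prefer the command line, the logs can also be tailed with the AWS CLI (version 2). The log group name below is a placeholder; use the one printed at the end of `terraform apply`:

```sh
# stream recent SCIM bridge logs from CloudWatch Logs (requires AWS CLI v2)
aws logs tail <log_group_name> --follow
```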
If you browse to the domain name of your SCIM bridge and are met with a `Sign In With 1Password` link, this means the `scimsession` file was not properly installed. Due to the nature of the ECS deployment, this “sign in” option cannot be used to complete the setup of your SCIM bridge.
To fix this, be sure to retry the instructions of Step 2 of Configuration. You will also need to restart your `op_scim_bridge` task in order for the changes to take effect after you update the `scimsession` secret.
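One way to restart the task is to force a new deployment of the ECS service with the AWS CLI. The cluster and service names below are placeholders; use the values created for your deployment:

```sh
# launch fresh tasks so the updated scimsession secret is picked up
# (cluster and service names are placeholders for your deployment's values)
aws ecs update-service \
  --cluster <cluster_name> \
  --service <service_name> \
  --force-new-deployment
```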