This document is autogenerated by https://github.com/NitorCreations/nameless-deploy-tools/blob/master/document_commands.py. Do not edit by hand!
usage: ndt account-id [-h]
Get the current account id, either from instance metadata or the current cli
configuration.
options:
-h, --help show this help message and exit
usage: ndt add-deployer-server [-h] [--id ID] file username
Add a server into a maven configuration file. The password is taken from the
environment variable 'DEPLOYER_PASSWORD'.
positional arguments:
file The file to modify
username The username to access the server.
options:
-h, --help show this help message and exit
--id ID Optional id for the server. Default is deploy. One server with
this id is added and another with '-release' appended
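Example (hypothetical file and username):
  ndt add-deployer-server --id deploy ~/.m2/settings.xml deployer
This adds servers with the ids 'deploy' and 'deploy-release', reading the password from DEPLOYER_PASSWORD.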
usage: ndt assume-role [-h] [-t TOKEN_NAME] [-d DURATION] [-p PROFILE]
role_arn
Assume a defined role. Prints out environment variables to be eval'd into the
current context for use: eval $(ndt assume-role
'arn:aws:iam::43243246645:role/DeployRole')
positional arguments:
role_arn The ARN of the role to assume
options:
-h, --help show this help message and exit
-t TOKEN_NAME, --mfa-token TOKEN_NAME
Name of MFA token to use
-d DURATION, --duration DURATION
Duration for the session in minutes
-p PROFILE, --profile PROFILE
Profile to edit in ~/.aws/credentials to make role
persist in that file for the duration of the session.
usage: ndt assumed-role-name [-h]
Read the name of the assumed role if currently defined
options:
-h, --help show this help message and exit
usage: ndt aws-config-to-json [-h]
Prints aws config file contents as json for further parsing and use in other
tools.
options:
-h, --help show this help message and exit
usage: ndt azure-ensure-group [-h] [-l LOCATION] name
Ensures that an azure resource group exists.
positional arguments:
name The name of the resource group to make sure exists
options:
-h, --help show this help message and exit
-l LOCATION, --location LOCATION
The location for the resource group. If not defined,
it is read from the environment variable
AZURE_LOCATION, and failing that, from the location
defined for the project.
usage: ndt azure-ensure-management-group [-h] name
Ensures that an azure management group exists.
positional arguments:
name The name of the management group to make sure exists
options:
-h, --help show this help message and exit
usage: ndt azure-location [-h]
Resolve an azure location based on the 'AZURE_LOCATION' environment variable,
the local project or az cli configuration. Defaults to 'northeurope'
options:
-h, --help show this help message and exit
usage: ndt azure-template-parameters [-h] template
Lists the parameters in an Azure Resource Manager template
positional arguments:
template The json template to scan for parameters
options:
-h, --help show this help message and exit
usage: ndt bake-docker [-h] [-i] [-d] component docker-name
Runs a docker build, ensures that an ecr repository with the docker name
(by default <component>/<branch>-<docker-name>) exists and pushes the built
image to that repository with the tags "latest" and "$BUILD_NUMBER"
positional arguments:
component the component directory where the docker directory is
docker-name the name of the docker directory that has the Dockerfile
For example for ecs-cluster/docker-cluster/Dockerfile
you would give cluster
optional arguments:
-h, --help show this help message and exit
-i, --imagedefinitions create imagedefinitions.json for AWS CodePipeline
-d, --dry-run build the image without pushing it to the ECR repo
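Example, assuming a hypothetical ecs-cluster/docker-cluster/Dockerfile:
  ndt bake-docker -i ecs-cluster cluster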
usage: ndt bake-image [-h] component [image-name]
Runs an ansible playbook that builds an Amazon Machine Image (AMI) and
tags the image with the job name and build number.
positional arguments:
component the component directory where the ami bake configurations are
[image-name] Optional name for a named image in component/image-[image-name]
optional arguments:
-h, --help show this help message and exit
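Example, assuming a hypothetical ecs-cluster component with bake configurations:
  ndt bake-image ecs-cluster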
usage: ndt bw-store-aws-cli-creds [-h] entryname
Fetches a Bitwarden entry and, if it contains a definition of aws
credentials, stores them in the aws cli configuration. Namely, the entry needs
to define the extra fields aws_access_key_id, aws_secret_access_key and
profile_name
positional arguments:
entryname The name of the bitwarden entry to get
options:
-h, --help show this help message and exit
usage: ndt cf-delete-stack [-h] stack_name region
Delete an existing CloudFormation stack
positional arguments:
stack_name Name of the stack to delete
region The region to delete the stack from
options:
-h, --help show this help message and exit
usage: ndt cf-follow-logs [-h] [-s START] stack_name
Tail logs from the log group of a cloudformation stack
positional arguments:
stack_name Name of the stack to watch logs for
options:
-h, --help show this help message and exit
-s START, --start START
Start time in seconds since epoch
usage: ndt cf-get-parameter [-h] parameter
Get a parameter value from the stack
positional arguments:
parameter The name of the parameter to print
options:
-h, --help show this help message and exit
usage: ndt cf-logical-id [-h]
Get the logical id that is expecting a signal from this instance
options:
-h, --help show this help message and exit
usage: ndt cf-region [-h]
Get region of the stack that created this instance
options:
-h, --help show this help message and exit
usage: ndt cf-signal-status [-h] [-r RESOURCE] status
Signal CloudFormation status to a logical resource in CloudFormation that is
either given on the command line or resolved from CloudFormation tags
positional arguments:
status Status to indicate: SUCCESS | FAILURE
options:
-h, --help show this help message and exit
-r RESOURCE, --resource RESOURCE
Logical resource name to signal. Looked up from
cloudformation tags by default
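Example, signaling success for the resource resolved from CloudFormation tags:
  ndt cf-signal-status SUCCESS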
usage: ndt cf-stack-id [-h]
Get the id of the stack that created this instance
options:
-h, --help show this help message and exit
usage: ndt cf-stack-name [-h]
Get name of the stack that created this instance
options:
-h, --help show this help message and exit
usage: ndt create-account [-h] [-d] [-o ORGANIZATION_ROLE_NAME]
[-r TRUST_ROLE_NAME] [-a [TRUSTED_ACCOUNTS ...]]
[-t TOKEN_NAME]
email account_name
Creates a subaccount.
positional arguments:
email Email for account root
account_name Organization unique account name
options:
-h, --help show this help message and exit
-d, --deny-billing-access
-o ORGANIZATION_ROLE_NAME, --organization-role-name ORGANIZATION_ROLE_NAME
Role name for admin access from parent account
-r TRUST_ROLE_NAME, --trust-role-name TRUST_ROLE_NAME
Role name for access from trusted accounts
-a [TRUSTED_ACCOUNTS ...], --trusted-accounts [TRUSTED_ACCOUNTS ...]
Account to trust with user management
-t TOKEN_NAME, --mfa-token TOKEN_NAME
Name of MFA token to use
usage: ndt create-stack [-h] [-y] [template]
Create a stack from a template
positional arguments:
template
options:
-h, --help show this help message and exit
-y, --yes Answer yes or use the default for all questions
usage: ndt deploy-azure [-d] [-v] [-h] component azure-name
Exports ndt parameters into component/azure-name/variables.json
and deploys template.yaml or template.bicep with the azure cli referencing the parameter file
If pre_deploy.sh and post_deploy.sh exist and are executable in the subcomponent directory,
they will be executed before and after the deployment, respectively.
Similarly, if a readable pre_deploy_source.sh exists in the subcomponent directory,
it will be sourced before the deployment to enable things like activating a python venv.
positional arguments:
component the component directory where the azure directory is
azure-name the name of the azure directory that has the template
For example for lambda/azure-blobstore/template.yaml
you would give blobstore
optional arguments:
-d, --dryrun dry-run - do only parameter expansion and template pre-processing and azure cli what-if operation
-v, --verbose verbose - verbose output from azure cli
-h, --help show this help message and exit
usage: ndt deploy-cdk [-d] [-h] component cdk-name
Exports ndt parameters into component/cdk-name/variables.json, runs pre_deploy.sh in the
cdk project and runs cdk diff; cdk deploy for the same
If pre_deploy.sh and post_deploy.sh exist and are executable in the subcomponent directory,
they will be executed before and after the deployment, respectively.
Similarly, if a readable pre_deploy_source.sh exists in the subcomponent directory,
it will be sourced before the deployment to enable things like activating a python venv.
positional arguments:
component the component directory where the cdk directory is
cdk-name the name of the cdk directory that has the template
For example for lambda/cdk-sender/bin/MyProject.ts
you would give sender
optional arguments:
-d, --dryrun dry-run - do only parameter expansion and pre_deploy.sh and cdk diff
-h, --help show this help message and exit
usage: ndt deploy-connect [-h] [-d] component contactflowname
Deploy AWS Connect contact flows from a subcomponent
positional arguments:
component the component directory where the connect contact flow
directory is
contactflowname the name of the connect subcomponent directory that has the
contact flow template
options:
-h, --help show this help message and exit
-d, --dryrun Dry run - don't make changes but show what would happen if
deployed
usage: ndt deploy-serverless [-d] [-v] [-h] component serverless-name
Exports ndt parameters into component/serverless-name/variables.yml, runs npm ci in the
serverless project and runs sls deploy -s $paramEnvId for the same
If pre_deploy.sh and post_deploy.sh exist and are executable in the subcomponent directory,
they will be executed before and after the deployment, respectively.
Similarly, if a readable pre_deploy_source.sh exists in the subcomponent directory,
it will be sourced before the deployment to enable things like activating a python venv.
positional arguments:
component the component directory where the serverless directory is
serverless-name the name of the serverless directory that has the template
For example for lambda/serverless-sender/template.yaml
you would give sender
optional arguments:
-d, --dryrun dry-run - do only parameter expansion and template pre-processing and npm ci
-v, --verbose verbose - verbose output from serverless framework
-h, --help show this help message and exit
usage: ndt deploy-stack [-d] [-r] [-h] component stack-name ami-id bake-job
Resolves potential ECR urls and AMI Ids and then deploys the given stack either updating or creating it.
If pre_deploy.sh and post_deploy.sh exist and are executable in the subcomponent directory,
they will be executed before and after the deployment, respectively.
Similarly, if a readable pre_deploy_source.sh exists in the subcomponent directory,
it will be sourced before the deployment to enable things like activating a python venv.
positional arguments:
component the component directory where the stack template is
stack-name the name of the stack directory inside the component directory
For example for ecs-cluster/stack-cluster/template.yaml
you would give cluster
ami-id If you want to specify a value for the paramAmi variable in the stack,
you can do so. Otherwise give an empty string with two quotation marks
bake-job If an ami-id is not given, the ami id is resolved by getting the latest
ami that is tagged with the bake-job name
optional arguments:
-d, --dryrun dry-run - show only the change set without actually deploying it
-r, --disable-rollback
disable stack rollback on failure
-h, --help show this help message and exit
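Example, assuming a hypothetical ecs-cluster/stack-cluster subcomponent and bake job name:
  ndt deploy-stack ecs-cluster cluster "" ecs-cluster-bake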
usage: ndt deploy-terraform [-d] [-h] component terraform-name
Exports ndt parameters into component/terraform-name/terraform.tfvars as json, runs pre_deploy.sh in the
terraform project and runs terraform plan; terraform apply for the same
If TF_BACKEND_CONF is defined and points to a readable file relative to the ndt root,
that file will get interpolated to $component/terraform-$terraform_name/backend.tf
If pre_deploy.sh and post_deploy.sh exist and are executable in the subcomponent directory,
they will be executed before and after the deployment, respectively.
Similarly, if a readable pre_deploy_source.sh exists in the subcomponent directory,
it will be sourced before the deployment to enable things like activating a python venv.
positional arguments:
component the component directory where the terraform directory is
terraform-name the name of the terraform directory that has the template
For example for lambda/terraform-sender/main.tf
you would give sender
optional arguments:
-d, --dryrun dry-run - do only parameter expansion and template pre-processing
-h, --help show this help message and exit
usage: ndt detach-volume [-h] (-m MOUNT_PATH | -i VOLUME_ID | -d DEVICE) [-x]
Detach a volume identified by its mount path, volume id or device
options:
-h, --help show this help message and exit
-m MOUNT_PATH, --mount-path MOUNT_PATH
Mount point of the volume to be detached
-i VOLUME_ID, --volume-id VOLUME_ID
Volume id to detach
-d DEVICE, --device DEVICE
Device to detach
-x, --delete Delete volume after detaching
usage: ndt ec2-clean-snapshots [-h] [-r REGION] [-d DAYS] [--dry-run]
tags [tags ...]
Clean snapshots that are older than a given number of days (30 by default) and
have one of the specified tag values
positional arguments:
tags The tag values used to select snapshots for deletion
options:
-h, --help show this help message and exit
-r REGION, --region REGION
The region to delete snapshots from. Can also be set
with the env variable AWS_DEFAULT_REGION, or is read
from instance metadata as a last resort
-d DAYS, --days DAYS The number of days that is the minimum age for
snapshots to be deleted
--dry-run Do not delete, but print what would be deleted
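Example, previewing the deletion of snapshots older than 14 days with a hypothetical tag value:
  ndt ec2-clean-snapshots --dry-run -d 14 nightly-backup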
usage: ndt ec2-get-tag [-h] name
Get the value of a tag for an ec2 instance
positional arguments:
name The name of the tag to get
options:
-h, --help show this help message and exit
usage: ndt ec2-get-userdata [-h] file
Get userdata defined for an instance into a file
positional arguments:
file File to write userdata into. '-' for stdout
options:
-h, --help show this help message and exit
usage: ndt ec2-instance-id [-h]
Get the id for this instance
options:
-h, --help show this help message and exit
usage: ndt ec2-region [-h]
Get current default region. Defaults to the region of the instance on ec2 if
not otherwise defined.
options:
-h, --help show this help message and exit
usage: ndt ec2-wait-for-metadata [-h] [--timeout TIMEOUT]
Waits for metadata service to be available. All errors are ignored until time
expires or a socket can be established to the metadata service
options:
-h, --help show this help message and exit
--timeout TIMEOUT, -t TIMEOUT
Maximum time to wait in seconds for the metadata
service to be available
usage: ndt ecr-ensure-repo [-h] name
Ensure that an ECR repository exists and get the uri and login token for it
positional arguments:
name The name of the ecr repository to verify
options:
-h, --help show this help message and exit
usage: ndt ecr-repo-uri [-h] name
Get the repository uri for a named docker subcomponent
positional arguments:
name The name of the ecr repository
options:
-h, --help show this help message and exit
usage: ndt ecs-exec [-h] [-t TASK] [--non-interactive] cluster service command
Execute a command in a running ECS task using ECS Exec
positional arguments:
cluster The cluster to execute the command in
service The service to execute the command in
command The command to execute
options:
-h, --help show this help message and exit
-t TASK, --task TASK The task to execute the command in. If not specified,
a task will be selected at random
--non-interactive Run the command non-interactively. Default is to run
interactively
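Example, opening a shell in a randomly selected task of a hypothetical service:
  ndt ecs-exec my-cluster my-service /bin/sh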
usage: ndt ecs-ls [-h] [cluster] [service]
List ECS clusters or if a cluster is given, list services in that cluster. If
a service is given, list tasks in that service
positional arguments:
cluster The cluster to list services for. If not specified, all clusters
are listed
service The service to list tasks for. If not specified, all services
are listed
options:
-h, --help show this help message and exit
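Example, listing the tasks of a hypothetical service:
  ndt ecs-ls my-cluster my-service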
usage: ndt enable-profile [-h] [-i | -a | -f | -l | -n | -s | -o] profile
Enable a configured profile. Simple IAM user, AzureAD, ADFS and ndt assume-
role profiles are supported.
positional arguments:
profile The profile to enable
options:
-h, --help show this help message and exit
-i, --iam IAM user type profile
-a, --azure Azure login type profile
-f, --adfs ADFS login type profile
-l, --lastpass Lastpass login type profile
-n, --ndt NDT assume role type profile
-s, --azure-subscription
Microsoft Azure subscription
-o, --sso AWS SSO type profile
usage: ndt export-connect-contact-flow [-h] [-c COMPONENT]
[-f CONTACTFLOWNAME]
[-i INSTANCEID | -a INSTANCEALIAS]
[--colorize]
name
Export AWS Connect contact flow from an existing instance
positional arguments:
name The name of the contact flow to export
options:
-h, --help show this help message and exit
-c COMPONENT, --component COMPONENT
the component directory where the connect contact flow
directory is
-f CONTACTFLOWNAME, --contactflowname CONTACTFLOWNAME
the name of the connect subcomponent directory that
has the contact flow template
-i INSTANCEID, --instanceid INSTANCEID
id of the connect instance to export from
-a INSTANCEALIAS, --instancealias INSTANCEALIAS
alias of the connect instance to export from
--colorize, -o Colorize output
usage: ndt get-images [-h] job_name
Gets a list of images given a bake job name
positional arguments:
job_name The job name to look for
options:
-h, --help show this help message and exit
usage: ndt interpolate-file [-h] [-s STACK] [-k] [-n] [-v] [-o OUTPUT]
[-e ENCODING]
file
Replace placeholders in file with parameter values from stack and optionally
from vault
positional arguments:
file File to interpolate
options:
-h, --help show this help message and exit
-s STACK, --stack STACK
Stack name for values. Automatically resolved on ec2
instances
-k, --skip-stack Skip stack parameters in all cases
-n, --use-environ Use environment variables for interpolation
-v, --vault Use vault values as well. Vault is resolved from env
variables or the default is used
-o OUTPUT, --output OUTPUT
Output file
-e ENCODING, --encoding ENCODING
Encoding to use for the file. Defaults to utf-8
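Example, interpolating a template with values from a hypothetical stack into an output file:
  ndt interpolate-file -s my-stack -o /etc/app/app.conf app.conf.template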
usage: ndt json-to-yaml [-h] [--colorize] file
Convert CloudFormation json to an approximation of a nameless CloudFormation
yaml with, for example, scripts externalized
positional arguments:
file File to parse
options:
-h, --help show this help message and exit
--colorize, -c Colorize output
usage: ndt latest-snapshot [-h] tag
Get the latest snapshot with a given tag
positional arguments:
tag The tag to find snapshots with
options:
-h, --help show this help message and exit
usage: ndt list-components [-h] [-j] [-b BRANCH]
Prints the components in a branch, by default the current branch
options:
-h, --help show this help message and exit
-j, --json Print in json format.
-b BRANCH, --branch BRANCH
The branch to get components from. Default is to
process current branch
usage: ndt list-connect-contact-flows [-h] [-c COMPONENT] [-f CONTACTFLOWNAME]
[-i INSTANCEID | -a INSTANCEALIAS] [-t]
[-m MATCH]
List existing AWS Connect contact flows in an instance
options:
-h, --help show this help message and exit
-c COMPONENT, --component COMPONENT
the component directory where the connect contact flow
directory is
-f CONTACTFLOWNAME, --contactflowname CONTACTFLOWNAME
the name of the connect subcomponent directory that
has the contact flow template
-i INSTANCEID, --instanceid INSTANCEID
id of the connect instance to export from
-a INSTANCEALIAS, --instancealias INSTANCEALIAS
alias of the connect instance to export from
-t, --trash Include trashed flows
-m MATCH, --match MATCH
Pattern to match printed flows
usage: ndt list-file-to-json [-h] arrayname file
Convert a file with an entry on each line to a json document with a single
element (name given as an argument) containing the file rows as a list.
positional arguments:
arrayname The name given to the array in the json object
file The file to parse
options:
-h, --help show this help message and exit
usage: ndt list-jobs [-h] [-e] [-j] [-b BRANCH] [-c COMPONENT]
Prints a line for every runnable job in this git repository, in all branches,
and optionally exports the properties for each under '$root/job-properties/'
options:
-h, --help show this help message and exit
-e, --export-job-properties
Set if you want the properties of all jobs into files
under job-properties/
-j, --json Print in json format. Optionally exported parameters
will be in the json document
-b BRANCH, --branch BRANCH
The branch to process. Default is to process all
branches
-c COMPONENT, --component COMPONENT
Component to process. Default is to process all
components
usage: ndt load-parameters [-h] [--branch BRANCH] [--resolve-images]
[--stack STACK | --serverless SERVERLESS | --docker DOCKER | --image [IMAGE]
| --cdk CDK | --terraform TERRAFORM | --azure AZURE
| --connect CONNECT]
[--json | --yaml | --properties | --terraform-variables | --export-statements | --azure-parameters]
[-f FILTER]
[component]
Load parameters from infra*.properties files in the order:
branch.properties
[branch].properties
infra.properties,
infra-[branch].properties,
[component]/infra.properties,
[component]/infra-[branch].properties,
[component]/[subcomponent-type]-[subcomponent]/infra.properties,
[component]/[subcomponent-type]-[subcomponent]/infra-[branch].properties
The last parameter defined overwrites ones defined before it in the files. Supports parameter expansion
and bash-like transformations. Namely:
${PARAM##prefix} # strip prefix greedy
${PARAM%%suffix} # strip suffix greedy
${PARAM#prefix} # strip prefix not greedy
${PARAM%suffix} # strip suffix not greedy
${PARAM:-default} # default if empty
${PARAM:4:2} # start:len
${PARAM/substr/replace}
${PARAM^} # upper initial
${PARAM,} # lower initial
${PARAM^^} # upper
${PARAM,,} # lower
Comment lines start with '#'
Lines can be continued by adding '\' at the end
See https://www.tldp.org/LDP/Bash-Beginners-Guide/html/sect_10_03.html
(arrays not supported)
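Example of a hypothetical infra.properties using the expansions above (GIT_BRANCH assumed to be defined):
  paramEnvId=${GIT_BRANCH##*/}
  paramBucket=my-app-${paramEnvId:-dev}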
positional arguments:
component Component to descend into
options:
-h, --help show this help message and exit
--branch BRANCH, -b BRANCH
Branch to get active parameters for
--resolve-images, -r Also resolve subcomponent AMI IDs and docker repo urls
--stack STACK, -s STACK
CloudFormation subcomponent to descend into
--serverless SERVERLESS, -l SERVERLESS
Serverless subcomponent to descend into
--docker DOCKER, -d DOCKER
Docker image subcomponent to descend into
--image [IMAGE], -i [IMAGE]
AMI image subcomponent to descend into
--cdk CDK, -c CDK CDK subcomponent to descend into
--terraform TERRAFORM, -t TERRAFORM
Terraform subcomponent to descend into
--azure AZURE, -a AZURE
Azure subcomponent to descend into
--connect CONNECT, -n CONNECT
Connect subcomponent to descend into
--json, -j JSON format output (default)
--yaml, -y YAML format output
--properties, -p properties file format output
--terraform-variables, -v
terraform syntax variables
--export-statements, -e
Output as eval-able export statements
--azure-parameters, -z
Azure parameter file syntax variables
-f FILTER, --filter FILTER
Comma separated list of parameter names to output
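Example, exporting the parameters of a hypothetical stack subcomponent into the current shell:
  eval $(ndt load-parameters ecs-cluster -s cluster -e)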
usage: ndt logs [-h] [-f FILTER] [-s START [START ...]] [-e END [END ...]] [-o] [-t] log_group_pattern
Get logs from multiple CloudWatch log groups and possibly filter them.
positional arguments:
log_group_pattern Regular expression to filter log groups with
options:
-h, --help show this help message and exit
-f FILTER, --filter FILTER
CloudWatch filter pattern
-s START [START ...], --start START [START ...]
Start time (x m|h|d|w ago | now | <seconds since
epoch>)
-e END [END ...], --end END [END ...]
End time (x m|h|d|w ago | now | <seconds since epoch>)
-o, --order Best effort ordering of log entries
-t, --shortformat Print timestamps and log groups in shorter format
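Example, printing ordered entries matching ERROR from the last 12 hours for a hypothetical log group pattern:
  ndt logs 'my-app-.*' -f ERROR -s 12 h ago -o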
usage: ndt mfa-add-token [-h] [-i | -b BITWARDEN_ENTRY] [-a TOKEN_ARN]
[-s TOKEN_SECRET] [-f]
token_name
Adds an MFA token to be used with role assumption. Tokens will be saved in a
.ndt subdirectory in the user's home directory. If a token with the same name
already exists, it will not be overwritten.
positional arguments:
token_name Name for the token. Use this to refer to the token
later with the assume-role command.
options:
-h, --help show this help message and exit
-i, --interactive Ask for token details interactively.
-b BITWARDEN_ENTRY, --bitwarden-entry BITWARDEN_ENTRY
Use a bitwarden entry as the source of the TOTP secret
-a TOKEN_ARN, --token_arn TOKEN_ARN
ARN identifier for the token.
-s TOKEN_SECRET, --token_secret TOKEN_SECRET
Token secret.
-f, --force Force an overwrite if the token already exists.
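Example, adding a token interactively under a hypothetical name for later use with assume-role:
  ndt mfa-add-token -i my-token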
usage: ndt mfa-backup [-h] [-d FILE] backup_secret
Encrypt or decrypt a backup JSON structure of tokens. To output an encrypted
backup, provide an encryption secret. To decrypt an existing backup, use
--decrypt <file>.
positional arguments:
backup_secret Secret to use for encrypting or decrypting the backup.
options:
-h, --help show this help message and exit
-d FILE, --decrypt FILE
Outputs a decrypted token backup read from given file.
usage: ndt mfa-code [-h] token_name
Generates a TOTP code using an MFA token.
positional arguments:
token_name Name of the token to use.
options:
-h, --help show this help message and exit
usage: ndt mfa-delete-token [-h] token_name
Deletes an MFA token file from the .ndt subdirectory in the user's home
directory
positional arguments:
token_name Name of the token to delete.
options:
-h, --help show this help message and exit
usage: ndt mfa-qrcode [-h] token_name
Generates a QR code to import a token to other devices.
positional arguments:
token_name Name of the token to use.
options:
-h, --help show this help message and exit
usage: ndt print-aws-profiles [-h] [prefix]
Prints profile names from the credentials file (~/.aws/credentials), and the
config file (~/.aws/config) for autocomplete tools.
positional arguments:
prefix Prefix of profiles to print
options:
-h, --help show this help message and exit
usage: ndt print-create-instructions [-h] component stack-name
Prints out the instructions to create and deploy the resources in a stack
positional arguments:
component the component directory where the stack template is
stack-name the name of the stack directory inside the component directory
For example for ecs-cluster/stack-cluster/template.yaml
you would give cluster
optional arguments:
-h, --help show this help message and exit
usage: ndt profile-expiry-to-env [-h] profile
Prints profile expiry from credentials file (~/.aws/credentials) as eval-able
environment variables.
positional arguments:
profile The profile to read expiry info from
options:
-h, --help show this help message and exit
usage: ndt profile-to-env [-h] [-t] [-r ROLE_ARN] profile
Prints profile parameters from credentials file (~/.aws/credentials) as eval-
able environment variables
positional arguments:
profile The profile to read profile info from
options:
-h, --help show this help message and exit
-t, --target-role Output also azure_default_role_arn
-r ROLE_ARN, --role-arn ROLE_ARN
Output also the role given here as the target role for
the profile
usage: ndt promote-image [-h] image_id target_job
Promotes an image for use in another branch
positional arguments:
image_id The image to promote
target_job The job name to promote the image to
options:
-h, --help show this help message and exit
usage: ndt pytail [-h] file
Read and print a file and keep following the end for new data
positional arguments:
file File to follow
options:
-h, --help show this help message and exit
usage: ndt read-profile-expiry [-h] profile
Read expiry field from credentials file, which is there if the login happened
with aws-azure-login or another tool that implements the same logic (e.g.
https://github.com/NitorCreations/adfs-aws-login).
positional arguments:
profile The profile to read expiry info from
options:
-h, --help show this help message and exit
usage: ndt region [-h]
Get current default region. Defaults to the region of the instance on ec2 if
not otherwise defined.
options:
-h, --help show this help message and exit
usage: ndt register-private-dns [-h] [-t TTL] [-p PRIVATE_IP]
dns_name hosted_zone
Register the local private IP in a route53 hosted zone, usually for internal use.
positional arguments:
dns_name The name to update in route 53
hosted_zone The name of the hosted zone to update
options:
-h, --help show this help message and exit
-t TTL, --ttl TTL Time to live for the record. 60 by default
-p PRIVATE_IP, --private-ip PRIVATE_IP
Private IP address to register
usage: ndt session-to-env [-h] [-t TOKEN_NAME] [-d DURATION_MINUTES]
Export current session as environment variables
options:
-h, --help show this help message and exit
-t TOKEN_NAME, --token-name TOKEN_NAME
Name of the MFA token to use.
-d DURATION_MINUTES, --duration-minutes DURATION_MINUTES
Duration in minutes for the session token. Defaults
to 60
usage: ndt setup-cli [-h] [-n NAME] [-k KEY_ID] [-s SECRET] [-r REGION]
Set up the command line environment to define an aws cli profile with the
given name and credentials. If an identically named profile exists, it will
not be overwritten.
options:
-h, --help show this help message and exit
-n NAME, --name NAME Name for the profile to create
-k KEY_ID, --key-id KEY_ID
Key id for the profile
-s SECRET, --secret SECRET
Secret to set for the profile
-r REGION, --region REGION
Default region for the profile
usage: ndt share-to-another-region [-h]
ami_id to_region ami_name account_id
[account_id ...]
Shares an image to another region, potentially for another account
positional arguments:
ami_id The ami to share
to_region The region to share to
ami_name The name for the ami
account_id The account ids to share ami to
options:
-h, --help show this help message and exit
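Example with hypothetical ids, sharing an ami to eu-west-1 for one account:
  ndt share-to-another-region ami-0123456789abcdef0 eu-west-1 my-ami 123456789012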
usage: ndt show-azure-params [-h] [-p PARAMETER] component azure
Show available parameters for an azure subcomponent
positional arguments:
component The component containing the azure subcomponent
azure The name of the azure subcomponent
options:
-h, --help show this help message and exit
-p PARAMETER, --parameter PARAMETER
Name of the parameter, if only one parameter is required
usage: ndt show-stack-params-and-outputs [-h] [-r REGION] [-p PARAMETER]
stack_name
Show stack parameters and outputs as a single json document
positional arguments:
stack_name The stack name to show
options:
-h, --help show this help message and exit
-r REGION, --region REGION
Region for the stack to show
-p PARAMETER, --parameter PARAMETER
Name of the parameter, if only one parameter is required
usage: ndt show-terraform-params [-h] [-j JMESPATH | -p PARAMETER]
component terraform
Show available parameters for a terraform subcomponent
positional arguments:
component The component containing the terraform subcomponent
terraform The name of the terraform subcomponent
options:
-h, --help show this help message and exit
-j JMESPATH, --jmespath JMESPATH
Show just a matching jmespath value
-p PARAMETER, --parameter PARAMETER
Name of the parameter, if only one parameter is required
usage: ndt snapshot-from-volume [-h] [-w] [-c [COPYTAGS ...]] [-t [TAGS ...]]
[-i]
tag_key tag_value mount_path
Create a snapshot of a volume identified by its mount path
positional arguments:
tag_key Key of the tag to find volume with
tag_value Value of the tag to find volume with
mount_path The mount path of the volume to snapshot
options:
-h, --help show this help message and exit
-w, --wait Wait for the snapshot to finish before returning
-c [COPYTAGS ...], --copytags [COPYTAGS ...]
Tag to copy to the snapshot from instance. Multiple
values allowed.
-t [TAGS ...], --tags [TAGS ...]
Tag to add to the snapshot in the format name=value.
Multiple values allowed.
-i, --ignore-missing-copytags
If set, missing copytags are ignored.
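Example, snapshotting the volume mounted at /data and tagged backup=daily, waiting for completion (hypothetical tag and path):
  ndt snapshot-from-volume -w backup daily /data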
usage: ndt terraform-init-state [-h] component terraform-name
Make sure terraform state is initialized either for backend or locally
positional arguments:
component the component directory where the terraform directory is
terraform-name the name of the terraform directory that has the template
For example for lambda/terraform-sender/main.tf
you would give sender
optional arguments:
-h, --help show this help message and exit
usage: ndt undeploy-azure [-d] [-v] [-h] component azure-name
Exports ndt parameters into component/azure-name/variables.json
and deletes the deployment
positional arguments:
component the component directory where the azure directory is
azure-name the name of the azure directory that has the template
For example for lambda/azure-blobstore/template.yaml
you would give blobstore
optional arguments:
-d, --dryrun dry-run - do only parameter expansion and template pre-processing and azure cli what-if operation
-v, --verbose verbose - verbose output from azure cli
-h, --help show this help message and exit
usage: ndt undeploy-cdk [-h] component cdk-name
Exports ndt parameters into component/cdk-name/variables.yml
and runs cdk destroy for the same
positional arguments:
component the component directory where the cdk directory is
cdk-name the name of the cdk directory that has the template
For example for lambda/cdk-sender/template.yaml
you would give sender
optional arguments:
-h, --help show this help message and exit
usage: ndt undeploy-serverless [-h] component serverless-name
Exports ndt parameters into component/serverless-name/variables.yml
and runs sls remove -s $paramEnvId for the same
positional arguments:
component the component directory where the serverless directory is
serverless-name the name of the serverless directory that has the template
For example for lambda/serverless-sender/template.yaml
you would give sender
optional arguments:
-h, --help show this help message and exit
usage: ndt undeploy-stack [-h] [-f] <component> <stack-name>
Undeploys (deletes) the given stack.
Found s3 buckets are emptied and deleted only if the -f argument is given.
positional arguments:
component the component directory where the stack template is
stack-name the name of the stack directory inside the component directory
For example for ecs-cluster/stack-cluster/template.yaml
you would give cluster
optional arguments:
-f, --force Also empty and delete any s3 buckets found in the stack
-h, --help show this help message and exit
usage: ndt undeploy-terraform [-h] component terraform-name
Exports ndt parameters into component/terraform-name/terraform.tfvars as json
and runs terraform destroy for the same
positional arguments:
component the component directory where the terraform directory is
terraform-name the name of the terraform directory that has the template
For example for lambda/terraform-sender/main.tf
you would give sender
optional arguments:
-h, --help show this help message and exit
usage: ndt update-sso-profile [-h]
Update the current SSO profile's session to ~/.aws/credentials. Running this
enables SSO support for the Serverless Framework.
options:
-h, --help show this help message and exit
usage: ndt upsert-cloudfront-records [-h]
(-i DISTRIBUTION_ID | -c DISTRIBUTION_COMMENT)
[-w]
Upsert Route53 records for all aliases of a CloudFront distribution
options:
-h, --help show this help message and exit
-i DISTRIBUTION_ID, --distribution_id DISTRIBUTION_ID
Id for the distribution to upsert
-c DISTRIBUTION_COMMENT, --distribution_comment DISTRIBUTION_COMMENT
Comment for the distribution to upsert
-w, --wait Wait for request to sync
usage: ndt upsert-codebuild-projects [-h] [-d]
Creates or updates codebuild projects to deploy or bake ndt subcomponents in the current branch.
Parameters are read from properties files as described in 'ndt load-parameters -h'.
To check all job parameters you can run 'ndt list-jobs -e -j -b [current-branch]'
The only mandatory parameter is CODEBUILD_SERVICE_ROLE,
which defines the role that the codebuild project assumes for building.
Other parameters that affect jobs are:
* BUILD_JOB_NAME - name for the codebuild project
* NDT_VERSION - version to use to run bakes and deployments.
- Defaults to current version.
- You may also want to use 'latest' to always run the latest released ndt version (only recommended for dev/testing workloads).
* BUILD_SPEC - file or yaml snippet to use as the build definition.
- See https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html
- subcomponent variables and special variables ${command}, ${component} and ${subcomponent} are available and will be substituted accordingly
* CODEBUILD_SOURCE_TYPE - one of BITBUCKET, CODECOMMIT, CODEPIPELINE, GITHUB, GITHUB_ENTERPRISE, NO_SOURCE, S3
* CODEBUILD_SOURCE_LOCATION - the location of the source code
- if either of the above is missing, then the source part of the build will be omitted
* CODEBUILD_EVENT_FILTER - the type of event to trigger the build.
- By default PULL_REQUEST_MERGED
- Other possible values: PUSH, PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED and PULL_REQUEST_REOPENED
* CODEBUILD_TIMEOUT - timeout in minutes for the codebuild execution. 60 by default
* BUILD_ENVIRONMENT_COMPUTE - the compute environment for the build.
- BUILD_GENERAL1_SMALL by default
- Other possible values BUILD_GENERAL1_MEDIUM, BUILD_GENERAL1_LARGE, BUILD_GENERAL1_2XLARGE
* NEEDS_DOCKER - if 'y' (on by default for docker bakes and missing otherwise), a docker server is started inside the container
- Needed for bakes and for example serverless python dockerized dependencies
* SKIP_BUILD_JOB - skip creating build jobs where this parameter is 'y'
* SKIP_IMAGE_JOB, SKIP_DOCKER_JOB, SKIP_SERVERLESS_JOB, SKIP_CDK_JOB, SKIP_TERRAFORM_JOB - skip creating jobs where these parameters are 'y' and match the subcomponent type
options:
-h, --help show this help message and exit
-d, --dry-run Do not actually create or update projects, just print
configuration
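Example of a minimal hypothetical infra.properties for codebuild jobs, followed by a dry run:
  CODEBUILD_SERVICE_ROLE=arn:aws:iam::123456789012:role/codebuild-role
  CODEBUILD_SOURCE_TYPE=GITHUB
  CODEBUILD_SOURCE_LOCATION=https://github.com/example/repo.git
  ndt upsert-codebuild-projects -d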
usage: ndt upsert-dns-record [-h] [-t TYPE] [-l TTL] [-n] name value
Update a dns record in Route53.
positional arguments:
name The name of the record to create
value The value to put into the record
options:
-h, --help show this help message and exit
-t TYPE, --type TYPE The type of record to create. Defaults to 'A'
-l TTL, --ttl TTL Time To Live for the record. Defaults to 300
-n, --no-wait Do not wait for the record to be synced within Route53
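Example, upserting a hypothetical CNAME record with a short ttl:
  ndt upsert-dns-record -t CNAME -l 60 www.example.com example.org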
usage: ndt volume-from-snapshot [-h] [-n] [-c [COPYTAGS ...]] [-t [TAGS ...]]
[-i] [-u] [--gp2 | --gp3]
tag_key tag_value mount_path [size_gb]
Create a volume from an existing snapshot and mount it on the given path. The
snapshot is identified by a tag key and value. If no tag is found, an empty
volume is created, attached, formatted and mounted.
positional arguments:
tag_key Key of the tag to find volume with
tag_value Value of the tag to find volume with
mount_path Where to mount the volume
size_gb Size in GB for the volume. If different from snapshot
size, volume and filesystem are resized
options:
-h, --help show this help message and exit
-n, --no_delete_on_termination
Whether to skip deleting the volume on termination,
defaults to false
-c [COPYTAGS ...], --copytags [COPYTAGS ...]
Tag to copy to the volume from instance. Multiple
values allowed.
-t [TAGS ...], --tags [TAGS ...]
Tag to add to the volume in the format name=value.
Multiple values allowed.
-i, --ignore-missing-copytags
If set, missing copytags are ignored.
-u, --unencrypted If set, create unencrypted volume
--gp2 GP2 volume type (default)
--gp3 GP3 volume type
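Example, restoring the latest snapshot tagged backup=daily onto /data as a 100 GB gp3 volume (hypothetical tag and path):
  ndt volume-from-snapshot --gp3 backup daily /data 100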
usage: ndt yaml-to-json [-h] [--colorize] [--merge [MERGE ...]] [--small] file
Convert nameless CloudFormation yaml to CloudFormation json with some
preprocessing
positional arguments:
file File to parse
options:
-h, --help show this help message and exit
--colorize, -c Colorize output
--merge [MERGE ...], -m [MERGE ...]
Merge other yaml files to the main file
--small, -s Compact representation of json
usage: ndt yaml-to-yaml [-h] [--colorize] file
Do ndt preprocessing for a yaml file
positional arguments:
file File to parse
options:
-h, --help show this help message and exit
--colorize, -c Colorize output
usage: associate-eip [-h] [-i IP] [-a ALLOCATIONID] [-e EIPPARAM]
[-p ALLOCATIONIDPARAM]
Associate an Elastic IP for the instance that this script runs on
options:
-h, --help show this help message and exit
-i IP, --ip IP Elastic IP to allocate - default is to get paramEip
from the stack that created this instance
-a ALLOCATIONID, --allocationid ALLOCATIONID
Elastic IP allocation id to allocate - default is to
get paramEipAllocationId from the stack that created
this instance
-e EIPPARAM, --eipparam EIPPARAM
Parameter to look up for Elastic IP in the stack -
default is paramEip
-p ALLOCATIONIDPARAM, --allocationidparam ALLOCATIONIDPARAM
Parameter to look up for Elastic IP Allocation ID in
the stack - default is paramEipAllocationId
usage: cf-logs-to-cloudwatch [-h] [-g GROUP] [-s STREAM] file
Read a file and send rows to cloudwatch, following the end of the file for new
data. The log group will be the name of the stack that created the instance if
not given as an argument. The log stream will be the instance id and filename
if not given as an argument. Group and stream are created if they do not exist.
positional arguments:
file File to follow
options:
-h, --help show this help message and exit
-g GROUP, --group GROUP
Log group to log to. Defaults to the stack name that
created the instance if not given and instance is
created with a CloudFormation stack
-s STREAM, --stream STREAM
The log stream name to log to. The instance id and
filename if not given
usage: ec2-associate-eip [-h] [-i IP] [-a ALLOCATIONID] [-e EIPPARAM]
[-p ALLOCATIONIDPARAM]
Associate an Elastic IP for the instance that this script runs on
options:
-h, --help show this help message and exit
-i IP, --ip IP Elastic IP to allocate - default is to get paramEip
from the stack that created this instance
-a ALLOCATIONID, --allocationid ALLOCATIONID
Elastic IP allocation id to allocate - default is to
get paramEipAllocationId from the stack that created
this instance
-e EIPPARAM, --eipparam EIPPARAM
Parameter to look up for Elastic IP in the stack -
default is paramEip
-p ALLOCATIONIDPARAM, --allocationidparam ALLOCATIONIDPARAM
Parameter to look up for Elastic IP Allocation ID in
the stack - default is paramEipAllocationId
usage: logs-to-cloudwatch [-h] [-g GROUP] [-s STREAM] file
Read a file and send rows to cloudwatch, following the end of the file for new
data. The log group will be the name of the stack that created the instance if
not given as an argument. The log stream will be the instance id and filename
if not given as an argument. Group and stream are created if they do not exist.
positional arguments:
file File to follow
options:
-h, --help show this help message and exit
-g GROUP, --group GROUP
Log group to log to. Defaults to the stack name that
created the instance if not given and instance is
created with a CloudFormation stack
-s STREAM, --stream STREAM
The log stream name to log to. The instance id and
filename if not given
usage: n-include [-h] file
Find a file from the first of the defined include paths
positional arguments:
file The file to find
options:
-h, --help show this help message and exit
usage: n-include-all [-h] pattern
Find all files matching a pattern from the defined include paths
positional arguments:
pattern The file pattern to find
options:
-h, --help show this help message and exit
usage: signal-cf-status [-h] [-r RESOURCE] status
Signal CloudFormation status to a logical resource in CloudFormation that is
either given on the command line or resolved from CloudFormation tags
positional arguments:
status Status to indicate: SUCCESS | FAILURE
options:
-h, --help show this help message and exit
-r RESOURCE, --resource RESOURCE
Logical resource name to signal. Looked up from
cloudformation tags by default
usage: create-shell-archive.sh [-h] [<file> ...]
Creates a self-extracting bash archive, suitable for storing in e.g. Lastpass SecureNotes
positional arguments:
file one or more files to package into the archive
optional arguments:
-h, --help show this help message and exit
usage: encrypt-and-mount.sh [-h] blk-device mount-path
Mounts a local block device as an encrypted volume. Handy for things like local database installs.
positional arguments:
blk-device the block device you want to encrypt and mount
mount-path the mount point for the encrypted volume
optional arguments:
-h, --help show this help message and exit
usage: ensure-letsencrypt-certs.sh [-h] domain-name [domain-name ...]
Fetches a certificate with fetch-secrets.sh, and exits cleanly if certificate is found and valid.
Otherwise gets a new certificate from letsencrypt via DNS verification using Route53.
Requires that fetch-secrets.sh and Route53 are set up correctly.
positional arguments:
domain-name The domain(s) you want to check certificates for
optional arguments:
-h, --help show this help message and exit
usage: lastpass-fetch-notes.sh [-h] mode file [file ...] [--optional file ...]
Fetches secure notes from lastpass that match the basename of each listed file.
Files specified after --optional won't fail the script if they do not exist.
positional arguments:
mode the file mode for the downloaded files
file the file(s) to download. The source will be the note that matches the basename of the file
optional arguments:
--optional marks that the following files will not fail and exit the script if they do not exist
-h, --help show this help message and exit
usage: lpssh [-h] [-k key-name] [email protected]
Fetches key mappings from lastpass, downloads mapped keys into a local ssh-agent and starts
an ssh session using those credentials.
positional arguments:
[email protected] The user and host to match in "my-ssh-mappings" secure note
and to log into once keys are set up.
optional arguments:
-k key-name key name in lastpass to use if you don't want to use a mapping
-h, --help show this help message and exit
usage: mount-and-format.sh [-h] blk-device mount-path
Mounts and formats a local block device. Handy for things like local database installs.
positional arguments:
blk-device the block device you want to mount and format
mount-path the mount point for the volume
optional arguments:
-h, --help show this help message and exit
usage: setup-fetch-secrets.sh [-h] <lpass|s3|vault>
Sets up a global fetch-secrets.sh that fetches secrets from either LastPass, S3 or nitor-vault.
Must be run as root.
positional arguments:
lpass|s3|vault the selected secrets backend
optional arguments:
-h, --help show this help message and exit
usage: ssh-hostkeys-collect.sh [-h] hostname
Creates a <hostname>-ssh-hostkeys.sh archive in the current directory containing
ssh host keys to preserve the identity of a server over image upgrades.
positional arguments:
hostname the name of the host used to store the keys. Typically the hostname is what
instance userdata scripts will use to look for the keys
optional arguments:
-h, --help show this help message and exit