diff --git a/CHANGELOG.md b/CHANGELOG.md index b92fc33..f457e2a 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -4,6 +4,22 @@ All notable changes to this project will be documented in this file. The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). +## [2.2.0] - 2020-08-27 +### Added +- (installed from source) Instructions for using ECS/Fargate rather than Lambda for Automation. See GitHub https://github.com/awslabs/aws-ops-automator/tree/master/source/ecs/README.md +- S3 access logging to aws-opsautomator-s3-access-logs-\-\ + +### Changed +- README.md now contains instructions on upgrading Ops Automator 2.x to the latest release. +- ECS/Fargate option updated to use Python3 +- ECS/Fargate option now uses OpsAutomatorLambdaRole (previously had no role assigned) +- Updated all Lambda runtimes to Python 3.8 +- Encryption is now enabled by default in Mappings->Settings->Resources->EncryptResourceData. All SNS topics, SQS queue, and DynamoDB tables are encrypted by this setting. +- S3 buckets are now encrypted using SSE AES256 + +### Known Issues +- ECS can be used for the Resource Selection process, but may fail when used for the Execution of actions. Customers should test use of ECS for Execution and use Lambda if unsuccessful. + ## [2.1.0] - 2019-10-06 ### Added - upgraded the solution to Python 3.7 \ No newline at end of file diff --git a/LICENSE.txt b/LICENSE.txt index e65e2b4..5470187 100755 --- a/LICENSE.txt +++ b/LICENSE.txt @@ -1,174 +1,174 @@ - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. 
For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. 
You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. 
In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. +Apache License +Version 2.0, January 2004 +http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. 
+ + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. \ No newline at end of file diff --git a/NOTICE.txt b/NOTICE.txt index 49c4f7b..965e168 100755 --- a/NOTICE.txt +++ b/NOTICE.txt @@ -1,10 +1,10 @@ AWS Ops Automator -Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. -Licensed under the Apache License Version 2.0 (the "License"). You may not use this file except -in compliance with the License. A copy of the License is located at http://www.apache.org/licenses/ -or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied. See the License for the -specific language governing permissions and limitations under the License. +Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. +Licensed under the Apache License Version 2.0 (the "License"). You may not use this file except +in compliance with the License. A copy of the License is located at http://www.apache.org/licenses/ +or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied. See the License for the +specific language governing permissions and limitations under the License. ********************** THIRD PARTY COMPONENTS @@ -12,4 +12,4 @@ THIRD PARTY COMPONENTS This software includes third party software subject to the following copyrights: AWS SDK under the Apache License Version 2.0 -pytz under the Massachusetts Institute of Technology (MIT) license \ No newline at end of file +pytz under the Massachusetts Institute of Technology (MIT) license diff --git a/README.md b/README.md index 4a046bc..8194bf4 100755 --- a/README.md +++ b/README.md @@ -1,10 +1,9 @@ -# AWS Ops Automator - -## Description +AWS Ops Automator +================= **Ops Automator is a developer framework** for running actions to manage AWS environments with explicit support for multiple accounts and regions. -Ops Automator's primary function is to run tasks. A task is an action with a set of parameters that runs at scheduled times or events and operated on a select set of AWS resources. 
Events are triggered by changes in your environments and resources are selected through the resource discovery and tagging mechanisms built into Ops Automator. +Ops Automator's primary function is to run tasks. A task is an action with a set of parameters that runs at scheduled times or events and operates on a select set of AWS resources. Events are triggered by changes in your environments and resources are selected through the resource discovery and tagging mechanisms built into Ops Automator. Ops Automator comes with a number of actions. These are ready to use in your AWS environment and can be used as an example/starting point for developing your own actions. @@ -12,26 +11,422 @@ Examples of actions included are creating backups, setting capacity, cleaning up Ops Automator helps you to develop your own operations automation tasks in a consistent way with the framework handling all the heavy lifting. -### The Ops Automator framework handles the following functionality: -* Operations across multiple accounts and regions -* Task audit trails -* Logging -* Resource selection -* Scaling -* AWS API retries -* Completion handling for long running tasks -* Concurrency handling via queue throttling +The Ops Automator framework handles the following functionality: +---------------------------------------------------------------- + +- Operations across multiple accounts and regions +- Task audit trails +- Logging +- Resource selection +- Scaling +- AWS API retries +- Completion handling for long running tasks +- Concurrency handling via queue throttling Ops Automator lets you focus on implementing the actual logic of the action. Actions are developed in Python and can be added easily to the Ops Automator solution. Ops Automator has the ability to generate CloudFormation scripts for configuring tasks, based on metadata of the action that are part of the deployment. Development of actions is described in the Ops Automator Action Developers guide. +Documentation +------------- + +[Ops Automator full documentation](https://docs.aws.amazon.com/solutions/latest/ops-automator/welcome.html) is available on the AWS web site. + +Platform Support +---------------- + +Ops Automator v2.2.0 and later supports AWS Lambda, AWS ECS, and AWS Fargate as the execution platform. Choose ECSFargate = Yes in the CloudFormation template to use ECS or Fargate, or leave it set to No to use Lambda. Note that with ECS/Fargate you may choose to use Lambda or containers at the Task level. To implement ECS/Fargate, see the instructions later in this README. + +Building from GitHub +-------------------- + +### Overview of the Process + +Building from GitHub source will allow you to modify the solution, such as adding custom actions or upgrading to a new release. The process consists of downloading the source from GitHub, creating buckets to be used for deployment, building the solution, and uploading the artifacts needed for deployment. + +### You will need: + +- a Linux client with the AWS CLI installed and Python 3.6+ +- source code downloaded from GitHub +- two S3 buckets (minimum): 1 global and 1 for each region where you will + deploy + +### Download from GitHub + +Clone or download the repository to a local directory on your Linux client. Note: if you intend to modify Ops Automator, you may wish to create your own fork of the GitHub repo and work from that. This allows you to check in any changes you make to your private copy of the solution.
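If you do fork the repository, a minimal sketch of the clone-and-sync workflow (the fork URL under "your-org" is a hypothetical placeholder):

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Clone your fork (hypothetical URL) and track the upstream awslabs repo
git clone https://github.com/your-org/aws-ops-automator.git
cd aws-ops-automator
git remote add upstream https://github.com/awslabs/aws-ops-automator.git

# Before starting new work, pull the latest upstream changes into your copy
git fetch upstream
git merge upstream/master
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~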
+ +**Git Clone example:** + +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +git clone https://github.com/awslabs/aws-ops-automator.git +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +**Download Zip example:** + +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +wget https://github.com/awslabs/aws-ops-automator/archive/master.zip +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +### Customize to your needs + +Some customers have implementations of older versions of Ops Automator that include deprecated or custom actions. In order to upgrade to the latest release you will need to bring these actions forward to the latest build. See details later in this file. + +[See Ops Automator documentation for more details.](https://docs.aws.amazon.com/solutions/latest/ops-automator/welcome.html) + +### Build for Distribution + +AWS Solutions use two types of buckets: a bucket for global access to templates, which is accessed via HTTP, and regional buckets for access to assets within the region, such as Lambda code. You will need: + +- One global bucket that is accessed via the HTTP endpoint. AWS CloudFormation + templates are stored here. Ex. "mybucket" +- One regional bucket for each region where you plan to deploy using the name + of the global bucket as the root, and suffixed with the region name. Ex. + "mybucket-us-east-1" +- Your buckets should be encrypted and disallow public access + +From the *deployment* folder in your cloned repo, run build-s3-dist.sh: + +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +chmod +x build-s3-dist.sh +build-s3-dist.sh {bucket} ops-automator {version} +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +**{bucket}**: name of the "global bucket" - *mybucket* in the example above + +**ops-automator**: name of the solution. This is used to form the first-level prefix in the regional S3 bucket. + +**version**: Optionally, you can override the version (from version.txt). You will want to do this when doing incremental updates within the same version, as this causes CloudFormation to update the infrastructure, particularly Lambdas when the source code has changed. We recommend using a build suffix to the semver version. Ex. for version 2.2.0 suffix it with ".001" and increment for each subsequent build: 2.2.0.001, 2.2.0.002, and so on. This value is used as the second part of the prefix for artifacts in the S3 buckets. The default version is the value in **version.txt.** + +Automatically upload files using the **deployment/upload-s3-dist.sh** script. You must have configured the AWS CLI and have access to the S3 buckets. The upload script automatically receives the values you provided to the build script. You may run **upload-s3-dist.sh** for each region where you plan to deploy the solution. + +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +chmod +x upload-s3-dist.sh +upload-s3-dist.sh {region} +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Upgrading from a 2.0.0 Release +------------------------------ + +Version 2.1.0 and later include 7 supported actions: DynamoDbSetCapacity, Ec2CopySnapshot, Ec2CreateSnapshot, Ec2DeleteSnapshot, Ec2ReplaceInstance, Ec2ResizeInstance, and Ec2TagCpuInstance. + +Many customers have older versions of Ops Automator that include custom actions.
It is possible to add these actions to Ops Automator 2.1 and later. You will need a copy of the source for your current implementation. You can find a zip file containing your current deployment as follows: + +1. Open CloudFormation and locate your Ops Automator stack +2. Open the stack and view the template (**Template** tab) +3. Find the Resource definition for OpsAutomatorLambdaFunctionStandard. Note + the values for S3Bucket and S3Key. +4. Interpret these values to get the bucket name and prefix. +5. Derive the URL: https://\-\/\ +6. Get the file +7. Extract the file to a convenient location + +**Example** + +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +OpsAutomatorLambdaFunctionStandard": { + "Type": "AWS::Lambda::Function", + "Properties": { + "Code": { + "S3Bucket": { + "Fn::Join": [ + "-", + [ + "ops-automator-deploy", + { + "Ref": "AWS::Region" + } + ] + ] + }, + "S3Key": "ops-automator/latest/ops-automator-2.2.0.61.zip" + }, + "FunctionName": { + "Fn::Join": [ + "-", + [ + { + "Ref": "AWS::StackName" + }, + { + "Fn::FindInMap": [ + "Settings", + "Names", + "OpsAutomatorLambdaFunction" + ] + }, + "Standard" + ] + ] + }, +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +**S3Bucket:** ops-automator-deploy + +**S3Key:** ops-automator/latest/ops-automator-2.2.0.61.zip + +**URL for us-east-1:** +https://ops-automator-deploy-us-east-1.s3.amazonaws.com/ops-automator/latest/ops-automator-2.2.0.61.zip + +**Get the source:** + +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +wget https://ops-automator-deploy-us-east-1.s3.amazonaws.com/ops-automator/latest/ops-automator-2.2.0.61.zip +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +### Create Build Area + +Follow the instructions above to create a development copy of Ops Automator from GitHub. Go to the root of that copy. You should see: + +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +(python3) [ec2-user@ip-10-0-20-184 oa-220-customer]$ ll +total 28 +-rw-rw-r-- 1 ec2-user ec2-user 324 Jan 6 14:29 CHANGELOG.md +drwxrwxr-x 2 ec2-user ec2-user 122 Jan 6 14:29 deployment +-rwxrwxr-x 1 ec2-user ec2-user 10577 Jan 6 14:29 LICENSE.txt +-rwxrwxr-x 1 ec2-user ec2-user 822 Jan 6 14:29 NOTICE.txt +-rwxrwxr-x 1 ec2-user ec2-user 3837 Jan 6 14:29 README.md +drwxrwxr-x 5 ec2-user ec2-user 51 Jan 6 14:29 source +-rw-rw-r-- 1 ec2-user ec2-user 5 Jan 6 14:29 version.txt +(python3) [ec2-user@ip-10-0-20-184 oa-220-customer]$ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +### Initial Build + +To verify that all is well, do a base build from this source. You will need the base name of the global bucket you created earlier. + +Ex. "mybucket" will use "mybucket" for the templates and "mybucket-us-east-1" for a deployment in us-east-1.
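If the buckets do not exist yet, one way to create them with the AWS CLI, sketched here for "mybucket" in us-east-1 (names and region are examples), including the encryption and public-access settings recommended earlier:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Create the global template bucket and one regional asset bucket (example names)
aws s3 mb s3://mybucket --region us-east-1
aws s3 mb s3://mybucket-us-east-1 --region us-east-1

# Apply default SSE-S3 (AES256) encryption and block public access on both
for b in mybucket mybucket-us-east-1; do
  aws s3api put-bucket-encryption --bucket $b \
    --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'
  aws s3api put-public-access-block --bucket $b \
    --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
done
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~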
+ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +cd deployment +chmod +x *.sh +./build-s3-dist.sh {bucket} ops-automator +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +After your build completes without error, copy the output files to your S3 buckets using the upload-s3-dist.sh script to send the files to the desired region: + +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +./upload-s3-dist.sh {region} +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +This will create the prefix ops-automator/\<*version*\> in both buckets, one containing the templates and the other a zip of the Lambda source code. This is your baseline, box-stock OA build. + +### Upgrading Actions + +#### Overview + +1. Get a list of Actions to be migrated +2. Copy action source to source/code/actions +- Audit for prereqs +- Audit for Python 3 compatibility +3. Run build-s3-dist.sh +4. Run upload-s3-dist.sh +5. Update the stack using the S3 URL for the new template after all + actions are imported + +#### Get a list of Actions + +Use the DynamoDB Console to query the Ops Automator ConfigurationTable for unique values in the Action column. For any action not in the above list you will need to find the source code in your current release, **source/code/actions.** Repeat the following steps for each Action. + +#### Update each Action + +1. Check dependencies + +Locate the Action file in your current deployment. For example, we'll work with DynamodbCreateBackup, which was a supported action as recently as 2.0.0.213, removed in a later 2.0 build. + +Copy the file to source/code/actions in the new release source. + +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +(python3) [ec2-user@ip-10-0-20-184 actions]$ ll +total 312 +-rw-rw-r-- 1 ec2-user ec2-user 7911 Jan 6 15:17 action_base.py +-rw-rw-r-- 1 ec2-user ec2-user 9365 Jan 6 15:17 action_ec2_events_base.py +-rw-rw-r-- 1 ec2-user ec2-user 9688 Jan 6 16:37 dynamodb_create_backup_action.py +-rw-rw-r-- 1 ec2-user ec2-user 12694 Jan 6 15:17 dynamodb_set_capacity_action.py +-rw-rw-r-- 1 ec2-user ec2-user 49681 Jan 6 15:17 ec2_copy_snapshot_action.py +-rwxrwxr-x 1 ec2-user ec2-user 38045 Jan 6 15:17 ec2_create_snapshot_action.py +-rwxrwxr-x 1 ec2-user ec2-user 16840 Jan 6 15:17 ec2_delete_snapshot_action.py +-rwxrwxr-x 1 ec2-user ec2-user 55337 Jan 6 15:17 ec2_replace_instance_action.py +-rwxrwxr-x 1 ec2-user ec2-user 34373 Jan 6 15:17 ec2_resize_instance_action.py +-rwxrwxr-x 1 ec2-user ec2-user 15825 Jan 6 15:17 ec2_tag_cpu_instance_action.py +-rw-rw-r-- 1 ec2-user ec2-user 14559 Jan 6 15:17 __init__.py +-rwxrwxr-x 1 ec2-user ec2-user 6199 Jan 6 15:17 scheduler_config_backup_action.py +-rwxrwxr-x 1 ec2-user ec2-user 8092 Jan 6 15:17 scheduler_task_cleanup_action.py +-rwxrwxr-x 1 ec2-user ec2-user 9132 Jan 6 15:17 scheduler_task_export_action.py +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Open the file in an editor and observe the imports: + +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +import services.dynamodb_service +import tagging +from actions import * +from actions.action_base import ActionBase +from boto_retry import get_client_with_retries, get_default_retry_strategy +from helpers import safe_json +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Verify that dynamodb_service.py exists in
source/code/services: + +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +(python3) [ec2-user@ip-10-0-20-184 deployment]$ cd ../services +(python3) [ec2-user@ip-10-0-20-184 services]$ ll +total 212 +-rwxrwxr-x 1 ec2-user ec2-user 29299 Jan 6 15:17 aws_service.py +-rwxrwxr-x 1 ec2-user ec2-user 4882 Jan 6 15:17 cloudformation_service.py +-rwxrwxr-x 1 ec2-user ec2-user 4871 Jan 6 15:17 cloudwatchlogs_service.py +-rwxrwxr-x 1 ec2-user ec2-user 5390 Jan 6 15:17 dynamodb_service.py +-rwxrwxr-x 1 ec2-user ec2-user 12657 Jan 6 15:17 ec2_service.py +-rwxrwxr-x 1 ec2-user ec2-user 4987 Jan 6 15:17 ecs_service.py +-rwxrwxr-x 1 ec2-user ec2-user 6861 Jan 6 15:17 elasticache_service.py +-rwxrwxr-x 1 ec2-user ec2-user 5214 Jan 6 15:17 elb_service.py +-rwxrwxr-x 1 ec2-user ec2-user 5369 Jan 6 15:17 elbv2_service.py +-rwxrwxr-x 1 ec2-user ec2-user 6125 Jan 6 15:17 iam_service.py +-rwxrwxr-x 1 ec2-user ec2-user 8193 Jan 6 15:17 __init__.py +-rwxrwxr-x 1 ec2-user ec2-user 5341 Jan 6 15:17 kms_service.py +-rwxrwxr-x 1 ec2-user ec2-user 5291 Jan 6 15:17 lambda_service.py +-rwxrwxr-x 1 ec2-user ec2-user 7558 Jan 6 15:17 opsautomatortest_service.py +-rwxrwxr-x 1 ec2-user ec2-user 11413 Jan 6 15:17 rds_service.py +-rw-rw-r-- 1 ec2-user ec2-user 13363 Jan 6 15:17 route53_service.py +-rwxrwxr-x 1 ec2-user ec2-user 9749 Jan 6 15:17 s3_service.py +-rwxrwxr-x 1 ec2-user ec2-user 6725 Jan 6 15:17 servicecatalog_service.py +-rwxrwxr-x 1 ec2-user ec2-user 5769 Jan 6 15:17 storagegateway_service.py +-rwxrwxr-x 1 ec2-user ec2-user 3441 Jan 6 15:17 tagging_service.py +-rwxrwxr-x 1 ec2-user ec2-user 4086 Jan 6 15:17 time_service.py +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Note that this action uses ActionBase, which is already in the actions folder (see above listing). + +2. Verify the code / compatibility + +Do a quick scan to make sure there are no Python 3 compatibility issues. + +**TIP**: use a linter. This code looks clean with regard to Python 3 issues. + +3. Repeat for all actions to be added from the old release + +#### Build it as a new version + +Open the directory for your new version and go to the **deployment** folder. Find out the current base semver version: + +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +(python3) [ec2-user@ip-10-0-20-184 customer]$ more version.txt +2.2.0 +(python3) [ec2-user@ip-10-0-20-184 customer]$ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Append a build number. Start with 001. We will use version 2.2.0.001 for our first build. This is important as it will allow us to update the install. Do not change the semver version, as this allows AWS to match your installation back to the original.
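For reference, a small sketch of composing the full build version from version.txt plus a suffix (variable names are illustrative):

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Combine the base semver from version.txt with a manually incremented suffix
base=$(cat version.txt)    # run from the repo root; e.g. 2.2.0
build=001                  # bump to 002, 003, ... for each rebuild
echo "v${base}.${build}"   # yields v2.2.0.001, passed as the version argument below
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~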
+ +Run build-s3-dist.sh: + +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +(python3) [ec2-user@ip-10-0-20-184 deployment]$ ./build-s3-dist.sh mybucket ops-automator v2.2.0.001 +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Upon successful completion, upload to S3: + +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +(python3) [ec2-user@ip-10-0-20-184 deployment]$ ./upload-s3-dist.sh us-east-1 +========================================================================== +Deploying ops-automator version v2.2.0 to bucket mybucket-us-east-1 +========================================================================== +Templates: mybucket/ops-automator/v2.2.0/ +Lambda code: mybucket-us-east-1/ops-automator/v2.2.0/ +--- +Press [Enter] key to start upload to us-east-1 +upload: global-s3-assets/ops-automator-ecs-cluster.template to s3://mybucket/ops-automator/v2.2.0/ops-automator-ecs-cluster.template +upload: global-s3-assets/ops-automator.template to s3://mybucket/ops-automator/v2.2.0/ops-automator.template +upload: regional-s3-assets/cloudwatch-handler.zip to s3://mybucket-us-east-1/ops-automator/v2.2.0/cloudwatch-handler.zip +upload: regional-s3-assets/ops-automator.zip to s3://mybucket-us-east-1/ops-automator/v2.2.0/ops-automator.zip +Completed uploading distribution. You may now install from the templates in mybucket/ops-automator/v2.2.0/ +(python3) [ec2-user@ip-10-0-20-184 deployment]$ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +#### Update the Stack or Deploy as New + +We generally recommend that you deploy a new stack with the new version and then migrate your actions from old to new. You may optionally update the stack in place. We have successfully tested upgrade-in-place from v2.0.0 to v2.2.0 by following the instructions above carefully. + +**To Update** + +Replace the template with the one from the new version. + +Ex. +https://mybucket.s3-us-west-2.amazonaws.com/ops-automator/v2.2.0.001/ops-automator.template + +There is no need to change any parameters. + +**Validate the change in Lambda** + +Open the Lambda console. Find all of your Lambdas by filtering by stack name. All should show an update at the time you updated the stack. Open one of the OpsAutomator-\ Lambdas - ex. *OA-220-customer-OpsAutomator-Large*. View the Function code. Expand the actions folder. You should see the new action, dynamodb_create_backup_action.py. + +Verify that the action template was uploaded to the S3 configuration bucket: + +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +(python3) [ec2-user@ip-10-0-20-184 deployment]$ aws s3 ls s3://oa-220-customer-configuration-1wg089n4zjpt4/TaskConfiguration/ + PRE ScenarioTemplates/ + PRE Scripts/ +2020-01-06 17:13:52 6324 ActionsConfiguration.html +2020-01-06 17:13:46 25492 DynamodbCreateBackup.template +2020-01-06 17:13:45 33083 DynamodbSetCapacity.template +2020-01-06 17:13:50 37284 Ec2CopySnapshot.template +2020-01-06 17:13:49 34782 Ec2CreateSnapshot.template +2020-01-06 17:13:49 27938 Ec2DeleteSnapshot.template +2020-01-06 17:13:48 39649 Ec2ReplaceInstance.template +2020-01-06 17:13:46 38499 Ec2ResizeInstance.template +2020-01-06 17:13:51 26854 Ec2TagCpuInstance.template +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +**Check the Logs** + +Examine both the Lambda logs and the Ops Automator logs for errors.
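A quick way to scan for errors from the command line, sketched with AWS CLI v2 (log group names are illustrative, following the stack-name prefix used in the examples above):

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Tail the stack's Ops Automator log group for the last hour (AWS CLI v2)
aws logs tail OA-220-customer-logs --since 1h --format short | grep -i error

# Tail an individual Lambda's log group
aws logs tail /aws/lambda/OA-220-customer-OpsAutomator-Standard --since 1h
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~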
+ +ECS/Fargate Implementation +-------------------------- + +This section describes how to set up an AWS Fargate cluster for use by the Ops Automator framework for long-running tasks. Starting with Ops Automator 2.2.0 you can deploy with AWS ECS / Fargate or add it later. With ECS/Fargate enabled you can choose which tasks to run on containers, and which to run on Lambda - they are not mutually exclusive. ECS/Fargate may be a desirable option for customers with tasks that run longer than 15 minutes. + +Setup +----- + +This assumes that you have downloaded the Ops Automator source from GitHub, built, and deployed the solution from source in your account. If you installed from the AWS Solutions template, a simpler deployment is described in the AWS Ops Automator Implementation Guide, Appendix K. + +### Overview + +1. Deploy/update the AWS Ops Automator stack to use the ECS option +2. Build and deploy the Docker container +3. Update/deploy tasks using ECS/Fargate + +### Deploy Ops Automator with ECS + +See the above procedure to build and deploy the solution from source. Select the ECS/Fargate option. You must do this first, as this option will create the ECS Container Registry needed in the last step. + +ECS can deploy a cluster in an existing VPC. You will need to provide the VPC ID and subnet IDs for at least two subnets. + +Fargate is automatically selected if you do not provide a VPC ID. It deploys a new VPC and public subnets for the Fargate cluster. + +### Build and Deploy the Docker Container + +From the */deployment/ecs* folder where you built the solution, run the following command: + +```./build-and-deploy-image.sh -s -r ``` + +This step pulls down the required files to build the Docker image, based on the Amazon Linux Docker-optimized AMI. It installs the ops-automator-ecs-runner.py script on the image. + +The image is then pushed to the **ops-automator** repository, created by the Ops Automator template. + +### Deploy Ops Automator Actions using ECS + +The ECS option is now available for Actions. You can now deploy additional tasks using the ECS option or modify existing tasks to use ECS. Note: if you deployed tasks prior to selecting ECS in the main AWS Ops Automator stack, you will need to update their template from the S3 Ops Automator configuration bucket. ECS will now be an option for Resource Selection Memory and Execution Memory. + *** -Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. +Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. -Licensed under the Apache License Version 2.0 (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at +Licensed under the Apache License Version 2.0 (the "License"). You may not use +this file except in compliance with the License. A copy of the License is +located at - http://www.apache.org/licenses/ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +http://www.apache.org/licenses/ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions and limitations under the License. +or in the "license" file accompanying this file. This file is distributed on an +"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied.
+See the License for the specific language governing permissions and limitations +under the License. diff --git a/deployment/build-s3-dist.sh b/deployment/build-s3-dist.sh index e88c837..5332325 100755 --- a/deployment/build-s3-dist.sh +++ b/deployment/build-s3-dist.sh @@ -15,17 +15,78 @@ # - trademarked-solution-name: name of the solution for consistency # # - version-code: version of the solution +function do_cmd { + echo "------ EXEC $*" + $* +} +function do_replace { + replace="s/$2/$3/g" + file=$1 + do_cmd sed -i -e $replace $file +} -if [ -z "$1" ] || [ -z "$2" ] || [ -z "$3" ]; then +if [ -z "$1" ] || [ -z "$2" ]; then + echo "Usage: $0 [bucket] [solution-name] {version}" echo "Please provide the base source bucket name, trademark approved solution name and version where the lambda code will eventually reside." echo "For example: ./build-s3-dist.sh solutions trademarked-solution-name v1.0.0" exit 1 fi -echo "make bucket=$1 solution=$2 version=$3" -cd ../source/code -ls -echo $PWD -make bucket=$1 solution=$2 version=$3 -cd ../../deployment +bucket=$1 +echo "export DIST_OUTPUT_BUCKET=$bucket" > ./setenv.sh +solution_name=$2 +echo "export DIST_SOLUTION_NAME=$solution_name" >> ./setenv.sh + +# Version from the command line is definitive. Otherwise, use version.txt +if [ ! -z "$3" ]; then + version=$3 +elif [ -e ../source/version.txt ]; then + version=`cat ../source/version.txt` +else + echo "Version not found. Version must be passed as argument 3 or in version.txt in the format vn.n.n" + exit 1 +fi + +if [[ ! "$version" =~ ^v.*? ]]; then + version=v$version +fi +echo "export DIST_VERSION=$version" >> ./setenv.sh + +echo "==========================================================================" +echo "Building $solution_name version $version for bucket $bucket" +echo "==========================================================================" + +# Get reference for all important folders +template_dir="$PWD" # /deployment +template_dist_dir="$template_dir/global-s3-assets" +build_dist_dir="$template_dir/regional-s3-assets" +source_dir="$template_dir/../source" +dist_dir="$template_dir/dist" + +echo "------------------------------------------------------------------------------" +echo "[Init] Clean old dist folders" +echo "------------------------------------------------------------------------------" +do_cmd rm -rf $template_dist_dir +do_cmd mkdir -p $template_dist_dir +do_cmd rm -rf $build_dist_dir +do_cmd mkdir -p $build_dist_dir +do_cmd rm -rf $dist_dir +do_cmd mkdir -p $dist_dir + +# Copy the source tree to deployment/dist +do_cmd cp -r $source_dir/* $dist_dir + +do_cmd pip install --upgrade pip +# awscli will also install the compatible version of boto3 and botocore +do_cmd pip install --upgrade awscli +do_cmd pip install -r $source_dir/code/requirements.txt -t $dist_dir/code + +echo "------------------------------------------------------------------------------" +echo "[Make] Set up and call make from deployment/dist/code" +echo "------------------------------------------------------------------------------" +cp $source_dir/version.txt $dist_dir/code +cd $dist_dir/code +do_cmd make bucket=$bucket solution=$solution_name version=$version +cd $template_dir +# rm -rf dist +chmod +x setenv.sh echo "Completed building distribution" diff --git a/deployment/ops-automator.template b/deployment/ops-automator.template index d57ecb8..9f15688 100644 --- a/deployment/ops-automator.template +++ b/deployment/ops-automator.template @@ -1,6 +1,6 @@ { "AWSTemplateFormatVersion": "2010-09-09", -
"Description": "Ops Automator, template version %version%, (SO0029)", + "Description": "Ops Automator, template version v2.2.0, (SO0029)", "Mappings": { "Send": { "AnonymousUsage": { @@ -44,12 +44,9 @@ "XXXLarge": "3008", "ECS": "" }, - "EcsCluster": { - "ClusterName": "" - }, "Resources": { "ResourceToS3SizeKB": 16, - "EncryptResourceData": "False" + "EncryptResourceData": "True" }, "ServiceLimits": { "MaxConcurrentEbsSnapshotCopies": 5, @@ -60,6 +57,11 @@ "MaxPutCallsPerStream": 5, "MaxDescribeCalls": 5, "MaxApiCalls": 40 + }, + "VpcCidrs": { + "vpc": "10.0.0.0/16", + "pubsubnet1": "10.0.0.0/24", + "pubsubnet2": "10.0.1.0/24" } }, "EnabledDisabled": { @@ -167,6 +169,32 @@ ], "Default": "Yes", "Description": "Activate or deactivate scheduling of task." + }, + "VpcId": { + "Type": "String", + "Description": "Optional - VPC Id of existing VPC. Leave blank to have a new VPC created", + "Default": "", + "AllowedPattern": "^(?:vpc-[0-9a-f]{8,17}|)$", + "ConstraintDescription": "VPC Id must begin with 'vpc-' or leave blank to have a new VPC created" + }, + "SubnetIds": { + "Type": "CommaDelimitedList", + "Description": "Optional - Comma separated list of two (2) existing VPC Subnet Ids where ECS instances will run. Required if setting VPC.", + "Default": "" + }, + "VpcAvailabilityZones": { + "Type": "CommaDelimitedList", + "Description": "Optional - Comma-delimited list of VPC availability zones in which to create subnets. Required if setting VPC.", + "Default": "" + }, + "ECSFargate": { + "Type": "String", + "Description": "Should Ops Automator use Fargate to run tasks?", + "AllowedValues": [ + "Yes", + "No" + ], + "Default": "No" } }, "Metadata": { @@ -174,7 +202,7 @@ "ParameterGroups": [ { "Label": { - "default": "Ops Automator (version %version%)" + "default": "Ops Automator (version v2.2.0)" }, "Parameters": [ "TagName", @@ -194,6 +222,17 @@ "LogRetentionDays", "ConfigBackupDays" ] + }, + { + "Label": { + "default": "ECS/Fargate Parameters" + }, + "Parameters": [ + "ECSFargate", + "VpcId", + "SubnetIds", + "VpcAvailabilityZones" + ] } ], "ParameterLabels": { @@ -220,6 +259,18 @@ }, "UseCloudWatchMetrics": { "default": "Enable CloudWatch Metrics" + }, + "VpcId": { + "default": "Cluster VPC" + }, + "SubnetIds": { + "default": "Cluster Subnets" + }, + "VpcAvailabilityZones": { + "default": "Cluster Availability zones" + }, + "ECSFargate": { + "default": "ECS/Fargate" } } } @@ -309,22 +360,6 @@ "True" ] }, - "UseEcs": { - "Fn::Not": [ - { - "Fn::Equals": [ - { - "Fn::FindInMap": [ - "Settings", - "EcsCluster", - "ClusterName" - ] - }, - "" - ] - } - ] - }, "CloudWatchPutLimit800": { "Fn::Or": [ { @@ -460,11 +495,49 @@ ] } ] + }, + "UseEcs": { + "Fn::Not": [ + { "Fn::Equals": [ { "Ref": "ECSFargate" }, "No" ] } + ] + }, + "CreateVpcResources": { + "Fn::And": [ + { "Fn::Equals": [ { "Ref": "VpcId" }, "" ] }, + { "Condition": "UseEcs" } + ] + }, + "UseSpecifiedVpcAvailabilityZones": { + "Fn::Not": [ + { + "Fn::Equals": [ + { + "Fn::Join": [ + "", + { + "Ref": "VpcAvailabilityZones" + } + ] + }, + "" + ] + } + ] } }, "Resources": { "OpsAutomatorEventsForwardRole": { "Type": "AWS::IAM::Role", + "Metadata": { + "cfn_nag": { + "rules_to_suppress": [ + { + "id": "W11", + "reason": "logs:*LogStream actions requires * resource." 
+ } + ] + } + }, "Properties": { "AssumeRolePolicyDocument": { "Version": "2012-10-17", @@ -492,14 +565,32 @@ "Resource": { "Ref": "OpsAutomatorEventsTopic" } + }, + { + "Effect": "Allow", + "Action": [ + "logs:CreateLogGroup", + "logs:CreateLogStream", + "logs:PutLogEvents" + ], + "Resource": "*" + }, + { + "Sid": "KMS", + "Effect": "Allow", + "Resource": { "Fn::GetAtt": [ "ResourceEncryptionKey", "Arn" ] }, + "Action": [ + "kms:Encrypt", + "kms:Decrypt", + "kms:ReEncrypt*", + "kms:GenerateDataKey*", + "kms:DescribeKey" + ] } ] } } ], - "ManagedPolicyArns": [ - "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole" - ], "Path": "/" } }, @@ -592,14 +683,32 @@ "Resource": [ "*" ] + }, + { + "Effect": "Allow", + "Action": [ + "logs:CreateLogGroup", + "logs:CreateLogStream", + "logs:PutLogEvents" + ], + "Resource": "*" + }, + { + "Sid": "KMS", + "Effect": "Allow", + "Resource": { "Fn::GetAtt": [ "ResourceEncryptionKey", "Arn" ] }, + "Action": [ + "kms:Encrypt", + "kms:Decrypt", + "kms:ReEncrypt*", + "kms:GenerateDataKey*", + "kms:DescribeKey" + ] } ] } } ], - "ManagedPolicyArns": [ - "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole" - ], "Path": "/" } }, @@ -615,6 +724,10 @@ { "id": "F38", "reason": "To allow the IAM:PassRole action. A condition to allow only lambda to assume the role has also been added." + }, + { + "id": "W76", + "reason": "Complex role needs refactoring. This will be addressed in a future release." } ] } @@ -772,11 +885,6 @@ } ] }, - { - "Effect": "Allow", - "Action": "s3:ListBucket", - "Resource": "arn:aws:s3:::*" - }, { "Effect": "Allow", "Action": [ @@ -844,18 +952,6 @@ "Fn::GetAtt": [ "TriggerTable", "Arn" ] } }, - { - "Effect": "Allow", - "Action": [ - "dynamodb:GetRecords", - "dynamodb:GetShardIterator", - "dynamodb:DescribeStream", - "dynamodb:ListStreams" - ], - "Resource": { - "Fn::GetAtt": [ "TaskTrackingTable", "StreamArn" ] - } - }, { "Effect": "Allow", "Action": [ @@ -893,22 +989,15 @@ } ] }, - { - "Effect": "Allow", - "Action": "cloudformation:ListStackResources", - "Resource": { - "Ref": "AWS::StackId" - } - }, { "Effect": "Allow", "Action": [ "cloudformation:GetTemplate", "cloudformation:ListStackResources" ], - "Resource": { - "Fn::Sub": "arn:aws:cloudformation:${AWS::Region}:${AWS::AccountId}:stack/${AWS::StackName}/*" - } + "Resource": [ + { "Fn::Sub": "arn:aws:cloudformation:${AWS::Region}:${AWS::AccountId}:stack/${AWS::StackName}/*" } + ] }, { "Effect": "Allow", @@ -979,17 +1068,13 @@ ] } }, - { "Sid": "ActionsRequireAllResources", "Effect": "Allow", "Action": [ "cloudformation:DeleteStack", - "cloudwatch:PutMetricData", - "dynamodb:ListTables", - "ec2:DescribeVpcs", "ec2:DescribeSubnets", "ec2:DescribeImages", @@ -1002,31 +1087,135 @@ "iam:PassRole", "events:PutRule", "events:ListRules", - "logs:DescribeLogGroups", - "pricing:GetAttributeValues", - "ssm:GetParameter", "ssm:GetParameters", - "sts:AssumeRole", - - "tag:GetResources" + "tag:GetResources", + "s3:ListBuckets", + "dynamodb:ListStreams" + ], + "Resource": "*" + }, + { + "Effect": "Allow", + "Action": [ + "logs:CreateLogGroup", + "logs:CreateLogStream", + "logs:PutLogEvents" ], "Resource": "*" + }, + { + "Effect": "Allow", + "Action": [ + "dynamodb:DescribeStream", + "dynamodb:GetRecords", + "dynamodb:GetShardIterator", + "dynamodb:ListStreams" + ], + "Resource": [ + { "Fn::GetAtt": ["ConfigurationTable", "StreamArn"] }, + { "Fn::GetAtt": ["TaskTrackingTable", "StreamArn"] }, + { "Fn::GetAtt": ["ConcurrencyTable", "StreamArn"] }, + { "Fn::GetAtt": ["TriggerTable", 
"StreamArn"] } + ] + }, + { + "Sid": "KMS", + "Effect": "Allow", + "Resource": { "Fn::GetAtt": [ "ResourceEncryptionKey", "Arn" ] }, + "Action": [ + "kms:Encrypt", + "kms:Decrypt", + "kms:ReEncrypt*", + "kms:GenerateDataKey*", + "kms:DescribeKey" + ] } ] } } ], - "ManagedPolicyArns": [ - "arn:aws:iam::aws:policy/service-role/AWSLambdaDynamoDBExecutionRole", - "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole" - ], "Path": "/" } }, + "EcsExecutionRole": { + "Type": "AWS::IAM::Role", + "Metadata": { + "cfn_nag": { + "rules_to_suppress": [ + { + "id": "W11", + "reason": "GetAuthorizationToken requires global resource" + } + ] + } + }, + "Properties": { + "AssumeRolePolicyDocument": { + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "", + "Effect": "Allow", + "Principal": { + "Service": "ecs-tasks.amazonaws.com" + }, + "Action": "sts:AssumeRole" + } + ] + }, + "Policies": [ + { + "PolicyName": "OpsAutomatorLambdaRolePolicy", + "PolicyDocument": { + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": "ecr:GetAuthorizationToken", + "Resource": "*" + }, + { + "Effect": "Allow", + "Action": [ + "ecr:BatchCheckLayerAvailability", + "ecr:GetDownloadUrlForLayer", + "ecr:BatchGetImage" + ], + "Resource": { "Fn::Sub": "arn:aws:ecr:${AWS::Region}:${AWS::AccountId}:repository/ops-automator" } + }, + { + "Effect": "Allow", + "Action": [ + "logs:CreateLogStream", + "logs:PutLogEvents" + ], + "Resource": [ + { "Fn::Sub": "arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:${AWS::StackName}-logs" }, + { "Fn::Sub": "arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:${AWS::StackName}-logs:log-stream:*" } + ] + }, + { + "Sid": "KMS", + "Effect": "Allow", + "Resource": { "Fn::GetAtt": [ "ResourceEncryptionKey", "Arn" ] }, + "Action": [ + "kms:Encrypt", + "kms:Decrypt", + "kms:ReEncrypt*", + "kms:GenerateDataKey*", + "kms:DescribeKey" + ] + } + ] + } + } + ] + } + }, "LastSchedulerExecutionTable": { "Type": "AWS::DynamoDB::Table", "Properties": { @@ -1042,7 +1231,21 @@ "KeyType": "HASH" } ], - "BillingMode": "PAY_PER_REQUEST" + "BillingMode": "PAY_PER_REQUEST", + "SSESpecification": { + "Fn::If": [ + "EncryptResourceData", + { + "KMSMasterKeyId": { "Ref": "ResourceEncryptionKey" }, + "SSEEnabled": true, + "SSEType": "KMS" + }, + { + "KMSMasterKeyId": "", + "SSEEnabled": false + } + ] + } } }, "ConfigurationTable": { @@ -1063,6 +1266,20 @@ "BillingMode": "PAY_PER_REQUEST", "StreamSpecification": { "StreamViewType": "NEW_AND_OLD_IMAGES" + }, + "SSESpecification": { + "Fn::If": [ + "EncryptResourceData", + { + "KMSMasterKeyId": { "Ref": "ResourceEncryptionKey" }, + "SSEEnabled": true, + "SSEType": "KMS" + }, + { + "KMSMasterKeyId": "", + "SSEEnabled": false + } + ] } } }, @@ -1089,6 +1306,20 @@ "KeyType": "HASH" } ], + "SSESpecification": { + "Fn::If": [ + "EncryptResourceData", + { + "KMSMasterKeyId": { "Ref": "ResourceEncryptionKey" }, + "SSEEnabled": true, + "SSEType": "KMS" + }, + { + "KMSMasterKeyId": "", + "SSEEnabled": false + } + ] + }, "BillingMode": "PAY_PER_REQUEST", "TimeToLiveSpecification": { "AttributeName": "TTL", @@ -1157,6 +1388,20 @@ "BillingMode": "PAY_PER_REQUEST", "StreamSpecification": { "StreamViewType": "NEW_AND_OLD_IMAGES" + }, + "SSESpecification": { + "Fn::If": [ + "EncryptResourceData", + { + "KMSMasterKeyId": { "Ref": "ResourceEncryptionKey" }, + "SSEEnabled": true, + "SSEType": "KMS" + }, + { + "KMSMasterKeyId": "", + "SSEEnabled": false + } + ] } } }, @@ -1178,6 +1423,20 @@ "BillingMode": "PAY_PER_REQUEST", 
"StreamSpecification": { "StreamViewType": "KEYS_ONLY" + }, + "SSESpecification": { + "Fn::If": [ + "EncryptResourceData", + { + "KMSMasterKeyId": { "Ref": "ResourceEncryptionKey" }, + "SSEEnabled": true, + "SSEType": "KMS" + }, + { + "KMSMasterKeyId": "", + "SSEEnabled": false + } + ] } } }, @@ -1236,6 +1495,15 @@ } ] ] + }, + "KmsMasterKeyId": { + "Fn::If": [ + "EncryptResourceData", + { + "Ref": "ResourceEncryptionKey" + }, + "" + ] } } }, @@ -1275,6 +1543,15 @@ } ] ] + }, + "KmsMasterKeyId": { + "Fn::If": [ + "EncryptResourceData", + { + "Ref": "ResourceEncryptionKey" + }, + "" + ] } } }, @@ -1314,6 +1591,15 @@ } ] ] + }, + "KmsMasterKeyId": { + "Fn::If": [ + "EncryptResourceData", + { + "Ref": "ResourceEncryptionKey" + }, + "" + ] } } }, @@ -1424,7 +1710,7 @@ ] ] }, - "S3Key": "%solution%/%version%/ops-automator.zip" + "S3Key": "%solution%/v2.2.0/ops-automator.zip" }, "FunctionName": { "Fn::Join": [ @@ -1445,7 +1731,7 @@ ] }, "Handler": "main.lambda_handler", - "Runtime": "python3.7", + "Runtime": "python3.8", "Role": { "Fn::GetAtt": [ "OpsAutomatorLambdaRole", @@ -1542,7 +1828,7 @@ { "Ref": "AWS::StackName" }, - "%version%" + "v2.2.0" ] ] }, @@ -1624,11 +1910,7 @@ "Fn::If": [ "UseEcs", { - "Fn::FindInMap": [ - "Settings", - "EcsCluster", - "ClusterName" - ] + "Ref": "EcsCluster" }, { "Ref": "AWS::NoValue" @@ -1690,22 +1972,58 @@ "Metrics", "Url" ] - } - } - }, - "MemorySize": { - "Fn::FindInMap": [ - "Settings", - "ActionMemory", + }, + "AWSVPC_SUBNETS": { + "Fn::If": [ + "UseEcs", + { "Fn::If": [ + "CreateVpcResources", + { "Fn::Join": [ ",", + [ { "Ref": "PublicSubnetAz1" }, { "Ref": "PublicSubnetAz2" } ] + ] }, + { + "Fn::Join": [ + ",", + { "Ref": "SubnetIds" } + ] + } + ] }, + "None" + ] + }, + "AWSVPC_SECURITYGROUPS": { + "Fn::If": [ + "UseEcs", + { "Ref": "FargateSecurityGroup" }, + "None" + ] + }, + "AWSVPC_ASSIGNPUBLICIP": "ENABLED" + } + }, + "MemorySize": { + "Fn::FindInMap": [ + "Settings", + "ActionMemory", "Standard" ] }, "Timeout": 900, - "Description": "Ops Automator %size%, version %version%" + "Description": "Ops Automator %size%, version v2.2.0" } }, "OpsAutomatorCloudWatchQueueHandlerLambda": { "Type": "AWS::Lambda::Function", + "Metadata": { + "cfn_nag": { + "rules_to_suppress": [ + { + "id": "W48", + "reason": "KMS not required: no customer data" + } + ] + } + }, "Properties": { "Code": { "S3Bucket": { @@ -1719,7 +2037,7 @@ ] ] }, - "S3Key": "%solution%/%version%/cloudwatch-handler.zip" + "S3Key": "%solution%/v2.2.0/cloudwatch-handler.zip" }, "FunctionName": { "Fn::Join": [ @@ -1739,7 +2057,7 @@ ] }, "Handler": "cloudwatch_queue_handler_lambda.lambda_handler", - "Runtime": "python3.7", + "Runtime": "python3.8", "Role": { "Fn::GetAtt": [ "OpsAutomatorCloudWatchLogHandlerRole", @@ -1782,7 +2100,7 @@ }, "MemorySize": 128, "Timeout": 900, - "Description": "Ops Automator CloudWatch Queue Handler, version %version%" + "Description": "Ops Automator CloudWatch Queue Handler, version v2.2.0" } }, "OpsAutomatorEventsTopicSubscription": { @@ -1846,12 +2164,17 @@ "MemoryReservation": 128 } ], + "Cpu": "1024", + "Memory": "2048", + "RequiresCompatibilities": [ "FARGATE" ], + "NetworkMode": "awsvpc", "TaskRoleArn": { "Fn::GetAtt": [ "OpsAutomatorLambdaRole", "Arn" ] }, + "ExecutionRoleArn": { "Ref": "EcsExecutionRole" }, "Volumes": [] } }, @@ -1861,8 +2184,8 @@ "cfn_nag": { "rules_to_suppress": [ { - "id": "W35", - "reason": "Access logging is not required for this bucket. Custom metrics reporting is already done." + "id": "W51", + "reason": "The bucket is not public. 
When using the CF template in PROD, create a bucket policy to allow only administrators/ auditors access to the bucket" } ] } @@ -1910,6 +2233,10 @@ "BlockPublicPolicy": true, "IgnorePublicAcls": true, "RestrictPublicBuckets": true + }, + "LoggingConfiguration": { + "DestinationBucketName": { "Ref": "S3LoggingBucket" }, + "LogFilePrefix": "access-logs" } } }, @@ -1919,8 +2246,8 @@ "cfn_nag": { "rules_to_suppress": [ { - "id": "W35", - "reason": "Access logging is not required for this bucket. Custom metrics reporting is already done." + "id": "W51", + "reason": "The bucket is not public. When using the CF template in PROD, create a bucket policy to allow only administrators/ auditors access to the bucket" } ] } @@ -1957,6 +2284,10 @@ "BlockPublicPolicy": true, "IgnorePublicAcls": true, "RestrictPublicBuckets": true + }, + "LoggingConfiguration": { + "DestinationBucketName": { "Ref": "S3LoggingBucket" }, + "LogFilePrefix": "access-logs" } } }, @@ -2005,6 +2336,15 @@ } ] ] + }, + "KmsMasterKeyId": { + "Fn::If": [ + "EncryptResourceData", + { + "Ref": "ResourceEncryptionKey" + }, + "" + ] } } }, @@ -2077,8 +2417,8 @@ "cfn_nag": { "rules_to_suppress": [ { - "id": "W35", - "reason": "Access logging is not required for this bucket. Custom metrics reporting is already done." + "id": "W51", + "reason": "The bucket is not public. When using the CF template in PROD, create a bucket policy to allow only administrators/ auditors access to the bucket" } ] } @@ -2115,6 +2455,10 @@ "BlockPublicPolicy": true, "IgnorePublicAcls": true, "RestrictPublicBuckets": true + }, + "LoggingConfiguration": { + "DestinationBucketName": { "Ref": "S3LoggingBucket" }, + "LogFilePrefix": "access-logs" } } }, @@ -2539,7 +2883,7 @@ "LogRetentionDays": { "Ref": "LogRetentionDays" }, - "StackVersion": "%version%", + "StackVersion": "v2.2.0", "OpsAutomatorLambdaRole": { "Fn::GetAtt": [ "OpsAutomatorLambdaRole", @@ -2562,7 +2906,7 @@ "OpsAutomatorTopicArn": { "Ref": "OpsAutomatorEventsTopic" }, - "DeploymentVersion": "%version%" + "DeploymentVersion": "v2.2.0" }, "DependsOn": [ "ConfigurationTable", @@ -2737,7 +3081,21 @@ }, "OpsAutomatorEcsRepository": { "Condition": "UseEcs", - "Type": "AWS::ECR::Repository" + "Type": "AWS::ECR::Repository", + "Metadata": { + "cfn_nag": { + "rules_to_suppress": [ + { + "id": "W28", + "reason": "Dynamic names for ECS do not follow standard practice for resource naming and are not deriveable from stackname." 
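The ECR repository above is no longer created with a generated name: it is pinned to ops-automator, retained on stack deletion, and referenced by the new EcsExecutionRole, so the image location becomes deterministic. A minimal boto3 sketch for confirming the repository after deployment (illustrative only, not part of the patch):

    import boto3

    ecr = boto3.client("ecr")

    # The template now pins the repository name, so its URI can be looked up directly.
    repo = ecr.describe_repositories(repositoryNames=["ops-automator"])["repositories"][0]
    print(repo["repositoryUri"])  # <account>.dkr.ecr.<region>.amazonaws.com/ops-automator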
+ } + ] + } + }, + "DeletionPolicy": "Retain", + "Properties": { + "RepositoryName": "ops-automator" + } }, "ResourceEncryptionKey": { "Type": "AWS::KMS::Key", @@ -2767,26 +3125,6 @@ }, "Action": "kms:*", "Resource": "*" - }, - { - "Sid": "Allow use of the key", - "Effect": "Allow", - "Principal": { - "AWS": { - "Fn::GetAtt": [ - "OpsAutomatorLambdaRole", - "Arn" - ] - } - }, - "Action": [ - "kms:Encrypt", - "kms:Decrypt", - "kms:ReEncrypt*", - "kms:GenerateDataKey*", - "kms:DescribeKey" - ], - "Resource": "*" } ] } @@ -2804,6 +3142,372 @@ } }, "Condition": "EncryptResourceData" + }, + "S3LoggingBucket": { + "DeletionPolicy": "Retain", + "Type": "AWS::S3::Bucket", + "Properties": { + "BucketName": { + "Fn::Sub": "aws-opsautomator-s3-access-logs-${AWS::AccountId}-${AWS::Region}" + }, + "AccessControl": "LogDeliveryWrite", + "VersioningConfiguration": { + "Status": "Enabled" + }, + "BucketEncryption": { + "ServerSideEncryptionConfiguration": [ + { + "ServerSideEncryptionByDefault": { + "SSEAlgorithm": "AES256" + } + } + ] + }, + "Tags": [ + { + "Key": "Name", + "Value": "AWS Ops Automator Access Logs" + } + ], + "PublicAccessBlockConfiguration": { + "BlockPublicAcls": true, + "BlockPublicPolicy": true, + "IgnorePublicAcls": true, + "RestrictPublicBuckets": true + } + }, + "Metadata": { + "cfn_nag": { + "rules_to_suppress": [ + { + "id": "W35", + "reason": "This S3 bucket is used as the destination for storing access logs" + }, + { + "id": "W51", + "reason": "Log delivery controlled by ACL, not bucket policy" + } + ] + } + } + }, + "EcsCluster": { + "Condition": "UseEcs", + "Type": "AWS::ECS::Cluster" + }, + "Vpc": { + "Condition": "CreateVpcResources", + "Type": "AWS::EC2::VPC", + "Metadata": { + "cfn_nag": { + "rules_to_suppress": [ + { + "id": "W60", + "reason": "VPC is only used for the Fargate cluster, reducing the importance of Flow Logs (which add cost)." 
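With the S3LoggingBucket in place, each of the solution's buckets (see the LoggingConfiguration additions earlier in this template) writes access logs to aws-opsautomator-s3-access-logs-<account>-<region> under the access-logs prefix. A boto3 sketch for verifying that wiring after deployment (the bucket name passed to get_bucket_logging is a placeholder for one of the stack's buckets):

    import boto3

    s3 = boto3.client("s3")
    account = boto3.client("sts").get_caller_identity()["Account"]
    region = s3.meta.region_name
    log_bucket = "aws-opsautomator-s3-access-logs-{}-{}".format(account, region)

    # Each solution bucket should report the logging bucket as its log destination.
    logging = s3.get_bucket_logging(Bucket="my-ops-automator-bucket").get("LoggingEnabled", {})
    assert logging.get("TargetBucket") == log_bucket
    assert logging.get("TargetPrefix") == "access-logs"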
+ } + ] + } + }, + "Properties": { + "CidrBlock": { + "Fn::FindInMap": [ + "Settings", + "VpcCidrs", + "vpc" + ] + }, + "Tags": [ + { + "Key": "Name", + "Value": { + "Fn::Join": [ + "-", + [ + { + "Ref": "AWS::StackName" + }, + "vpc" + ] + ] + } + } + ] + } + }, + "PublicSubnetAz1": { + "Condition": "CreateVpcResources", + "Type": "AWS::EC2::Subnet", + "Properties": { + "VpcId": { + "Ref": "Vpc" + }, + "CidrBlock": { + "Fn::FindInMap": [ + "Settings", + "VpcCidrs", + "pubsubnet1" + ] + }, + "AvailabilityZone": { + "Fn::If": [ + "UseSpecifiedVpcAvailabilityZones", + { + "Fn::Select": [ + "0", + { + "Ref": "VpcAvailabilityZones" + } + ] + }, + { + "Fn::Select": [ + "0", + { + "Fn::GetAZs": { + "Ref": "AWS::Region" + } + } + ] + } + ] + } + } + }, + "PublicSubnetAz2": { + "Condition": "CreateVpcResources", + "Type": "AWS::EC2::Subnet", + "Properties": { + "VpcId": { + "Ref": "Vpc" + }, + "CidrBlock": { + "Fn::FindInMap": [ + "Settings", + "VpcCidrs", + "pubsubnet2" + ] + }, + "AvailabilityZone": { + "Fn::If": [ + "UseSpecifiedVpcAvailabilityZones", + { + "Fn::Select": [ + "1", + { + "Ref": "VpcAvailabilityZones" + } + ] + }, + { + "Fn::Select": [ + "1", + { + "Fn::GetAZs": { + "Ref": "AWS::Region" + } + } + ] + } + ] + } + } + }, + "InternetGateway": { + "Condition": "CreateVpcResources", + "Type": "AWS::EC2::InternetGateway" + }, + "AttachGateway": { + "Condition": "CreateVpcResources", + "Type": "AWS::EC2::VPCGatewayAttachment", + "Properties": { + "VpcId": { + "Ref": "Vpc" + }, + "InternetGatewayId": { + "Ref": "InternetGateway" + } + } + }, + "RouteViaIgw": { + "Condition": "CreateVpcResources", + "Type": "AWS::EC2::RouteTable", + "Properties": { + "VpcId": { + "Ref": "Vpc" + } + } + }, + "PublicRouteViaIgw": { + "Condition": "CreateVpcResources", + "DependsOn": "AttachGateway", + "Type": "AWS::EC2::Route", + "Properties": { + "RouteTableId": { + "Ref": "RouteViaIgw" + }, + "DestinationCidrBlock": "0.0.0.0/0", + "GatewayId": { + "Ref": "InternetGateway" + } + } + }, + "PubSubnet1RouteTableAssociation": { + "Condition": "CreateVpcResources", + "Type": "AWS::EC2::SubnetRouteTableAssociation", + "Properties": { + "SubnetId": { + "Ref": "PublicSubnetAz1" + }, + "RouteTableId": { + "Ref": "RouteViaIgw" + } + } + }, + "PubSubnet2RouteTableAssociation": { + "Condition": "CreateVpcResources", + "Type": "AWS::EC2::SubnetRouteTableAssociation", + "Properties": { + "SubnetId": { + "Ref": "PublicSubnetAz2" + }, + "RouteTableId": { + "Ref": "RouteViaIgw" + } + } + }, + "FargateSecurityGroup": { + "Condition": "UseEcs", + "Type": "AWS::EC2::SecurityGroup", + "Metadata": { + "cfn_nag": { + "rules_to_suppress": [ + { + "id": "W40", + "reason": "Must allow egress of all protocols, all dests." + }, + { + "id": "W5", + "reason": "Must allow egress of all protocols, all dests." 
+ } + ] + } + }, + "Description": "Security group for Fargate cluster", + "Properties": { + "GroupDescription": "Access to the Fargate containers", + "VpcId": { + "Fn::If": [ + "CreateVpcResources", + { + "Ref": "Vpc" + }, + { + "Ref": "VpcId" + } + ] + } + } + }, + "FargateSGSelfEgress": { + "Condition": "UseEcs", + "Type": "AWS::EC2::SecurityGroupEgress", + "Properties": { + "Description": "Egress to other containers in the same security group", + "GroupId": { + "Ref": "FargateSecurityGroup" + }, + "IpProtocol": -1, + "FromPort": -1, + "ToPort": -1, + "SourceSecurityGroupId": { + "Ref": "FargateSecurityGroup" + } + } + }, + "FargateSGExtEgress": { + "Condition": "UseEcs", + "Type": "AWS::EC2::SecurityGroupEgress", + "Properties": { + "Description": "Egress to external networks", + "GroupId": { + "Ref": "FargateSecurityGroup" + }, + "IpProtocol": -1, + "FromPort": -1, + "ToPort": -1, + "CidrIp": "0.0.0.0/0" + } + }, + "ECSRole": { + "Condition": "UseEcs", + "Type": "AWS::IAM::Role", + "Metadata": { + "cfn_nag": { + "rules_to_suppress": [ + { + "id": "W11", + "reason": "Global access to services required for automation to function" + } + ] + } + }, + "Properties": { + "AssumeRolePolicyDocument": { + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "Service": [ + "ecs.amazonaws.com" + ] + }, + "Action": [ + "sts:AssumeRole" + ] + } + ] + }, + "Path": "/", + "Policies": [ + { + "PolicyName": "ecs-service", + "PolicyDocument": { + "Statement": [ + { + "Effect": "Allow", + "Action": [ + "ec2:AttachNetworkInterface", + "ec2:CreateNetworkInterface", + "ec2:CreateNetworkInterfacePermission", + "ec2:DeleteNetworkInterface", + "ec2:DeleteNetworkInterfacePermission", + "ec2:Describe*", + "ec2:DetachNetworkInterface", + "elasticloadbalancing:DeregisterInstancesFromLoadBalancer", + "elasticloadbalancing:DeregisterTargets", + "elasticloadbalancing:Describe*", + "elasticloadbalancing:RegisterInstancesWithLoadBalancer", + "elasticloadbalancing:RegisterTargets" + ], + "Resource": "*" + }, + { + "Sid": "KMS", + "Effect": "Allow", + "Resource": { "Fn::GetAtt": [ "ResourceEncryptionKey", "Arn" ] }, + "Action": [ + "kms:Encrypt", + "kms:Decrypt", + "kms:ReEncrypt*", + "kms:GenerateDataKey*", + "kms:DescribeKey" + ] + } + ] + } + } + ] + } } }, "Outputs": { @@ -2875,6 +3579,15 @@ "" ] } + }, + "Cluster": { + "Value": { + "Fn::If": [ + "UseEcs", + { "Ref": "EcsCluster" }, + "" + ] + } } } } \ No newline at end of file diff --git a/deployment/upload-s3-dist.sh b/deployment/upload-s3-dist.sh new file mode 100755 index 0000000..9d37ede --- /dev/null +++ b/deployment/upload-s3-dist.sh @@ -0,0 +1,90 @@ +#!/usr/bin/env bash +# +# This assumes build-s3-dist.sh has run successfully in the same shell environment. +# The following environmental variables are set by build-s3-dist.sh and used by this +# script: +# +# $DIST_OUTPUT_BUCKET global/root bucket name +# $DIST_SOLUTION_NAME solution name +# $DIST_VERSION version of the solution +# +# This script should be run from the repo's deployment directory +# cd deployment +# ./upload-s3-dist.sh +# +function do_cmd { + echo "------ EXEC $*" + $* +} + +function do_replace { + replace="s/$2/$3/g" + file=$1 + do_cmd sed -i -e $replace $file +} + +if [ -z "$1" ]; then + echo "You must specify a region to deploy to. Ex. us-east-1" + exit 1 +else + region=$1 +fi + +if [ -e "./setenv.sh" ]; then + source ./setenv.sh +else + echo "build-s3-dist.sh must be run immediately prior to this script. Please (re)run build-s3-dist.sh, then try again." 
+    exit 1
+fi
+
+if [ -z "$DIST_OUTPUT_BUCKET" ] || [ -z "$DIST_SOLUTION_NAME" ] || [ -z "$DIST_VERSION" ]; then
+    echo "build-s3-dist.sh must be run immediately prior to this script. Please (re)run build-s3-dist.sh, then try again."
+    exit 1
+fi
+
+bucket=$DIST_OUTPUT_BUCKET
+solution_name=$DIST_SOLUTION_NAME
+version=$DIST_VERSION
+
+# Test the AWS CLI
+caller=`aws sts get-caller-identity`
+status=$?
+if [ $status != 0 ]; then
+    echo "The AWS CLI is not present or not configured."
+    exit 1
+fi
+
+# Validate region
+region_check=`aws ec2 describe-regions --region $region | grep ec2.$region.amazonaws.com | wc -l`
+status=$?
+if [ $status != 0 ] || [ $region_check != 1 ]; then
+    echo "$region is not a valid AWS region name."
+    exit 1
+fi
+
+bucket_state=`aws s3 ls s3://$bucket`
+status=$?
+if [ $status != 0 ]; then
+    echo "Bucket $bucket does not exist. Please see README.md"
+    exit 1
+fi
+
+bucket_state=`aws s3 ls s3://$bucket-$region`
+status=$?
+if [ $status != 0 ]; then
+    echo "Bucket $bucket-$region does not exist. Please see README.md"
+    exit 1
+fi
+
+echo "=========================================================================="
+echo "Deploying $solution_name version $version to bucket $bucket-$region"
+echo "=========================================================================="
+echo "Templates: $bucket/$solution_name/$version/"
+echo "Lambda code: $bucket-$region/$solution_name/$version/"
+echo "---"
+
+read -p "Press [Enter] key to start upload to $region"
+
+aws s3 sync ./global-s3-assets s3://$bucket/$solution_name/$version/
+aws s3 sync ./regional-s3-assets s3://$bucket-$region/$solution_name/$version/
+
+echo "Completed uploading distribution. You may now install from the templates in $bucket/$solution_name/$version/"
diff --git a/source/cloudformation/ops-automator-ecs-cluster.template b/source/cloudformation/ops-automator-ecs-cluster.template
deleted file mode 100644
index b1912d3..0000000
--- a/source/cloudformation/ops-automator-ecs-cluster.template
+++ /dev/null
@@ -1,824 +0,0 @@
-{
-  "AWSTemplateFormatVersion": "2010-09-09",
-  "Description": "AWS Instance Scheduler cluster, version %version%",
-  "Mappings": {
-    "VpcCidrs": {
-      "vpc": {
-        "cidr": "10.0.0.0/16"
-      },
-      "pubsubnet1": {
-        "cidr": "10.0.0.0/24"
-      },
-      "pubsubnet2": {
-        "cidr": "10.0.1.0/24"
-      }
-    },
-    "AWSRegionToAMI": {
-      "us-east-1": {
-        "AMIID": "ami-eca289fb"
-      },
-      "us-east-2": {
-        "AMIID": "ami-446f3521"
-      },
-      "us-west-1": {
-        "AMIID": "ami-9fadf8ff"
-      },
-      "us-west-2": {
-        "AMIID": "ami-7abc111a"
-      },
-      "eu-west-1": {
-        "AMIID": "ami-a1491ad2"
-      },
-      "eu-central-1": {
-        "AMIID": "ami-54f5303b"
-      },
-      "ap-northeast-1": {
-        "AMIID": "ami-9cd57ffd"
-      },
-      "ap-southeast-1": {
-        "AMIID": "ami-a900a3ca"
-      },
-      "ap-southeast-2": {
-        "AMIID": "ami-5781be34"
-      }
-    }
-  },
-  "Parameters": {
-    "EcsInstanceType": {
-      "Type": "String",
-      "Description": "ECS EC2 instance type",
-      "Default": "t2.micro",
-      "AllowedValues": [
-        "t2.micro",
-        "t2.small",
-        "t2.medium",
-        "t2.large",
-        "m3.medium",
-        "m3.large",
-        "m3.xlarge",
-        "m3.2xlarge",
-        "m4.large",
-        "m4.xlarge",
-        "m4.2xlarge",
-        "m4.4xlarge",
-        "m4.10xlarge",
-        "c4.large",
-        "c4.xlarge",
-        "c4.2xlarge",
-        "c4.4xlarge",
-        "c4.8xlarge",
-        "c3.large",
-        "c3.xlarge",
-        "c3.2xlarge",
-        "c3.4xlarge",
-        "c3.8xlarge"
-      ],
-      "ConstraintDescription": "Must be a valid EC2 instance type."
- }, - "KeyName": { - "Type": "AWS::EC2::KeyPair::KeyName", - "Description": "Name of an existing EC2 KeyPair to enable SSH access to the ECS instances", - "Default": "" - }, - "VpcId": { - "Type": "String", - "Description": "Optional - VPC Id of existing VPC. Leave blank to have a new VPC created", - "Default": "", - "AllowedPattern": "^(?:vpc-[0-9a-f]{8}|)$", - "ConstraintDescription": "VPC Id must begin with 'vpc-' or leave blank to have a new VPC created" - }, - "SubnetIds": { - "Type": "CommaDelimitedList", - "Description": "Optional - Comma separated list of two (2) existing VPC Subnet Ids where ECS instances will run. Required if setting VPC.", - "Default": "" - }, - "AutoScalingGroupMinSize": { - "Type": "Number", - "Description": "Minimum size of ECS Auto Scaling Group", - "Default": "2", - "MinValue": 1 - }, - "AutoScalingGroupMaxSize": { - "Type": "Number", - "Description": "Maximum size of ECS Auto Scaling Group", - "Default": "10" - }, - "SecurityGroup": { - "Type": "AWS::EC2::SecurityGroup::Id", - "Description": "Optional - Existing security group to associate the container instances. Creates one by default.", - "Default": "" - }, - "SourceCidr": { - "Type": "String", - "Description": "Optional - CIDR/IP range for EcsPort - defaults to 0.0.0.0/0", - "Default": "0.0.0.0/0", - "AllowedPattern": "^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(3[0-2]|[1-2][0-9]|[0-9]))$" - }, - "EcsPort": { - "Type": "String", - "Description": "Optional - Security Group port to open on ECS instances - defaults to port 80", - "Default": "80" - }, - "VpcAvailabilityZones": { - "Type": "CommaDelimitedList", - "Description": "Optional - Comma-delimited list of VPC availability zones in which to create subnets. 
Required if setting VPC.", - "Default": "" - } - }, - "Metadata": { - "AWS::CloudFormation::Interface": { - "ParameterGroups": [ - { - "Label": { - "default": "AutoScaling Group and Instance Parameters" - }, - "Parameters": [ - "AutoScalingGroupMinSize", - "AutoScalingGroupMaxSize", - "EcsInstanceType", - "KeyName", - "SecurityGroup" - ] - }, - { - "Label": { - "default": "ECS Parameters" - }, - "Parameters": [ - "EcsPort", - "SourceCidr" - ] - }, - { - "Label": { - "default": "VPC Parameters" - }, - "Parameters": [ - "VpcId", - "SubnetIds", - "VpcAvailabilityZones" - ] - } - ], - "ParameterLabels": { - "AutoScalingGroupMinSize": { - "default": "Minimum instances" - }, - "AutoScalingGroupMaxSize": { - "default": "Maximum instances" - }, - "EcsInstanceType": { - "default": "Instance type" - }, - "SecurityGroup": { - "default": "Security group" - }, - "KeyName": { - "default": "Key name" - }, - "EcsPort": { - "default": "Ecs port" - }, - "SourceCidr": { - "default": "Source IP range" - }, - "VpcId": { - "default": "VPC" - }, - "SubnetIds": { - "default": "Subnets" - }, - "VpcAvailabilityZones": { - "default": "Availability zones" - } - } - } - }, - "Conditions": { - "CreateVpcResources": { - "Fn::Equals": [ - { - "Ref": "VpcId" - }, - "" - ] - }, - "CreateSecurityGroup": { - "Fn::Equals": [ - { - "Ref": "SecurityGroup" - }, - "" - ] - }, - "CreateEC2LaunchConfigurationWithKeyPair": { - "Fn::Not": [ - { - "Fn::Equals": [ - { - "Ref": "KeyName" - }, - "" - ] - } - ] - }, - "CreateEC2LaunchConfigurationWithoutKeyPair": { - "Fn::Equals": [ - { - "Ref": "KeyName" - }, - "" - ] - }, - "UseSpecifiedVpcAvailabilityZones": { - "Fn::Not": [ - { - "Fn::Equals": [ - { - "Fn::Join": [ - "", - { - "Ref": "VpcAvailabilityZones" - } - ] - }, - "" - ] - } - ] - } - }, - "Resources": { - "EcsCluster": { - "Type": "AWS::ECS::Cluster" - }, - "Vpc": { - "Condition": "CreateVpcResources", - "Type": "AWS::EC2::VPC", - "Properties": { - "CidrBlock": { - "Fn::FindInMap": [ - "VpcCidrs", - "vpc", - "cidr" - ] - }, - "Tags": [ - { - "Key": "Name", - "Value": { - "Fn::Join": [ - "-", - [ - { - "Ref": "AWS::StackName" - }, - "vpc" - ] - ] - } - } - ] - } - }, - "PublicSubnetAz1": { - "Condition": "CreateVpcResources", - "Type": "AWS::EC2::Subnet", - "Properties": { - "VpcId": { - "Ref": "Vpc" - }, - "CidrBlock": { - "Fn::FindInMap": [ - "VpcCidrs", - "pubsubnet1", - "cidr" - ] - }, - "AvailabilityZone": { - "Fn::If": [ - "UseSpecifiedVpcAvailabilityZones", - { - "Fn::Select": [ - "0", - { - "Ref": "VpcAvailabilityZones" - } - ] - }, - { - "Fn::Select": [ - "0", - { - "Fn::GetAZs": { - "Ref": "AWS::Region" - } - } - ] - } - ] - } - } - }, - "PublicSubnetAz2": { - "Condition": "CreateVpcResources", - "Type": "AWS::EC2::Subnet", - "Properties": { - "VpcId": { - "Ref": "Vpc" - }, - "CidrBlock": { - "Fn::FindInMap": [ - "VpcCidrs", - "pubsubnet2", - "cidr" - ] - }, - "AvailabilityZone": { - "Fn::If": [ - "UseSpecifiedVpcAvailabilityZones", - { - "Fn::Select": [ - "1", - { - "Ref": "VpcAvailabilityZones" - } - ] - }, - { - "Fn::Select": [ - "1", - { - "Fn::GetAZs": { - "Ref": "AWS::Region" - } - } - ] - } - ] - } - } - }, - "InternetGateway": { - "Condition": "CreateVpcResources", - "Type": "AWS::EC2::InternetGateway" - }, - "AttachGateway": { - "Condition": "CreateVpcResources", - "Type": "AWS::EC2::VPCGatewayAttachment", - "Properties": { - "VpcId": { - "Ref": "Vpc" - }, - "InternetGatewayId": { - "Ref": "InternetGateway" - } - } - }, - "RouteViaIgw": { - "Condition": "CreateVpcResources", - "Type": "AWS::EC2::RouteTable", 
- "Properties": { - "VpcId": { - "Ref": "Vpc" - } - } - }, - "PublicRouteViaIgw": { - "Condition": "CreateVpcResources", - "DependsOn": "AttachGateway", - "Type": "AWS::EC2::Route", - "Properties": { - "RouteTableId": { - "Ref": "RouteViaIgw" - }, - "DestinationCidrBlock": "0.0.0.0/0", - "GatewayId": { - "Ref": "InternetGateway" - } - } - }, - "PubSubnet1RouteTableAssociation": { - "Condition": "CreateVpcResources", - "Type": "AWS::EC2::SubnetRouteTableAssociation", - "Properties": { - "SubnetId": { - "Ref": "PublicSubnetAz1" - }, - "RouteTableId": { - "Ref": "RouteViaIgw" - } - } - }, - "PubSubnet2RouteTableAssociation": { - "Condition": "CreateVpcResources", - "Type": "AWS::EC2::SubnetRouteTableAssociation", - "Properties": { - "SubnetId": { - "Ref": "PublicSubnetAz2" - }, - "RouteTableId": { - "Ref": "RouteViaIgw" - } - } - }, - "EcsSecurityGroup": { - "Condition": "CreateSecurityGroup", - "Type": "AWS::EC2::SecurityGroup", - "Metadata": { - "cfn_nag": { - "rules_to_suppress": [ - { - "id": "F1000", - "reason": "Allow all outbound traffic." - } - ] - } - }, - "Properties": { - "GroupDescription": "ECS Allowed Ports", - "VpcId": { - "Fn::If": [ - "CreateVpcResources", - { - "Ref": "Vpc" - }, - { - "Ref": "VpcId" - } - ] - }, - "SecurityGroupIngress": [ - { - "IpProtocol": "tcp", - "FromPort": { - "Ref": "EcsPort" - }, - "ToPort": { - "Ref": "EcsPort" - }, - "CidrIp": { - "Ref": "SourceCidr" - } - } - ] - } - }, - "EcsInstancePolicy": { - "Type": "AWS::IAM::Role", - "Metadata": { - "cfn_nag": { - "rules_to_suppress": [ - { - "id": "W11", - "reason": "The CloudWatch Logs policy has been scoped down to logs namespace." - } - ] - } - }, - "Properties": { - "AssumeRolePolicyDocument": { - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Principal": { - "Service": [ - "ec2.amazonaws.com" - ] - }, - "Action": [ - "sts:AssumeRole" - ] - } - ] - }, - "Path": "/", - "ManagedPolicyArns": [ - "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role" - ], - "Policies": [ - { - "PolicyName": "ecs-service", - "PolicyDocument": { - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Action": [ - "logs:CreateLogStream", - "logs:PutLogEvents" - ], - "Resource": "arn:aws:logs:*:*:*" - } - ] - } - } - ] - } - }, - "EcsInstanceProfile": { - "Type": "AWS::IAM::InstanceProfile", - "Properties": { - "Path": "/", - "Roles": [ - { - "Ref": "EcsInstancePolicy" - } - ] - } - }, - "EcsInstanceLaunchConfiguration": { - "Condition": "CreateEC2LaunchConfigurationWithKeyPair", - "Type": "AWS::AutoScaling::LaunchConfiguration", - "Properties": { - "ImageId": { - "Fn::FindInMap": [ - "AWSRegionToAMI", - { - "Ref": "AWS::Region" - }, - "AMIID" - ] - }, - "InstanceType": { - "Ref": "EcsInstanceType" - }, - "AssociatePublicIpAddress": true, - "IamInstanceProfile": { - "Ref": "EcsInstanceProfile" - }, - "KeyName": { - "Ref": "KeyName" - }, - "SecurityGroups": { - "Fn::If": [ - "CreateSecurityGroup", - [ - { - "Ref": "EcsSecurityGroup" - } - ], - [ - { - "Ref": "SecurityGroup" - } - ] - ] - }, - "UserData": { - "Fn::Base64": { - "Fn::Join": [ - "", - [ - "#!/bin/bash\n", - "echo ECS_CLUSTER=", - { - "Ref": "EcsCluster" - }, - " >> /etc/ecs/ecs.config\n" - ] - ] - } - } - } - }, - "EcsInstanceLaunchConfigurationWithoutKeyPair": { - "Condition": "CreateEC2LaunchConfigurationWithoutKeyPair", - "Type": "AWS::AutoScaling::LaunchConfiguration", - "Properties": { - "ImageId": { - "Fn::FindInMap": [ - "AWSRegionToAMI", - { - "Ref": "AWS::Region" - }, - "AMIID" - ] - }, - 
"InstanceType": { - "Ref": "EcsInstanceType" - }, - "AssociatePublicIpAddress": true, - "IamInstanceProfile": { - "Ref": "EcsInstanceProfile" - }, - "SecurityGroups": { - "Fn::If": [ - "CreateSecurityGroup", - [ - { - "Ref": "EcsSecurityGroup" - } - ], - [ - { - "Ref": "SecurityGroup" - } - ] - ] - }, - "UserData": { - "Fn::Base64": { - "Fn::Join": [ - "", - [ - "#!/bin/bash\n", - "echo ECS_CLUSTER=", - { - "Ref": "EcsCluster" - }, - " >> /etc/ecs/ecs.config\n" - ] - ] - } - } - } - }, - "EcsInstanceAutoScalingGroup": { - "Type": "AWS::AutoScaling::AutoScalingGroup", - "Properties": { - "VPCZoneIdentifier": { - "Fn::If": [ - "CreateVpcResources", - [ - { - "Fn::Join": [ - ",", - [ - { - "Ref": "PublicSubnetAz1" - }, - { - "Ref": "PublicSubnetAz2" - } - ] - ] - } - ], - { - "Ref": "SubnetIds" - } - ] - }, - "LaunchConfigurationName": { - "Fn::If": [ - "CreateEC2LaunchConfigurationWithKeyPair", - { - "Ref": "EcsInstanceLaunchConfiguration" - }, - { - "Ref": "EcsInstanceLaunchConfigurationWithoutKeyPair" - } - ] - }, - "MinSize": { - "Ref": "AutoScalingGroupMinSize" - }, - "MaxSize": { - "Ref": "AutoScalingGroupMaxSize" - }, - "DesiredCapacity": { - "Ref": "AutoScalingGroupMinSize" - }, - "Tags": [ - { - "Key": "Name", - "Value": { - "Fn::Join": [ - "", - [ - "ECS Instance - ", - { - "Ref": "AWS::StackName" - } - ] - ] - }, - "PropagateAtLaunch": true - } - ] - } - }, - "EcsScaleUpPolicy": { - "Type": "AWS::AutoScaling::ScalingPolicy", - "Properties": { - "AdjustmentType": "ChangeInCapacity", - "AutoScalingGroupName": { - "Ref": "EcsInstanceAutoScalingGroup" - }, - "EstimatedInstanceWarmup": 60, - "PolicyType": "StepScaling", - "StepAdjustments": [ - { - "MetricIntervalLowerBound": 0, - "MetricIntervalUpperBound": 10, - "ScalingAdjustment": 1 - }, - { - "MetricIntervalLowerBound": 10, - "MetricIntervalUpperBound": 25, - "ScalingAdjustment": 2 - }, - { - "MetricIntervalLowerBound": 25, - "ScalingAdjustment": 3 - } - ] - } - }, - "EcsMemoryReservationHighAlarmHigh": { - "Type": "AWS::CloudWatch::Alarm", - "Properties": { - "EvaluationPeriods": 1, - "Statistic": "Average", - "Threshold": 50, - "AlarmDescription": "Alarm if ECS Memory reservation is high", - "Period": 60, - "Unit": "Percent", - "AlarmActions": [ - { - "Ref": "EcsScaleUpPolicy" - } - ], - "Namespace": "AWS/ECS", - "Dimensions": [ - { - "Name": "ClusterName", - "Value": { - "Ref": "EcsCluster" - } - } - ], - "ComparisonOperator": "GreaterThanThreshold", - "MetricName": "MemoryReservation" - } - }, - "EcsScaleDownPolicy": { - "Type": "AWS::AutoScaling::ScalingPolicy", - "Properties": { - "AdjustmentType": "ChangeInCapacity", - "AutoScalingGroupName": { - "Ref": "EcsInstanceAutoScalingGroup" - }, - "PolicyType": "StepScaling", - "StepAdjustments": [ - { - "MetricIntervalLowerBound": -10, - "MetricIntervalUpperBound": 0, - "ScalingAdjustment": 0 - }, - { - "MetricIntervalLowerBound": -20, - "MetricIntervalUpperBound": -10, - "ScalingAdjustment": -1 - }, - { - "MetricIntervalLowerBound": -30, - "MetricIntervalUpperBound": -20, - "ScalingAdjustment": -2 - }, - { - "MetricIntervalUpperBound": -30, - "ScalingAdjustment": -3 - } - ] - } - }, - "EcsMemoryReservationHighAlarmLow": { - "Type": "AWS::CloudWatch::Alarm", - "Properties": { - "EvaluationPeriods": 1, - "Statistic": "Average", - "Threshold": 50, - "AlarmDescription": "Alarm if ECS Memory reservation is low", - "Period": 60, - "Unit": "Percent", - "AlarmActions": [ - { - "Ref": "EcsScaleDownPolicy" - } - ], - "Namespace": "AWS/ECS", - "Dimensions": [ - { - "Name": "ClusterName", - 
"Value": { - "Ref": "EcsCluster" - } - } - ], - "ComparisonOperator": "LessThanOrEqualToThreshold", - "MetricName": "MemoryReservation" - } - } - }, - "Outputs": { - "Cluster": { - "Value": { - "Ref": "EcsCluster" - } - } - } -} diff --git a/source/cloudformation/ops-automator.template b/source/cloudformation/ops-automator.template index d57ecb8..2dacbdc 100644 --- a/source/cloudformation/ops-automator.template +++ b/source/cloudformation/ops-automator.template @@ -44,12 +44,9 @@ "XXXLarge": "3008", "ECS": "" }, - "EcsCluster": { - "ClusterName": "" - }, "Resources": { "ResourceToS3SizeKB": 16, - "EncryptResourceData": "False" + "EncryptResourceData": "True" }, "ServiceLimits": { "MaxConcurrentEbsSnapshotCopies": 5, @@ -60,6 +57,11 @@ "MaxPutCallsPerStream": 5, "MaxDescribeCalls": 5, "MaxApiCalls": 40 + }, + "VpcCidrs": { + "vpc": "10.0.0.0/16", + "pubsubnet1": "10.0.0.0/24", + "pubsubnet2": "10.0.1.0/24" } }, "EnabledDisabled": { @@ -167,6 +169,32 @@ ], "Default": "Yes", "Description": "Activate or deactivate scheduling of task." + }, + "VpcId": { + "Type": "String", + "Description": "Optional - VPC Id of existing VPC. Leave blank to have a new VPC created", + "Default": "", + "AllowedPattern": "^(?:vpc-[0-9a-f]{8,17}|)$", + "ConstraintDescription": "VPC Id must begin with 'vpc-' or leave blank to have a new VPC created" + }, + "SubnetIds": { + "Type": "CommaDelimitedList", + "Description": "Optional - Comma separated list of two (2) existing VPC Subnet Ids where ECS instances will run. Required if setting VPC.", + "Default": "" + }, + "VpcAvailabilityZones": { + "Type": "CommaDelimitedList", + "Description": "Optional - Comma-delimited list of VPC availability zones in which to create subnets. Required if setting VPC.", + "Default": "" + }, + "ECSFargate": { + "Type": "String", + "Description": "Should Ops Automator use Fargate to run tasks?", + "AllowedValues": [ + "Yes", + "No" + ], + "Default": "No" } }, "Metadata": { @@ -194,6 +222,17 @@ "LogRetentionDays", "ConfigBackupDays" ] + }, + { + "Label": { + "default": "ECS/Fargate Parameters" + }, + "Parameters": [ + "ECSFargate", + "VpcId", + "SubnetIds", + "VpcAvailabilityZones" + ] } ], "ParameterLabels": { @@ -220,6 +259,18 @@ }, "UseCloudWatchMetrics": { "default": "Enable CloudWatch Metrics" + }, + "VpcId": { + "default": "Cluster VPC" + }, + "SubnetIds": { + "default": "Cluster Subnets" + }, + "VpcAvailabilityZones": { + "default": "Cluster Availability zones" + }, + "ECSFargate": { + "default": "ECS/Fargate" } } } @@ -309,22 +360,6 @@ "True" ] }, - "UseEcs": { - "Fn::Not": [ - { - "Fn::Equals": [ - { - "Fn::FindInMap": [ - "Settings", - "EcsCluster", - "ClusterName" - ] - }, - "" - ] - } - ] - }, "CloudWatchPutLimit800": { "Fn::Or": [ { @@ -460,11 +495,49 @@ ] } ] + }, + "UseEcs": { + "Fn::Not": [ + { "Fn::Equals": [ { "Ref": "ECSFargate" }, "No" ] } + ] + }, + "CreateVpcResources": { + "Fn::And": [ + { "Fn::Equals": [ { "Ref": "VpcId" }, "" ] }, + { "Condition": "UseEcs" } + ] + }, + "UseSpecifiedVpcAvailabilityZones": { + "Fn::Not": [ + { + "Fn::Equals": [ + { + "Fn::Join": [ + "", + { + "Ref": "VpcAvailabilityZones" + } + ] + }, + "" + ] + } + ] } }, "Resources": { "OpsAutomatorEventsForwardRole": { "Type": "AWS::IAM::Role", + "Metadata": { + "cfn_nag": { + "rules_to_suppress": [ + { + "id": "W11", + "reason": "logs:*LogStream actions requires * resource." 
+ } + ] + } + }, "Properties": { "AssumeRolePolicyDocument": { "Version": "2012-10-17", @@ -492,14 +565,32 @@ "Resource": { "Ref": "OpsAutomatorEventsTopic" } + }, + { + "Effect": "Allow", + "Action": [ + "logs:CreateLogGroup", + "logs:CreateLogStream", + "logs:PutLogEvents" + ], + "Resource": "*" + }, + { + "Sid": "KMS", + "Effect": "Allow", + "Resource": { "Fn::GetAtt": [ "ResourceEncryptionKey", "Arn" ] }, + "Action": [ + "kms:Encrypt", + "kms:Decrypt", + "kms:ReEncrypt*", + "kms:GenerateDataKey*", + "kms:DescribeKey" + ] } ] } } ], - "ManagedPolicyArns": [ - "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole" - ], "Path": "/" } }, @@ -592,14 +683,32 @@ "Resource": [ "*" ] + }, + { + "Effect": "Allow", + "Action": [ + "logs:CreateLogGroup", + "logs:CreateLogStream", + "logs:PutLogEvents" + ], + "Resource": "*" + }, + { + "Sid": "KMS", + "Effect": "Allow", + "Resource": { "Fn::GetAtt": [ "ResourceEncryptionKey", "Arn" ] }, + "Action": [ + "kms:Encrypt", + "kms:Decrypt", + "kms:ReEncrypt*", + "kms:GenerateDataKey*", + "kms:DescribeKey" + ] } ] } } ], - "ManagedPolicyArns": [ - "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole" - ], "Path": "/" } }, @@ -615,6 +724,10 @@ { "id": "F38", "reason": "To allow the IAM:PassRole action. A condition to allow only lambda to assume the role has also been added." + }, + { + "id": "W76", + "reason": "Complex role needs refactoring. This will be addressed in a future release." } ] } @@ -772,11 +885,6 @@ } ] }, - { - "Effect": "Allow", - "Action": "s3:ListBucket", - "Resource": "arn:aws:s3:::*" - }, { "Effect": "Allow", "Action": [ @@ -844,18 +952,6 @@ "Fn::GetAtt": [ "TriggerTable", "Arn" ] } }, - { - "Effect": "Allow", - "Action": [ - "dynamodb:GetRecords", - "dynamodb:GetShardIterator", - "dynamodb:DescribeStream", - "dynamodb:ListStreams" - ], - "Resource": { - "Fn::GetAtt": [ "TaskTrackingTable", "StreamArn" ] - } - }, { "Effect": "Allow", "Action": [ @@ -893,22 +989,15 @@ } ] }, - { - "Effect": "Allow", - "Action": "cloudformation:ListStackResources", - "Resource": { - "Ref": "AWS::StackId" - } - }, { "Effect": "Allow", "Action": [ "cloudformation:GetTemplate", "cloudformation:ListStackResources" ], - "Resource": { - "Fn::Sub": "arn:aws:cloudformation:${AWS::Region}:${AWS::AccountId}:stack/${AWS::StackName}/*" - } + "Resource": [ + { "Fn::Sub": "arn:aws:cloudformation:${AWS::Region}:${AWS::AccountId}:stack/${AWS::StackName}/*" } + ] }, { "Effect": "Allow", @@ -979,17 +1068,13 @@ ] } }, - { "Sid": "ActionsRequireAllResources", "Effect": "Allow", "Action": [ "cloudformation:DeleteStack", - "cloudwatch:PutMetricData", - "dynamodb:ListTables", - "ec2:DescribeVpcs", "ec2:DescribeSubnets", "ec2:DescribeImages", @@ -1002,31 +1087,135 @@ "iam:PassRole", "events:PutRule", "events:ListRules", - "logs:DescribeLogGroups", - "pricing:GetAttributeValues", - "ssm:GetParameter", "ssm:GetParameters", - "sts:AssumeRole", - - "tag:GetResources" + "tag:GetResources", + "s3:ListBuckets", + "dynamodb:ListStreams" + ], + "Resource": "*" + }, + { + "Effect": "Allow", + "Action": [ + "logs:CreateLogGroup", + "logs:CreateLogStream", + "logs:PutLogEvents" ], "Resource": "*" + }, + { + "Effect": "Allow", + "Action": [ + "dynamodb:DescribeStream", + "dynamodb:GetRecords", + "dynamodb:GetShardIterator", + "dynamodb:ListStreams" + ], + "Resource": [ + { "Fn::GetAtt": ["ConfigurationTable", "StreamArn"] }, + { "Fn::GetAtt": ["TaskTrackingTable", "StreamArn"] }, + { "Fn::GetAtt": ["ConcurrencyTable", "StreamArn"] }, + { "Fn::GetAtt": ["TriggerTable", 
"StreamArn"] } + ] + }, + { + "Sid": "KMS", + "Effect": "Allow", + "Resource": { "Fn::GetAtt": [ "ResourceEncryptionKey", "Arn" ] }, + "Action": [ + "kms:Encrypt", + "kms:Decrypt", + "kms:ReEncrypt*", + "kms:GenerateDataKey*", + "kms:DescribeKey" + ] } ] } } ], - "ManagedPolicyArns": [ - "arn:aws:iam::aws:policy/service-role/AWSLambdaDynamoDBExecutionRole", - "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole" - ], "Path": "/" } }, + "EcsExecutionRole": { + "Type": "AWS::IAM::Role", + "Metadata": { + "cfn_nag": { + "rules_to_suppress": [ + { + "id": "W11", + "reason": "GetAuthorizationToken requires global resource" + } + ] + } + }, + "Properties": { + "AssumeRolePolicyDocument": { + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "", + "Effect": "Allow", + "Principal": { + "Service": "ecs-tasks.amazonaws.com" + }, + "Action": "sts:AssumeRole" + } + ] + }, + "Policies": [ + { + "PolicyName": "OpsAutomatorLambdaRolePolicy", + "PolicyDocument": { + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": "ecr:GetAuthorizationToken", + "Resource": "*" + }, + { + "Effect": "Allow", + "Action": [ + "ecr:BatchCheckLayerAvailability", + "ecr:GetDownloadUrlForLayer", + "ecr:BatchGetImage" + ], + "Resource": { "Fn::Sub": "arn:aws:ecr:${AWS::Region}:${AWS::AccountId}:repository/ops-automator" } + }, + { + "Effect": "Allow", + "Action": [ + "logs:CreateLogStream", + "logs:PutLogEvents" + ], + "Resource": [ + { "Fn::Sub": "arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:${AWS::StackName}-logs" }, + { "Fn::Sub": "arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:${AWS::StackName}-logs:log-stream:*" } + ] + }, + { + "Sid": "KMS", + "Effect": "Allow", + "Resource": { "Fn::GetAtt": [ "ResourceEncryptionKey", "Arn" ] }, + "Action": [ + "kms:Encrypt", + "kms:Decrypt", + "kms:ReEncrypt*", + "kms:GenerateDataKey*", + "kms:DescribeKey" + ] + } + ] + } + } + ] + } + }, "LastSchedulerExecutionTable": { "Type": "AWS::DynamoDB::Table", "Properties": { @@ -1042,7 +1231,21 @@ "KeyType": "HASH" } ], - "BillingMode": "PAY_PER_REQUEST" + "BillingMode": "PAY_PER_REQUEST", + "SSESpecification": { + "Fn::If": [ + "EncryptResourceData", + { + "KMSMasterKeyId": { "Ref": "ResourceEncryptionKey" }, + "SSEEnabled": true, + "SSEType": "KMS" + }, + { + "KMSMasterKeyId": "", + "SSEEnabled": false + } + ] + } } }, "ConfigurationTable": { @@ -1063,6 +1266,20 @@ "BillingMode": "PAY_PER_REQUEST", "StreamSpecification": { "StreamViewType": "NEW_AND_OLD_IMAGES" + }, + "SSESpecification": { + "Fn::If": [ + "EncryptResourceData", + { + "KMSMasterKeyId": { "Ref": "ResourceEncryptionKey" }, + "SSEEnabled": true, + "SSEType": "KMS" + }, + { + "KMSMasterKeyId": "", + "SSEEnabled": false + } + ] } } }, @@ -1089,6 +1306,20 @@ "KeyType": "HASH" } ], + "SSESpecification": { + "Fn::If": [ + "EncryptResourceData", + { + "KMSMasterKeyId": { "Ref": "ResourceEncryptionKey" }, + "SSEEnabled": true, + "SSEType": "KMS" + }, + { + "KMSMasterKeyId": "", + "SSEEnabled": false + } + ] + }, "BillingMode": "PAY_PER_REQUEST", "TimeToLiveSpecification": { "AttributeName": "TTL", @@ -1157,6 +1388,20 @@ "BillingMode": "PAY_PER_REQUEST", "StreamSpecification": { "StreamViewType": "NEW_AND_OLD_IMAGES" + }, + "SSESpecification": { + "Fn::If": [ + "EncryptResourceData", + { + "KMSMasterKeyId": { "Ref": "ResourceEncryptionKey" }, + "SSEEnabled": true, + "SSEType": "KMS" + }, + { + "KMSMasterKeyId": "", + "SSEEnabled": false + } + ] } } }, @@ -1178,6 +1423,20 @@ "BillingMode": "PAY_PER_REQUEST", 
"StreamSpecification": { "StreamViewType": "KEYS_ONLY" + }, + "SSESpecification": { + "Fn::If": [ + "EncryptResourceData", + { + "KMSMasterKeyId": { "Ref": "ResourceEncryptionKey" }, + "SSEEnabled": true, + "SSEType": "KMS" + }, + { + "KMSMasterKeyId": "", + "SSEEnabled": false + } + ] } } }, @@ -1236,6 +1495,15 @@ } ] ] + }, + "KmsMasterKeyId": { + "Fn::If": [ + "EncryptResourceData", + { + "Ref": "ResourceEncryptionKey" + }, + "" + ] } } }, @@ -1275,6 +1543,15 @@ } ] ] + }, + "KmsMasterKeyId": { + "Fn::If": [ + "EncryptResourceData", + { + "Ref": "ResourceEncryptionKey" + }, + "" + ] } } }, @@ -1314,6 +1591,15 @@ } ] ] + }, + "KmsMasterKeyId": { + "Fn::If": [ + "EncryptResourceData", + { + "Ref": "ResourceEncryptionKey" + }, + "" + ] } } }, @@ -1445,7 +1731,7 @@ ] }, "Handler": "main.lambda_handler", - "Runtime": "python3.7", + "Runtime": "python3.8", "Role": { "Fn::GetAtt": [ "OpsAutomatorLambdaRole", @@ -1624,11 +1910,7 @@ "Fn::If": [ "UseEcs", { - "Fn::FindInMap": [ - "Settings", - "EcsCluster", - "ClusterName" - ] + "Ref": "EcsCluster" }, { "Ref": "AWS::NoValue" @@ -1690,13 +1972,39 @@ "Metrics", "Url" ] - } - } - }, - "MemorySize": { - "Fn::FindInMap": [ - "Settings", - "ActionMemory", + }, + "AWSVPC_SUBNETS": { + "Fn::If": [ + "UseEcs", + { "Fn::If": [ + "CreateVpcResources", + { "Fn::Join": [ ",", + [ { "Ref": "PublicSubnetAz1" }, { "Ref": "PublicSubnetAz2" } ] + ] }, + { + "Fn::Join": [ + ",", + { "Ref": "SubnetIds" } + ] + } + ] }, + "None" + ] + }, + "AWSVPC_SECURITYGROUPS": { + "Fn::If": [ + "UseEcs", + { "Ref": "FargateSecurityGroup" }, + "None" + ] + }, + "AWSVPC_ASSIGNPUBLICIP": "ENABLED" + } + }, + "MemorySize": { + "Fn::FindInMap": [ + "Settings", + "ActionMemory", "Standard" ] }, @@ -1706,6 +2014,16 @@ }, "OpsAutomatorCloudWatchQueueHandlerLambda": { "Type": "AWS::Lambda::Function", + "Metadata": { + "cfn_nag": { + "rules_to_suppress": [ + { + "id": "W48", + "reason": "KMS not required: no customer data" + } + ] + } + }, "Properties": { "Code": { "S3Bucket": { @@ -1739,7 +2057,7 @@ ] }, "Handler": "cloudwatch_queue_handler_lambda.lambda_handler", - "Runtime": "python3.7", + "Runtime": "python3.8", "Role": { "Fn::GetAtt": [ "OpsAutomatorCloudWatchLogHandlerRole", @@ -1846,12 +2164,17 @@ "MemoryReservation": 128 } ], + "Cpu": "1024", + "Memory": "2048", + "RequiresCompatibilities": [ "FARGATE" ], + "NetworkMode": "awsvpc", "TaskRoleArn": { "Fn::GetAtt": [ "OpsAutomatorLambdaRole", "Arn" ] }, + "ExecutionRoleArn": { "Ref": "EcsExecutionRole" }, "Volumes": [] } }, @@ -1861,8 +2184,8 @@ "cfn_nag": { "rules_to_suppress": [ { - "id": "W35", - "reason": "Access logging is not required for this bucket. Custom metrics reporting is already done." + "id": "W51", + "reason": "The bucket is not public. When using the CF template in PROD, create a bucket policy to allow only administrators/ auditors access to the bucket" } ] } @@ -1910,6 +2233,10 @@ "BlockPublicPolicy": true, "IgnorePublicAcls": true, "RestrictPublicBuckets": true + }, + "LoggingConfiguration": { + "DestinationBucketName": { "Ref": "S3LoggingBucket" }, + "LogFilePrefix": "access-logs" } } }, @@ -1919,8 +2246,8 @@ "cfn_nag": { "rules_to_suppress": [ { - "id": "W35", - "reason": "Access logging is not required for this bucket. Custom metrics reporting is already done." + "id": "W51", + "reason": "The bucket is not public. 
When using the CF template in PROD, create a bucket policy to allow only administrators/ auditors access to the bucket" } ] } @@ -1957,6 +2284,10 @@ "BlockPublicPolicy": true, "IgnorePublicAcls": true, "RestrictPublicBuckets": true + }, + "LoggingConfiguration": { + "DestinationBucketName": { "Ref": "S3LoggingBucket" }, + "LogFilePrefix": "access-logs" } } }, @@ -2005,6 +2336,15 @@ } ] ] + }, + "KmsMasterKeyId": { + "Fn::If": [ + "EncryptResourceData", + { + "Ref": "ResourceEncryptionKey" + }, + "" + ] } } }, @@ -2077,8 +2417,8 @@ "cfn_nag": { "rules_to_suppress": [ { - "id": "W35", - "reason": "Access logging is not required for this bucket. Custom metrics reporting is already done." + "id": "W51", + "reason": "The bucket is not public. When using the CF template in PROD, create a bucket policy to allow only administrators/ auditors access to the bucket" } ] } @@ -2115,6 +2455,10 @@ "BlockPublicPolicy": true, "IgnorePublicAcls": true, "RestrictPublicBuckets": true + }, + "LoggingConfiguration": { + "DestinationBucketName": { "Ref": "S3LoggingBucket" }, + "LogFilePrefix": "access-logs" } } }, @@ -2737,7 +3081,21 @@ }, "OpsAutomatorEcsRepository": { "Condition": "UseEcs", - "Type": "AWS::ECR::Repository" + "Type": "AWS::ECR::Repository", + "Metadata": { + "cfn_nag": { + "rules_to_suppress": [ + { + "id": "W28", + "reason": "Dynamic names for ECS do not follow standard practice for resource naming and are not deriveable from stackname." + } + ] + } + }, + "DeletionPolicy": "Retain", + "Properties": { + "RepositoryName": "ops-automator" + } }, "ResourceEncryptionKey": { "Type": "AWS::KMS::Key", @@ -2767,26 +3125,6 @@ }, "Action": "kms:*", "Resource": "*" - }, - { - "Sid": "Allow use of the key", - "Effect": "Allow", - "Principal": { - "AWS": { - "Fn::GetAtt": [ - "OpsAutomatorLambdaRole", - "Arn" - ] - } - }, - "Action": [ - "kms:Encrypt", - "kms:Decrypt", - "kms:ReEncrypt*", - "kms:GenerateDataKey*", - "kms:DescribeKey" - ], - "Resource": "*" } ] } @@ -2804,6 +3142,372 @@ } }, "Condition": "EncryptResourceData" + }, + "S3LoggingBucket": { + "DeletionPolicy": "Retain", + "Type": "AWS::S3::Bucket", + "Properties": { + "BucketName": { + "Fn::Sub": "aws-opsautomator-s3-access-logs-${AWS::AccountId}-${AWS::Region}" + }, + "AccessControl": "LogDeliveryWrite", + "VersioningConfiguration": { + "Status": "Enabled" + }, + "BucketEncryption": { + "ServerSideEncryptionConfiguration": [ + { + "ServerSideEncryptionByDefault": { + "SSEAlgorithm": "AES256" + } + } + ] + }, + "Tags": [ + { + "Key": "Name", + "Value": "AWS Ops Automator Access Logs" + } + ], + "PublicAccessBlockConfiguration": { + "BlockPublicAcls": true, + "BlockPublicPolicy": true, + "IgnorePublicAcls": true, + "RestrictPublicBuckets": true + } + }, + "Metadata": { + "cfn_nag": { + "rules_to_suppress": [ + { + "id": "W35", + "reason": "This S3 bucket is used as the destination for storing access logs" + }, + { + "id": "W51", + "reason": "Log delivery controlled by ACL, not bucket policy" + } + ] + } + } + }, + "EcsCluster": { + "Condition": "UseEcs", + "Type": "AWS::ECS::Cluster" + }, + "Vpc": { + "Condition": "CreateVpcResources", + "Type": "AWS::EC2::VPC", + "Metadata": { + "cfn_nag": { + "rules_to_suppress": [ + { + "id": "W60", + "reason": "VPC is only used for the Fargate cluster, reducing the importance of Flow Logs (which add cost)." 
+ } + ] + } + }, + "Properties": { + "CidrBlock": { + "Fn::FindInMap": [ + "Settings", + "VpcCidrs", + "vpc" + ] + }, + "Tags": [ + { + "Key": "Name", + "Value": { + "Fn::Join": [ + "-", + [ + { + "Ref": "AWS::StackName" + }, + "vpc" + ] + ] + } + } + ] + } + }, + "PublicSubnetAz1": { + "Condition": "CreateVpcResources", + "Type": "AWS::EC2::Subnet", + "Properties": { + "VpcId": { + "Ref": "Vpc" + }, + "CidrBlock": { + "Fn::FindInMap": [ + "Settings", + "VpcCidrs", + "pubsubnet1" + ] + }, + "AvailabilityZone": { + "Fn::If": [ + "UseSpecifiedVpcAvailabilityZones", + { + "Fn::Select": [ + "0", + { + "Ref": "VpcAvailabilityZones" + } + ] + }, + { + "Fn::Select": [ + "0", + { + "Fn::GetAZs": { + "Ref": "AWS::Region" + } + } + ] + } + ] + } + } + }, + "PublicSubnetAz2": { + "Condition": "CreateVpcResources", + "Type": "AWS::EC2::Subnet", + "Properties": { + "VpcId": { + "Ref": "Vpc" + }, + "CidrBlock": { + "Fn::FindInMap": [ + "Settings", + "VpcCidrs", + "pubsubnet2" + ] + }, + "AvailabilityZone": { + "Fn::If": [ + "UseSpecifiedVpcAvailabilityZones", + { + "Fn::Select": [ + "1", + { + "Ref": "VpcAvailabilityZones" + } + ] + }, + { + "Fn::Select": [ + "1", + { + "Fn::GetAZs": { + "Ref": "AWS::Region" + } + } + ] + } + ] + } + } + }, + "InternetGateway": { + "Condition": "CreateVpcResources", + "Type": "AWS::EC2::InternetGateway" + }, + "AttachGateway": { + "Condition": "CreateVpcResources", + "Type": "AWS::EC2::VPCGatewayAttachment", + "Properties": { + "VpcId": { + "Ref": "Vpc" + }, + "InternetGatewayId": { + "Ref": "InternetGateway" + } + } + }, + "RouteViaIgw": { + "Condition": "CreateVpcResources", + "Type": "AWS::EC2::RouteTable", + "Properties": { + "VpcId": { + "Ref": "Vpc" + } + } + }, + "PublicRouteViaIgw": { + "Condition": "CreateVpcResources", + "DependsOn": "AttachGateway", + "Type": "AWS::EC2::Route", + "Properties": { + "RouteTableId": { + "Ref": "RouteViaIgw" + }, + "DestinationCidrBlock": "0.0.0.0/0", + "GatewayId": { + "Ref": "InternetGateway" + } + } + }, + "PubSubnet1RouteTableAssociation": { + "Condition": "CreateVpcResources", + "Type": "AWS::EC2::SubnetRouteTableAssociation", + "Properties": { + "SubnetId": { + "Ref": "PublicSubnetAz1" + }, + "RouteTableId": { + "Ref": "RouteViaIgw" + } + } + }, + "PubSubnet2RouteTableAssociation": { + "Condition": "CreateVpcResources", + "Type": "AWS::EC2::SubnetRouteTableAssociation", + "Properties": { + "SubnetId": { + "Ref": "PublicSubnetAz2" + }, + "RouteTableId": { + "Ref": "RouteViaIgw" + } + } + }, + "FargateSecurityGroup": { + "Condition": "UseEcs", + "Type": "AWS::EC2::SecurityGroup", + "Metadata": { + "cfn_nag": { + "rules_to_suppress": [ + { + "id": "W40", + "reason": "Must allow egress of all protocols, all dests." + }, + { + "id": "W5", + "reason": "Must allow egress of all protocols, all dests." 
+ } + ] + } + }, + "Description": "Security group for Fargate cluster", + "Properties": { + "GroupDescription": "Access to the Fargate containers", + "VpcId": { + "Fn::If": [ + "CreateVpcResources", + { + "Ref": "Vpc" + }, + { + "Ref": "VpcId" + } + ] + } + } + }, + "FargateSGSelfEgress": { + "Condition": "UseEcs", + "Type": "AWS::EC2::SecurityGroupEgress", + "Properties": { + "Description": "Egress to other containers in the same security group", + "GroupId": { + "Ref": "FargateSecurityGroup" + }, + "IpProtocol": -1, + "FromPort": -1, + "ToPort": -1, + "SourceSecurityGroupId": { + "Ref": "FargateSecurityGroup" + } + } + }, + "FargateSGExtEgress": { + "Condition": "UseEcs", + "Type": "AWS::EC2::SecurityGroupEgress", + "Properties": { + "Description": "Egress to external networks", + "GroupId": { + "Ref": "FargateSecurityGroup" + }, + "IpProtocol": -1, + "FromPort": -1, + "ToPort": -1, + "CidrIp": "0.0.0.0/0" + } + }, + "ECSRole": { + "Condition": "UseEcs", + "Type": "AWS::IAM::Role", + "Metadata": { + "cfn_nag": { + "rules_to_suppress": [ + { + "id": "W11", + "reason": "Global access to services required for automation to function" + } + ] + } + }, + "Properties": { + "AssumeRolePolicyDocument": { + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "Service": [ + "ecs.amazonaws.com" + ] + }, + "Action": [ + "sts:AssumeRole" + ] + } + ] + }, + "Path": "/", + "Policies": [ + { + "PolicyName": "ecs-service", + "PolicyDocument": { + "Statement": [ + { + "Effect": "Allow", + "Action": [ + "ec2:AttachNetworkInterface", + "ec2:CreateNetworkInterface", + "ec2:CreateNetworkInterfacePermission", + "ec2:DeleteNetworkInterface", + "ec2:DeleteNetworkInterfacePermission", + "ec2:Describe*", + "ec2:DetachNetworkInterface", + "elasticloadbalancing:DeregisterInstancesFromLoadBalancer", + "elasticloadbalancing:DeregisterTargets", + "elasticloadbalancing:Describe*", + "elasticloadbalancing:RegisterInstancesWithLoadBalancer", + "elasticloadbalancing:RegisterTargets" + ], + "Resource": "*" + }, + { + "Sid": "KMS", + "Effect": "Allow", + "Resource": { "Fn::GetAtt": [ "ResourceEncryptionKey", "Arn" ] }, + "Action": [ + "kms:Encrypt", + "kms:Decrypt", + "kms:ReEncrypt*", + "kms:GenerateDataKey*", + "kms:DescribeKey" + ] + } + ] + } + } + ] + } } }, "Outputs": { @@ -2875,6 +3579,15 @@ "" ] } + }, + "Cluster": { + "Value": { + "Fn::If": [ + "UseEcs", + { "Ref": "EcsCluster" }, + "" + ] + } } } } \ No newline at end of file diff --git a/source/code/build-docker-script.py b/source/code/build-docker-script.py index 3a8bd21..c778f70 100644 --- a/source/code/build-docker-script.py +++ b/source/code/build-docker-script.py @@ -41,8 +41,17 @@ def build_script(script, bucket, version, prefix): if __name__ == "__main__": try: - print((build_script(script=sys.argv[1], bucket=sys.argv[2], version=sys.argv[3], - prefix=sys.argv[4] if len(sys.argv) > 3 else ""))) + script = sys.argv[1] + bucket = sys.argv[2] + version = sys.argv[3] + if len(sys.argv) > 4: + prefix = sys.argv[4] + else: + prefix = "" + + print(build_script(script, bucket, version, prefix)) + except Exception as ex: print(ex) + raise ex exit(1) diff --git a/source/code/build-ops-automator-template.py b/source/code/build-ops-automator-template.py index 211f34d..0d12401 100644 --- a/source/code/build-ops-automator-template.py +++ b/source/code/build-ops-automator-template.py @@ -39,12 +39,13 @@ def build_action_policy_statement(action_name, action_permissions): def get_versioned_template(template_filename, bucket, solution, version): - + versionhyphen = 
version.replace(".", "-")
     with open(template_filename, "rt") as f:
         template_text = "".join(f.readlines())
     template_text = template_text.replace("%bucket%", bucket)
     template_text = template_text.replace("%solution%", solution)
     template_text = template_text.replace("%version%", version)
+    template_text = template_text.replace("%versionhyphen%", versionhyphen)
     return json.loads(template_text, object_pairs_hook=OrderedDict)
@@ -106,7 +107,7 @@ def action_select_resources_permissions(action_prop):
     # with possible additional permissions to retrieve tags
     action_permissions += list(action_select_resources_permissions(action_properties))
 
-    if len(action_permissions) is not 0:
+    if len(action_permissions) != 0:
         required_actions.update(action_permissions)
         # if using these lines individual statements are built for every action
         # statements = build_action_policy_statement(action_name, action_permissions)
@@ -172,7 +173,7 @@ def update_list(l, old_name, new_name):
     for resource_name in action_resources_to_add:
         template_resources[resource_name] = action_resources_to_add[resource_name]
 
-    if len(stack_resource_permissions) is not 0:
+    if len(stack_resource_permissions) != 0:
         # statements = build_action_policy_statement(action_name, stack_resource_permissions)
         stack_resource_permissions["Sid"] = re.sub("[^0-9A-Za-z]", "", action_name + "Resources")
         action_statement.append(stack_resource_permissions)
diff --git a/source/code/builders/action_template_builder.py b/source/code/builders/action_template_builder.py
index e70b021..5d0cde1 100644
--- a/source/code/builders/action_template_builder.py
+++ b/source/code/builders/action_template_builder.py
@@ -54,7 +54,9 @@ PARAM_LABEL_COMPLETION_MEMORY = "Completion test memory"
 PARAM_LABEL_EXECUTION_MEMORY = "Execution memory"
 PARAM_LABEL_SELECT_MEMORY = "Resource selection memory"
 
-PARAM_LABEL_ECS_MEMORY = "Reserved memory"
+PARAM_LABEL_ECS_SELECT_MEMORY = "Selection reserved memory"
+PARAM_LABEL_ECS_EXEC_MEMORY = "Execution reserved memory"
+PARAM_LABEL_ECS_COMPLETION_MEMORY = "Completion reserved memory"
 
 PARAM_LABEL_SCOPE = "Resource selection scope for {} event"
 
 PARAM_DESCRIPTION_EVENT_SCOPE = \
@@ -412,7 +413,7 @@ def setup_memory_parameters():
                                config_ecs_memory_param=configuration.CONFIG_ECS_SELECT_MEMORY,
                                description=PARAM_DESCRIPTION_SELECT_SIZE,
                                label=PARAM_LABEL_SELECT_MEMORY,
-                               ecs_memory_label=PARAM_LABEL_ECS_MEMORY,
+                               ecs_memory_label=PARAM_LABEL_ECS_SELECT_MEMORY,
                                ecs_description=PARAM_DESCRIPTION_ECS_SELECT_MEMORY)
 
         build_memory_parameter(size_group=memory_group,
@@ -421,7 +422,7 @@ def setup_memory_parameters():
                                config_ecs_memory_param=configuration.CONFIG_ECS_EXECUTE_MEMORY,
                                description=PARAM_DESCRIPTION_EXECUTE_SIZE,
                                label=PARAM_LABEL_EXECUTION_MEMORY,
-                               ecs_memory_label=PARAM_LABEL_ECS_MEMORY,
+                               ecs_memory_label=PARAM_LABEL_ECS_EXEC_MEMORY,
                                ecs_description=PARAM_DESCRIPTION_ECS_EXECUTE_MEMORY)
 
         if self.has_completion_logic:
@@ -431,7 +432,7 @@ def setup_memory_parameters():
                                config_ecs_memory_param=configuration.CONFIG_ECS_COMPLETION_MEMORY,
                                description=PARAM_DESCRIPTION_COMPLETION_SIZE,
                                label=PARAM_LABEL_COMPLETION_MEMORY,
-                               ecs_memory_label=PARAM_LABEL_ECS_MEMORY,
+                               ecs_memory_label=PARAM_LABEL_ECS_COMPLETION_MEMORY,
                                ecs_description=PARAM_DESCRIPTION_ECS_COMPLETION_MEMORY)
 
         if len(memory_group["Parameters"]) > 0:
diff --git a/source/code/cloudwatch_queue_handler_lambda.py b/source/code/cloudwatch_queue_handler_lambda.py
index 04195b5..e7ab3d9 100644
--- a/source/code/cloudwatch_queue_handler_lambda.py
+++ b/source/code/cloudwatch_queue_handler_lambda.py
@@ -1,9 +1,23 @@
+###################################################################################################################### +# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. # +# # +# Licensed under the Apache License Version 2.0 (the "License"). You may not use this file except in compliance # +# with the License. A copy of the License is located at # +# # +# http://www.apache.org/licenses/ # +# # +# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES # +# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions # +# and limitations under the License. # +###################################################################################################################### + import collections import os import time from datetime import datetime import boto3 +from botocore.exceptions import ClientError MIN_REMAINING_EXEC_TIME_SEC = 60 @@ -127,8 +141,8 @@ def flush(self): resp = self._log_client.put_log_events(**put_event_args) self._stream_tokens[log_stream] = resp.get("nextSequenceToken", None) break - except Exception as ex: - exception_type = type(ex).__name__ + except ClientError as ex: + exception_type = ex.response['Error']['Code'] # stream did not exist, in that case create it and try again with token set in create method if exception_type == "ResourceNotFoundException": self._create_log_stream(log_stream=log_stream) @@ -136,8 +150,8 @@ def flush(self): elif exception_type in ["InvalidSequenceTokenException", "DataAlreadyAcceptedException"]: # noinspection PyBroadException try: - token = ex.message.split(":")[-1].strip() - self._stream_tokens[log_stream] = ex.message.split(":")[-1].strip() + token = ex.response['Error']['Message'].split(":")[-1].strip() + self._stream_tokens[log_stream] = token print(("Token for existing stream {} is {}".format(log_stream, token))) except: self._stream_tokens[log_stream] = None diff --git a/source/code/handlers/__init__.py b/source/code/handlers/__init__.py index 2afa8d3..e23452c 100644 --- a/source/code/handlers/__init__.py +++ b/source/code/handlers/__init__.py @@ -98,9 +98,6 @@ ENV_SERVICE_LIMIT_CONCURRENT_IMAGE_COPY = "SERVICE_LIMIT_CONCURRENT_IMAGE_COPY" # ops automator role ENV_OPS_AUTOMATOR_ROLE_ARN = "OPS_AUTOMATOR_ROLE_ARN" -# key use to encrypt resource data in DynamoDB and S3 -ENV_RESOURCE_ENCRYPTION_KEY = "RESOURCE_ENCRYPTION_KEY" - # Default tag for resource tasks DEFAULT_SCHEDULER_TAG = "OpsAutomatorTaskList" @@ -127,7 +124,6 @@ TASK_TR_DEBUG = "Debug" TASK_TR_DRYRUN = "Dryrun" TASK_TR_DT = "TaskDatetime" -TASK_TR_ENCRYPTED_RESOURCES = "EncryptedResources" TASK_TR_ERROR = "Error" TASK_TR_EVENTS = "Events" TASK_TR_EXECUTE_SIZE = "ExecuteSize" @@ -496,16 +492,27 @@ def timed_out_by_lambda_timeout(next_wait): "cluster": os.getenv(ENV_ECS_CLUSTER), "taskDefinition": os.getenv(ENV_ECS_TASK), "startedBy": "{}:{}".format(stack_name, args[TASK_NAME])[0:35], + "launchType": "FARGATE", + "networkConfiguration": { + "awsvpcConfiguration": { + "subnets": runner_args['subnets'].split(','), + "securityGroups": runner_args['securitygroups'].split(','), + "assignPublicIp": runner_args['assignpublicip'] + } + }, "overrides": { "containerOverrides": [ { "name": "ops-automator", - "command": ["python", "ops-automator-ecs-runner.py", safe_json(runner_args)], - "memoryReservation": int(ecs_memory_size if ecs_memory_size is not None else ECS_DEFAULT_MEMORY_RESERVATION) + "command": ["python3", "ops-automator-ecs-runner.py", 
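+                            # runner_args (including the AWSVPC subnet/security group/public-ip settings) travel as one JSON string in argv[1] of the container command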
safe_json(runner_args)], + "memoryReservation": int(ecs_memory_size if ecs_memory_size is not None else ECS_DEFAULT_MEMORY_RESERVATION), + "memory": 2048, + "cpu": 1024 } - ], - }, + ] + } } + log_to_debug(logger, str(ecs_params)) for wait_until_next_retry in boto_retry.LinearWaitStrategy(start=5, incr=5, max_wait=30, random_factor=0.50): @@ -536,10 +543,7 @@ def timed_out_by_lambda_timeout(next_wait): def get_item_resource_data(item, context): global _kms_client resource_data = item.get(TASK_TR_RESOURCES, "{}") - if item.get(TASK_TR_ENCRYPTED_RESOURCES): - if _kms_client is None: - _kms_client = boto_retry.get_client_with_retries("kms", ["decrypt"], context=context) - resource_data = _kms_client.decrypt(CiphertextBlob=base64.b64decode(resource_data))["Plaintext"] + return resource_data if type(resource_data) in [dict, list] else json.loads(resource_data) diff --git a/source/code/handlers/execution_handler.py b/source/code/handlers/execution_handler.py index 30a9e77..1c2e015 100644 --- a/source/code/handlers/execution_handler.py +++ b/source/code/handlers/execution_handler.py @@ -253,7 +253,7 @@ def handle_metrics(result): except Exception as ex: self._logger.warning(WARN_METRICS_DATA, str(ex)) - self._logger.info(INF_ACTION, self.action, self.action_id, self.task, json.dumps(self.action_parameters, indent=3)) + self._logger.info(INF_ACTION, self.action, self.action_id, self.task, safe_json(self.action_parameters, indent=3)) if not handlers.running_local(self._context): self._logger.info(INF_LAMBDA_MEMORY, self._context.function_name, self._context.memory_limit_in_mb) diff --git a/source/code/handlers/schedule_handler.py b/source/code/handlers/schedule_handler.py index 3e909cd..8920b85 100644 --- a/source/code/handlers/schedule_handler.py +++ b/source/code/handlers/schedule_handler.py @@ -434,6 +434,9 @@ def _execute_task(self, task, dt=None, task_group=None): else: ecs_args = { + "subnets": os.getenv('AWSVPC_SUBNETS'), + "securitygroups": os.getenv('AWSVPC_SECURITYGROUPS'), + "assignpublicip": os.getenv('AWSVPC_ASSIGNPUBLICIP'), handlers.HANDLER_EVENT_ACTION: handlers.HANDLER_ACTION_SELECT_RESOURCES, handlers.TASK_NAME: task[handlers.TASK_NAME], handlers.HANDLER_EVENT_SUB_TASK: sub_task @@ -447,6 +450,9 @@ def _execute_task(self, task, dt=None, task_group=None): else: if task[handlers.TASK_SELECT_SIZE] == actions.ACTION_USE_ECS: ecs_args = { + "subnets": os.getenv('AWSVPC_SUBNETS'), + "securitygroups": os.getenv('AWSVPC_SECURITYGROUPS'), + "assignpublicip": os.getenv('AWSVPC_ASSIGNPUBLICIP'), handlers.HANDLER_EVENT_ACTION: handlers.HANDLER_ACTION_SELECT_RESOURCES, handlers.TASK_NAME: task[handlers.TASK_NAME], handlers.HANDLER_EVENT_SUB_TASK: sub_task diff --git a/source/code/handlers/task_tracking_handler.py b/source/code/handlers/task_tracking_handler.py index a3faad3..ea9864a 100644 --- a/source/code/handlers/task_tracking_handler.py +++ b/source/code/handlers/task_tracking_handler.py @@ -376,6 +376,9 @@ def _start_task_execution(self, task_item, action=handlers.HANDLER_ACTION_EXECUT else: # run as ECS job ecs_args = { + "subnets": os.getenv('AWSVPC_SUBNETS'), + "securitygroups": os.getenv('AWSVPC_SECURITYGROUPS'), + "assignpublicip": os.getenv('AWSVPC_ASSIGNPUBLICIP'), handlers.HANDLER_EVENT_ACTION: action, handlers.TASK_NAME: task_item[handlers.TASK_TR_NAME], handlers.TASK_TR_ID: task_item[handlers.TASK_TR_ID]} diff --git a/source/code/handlers/task_tracking_table.py b/source/code/handlers/task_tracking_table.py index 33d19f3..3b2785c 100644 --- 
a/source/code/handlers/task_tracking_table.py +++ b/source/code/handlers/task_tracking_table.py @@ -61,7 +61,6 @@ def __init__(self, context=None, logger=None): self._s3_client = None self._account = None self._run_local = handlers.running_local(self._context) - self._resource_encryption_key = os.getenv(handlers.ENV_RESOURCE_ENCRYPTION_KEY, "") self._kms_client = None def __enter__(self): @@ -159,17 +158,8 @@ def add_task_action(self, task, assumed_role, action_resources, task_datetime, s resource_data_str = safe_json(action_resources) - encrypted = self._resource_encryption_key not in [None, ""] - item[handlers.TASK_TR_ENCRYPTED_RESOURCES] = encrypted - if encrypted: - resource_data_str = base64.b64encode(self.kms_client.encrypt_with_retries( - KeyId=self._resource_encryption_key, Plaintext=resource_data_str)["CiphertextBlob"]) - if len(resource_data_str) < int(os.getenv(handlers.ENV_RESOURCE_TO_S3_SIZE, 16)) * 1024: - if encrypted: - item[handlers.TASK_TR_RESOURCES] = action_resources if not encrypted else resource_data_str - else: - item[handlers.TASK_TR_RESOURCES] = as_dynamo_safe_types(action_resources) + item[handlers.TASK_TR_RESOURCES] = as_dynamo_safe_types(action_resources) else: bucket = os.getenv(handlers.ENV_RESOURCE_BUCKET) key = "{}.json".format(item[handlers.TASK_TR_ID]) diff --git a/source/code/main.py b/source/code/main.py index 8b64146..08d00ad 100644 --- a/source/code/main.py +++ b/source/code/main.py @@ -156,4 +156,8 @@ def ecs_handler(args): handlers.HANDLER_EVENT_TASK_DT: datetime.now().isoformat() } - return lambda_handler(event=event, context=EcsTaskContext(timeout_seconds=task_item.get(handlers.TASK_TIMEOUT, 3600))) + timeout = task_item.get(handlers.TASK_TIMEOUT, 3600) + if not timeout: + timeout = 3600 + + return lambda_handler(event=event, context=EcsTaskContext(timeout_seconds=timeout)) diff --git a/source/code/makefile b/source/code/makefile index ada4899..b3751cc 100644 --- a/source/code/makefile +++ b/source/code/makefile @@ -1,15 +1,17 @@ -###################################################################################################################### -# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. # -# # -# Licensed under the Apache License Version 2.0 (the "License"). You may not use this file except in compliance # -# with the License. A copy of the License is located at # -# # -# http://www.apache.org/licenses/ # -# # -# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES # -# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions # -# and limitations under the License. # -###################################################################################################################### +################################################################################ +# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. # +# # +# Licensed under the Apache License Version 2.0 (the "License"). You may not # +# use this file except in compliance with the License. A copy of the License # +# is located at # +# # +# http://www.apache.org/licenses/ # +# # +# or in the "license" file accompanying this file. This file is distributed # +# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express # +# or implied. See the License for the specific language governing permissions # +# and limitations under the License. 
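With client-side KMS encryption of resource data removed above (encryption now happens at rest via the table and bucket settings), add_task_action in task_tracking_table.py keeps only the size check: payloads under the ENV_RESOURCE_TO_S3_SIZE threshold stay inline in the DynamoDB item, larger ones are written to S3 under the task id. A minimal sketch of that spill-over pattern; the helper name and pointer shape are assumptions, not the solution's exact code:

import json
import os

import boto3

# mirrors ENV_RESOURCE_TO_S3_SIZE; the solution defaults to 16 (KB)
SPILL_THRESHOLD_KB = int(os.getenv("RESOURCE_TO_S3_SIZE", 16))

def resource_attribute(item_id, resources, bucket):
    # hypothetical helper: decides what to store in the tracking-table item
    data = json.dumps(resources)
    if len(data) < SPILL_THRESHOLD_KB * 1024:
        return resources  # small enough to keep inline in the item
    key = "{}.json".format(item_id)
    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=data)
    return {"bucket": bucket, "key": key}  # the item keeps only a pointer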
# +################################################################################ s3bucket=s3://$(bucket)/ @@ -22,21 +24,26 @@ ecsdir=../ecs ecs=$(wildcard $(ecsdir)/*) # destination directory to deploy to -deployment=../../deployment +deployment=../.. +# deployment=../../deployment global_assets_dir=$(deployment)/global-s3-assets regional_assets_dir=$(deployment)/regional-s3-assets zip = $(regional_assets_dir)/ops-automator.zip cw_zip = $(regional_assets_dir)/cloudwatch-handler.zip -templates=$(wildcard ../../deployment/*.template) +templates=$(wildcard ../../*.template) # build targets build: lambda cloudwatchhandler cfn docker -###################################################################################################################### -# lambda code # -###################################################################################################################### +####################################################################### +# lambda code # +####################################################################### lambda:$(py) main.py version.txt + #================================================================== + # make lambda + #================================================================== + mkdir -p $(deployment) mkdir -p $(regional_assets_dir) @@ -64,10 +71,14 @@ lambda:$(py) main.py version.txt zip $(zip) ../cloudformation/scenarios/*.template zip $(zip) builders/actions.html -###################################################################################################################### -# cloudwatch handler code # -###################################################################################################################### +####################################################################### +# cloudwatch handler code # +####################################################################### cloudwatchhandler: cloudwatch_queue_handler_lambda.py version.txt + #================================================================== + # make cloudwatchhandler + #================================================================== + mkdir -p $(deployment) # delete old zip files @@ -82,10 +93,14 @@ cloudwatchhandler: cloudwatch_queue_handler_lambda.py version.txt mv cloudwatch_queue_handler_lambda.py.org cloudwatch_queue_handler_lambda.py -###################################################################################################################### -# cloudformation templates # -###################################################################################################################### - cfn:version.txt $(templates) +####################################################################### +# cloudformation templates # +####################################################################### +cfn: version.txt $(templates) + #================================================================== + # make cfn + #================================================================== + mkdir -p $(deployment) mkdir -p $(global_assets_dir) @@ -95,10 +110,14 @@ cloudwatchhandler: cloudwatch_queue_handler_lambda.py version.txt # build main ops automator template python ./build-ops-automator-template.py ../cloudformation/ops-automator.template $(bucket) $(solution) $(version) > $(global_assets_dir)/ops-automator.template -###################################################################################################################### -# docker / ECS # 
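For context on the Fargate parameters added to the run_task call in handlers/__init__.py above, this is the shape of the resulting boto3 request; cluster name, task definition, network identifiers, and sizes are placeholder values, not the solution's configuration:

import boto3

ecs = boto3.client("ecs")

response = ecs.run_task(
    cluster="ops-automator-cluster",              # from ENV_ECS_CLUSTER
    taskDefinition="ops-automator-task",          # from ENV_ECS_TASK
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0aaa", "subnet-0bbb"],   # AWSVPC_SUBNETS
            "securityGroups": ["sg-0ccc"],               # AWSVPC_SECURITYGROUPS
            "assignPublicIp": "ENABLED"                  # AWSVPC_ASSIGNPUBLICIP
        }
    },
    overrides={
        "containerOverrides": [
            {
                "name": "ops-automator",
                "command": ["python3", "ops-automator-ecs-runner.py", "{}"],
                "memoryReservation": 128
            }
        ]
    },
    startedBy="stack:task")
task_arn = response["tasks"][0]["taskArn"]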
-###################################################################################################################### +####################################################################### +# docker / ECS # +####################################################################### docker: $(ecs) + #================================================================== + # make docker + #================================================================== + mkdir -p $(deployment) mkdir -p $(deployment)/ecs @@ -109,36 +128,12 @@ docker: $(ecs) sed s/%version%/$(version)/g $(ecsdir)/ops-automator-ecs-runner.py > $(deployment)/ecs/ops-automator-ecs-runner.py # docker and readme file - cp $(ecsdir)/Dockerfile $(deployment)/ecs/Dockerfile - cp $(ecsdir)/README.md $(deployment)/ecs/README.md - - # version ecs cluster template - sed s/%version%/$(version)/g ../cloudformation/ops-automator-ecs-cluster.template > $(deployment)/ops-automator-ecs-cluster.template + sed s/%version%/$(version)/g $(ecsdir)/Dockerfile > $(deployment)/ecs/Dockerfile.orig + cp $(deployment)/dist/code/requirements.txt $(deployment)/ecs # build shell script for creating and pushing docker image python build-docker-script.py $(ecsdir)/build-and-deploy-image.sh $(bucket) $(version) $(prefix) > $(deployment)/ecs/build-and-deploy-image.sh - -###################################################################################################################### -# Deploy to S3 # -###################################################################################################################### -deploy: build - # ops automator versioned copy - aws s3 cp $(deployment)/ops-automator.template $(s3bucket)$(prefix) --acl public-read - # ops automator lambda code zip file - aws s3 cp $(deployment)/ops-automator.zip $(s3bucket)$(prefix) --acl public-read - # cloudwatch queue handler - aws s3 cp $(deployment)/cloudwatch-handler.zip $(s3bucket)$(prefix) --acl public-read - - # ecs related files - aws s3 cp --recursive $(deployment)/ecs $(s3bucket)$(prefix)ecs - - # extra copy of script to build docker image - aws s3 cp $(deployment)/ops-automator-latest.template $(s3bucket) --acl public-read - # extra copy of script to build docker image - aws s3 cp $(deployment)/ecs/build-and-deploy-image.sh $(s3bucket) --acl public-read - # extra copy of ecs-cluster template - aws s3 cp $(deployment)/ops-automator-ecs-cluster.template $(s3bucket) --acl public-read - - # update build number after deployment - # python update-build-number.py version.txt + chmod +x $(deployment)/ecs/*.sh + cp -R $(deployment)/ecs $(global_assets_dir) + \ No newline at end of file diff --git a/source/code/requirements.txt b/source/code/requirements.txt new file mode 100644 index 0000000..b21ef62 --- /dev/null +++ b/source/code/requirements.txt @@ -0,0 +1,4 @@ +python_version > '3.6' +boto3 +requests +pytz diff --git a/source/code/run_unit_tests.sh b/source/code/run_unit_tests.sh new file mode 100644 index 0000000..0572475 --- /dev/null +++ b/source/code/run_unit_tests.sh @@ -0,0 +1,32 @@ +#!/bin/bash +# *** INTERNAL DOCUMENT --- NOT FOR DISTRIBUTION *** +function run_test() { + if [ -e "tests/action_tests/$1/test_action.py" ]; then + if [ -z $2 ]; then + echo Running test $1 + python -m unittest tests.action_tests.$1.test_action > test_$1.out + else + echo Running test $1 - $specific_test + python -m unittest tests.action_tests.$1.test_action.TestAction.$specific_test > test_$1.out + fi + else + echo "ERROR: Test $1 not found" + fi +} + +if [ ! 
-z "$1" ]; then + specific_test="" + if [ ! -z "$2" ]; then + specific_test=$2 + fi + run_test $1 $specific_test +else + ls tests/action_tests | while read file; do + if [[ $file == "__"* ]]; then + continue + fi + if [ -d "tests/action_tests/${file}" ]; then + run_test $file + fi + done +fi diff --git a/source/code/testing/ec2.py b/source/code/testing/ec2.py index 508e2eb..50e4ee5 100644 --- a/source/code/testing/ec2.py +++ b/source/code/testing/ec2.py @@ -12,6 +12,7 @@ ###################################################################################################################### import re as regex import time +from functools import cmp_to_key import boto3 import boto3.exceptions @@ -36,6 +37,10 @@ def __init__(self, region=None, session=None): @property def latest_aws_linux_image(self): + def compare_date(a,b): + return int( + (dateutil.parser.parse(a["CreationDate"]) - dateutil.parser.parse(b["CreationDate"])).total_seconds() + ) if self._latest_aws_image is None: # noinspection PyPep8 images = sorted(list( @@ -56,8 +61,7 @@ def latest_aws_linux_image(self): } ], ExecutableUsers=["all"])), - cmp=lambda a, b: int( - (dateutil.parser.parse(a["CreationDate"]) - dateutil.parser.parse(b["CreationDate"])).total_seconds()), + key=cmp_to_key(compare_date), reverse=True) assert (len(images) > 0) self._latest_aws_image = images[0] @@ -65,6 +69,10 @@ def latest_aws_linux_image(self): @property def latest_aws_windows_core_image(self): + def compare_date(a,b): + return int( + (dateutil.parser.parse(a["CreationDate"]) - dateutil.parser.parse(b["CreationDate"])).total_seconds() + ) if self._latest_aws_image is None: # noinspection PyPep8 images = sorted(list( @@ -85,8 +93,7 @@ def latest_aws_windows_core_image(self): } ], ExecutableUsers=["all"])), - cmp=lambda a, b: int( - (dateutil.parser.parse(a["CreationDate"]) - dateutil.parser.parse(b["CreationDate"])).total_seconds()), + key=cmp_to_key(compare_date), reverse=True) assert (len(images) > 0) self._latest_aws_image = images[0] diff --git a/source/code/testing/stack.py b/source/code/testing/stack.py index 1580f80..356cb44 100644 --- a/source/code/testing/stack.py +++ b/source/code/testing/stack.py @@ -13,6 +13,7 @@ import time import boto3 +from botocore.exceptions import ClientError import services import services.cloudformation_service @@ -135,8 +136,8 @@ def is_stack_present(self): stacks = resp.get("Stacks", []) # double check for deleted stacks in case a stack id was used return any([s["StackStatus"] != "DELETE_COMPLETE" for s in stacks]) - except Exception as ex: - if ex.message.endswith("does not exist"): + except ClientError as ex: + if str(ex).endswith("does not exist"): return False raise ex diff --git a/source/code/tests/action_tests/dynamodb_set_capacity/test_action.py b/source/code/tests/action_tests/dynamodb_set_capacity/test_action.py index 9689235..d84bfea 100644 --- a/source/code/tests/action_tests/dynamodb_set_capacity/test_action.py +++ b/source/code/tests/action_tests/dynamodb_set_capacity/test_action.py @@ -13,6 +13,7 @@ import inspect import unittest from types import FunctionType +import sys import actions.dynamodb_set_capacity_action as dbb import services @@ -48,6 +49,9 @@ def get_methods(cls): @classmethod def setUpClass(cls): + if not sys.warnoptions: + import warnings + warnings.simplefilter("ignore") cls.logger = ConsoleLogger() diff --git a/source/code/tests/action_tests/dynamodb_set_capacity/test_resources.template b/source/code/tests/action_tests/dynamodb_set_capacity/test_resources.template index 
1d47c79..28d9c5d 100644 --- a/source/code/tests/action_tests/dynamodb_set_capacity/test_resources.template +++ b/source/code/tests/action_tests/dynamodb_set_capacity/test_resources.template @@ -26,6 +26,20 @@ "Resources": { "CreateDynamDBBackupTestTable": { "Type": "AWS::DynamoDB::Table", + "Metadata": { + "cfn_nag": { + "rules_to_suppress": [ + { + "id": "W73", + "reason": "This template is only for test cases" + }, + { + "id": "W74", + "reason": "This template is only for test cases" + } + ] + } + }, "Properties": { "AttributeDefinitions": [ { diff --git a/source/code/tests/action_tests/ec2_copy_snapshot/test_action.py b/source/code/tests/action_tests/ec2_copy_snapshot/test_action.py index 0b1b0c7..2ed20c6 100644 --- a/source/code/tests/action_tests/ec2_copy_snapshot/test_action.py +++ b/source/code/tests/action_tests/ec2_copy_snapshot/test_action.py @@ -14,6 +14,7 @@ import json import unittest from types import FunctionType +import sys import actions import actions.ec2_copy_snapshot_action as copy_snapshot @@ -56,7 +57,10 @@ def get_methods(cls): @classmethod def setUpClass(cls): - + if not sys.warnoptions: + import warnings + warnings.simplefilter("ignore") + cls.logger = ConsoleLogger() cls.task_runner = get_task_runner(TESTED_ACTION, KEEP_AND_USE_EXISTING_ACTION_STACK) diff --git a/source/code/tests/action_tests/ec2_copy_snapshot/test_resources.template b/source/code/tests/action_tests/ec2_copy_snapshot/test_resources.template index a4b9de7..623e211 100644 --- a/source/code/tests/action_tests/ec2_copy_snapshot/test_resources.template +++ b/source/code/tests/action_tests/ec2_copy_snapshot/test_resources.template @@ -38,6 +38,20 @@ }, "Resources": { "Volume0": { + "Metadata": { + "cfn_nag": { + "rules_to_suppress": [ + { + "id": "W37", + "reason": "Unit test template. Not part of the solution" + }, + { + "id": "F1", + "reason": "Unit test template. Not part of the solution" + } + ] + } + }, "Type": "AWS::EC2::Volume", "Properties": { "Size": 10, @@ -62,6 +76,16 @@ } }, "Volume1": { + "Metadata": { + "cfn_nag": { + "rules_to_suppress": [ + { + "id" : "W37", + "reason": "Unit test template. Not part of the solution" + } + ] + } + }, "Type": "AWS::EC2::Volume", "Properties": { "Size": 4, @@ -87,6 +111,16 @@ } }, "Volume2": { + "Metadata": { + "cfn_nag": { + "rules_to_suppress": [ + { + "id": "F19", + "reason": "Unit test template. Not part of the solution" + } + ] + } + }, "Type": "AWS::EC2::Volume", "Properties": { "Size": 4, @@ -118,6 +152,16 @@ } }, "EncryptionKey0": { + "Metadata": { + "cfn_nag": { + "rules_to_suppress": [ + { + "id" : "F19", + "reason": "Unit test template. 
Not part of the solution" + } + ] + } + }, "Type": "AWS::KMS::Key", "Properties": { "Enabled": "True", diff --git a/source/code/tests/action_tests/ec2_copy_snapshot/test_resources_destination_region.template b/source/code/tests/action_tests/ec2_copy_snapshot/test_resources_destination_region.template index 5f07e75..9416a5d 100644 --- a/source/code/tests/action_tests/ec2_copy_snapshot/test_resources_destination_region.template +++ b/source/code/tests/action_tests/ec2_copy_snapshot/test_resources_destination_region.template @@ -20,6 +20,16 @@ }, "Resources": { "EncryptionKey0": { + "Metadata": { + "cfn_nag": { + "rules_to_suppress": [ + { + "id": "F19", + "reason": "Template for unit test only" + } + ] + } + }, "Type": "AWS::KMS::Key", "Properties": { "Enabled": "True", diff --git a/source/code/tests/action_tests/ec2_create_snapshot/test_action.py b/source/code/tests/action_tests/ec2_create_snapshot/test_action.py index 6fa83a3..5ef792a 100644 --- a/source/code/tests/action_tests/ec2_create_snapshot/test_action.py +++ b/source/code/tests/action_tests/ec2_create_snapshot/test_action.py @@ -14,6 +14,7 @@ import re import unittest from types import FunctionType +import sys import actions import actions.ec2_create_snapshot_action as create_snapshot_action @@ -50,6 +51,9 @@ def get_methods(cls): @classmethod def setUpClass(cls): + if not sys.warnoptions: + import warnings + warnings.simplefilter("ignore") cls.logger = ConsoleLogger() diff --git a/source/code/tests/action_tests/ec2_create_snapshot/test_resources.template b/source/code/tests/action_tests/ec2_create_snapshot/test_resources.template index 7d0748f..099ee17 100644 --- a/source/code/tests/action_tests/ec2_create_snapshot/test_resources.template +++ b/source/code/tests/action_tests/ec2_create_snapshot/test_resources.template @@ -41,6 +41,20 @@ }, "Resources": { "Volume0": { + "Metadata": { + "cfn_nag": { + "rules_to_suppress": [ + { + "id": "W37", + "reason": "Unit test template - not deployed with solution" + }, + { + "id": "F1", + "reason": "Unit test template - not deployed with solution" + } + ] + } + }, "Type": "AWS::EC2::Volume", "Properties": { "Size": 10, @@ -65,6 +79,20 @@ } }, "Volume1": { + "Metadata": { + "cfn_nag": { + "rules_to_suppress": [ + { + "id": "W37", + "reason": "Unit test template - not deployed with solution" + }, + { + "id": "F1", + "reason": "Unit test template - not deployed with solution" + } + ] + } + }, "Type": "AWS::EC2::Volume", "Properties": { "Size": 10, diff --git a/source/code/tests/action_tests/ec2_delete_snapshot/test_action.py b/source/code/tests/action_tests/ec2_delete_snapshot/test_action.py index 7bb6c96..4fa1366 100644 --- a/source/code/tests/action_tests/ec2_delete_snapshot/test_action.py +++ b/source/code/tests/action_tests/ec2_delete_snapshot/test_action.py @@ -13,6 +13,7 @@ import inspect import unittest from datetime import timedelta +import sys import actions.ec2_delete_snapshot_action as delete_snapshot_action from testing.console_logger import ConsoleLogger @@ -42,7 +43,10 @@ def __init__(self, method_name): @classmethod def setUpClass(cls): - + if not sys.warnoptions: + import warnings + warnings.simplefilter("ignore") + cls.logger = ConsoleLogger() cls.resource_stack = get_resource_stack(TESTED_ACTION, diff --git a/source/code/tests/action_tests/ec2_delete_snapshot/test_resources.template b/source/code/tests/action_tests/ec2_delete_snapshot/test_resources.template index e9ea484..977b9f3 100644 --- a/source/code/tests/action_tests/ec2_delete_snapshot/test_resources.template +++ 
b/source/code/tests/action_tests/ec2_delete_snapshot/test_resources.template @@ -11,6 +11,20 @@ }, "Resources": { "Volume0": { + "Metadata": { + "cfn_nag": { + "rules_to_suppress": [ + { + "id": "W37", + "reason": "Unit test template - not deployed with solution" + }, + { + "id": "F1", + "reason": "Unit test template - not deployed with solution" + } + ] + } + }, "Type": "AWS::EC2::Volume", "Properties": { "Size": 10, diff --git a/source/code/tests/action_tests/ec2_replace_instance/test_action.py b/source/code/tests/action_tests/ec2_replace_instance/test_action.py index 3715c81..8d86905 100644 --- a/source/code/tests/action_tests/ec2_replace_instance/test_action.py +++ b/source/code/tests/action_tests/ec2_replace_instance/test_action.py @@ -14,6 +14,7 @@ import time import unittest from types import FunctionType +import sys import jmespath @@ -81,6 +82,9 @@ def get_methods(cls): @classmethod def setUpClass(cls): + if not sys.warnoptions: + import warnings + warnings.simplefilter("ignore") cls.ec2 = Ec2(region()) cls.elb = Elb(region()) diff --git a/source/code/tests/action_tests/ec2_replace_instance/test_resources.template b/source/code/tests/action_tests/ec2_replace_instance/test_resources.template index 771cc7a..37120c3 100644 --- a/source/code/tests/action_tests/ec2_replace_instance/test_resources.template +++ b/source/code/tests/action_tests/ec2_replace_instance/test_resources.template @@ -29,6 +29,16 @@ }, "Resources": { "InstanceRole": { + "Metadata": { + "cfn_nag": { + "rules_to_suppress": [ + { + "id": "W11", + "reason": "Unit test template - not deployed with solution" + } + ] + } + }, "Type": "AWS::IAM::Role", "Properties": { "AssumeRolePolicyDocument": { @@ -87,6 +97,16 @@ } }, "LoadBalancer": { + "Metadata": { + "cfn_nag": { + "rules_to_suppress": [ + { + "id": "W26", + "reason": "Unit test template - not deployed with solution" + } + ] + } + }, "Type": "AWS::ElasticLoadBalancing::LoadBalancer", "Properties": { "AvailabilityZones": { diff --git a/source/code/tests/action_tests/ec2_resize_instance/test_action.py b/source/code/tests/action_tests/ec2_resize_instance/test_action.py index 7ace79f..67d84dd 100644 --- a/source/code/tests/action_tests/ec2_resize_instance/test_action.py +++ b/source/code/tests/action_tests/ec2_resize_instance/test_action.py @@ -14,6 +14,7 @@ import time import unittest from types import FunctionType +import sys import actions.ec2_resize_instance_action as r import handlers.ec2_tag_event_handler @@ -49,6 +50,9 @@ def get_methods(cls): @classmethod def setUpClass(cls): + if not sys.warnoptions: + import warnings + warnings.simplefilter("ignore") cls.logger = ConsoleLogger() diff --git a/source/code/tests/action_tests/ec2_tag_cpu_instance/test_action.py b/source/code/tests/action_tests/ec2_tag_cpu_instance/test_action.py index 966ab7e..ed08cf0 100644 --- a/source/code/tests/action_tests/ec2_tag_cpu_instance/test_action.py +++ b/source/code/tests/action_tests/ec2_tag_cpu_instance/test_action.py @@ -14,6 +14,7 @@ import time import unittest from datetime import timedelta +import sys import actions.ec2_tag_cpu_instance_action as lt import pytz @@ -48,6 +49,9 @@ def __init__(self, method_name): @classmethod def setUpClass(cls): + if not sys.warnoptions: + import warnings + warnings.simplefilter("ignore") cls.logger = ConsoleLogger() diff --git a/source/code/version.txt b/source/code/version.txt deleted file mode 100644 index 50aea0e..0000000 --- a/source/code/version.txt +++ /dev/null @@ -1 +0,0 @@ -2.1.0 \ No newline at end of file diff --git 
a/source/ecs/Dockerfile b/source/ecs/Dockerfile new file mode 100755 index 0000000..63f6669 --- /dev/null +++ b/source/ecs/Dockerfile @@ -0,0 +1,13 @@ +# dockerfile for Ops Automator Image, version %version% +FROM amazonlinux:latest +ENV HOME /homes/ec2-user +WORKDIR /homes/ec2-user +ADD ops-automator-ecs-runner.py ./ +ADD requirements.txt ./ +RUN yum update -y && \ + yum install -y python3 +RUN echo "alias python=python3" >> /homes/ec2-user/.bashrc +RUN alias python=python3 && \ + alias pip=pip3 && \ + pip3 install boto3 && \ + pip3 install -r requirements.txt \ No newline at end of file diff --git a/source/ecs/build-and-deploy-image.sh b/source/ecs/build-and-deploy-image.sh new file mode 100755 index 0000000..b02350e --- /dev/null +++ b/source/ecs/build-and-deploy-image.sh @@ -0,0 +1,118 @@ +#!/usr/bin/env bash +###################################################################################################################### +# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. # +# # +# Licensed under the Apache License Version 2.0 (the "License"). You may not use this file except in compliance # +# with the License. A copy of the License is located at # +# # +# http://www.apache.org/licenses/ # +# # +# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES # +# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions # +# and limitations under the License. # +###################################################################################################################### +# If running Docker requires sudo to run on this system then also run this script with sudo +function usage { + echo "usage: $0 [--stack | -s] stackname [--region | -r] awsregion" +} + +function do_cmd { + cmd=$* + $cmd + if [ $? -gt 0 ]; + then + echo Command failed: ${cmd} + exit 1 + fi +} + +do_replace() { + replace="s|$2|$3|g" + file=$1 + sed -i -e $replace $file +} + +#------------------------------------------------------------------------------------------------ + +echo "Build and deploy docker image for Ops Automator, version %version%" +# Before running this script: +# - you have created an ECS Docker cluster +# - you have updated the OA stack with the name of the cluster +# - Cloudformation has created the ECR repo +# - you have the name of the stack + +while [[ $# -gt 1 ]] +do + + key="$1" + + case ${key} in + -r|--region) + region="$2" + shift # past argument + ;; + -s|--stack) + stack="$2" + shift # past argument + ;; + *) + usage + exit 1 + ;; + esac + shift # past argument or value +done + +if [ "${region}" == "" ] +then + echo "Error: No region specified, use -r or --region to specify the region." + usage + exit 1 +fi + +if [ "${stack}" == "" ] +then + echo "Error: No stack name specified, use -s or --stack to specify the name of the Ops Automator stack." 
+    usage
+    exit 1
+fi
+
+# Get repository from the stack parameters
+repository=`aws cloudformation describe-stacks --region ${region} --stack-name ${stack} --query "Stacks[0].Outputs[?OutputKey=='Repository']|[0].OutputValue" --output text`
+if [ "${repository}" == "" ]
+then
+    echo "No repository in output of stack ${stack}"
+    exit 1
+fi
+
+# Get account id
+accountid=`aws sts get-caller-identity --region ${region} --output text | sed 's/\t.*//'`
+
+image=ops-automator
+
+echo
+echo "Image is      : " ${image}
+echo "Repository is : " ${repository}
+echo "Region is     : " ${region}
+echo
+
+echo "=== Creating Dockerfile ==="
+cp Dockerfile.orig Dockerfile
+do_replace Dockerfile '' ${repository}
+echo
+
+# Pulling latest AWS Linux image. Note that this repo/region must match the FROM value in the Dockerfile
+echo "=== Pulling latest AWS Linux image from DockerHub ==="
+do_cmd docker pull amazonlinux
+
+echo
+echo "=== Building docker image ==="
+do_cmd docker build -t ${image} .
+
+echo
+echo "=== Tagging and pushing image $image to $repository ==="
+do_cmd docker tag ${image}:latest ${repository}:latest
+
+login=`aws ecr get-login --region ${region} --no-include-email`
+do_cmd $login
+do_cmd docker push ${repository}:latest
diff --git a/source/ecs/ops-automator-ecs-runner.py b/source/ecs/ops-automator-ecs-runner.py
new file mode 100755
index 0000000..a85965e
--- /dev/null
+++ b/source/ecs/ops-automator-ecs-runner.py
@@ -0,0 +1,104 @@
+######################################################################################################################
+# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.                                            #
+#                                                                                                                    #
+# Licensed under the Apache License Version 2.0 (the "License"). You may not use this file except in compliance     #
+# with the License. A copy of the License is located at                                                             #
+#                                                                                                                    #
+#     http://www.apache.org/licenses/                                                                               #
+#                                                                                                                    #
+# or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES #
+# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions    #
+# and limitations under the License.                                                                                #
+######################################################################################################################
+
+
+import importlib.util
+import json
+import os
+import sys
+import zipfile
+
+import boto3
+import requests
+
+
+def get_lambda_code(cmdline_args):
+    """
+    Downloads and unzips the code of the Lambda
+    :param cmdline_args:
+    :return:
+    """
+    stack_name = cmdline_args["stack"]
+    stack_region = cmdline_args["stack_region"]
+
+    lambda_client = boto3.client("lambda", region_name=stack_region)
+
+    os.environ["AWS_DEFAULT_REGION"] = stack_region
+
+    lambda_function_name = "{}-{}".format(stack_name, "OpsAutomator-Standard")
+    lambda_function = lambda_client.get_function(FunctionName=lambda_function_name)
+    lambda_environment = lambda_function["Configuration"]["Environment"]["Variables"]
+
+    for ev in lambda_environment:
+        os.environ[ev] = lambda_environment[ev]
+
+    code_url = lambda_function["Code"]["Location"]
+    code_stream = requests.get(code_url, stream=True)
+
+    temp_code_directory = "./"
+    lambda_code_zip_file = os.path.join(temp_code_directory, "code.zip")
+    with open(lambda_code_zip_file, 'wb') as fd:
+        for chunk in code_stream.iter_content(chunk_size=10240):
+            fd.write(chunk)
+
+    zip_ref = zipfile.ZipFile(lambda_code_zip_file, 'r')
+    zip_ref.extractall(temp_code_directory)
+    zip_ref.close()
+
+    return temp_code_directory
+
+
+def run_ops_automator_step(cmdline_args):
+    """
+    Runs ecs_handler
+    :param cmdline_args: arguments used by ecs_handler to rebuild event for Ops Automator select or execute handler
+    :return: result of the Ops Automator handler
+    """
+    code_directory = get_lambda_code(cmdline_args)
+
+    # load the unpacked Lambda entry module directly from its file
+    main_module_file = os.path.join(code_directory, "main.py")
+    spec = importlib.util.spec_from_file_location("main", main_module_file)
+    lambda_main_module = importlib.util.module_from_spec(spec)
+    spec.loader.exec_module(lambda_main_module)
+
+    lambda_function_ecs_handler = lambda_main_module.ecs_handler
+
+    # get and run ecs_handler method
+    return lambda_function_ecs_handler(cmdline_args)
+
+
+if __name__ == "__main__":
+
+    print("Running Ops Automator ECS Job, version %version%")
+
+    if len(sys.argv) < 2:
+        print("No task arguments passed as first parameter")
+        exit(1)
+
+    args = {}
+
+    try:
+        args = json.loads(sys.argv[1])
+    except Exception as ex:
+        print("\"{}\" is not valid JSON, {}".format(sys.argv[1], ex))
+        exit(1)
+
+    try:
+        print("Task arguments to run the job are\n {}".format(json.dumps(args, indent=3)))
+        print("Result is {}".format(run_ops_automator_step(args)))
+        exit(0)
+
+    except Exception as e:
+        print(e)
+        exit(1)
diff --git a/source/version.txt b/source/version.txt
new file mode 100644
index 0000000..e3a4f19
--- /dev/null
+++ b/source/version.txt
@@ -0,0 +1 @@
+2.2.0
\ No newline at end of file
diff --git a/version.txt b/version.txt
new file mode 100644
index 0000000..e3a4f19
--- /dev/null
+++ b/version.txt
@@ -0,0 +1 @@
+2.2.0
\ No newline at end of file
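For reference, the Repository lookup that build-and-deploy-image.sh performs with the AWS CLI corresponds to this boto3 call; a sketch, with the stack name, output key, and region as placeholder values:

import boto3

def get_stack_output(stack_name, output_key, region):
    # read one output value from a CloudFormation stack
    cfn = boto3.client("cloudformation", region_name=region)
    stack = cfn.describe_stacks(StackName=stack_name)["Stacks"][0]
    for output in stack.get("Outputs", []):
        if output["OutputKey"] == output_key:
            return output["OutputValue"]
    return None

# e.g. repository = get_stack_output("ops-automator", "Repository", "us-east-1")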