Merge pull request #5 from DistributedScience/DSv0.1
update with DSv0.1 improvements
ErinWeisbart authored May 18, 2023
2 parents e385828 + f90a11f commit 452e2ee
Showing 9 changed files with 561 additions and 100 deletions.
9 changes: 8 additions & 1 deletion config.py
@@ -27,7 +27,14 @@
# SQS QUEUE INFORMATION:
SQS_QUEUE_NAME = APP_NAME + 'Queue'
SQS_MESSAGE_VISIBILITY = 4*60*60 # Timeout (secs) for messages in flight (average time to be processed)
SQS_DEAD_LETTER_QUEUE = 'arn:aws:sqs:some-region:111111100000:DeadMessages'
SQS_DEAD_LETTER_QUEUE = 'user_DeadMessages'

# MONITORING
AUTO_MONITOR = 'True'

# CLOUDWATCH DASHBOARD CREATION
CREATE_DASHBOARD = 'True' # Create a dashboard in Cloudwatch for run
CLEAN_DASHBOARD = 'True' # Automatically remove dashboard at end of run with Monitor

# REDUNDANCY CHECKS
CHECK_IF_DONE_BOOL = 'False' #True or False - should it check if there is already a .zarr file and delete the job if yes?
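For orientation, the sketch below shows roughly how a visibility timeout and a dead-letter queue like the ones configured above fit together at the SQS level. It is a minimal boto3 sketch with illustrative queue names and an assumed maxReceiveCount; it is not the project's actual queue-creation code.

<pre>
import json
import boto3

sqs = boto3.client("sqs")

# Dead-letter queue for jobs that repeatedly fail (name is illustrative).
dlq = sqs.create_queue(QueueName="user_DeadMessages")
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq["QueueUrl"], AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Main job queue: messages stay invisible for 4 hours while being processed,
# and are redriven to the dead-letter queue after repeated failures.
sqs.create_queue(
    QueueName="MyAppQueue",
    Attributes={
        "VisibilityTimeout": str(4 * 60 * 60),
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "10"}
        ),
    },
)
</pre>
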
16 changes: 13 additions & 3 deletions documentation/DOZC-documentation/overview_2.md
@@ -5,7 +5,7 @@
The steps for actually running the Distributed-OMEZARRCreator code are outlined in the repository [README](https://github.com/DistributedScience/Distributed-OMEZARRCreator/blob/master/README.md), and details of the parameters you set in each step are on their respective Documentation pages ([Step 1: Config](step_1_configuration.md), [Step 2: Jobs](step_2_submit_jobs.md), [Step 3: Fleet](step_3_start_cluster.md), and optional [Step 4: Monitor](step_4_monitor.md)).
We'll give an overview of what happens in AWS at each step here and explain what AWS does automatically once you have it set up.

![Distributed-Something Chronological Overview](images/Distributed-Something_chronological_overview.png)
![Distributed-OMEZARRCreator Chronological Overview](images/Distributed-OMEZARRCreator_chronological_overview.png)

**Step 1 (A)**:
In the Config file you set quite a number of specifics that are used by EC2, ECS, SQS, and in building your Docker containers.
@@ -42,7 +42,7 @@ If SQS tells them there are no visible jobs then they shut themselves down.
**Optional Step 4 (E)**:
If you choose to run `python3 run.py monitor`, it will automatically scale down your hardware (e.g. intelligently scale down your spot fleet request) during a run and clean up all of the infrastructure you created for the run once it finishes.
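Scaling down a spot fleet request amounts to lowering its target capacity and, once there is nothing left to process, cancelling the request. A hedged boto3 sketch of that idea follows; the fleet ID is a placeholder and the actual Monitor logic may differ.

<pre>
import boto3

ec2 = boto3.client("ec2")
fleet_id = "sfr-00000000-0000-0000-0000-000000000000"  # placeholder

# Shrink the fleet while the remaining jobs drain.
ec2.modify_spot_fleet_request(SpotFleetRequestId=fleet_id, TargetCapacity=1)

# Once the queue is empty, cancel the request and terminate its instances.
ec2.cancel_spot_fleet_requests(
    SpotFleetRequestIds=[fleet_id], TerminateInstances=True
)
</pre>
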

## What does this look like?
## What does an instance configuration look like?

![Example Instance Configuration](images/sample_DCP_config_1.png)

@@ -65,4 +65,14 @@ How long a job takes to run and how quickly you need the data may also affect ho
* Running a few large Docker containers (as opposed to many small ones) increases the amount of memory all the copies of your software are sharing, decreasing the likelihood you'll run out of memory if you stagger your job start times.
However, you're also at a greater risk of running out of hard disk space.

Keep an eye on all of the logs the first few times you run any workflow and you'll get a sense of whether your resources are being utilized well or if you need to do more tweaking of your configuration.

## What does this look like on AWS?
The following five services are the primary AWS resources that Distributed-OMEZARRCreator interacts with.
After you have finished [preparing for Distributed-OMEZARRCreator](step_0_prep.md), you do not need to directly interact with any of these services outside of Distributed-OMEZARRCreator.
If you would like a granular view of what Distributed-OMEZARRCreator is doing while it runs, you can open each console in a separate tab in your browser and watch their individual behaviors, though this is not necessary, especially if you run the [monitor command](step_4_monitor.md) and/or have DS automatically create a Dashboard for you (see [Configuration](step_1_configuration.md)).
* [S3 Console](https://console.aws.amazon.com/s3)
* [EC2 Console](https://console.aws.amazon.com/ec2/)
* [ECS Console](https://console.aws.amazon.com/ecs/)
* [SQS Console](https://console.aws.amazon.com/sqs/)
* [CloudWatch Console](https://console.aws.amazon.com/cloudwatch/)
115 changes: 56 additions & 59 deletions documentation/DOZC-documentation/step_0_prep.md
@@ -1,103 +1,100 @@
# Step 0: Prep
There are two classes of AWS resources that Distributed-OMEZARRCreator interacts with: 1) infrastructure that is made once per AWS account to enable any Distributed-OMEZARRCreator implementation to run and 2) infrastructure that is made and destroyed with every run.
This section describes the creation of the first class of AWS infrastructure and only needs to be followed once per account.

Distributed-OMEZARRCreator runs many parallel jobs in EC2 instances that are automatically managed by ECS.
To get jobs started, you need a control node from which to submit jobs and monitor progress.
This section describes what you need in AWS and in the control node to get started.
This guide only needs to be followed once per account.
(Though we recommend each user has their own control node, further control nodes can be created from an AMI after this guide has been followed to completion once.)


## 1. AWS Configuration

The AWS resources involved in running Distributed-OMEZARRCreator can be primarily configured using the [AWS Web Console](https://aws.amazon.com/console/).
The architecture of Distributed-OMEZARRCreator is based in the [worker pattern](https://aws.amazon.com/blogs/compute/better-together-amazon-ecs-and-aws-lambda/) for distributed systems.
We have adapted and simplified that architecture for Distributed-OMEZARRCreator.

You need an active account configured to proceed.
Log in into your AWS account, and make sure the following list of resources is created:
## AWS Configuration
The AWS resources involved in running Distributed-OMEZARRCreator are configured using the [AWS Web Console](https://aws.amazon.com/console/) and a setup script we provide ([setup_AWS.py](../../setup_AWS.py)).
You need an active AWS account configured to proceed.
Log in to your AWS account, and make sure the following list of resources is created:

### 1.1 Access keys
* Get [security credentials](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) for your account.
### 1.1 Manually created resources
* **Security Credentials**: Get [security credentials](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) for your account.
Store your credentials in a safe place that you can access later.
* You will probably need an ssh key to login into your EC2 instances (control or worker nodes).
* **SSH Key**: You will probably need an ssh key to login into your EC2 instances (control or worker nodes).
[Generate an SSH key](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) and store it in a safe place for later use.
If you'd rather, you can generate a new key pair to use for this during creation of the control node; make sure to `chmod 600` the private key when you download it.

### 1.2 Roles and permissions
* You can use your default VPC, subnet, and security groups; you should add an inbound SSH connection from your IP address to your security group.
* [Create an ecsInstanceRole](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/instance_IAM_role.html) with appropriate permissions (An S3 bucket access policy CloudWatchFullAccess, CloudWatchActionEC2Access, AmazonEC2ContainerServiceforEC2Role policies, ec2.amazonaws.com as a Trusted Entity)
* [Create an aws-ec2-spot-fleet-tagging-role](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet-requests.html) with appropriate permissions (just needs AmazonEC2SpotFleetTaggingRole); ensure that in the "Trust Relationships" tab it says "spotfleet.amazonaws.com" rather than "ec2.amazonaws.com" (edit this if necessary).
In the current interface, it's easiest to click "Create role", select "EC2" from the main service list, then select "EC2- Spot Fleet Tagging".
* **SSH Connection**: You can use your default AWS account VPC, subnet, and security groups.
You should add an inbound SSH connection from your IP address to your security group.

### 1.2 Automatically created resources
* Run setup_AWS by entering `python setup_AWS.py` from your command line.
It will automatically create:
* an [ecsInstanceRole](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/instance_IAM_role.html) with appropriate permissions.
This role is used by the EC2 instances generated by your spot fleet request and coordinated by ECS.
* an [aws-ec2-spot-fleet-tagging-role](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet-requests.html) with appropriate permissions.
This role grants the Spot Fleet the permissions to request, launch, terminate, and tag instances.
* an SNS topic that is used for triggering the auto-Monitor.
* a Monitor lambda function that is used for auto-monitoring of your runs (see [Step 4: Monitor](step_4_monitor.md) for more information).
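To give a sense of what setup_AWS.py does on your behalf, the sketch below shows the general shape of creating the spot-fleet tagging role with boto3. It is illustrative only; setup_AWS.py itself is the source of truth.

<pre>
import json
import boto3

iam = boto3.client("iam")

# Trust policy allowing the Spot Fleet service to assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "spotfleet.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="aws-ec2-spot-fleet-tagging-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.attach_role_policy(
    RoleName="aws-ec2-spot-fleet-tagging-role",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AmazonEC2SpotFleetTaggingRole",
)
</pre>
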

### 1.3 Auxiliary Resources
*You can certainly configure Distributed-OMEZARRCreator for use without S3, but most DS implementations use S3 for storage.*
* [Create an S3 bucket](http://docs.aws.amazon.com/AmazonS3/latest/gsg/CreatingABucket.html) and upload your data to it.
* Add permissions to your bucket so that [logs can be exported to it](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3ExportTasksConsole.html) (Step 3, first code block)
* [Create an SQS](http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSGettingStartedGuide/CreatingQueue.html) queue for unprocessable-messages to be dumped into (aka a DeadLetterQueue).

### 1.4 Primary Resources
The following five are the primary resources that Distributed-OMEZARRCreator interacts with.
After you have finished preparing for Distributed-OMEZARRCreator (this guide), you do not need to directly interact with any of these services outside of Distributed-OMEZARRCreator.
If you would like a granular view of [what Distributed-OMEZARRCreator is doing while it runs](overview_2.md), you can open each console in a separate tab in your browser and watch their individual behaviors, though this is not necessary, especially if you run the [monitor command](step_4_monitor.md) and/or enable auto-Dashboard creation in your [configuration](step_1_configuration.md).
* [S3 Console](https://console.aws.amazon.com/s3)
* [EC2 Console](https://console.aws.amazon.com/ec2/)
* [ECS Console](https://console.aws.amazon.com/ecs/)
* [SQS Console](https://console.aws.amazon.com/sqs/)
* [CloudWatch Console](https://console.aws.amazon.com/cloudwatch/)

### 1.5 Spot Limits
AWS initially [limits the number of spot instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-limits.html) you can use at one time.
You can request more through a process in the linked documentation.
Add permissions to your bucket so that [logs can be exported to it](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3ExportTasksConsole.html) (Step 3, first code block).
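As a rough illustration of the permission being granted (the linked AWS guide is authoritative; the bucket name and region below are placeholders), the bucket policy lets the CloudWatch Logs service check the bucket ACL and write exported log objects:

<pre>
import json
import boto3

bucket = "my-omezarr-bucket"                   # placeholder
logs_service = "logs.us-west-1.amazonaws.com"  # placeholder region

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": logs_service},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{bucket}",
        },
        {
            "Effect": "Allow",
            "Principal": {"Service": logs_service},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {
                "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
            },
        },
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
</pre>
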

### 1.4 Increase Spot Limits
AWS initially [limits the number of spot instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-limits.html) you can use at one time; you can request more through a process in the linked documentation.
Depending on your workflow (your scale and how you group your jobs), this may not be necessary.

## 2. The Control Node
The control node can be your local machine if it is configured properly, or it can also be a small instance in AWS.
## The Control Node
The control node is a machine that is used for running the Distributed-OMEZARRCreator scripts.
It can be your local machine, if it is configured properly, or it can be a small instance in AWS.
We prefer to have a small EC2 instance dedicated to controlling our Distributed-OMEZARRCreator workflows for simplicity of access and configuration.
To login in an EC2 machine you need an ssh key that can be generated in the web console.
To log in to an EC2 machine you need an SSH key, which can be generated in the web console.
Each time you launch an EC2 instance you have to confirm having this key (which is a .pem file).
This machine is needed only for submitting jobs and does not have any special computational requirements, so you can use a micro instance.
(Though we recommend each user has their own control node, further control nodes can be created from an AMI after this guide has been followed to completion once.)

The control node needs the following tools to successfully run Distributed-OMEZARRCreator.
Here we assume you are using the command line in a Linux machine, but you are free to try other operating systems too.
These instructions assume you are using the command line in a Linux machine, but you are free to try other operating systems too.

### 2.1 Make your own
### Create Control Node from Scratch
#### 2.1 Install Python 3.8 or higher and pip
Most scripts are written in Python and support Python 3.8 and 3.9.
Follow installation instructions for your platform to install Python.
pip should be included with the installation of Python 3.8 or 3.9; if it is not, install pip separately.

#### 2.1.1 Clone this repo
#### 2.2 Clone this repository and install requirements
You will need the scripts in Distributed-OMEZARRCreator locally available in your control node.
<pre>
sudo apt-get install git
git clone https://github.com/DistributedScience/Distributed-OMEZARRCreator.git
cd Distributed-OMEZARRCreator/
git pull
</pre>

#### 2.1.2 Python 3.8 or higher and pip
Most scripts are written in Python and support Python 3.8 and 3.9.
Follow installation instructions for your platform to install python and, if needed, pip.
After Python has been installed, you need to install the requirements for Distributed-Something following these steps:

<pre>
cd Distributed-OMEZARRCreator/files
# install requirements
cd files
sudo pip install -r requirements.txt
</pre>

#### 2.1.3 AWS CLI
#### 2.3 Install AWS CLI
The command line interface is the main mode of interaction between the local node and the resources in AWS.
You need to install [awscli](http://docs.aws.amazon.com/cli/latest/userguide/installing.html) for Distributed-Something to work properly:
You need to install [awscli](http://docs.aws.amazon.com/cli/latest/userguide/installing.html) for Distributed-OMEZARRCreator to work properly:

<pre>
sudo pip install awscli --ignore-installed six
sudo pip install --upgrade awscli
aws configure
</pre>

When running the last step, you will need to enter your AWS credentials.
When running the last step (`aws configure`), you will need to enter your AWS credentials.
Make sure to set the region correctly (e.g. us-west-1 or eu-west-2, not eu-west-2a), and set the default output format to json.
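Once `aws configure` finishes, your answers are written to two small files that should look roughly like this (values are placeholders):

<pre>
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# ~/.aws/config
[default]
region = us-west-1
output = json
</pre>
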

#### 2.1.4 s3fs-fuse (optional)
[s3fs-fuse](https://github.com/s3fs-fuse/s3fs-fuse) allows you to mount your s3 bucket as a pseudo-file system.
It does not have all the performance of a real file system, but allows you to easily access all the files in your s3 bucket.
Follow the instructions at the link to mount your bucket.

#### 2.1.5 Create Control Node AMI (optional)
### Create Control Node from AMI (optional)
Once you've set up the other software (and gotten a job running, so you know everything is set up correctly), you can use Amazon's web console to set this up as an Amazon Machine Image, or AMI, to replicate the current state of the hard drive.
Create future control nodes using this AMI so that you don't need to repeat the above installation.

## Removing long-term infrastructure
If you decide that you never want to run Distributed-OMEZARRCreator again and would like to remove the long-term infrastructure, follow these steps.

### Remove Roles, Lambda Monitor, and Monitor SNS
<pre>
python setup_AWS.py destroy
</pre>

### Remove EC2 Control node
If you made your control node as an EC2 instance, select that instance in the AWS console.
Select `Instance state` => `Terminate instance`.
13 changes: 13 additions & 0 deletions documentation/DOZC-documentation/step_1_configuration.md
@@ -54,10 +54,23 @@ We recommend setting this to slightly longer than the average amount of time it
See [SQS_QUEUE_information](SQS_QUEUE_information) for more information.
* **SQS_DEAD_LETTER_QUEUE:** The name of the queue to send jobs to if they fail to process correctly multiple times.
This keeps a single bad job (such as one where a single file has been corrupted) from keeping your cluster active indefinitely.
This queue will be automatically made if it doesn't exist already.
See [Step 0: Prep](step_0_prep.md) for more information.

***

### MONITORING
* **AUTO_MONITOR:** Whether or not to have Auto-Monitor automatically monitor your jobs.

***

### CLOUDWATCH DASHBOARD CREATION

* **CREATE_DASHBOARD:** Create a Cloudwatch Dashboard that plots run metrics?
* **CLEAN_DASHBOARD:** Automatically clean up the Cloudwatch Dashboard at the end of the run?
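In config.py these settings appear as in the excerpt below; the values shown match the defaults added to config.py in this commit:

<pre>
# MONITORING
AUTO_MONITOR = 'True'

# CLOUDWATCH DASHBOARD CREATION
CREATE_DASHBOARD = 'True' # Create a dashboard in Cloudwatch for run
CLEAN_DASHBOARD = 'True' # Automatically remove dashboard at end of run with Monitor
</pre>
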

***

### REDUNDANCY CHECKS

* **CHECK_IF_DONE_BOOL:** Whether or not to check the output folder before proceeding.
