Bradley Griffith edited this page Dec 6, 2017 · 2 revisions

Below is an explanation of how this application uses various AWS services to transcode Story media.

S3

We transcode uploaded Story media to mp4 and webm, and use separate buckets for the pre-transcoded and transcoded files. When files are first uploaded, they go to the appName-input bucket. After transcoding, both the transcoded files and the original upload are placed in the appName-output bucket.

Input Bucket Name: appName-input
Output Bucket Name: appName-output

NOTE: For a full description of how our client app and our server handle Story media uploading, please see the Story Upload Flow documentation.

Access Control

Files within the appName-output bucket are individually publicly available, but the bucket itself, as a directory, is not publicly listable. This is achieved as follows:

  1. The bucket has a configured "Access Control List" that allows only our admins to access the bucket at its root level (screenshot: appName output access control list).

  2. The bucket has a configured "Bucket Policy" that grants "GetObject" permission on all files within the bucket to all users, authenticated or otherwise (screenshot: appName output bucket policy).
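A bucket policy of this shape might look like the following. This is a sketch, not the project's actual policy; the `Sid` and the `appName-output` bucket name are stand-ins, and the resource ARN's `/*` suffix is what scopes the grant to objects rather than the bucket itself:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::appName-output/*"
    }
  ]
}
```

Because the policy grants only `s3:GetObject` (and not `s3:ListBucket`), anonymous users can fetch a file whose key they already know, but cannot enumerate the bucket's contents.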

Together, these two steps ensure that individual files within the appName-output bucket are reachable, while the bucket itself, at root, is not.

As for appName-input, the bucket is locked down so that only our admins can access either the root or the files it contains (screenshot: appName input access control list).

Transcoding

We transcode uploaded Story media to mp4 and webm. The pipeline works as follows:

  1. To upload the Story media, the client app first requests a referenceId for a story from the server using the StoryJob#create endpoint.
  2. The client app uploads the Story media to S3 under appName-input/<referenceId>.<media-format>.
  3. An AWS Lambda function listening to the appName-input bucket is triggered and tells our Elastic Transcoder Pipeline (ETP) setup to begin transcoding.
  4. The ETP pulls the uploaded Story media from appName-input and begins transcoding.
  5. The ETP finishes transcoding and places the transcoded files into appName-output/<media-format>-<s3_reference_id>.<media-format>.
  6. The ETP publishes its completion event to Amazon's Simple Notification Service (SNS).
  7. Another Lambda function is triggered by the ETP-completion SNS message; it moves the original file from the input bucket to the output bucket. This is a little less straightforward than it sounds, because "moving" a file means first copying it from one bucket to the other and then deleting the original.
  8. After moving the original file to the output bucket, the Lambda function constructs a message containing details about the transcoded and original files and publishes it to a second SNS topic.
  9. This second SNS message notifies our server application that the transcode job for the referenceId has completed. From that point on, the server cleans up its references to existing StoryJob objects for the user associated with the StoryJob for the given referenceId.
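Steps 7 and 8 can be sketched roughly as the following Python Lambda body. This is an illustrative sketch, not the project's actual function: the clients are injected as parameters (a real Lambda would create them with boto3 and take an `(event, context)` signature), the topic ARN is a placeholder, and the field names pulled from the Elastic Transcoder notification (`input.key`, `outputs[].key`) are assumptions about that payload's shape:

```python
import json

# Bucket names from this doc; the topic ARN is a hypothetical placeholder.
INPUT_BUCKET = "appName-input"
OUTPUT_BUCKET = "appName-output"
COMPLETION_TOPIC_ARN = "arn:aws:sns:us-east-1:000000000000:appName-transcode-complete"


def move_original(s3, key):
    """'Move' the original upload: S3 has no move, so copy then delete (step 7)."""
    s3.copy_object(
        Bucket=OUTPUT_BUCKET,
        Key=key,
        CopySource={"Bucket": INPUT_BUCKET, "Key": key},
    )
    s3.delete_object(Bucket=INPUT_BUCKET, Key=key)


def handler(event, s3, sns):
    """Triggered by the ETP-completion SNS message (steps 7-8)."""
    job = json.loads(event["Records"][0]["Sns"]["Message"])
    original_key = job["input"]["key"]  # e.g. "<referenceId>.<media-format>"

    move_original(s3, original_key)

    # Build the payload for the second SNS topic (step 8).
    payload = {
        "referenceId": original_key.rsplit(".", 1)[0],
        "original": f"{OUTPUT_BUCKET}/{original_key}",
        "transcoded": [output["key"] for output in job["outputs"]],
    }
    sns.publish(TopicArn=COMPLETION_TOPIC_ARN, Message=json.dumps(payload))
    return payload
```

Injecting the clients keeps the copy/delete/publish sequence easy to exercise with stubs; in the deployed function they would simply be `boto3.client("s3")` and `boto3.client("sns")`.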

Transcoding Overview

