Support reading the credentials from the ~/.aws/credentials file within the Spark cluster #123
Currently, users of the migrator have to provide their AWS credentials through the `config.yaml` file. According to the description of #122, in some cases it is desirable to also read the AWS credentials from the user profile (`~/.aws/credentials`).

This PR addresses that need by using a capability of the Hadoop connector to set a custom AWS credentials provider. We set it to `com.amazonaws.auth.profile.ProfileCredentialsProvider`, which is the standard profile credentials provider from the AWS SDK.

Ultimately, if necessary, we could make this configurable and let users supply the credentials provider to use, which would give them full control over it.
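For illustration, here is a minimal sketch of the mechanism this PR relies on: pointing a Hadoop connector at a custom AWS credentials provider class via the Spark job's Hadoop configuration. The sketch uses the standard Hadoop S3A property `fs.s3a.aws.credentials.provider` as an example; the migrator's own connector may use a different configuration key, and the app name and commented S3 path below are placeholders, not part of this PR.

```scala
import org.apache.spark.sql.SparkSession

object ProfileCredentialsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("profile-credentials-sketch") // placeholder app name
      .getOrCreate()

    // Ask the Hadoop connector to resolve AWS credentials through the
    // AWS SDK's profile provider, which reads ~/.aws/credentials on the host.
    // Note: fs.s3a.aws.credentials.provider is the generic S3A property used
    // here as an illustration; the migrator's connector may expose its own key.
    spark.sparkContext.hadoopConfiguration.set(
      "fs.s3a.aws.credentials.provider",
      "com.amazonaws.auth.profile.ProfileCredentialsProvider"
    )

    // Subsequent s3a:// reads and writes now authenticate with the
    // credentials found in the default profile, e.g.:
    // spark.read.parquet("s3a://example-bucket/example-path") // placeholder path

    spark.stop()
  }
}
```

Note that when the job runs on a cluster, the `~/.aws/credentials` file must be present on the machines where the provider is instantiated, which is why the PR title scopes this to reading the file within the Spark cluster.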
According to #122 (comment), this PR fixes #122.