
Using AWS signature for authentication #69

Open
dexter-mh-lee opened this issue Aug 12, 2021 · 14 comments
@dexter-mh-lee

Hi.

I use IAM service accounts in k8s to give the correct permissions to each pod talking to AWS systems. I could not find any documentation on how to do this for the Glue schema registry. Is there a plan to add this functionality?

Currently, I need to add the permission to the worker node's IAM role, which is not ideal.

@hhkkxxx133 hhkkxxx133 self-assigned this Aug 30, 2021
@blacktooth blacktooth added the research Needs research label Aug 30, 2021
@liko9

liko9 commented Sep 10, 2021

@dexter-mh-lee What sort of Kubernetes are you using? What process are you using to apply the policies to your IAM service accounts?

@dexter-mh-lee
Author

We are using AWS EKS. Usually, for our other use cases (Elasticsearch, MySQL, and so on), we can create an IAM service account with the permissions to access the AWS resources, and the AWS Signature v4 APIs running on each pod automatically pick up these policies (from the service account attached to the pod) and decide whether the pod has permission to perform certain actions. We couldn't find anything like this for the Glue schema registry, so we had to give such permissions to the worker nodes of our EKS cluster directly, which is not ideal.
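
For reference, a minimal sketch of how this can be checked with the AWS SDK for Java v2 (StsClient and DefaultCredentialsProvider are standard SDK v2 classes; running it from inside an IRSA-enabled pod is the assumption):

```java
import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.services.sts.StsClient;

// Run from inside a pod: prints the ARN that the default AWS SDK v2
// credentials chain resolves. On an IRSA-enabled pod this should be the
// service-account role rather than the worker node's role.
public class WhoAmI {
    public static void main(String[] args) {
        try (StsClient sts = StsClient.builder()
                .credentialsProvider(DefaultCredentialsProvider.create())
                .build()) {
            System.out.println(sts.getCallerIdentity().arn());
        }
    }
}
```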

@liko9

liko9 commented Sep 13, 2021

@dexter-mh-lee Did you see this? https://docs.aws.amazon.com/glue/latest/dg/using-identity-based-policies.html#access-policy-examples-aws-managed (specifically the AWSGlueSchemaRegistryReadonlyAccess managed policy, which feels like a good fit for your use case)

@dexter-mh-lee
Author

We are already using this. The problem is that I have to give this permission to our EKS worker nodes, meaning every single pod in the cluster has access to the schema registry. This goes against the principles used by the rest of the AWS ecosystem.

@liko9

liko9 commented Sep 20, 2021

I apologize, I am not an expert in IAM - do you have a link to the process/documentation you are using for your other use cases where you are able to apply this at a pod level? I'm struggling to understand why you cannot apply this only to specifically labeled pods rather than the entire node.

@dexter-mh-lee
Author

dexter-mh-lee commented Sep 20, 2021 via email

@liko9

liko9 commented Dec 8, 2021

My humblest apologies for not responding sooner. I think it might be best for you to open a support case so we can investigate what is happening on the service side of both EKS and Glue Schema Registry to see if there isn't a solution.

@blacktooth
Contributor

As suggested by Liko, you can get better help on this by reaching out to the EKS AWS support team. Please let us know if there is something specific to GSR that is not working. Closing this for now; please feel free to re-open or open a new issue if required.

Thanks!

@dexter-mh-lee
Author

Please don't close this. This has nothing to do with EKS. This is basic authentication for accessing the schema registry, analogous to other AWS components, and it has to be implemented inside the schema registry. I don't understand what the EKS team would do about this.

@blacktooth
Contributor

Apologies for closing this. Looking at the documentation, are you facing issues creating an IAM role with GSR policies in it, or creating a service account? If you can share the steps to reproduce this problem, that would be very helpful. Please also consider opening a support case if you prefer to share information privately.

@blacktooth blacktooth reopened this Jan 13, 2022
@tveon
Contributor

tveon commented Jan 13, 2022

FWIW: I've asked our dev-ops team to escalate this with AWS support and the EKS team, as this is a blocker for adopting EKS.
To my understanding, changes need to be made on both sides. For the Glue Schema Registry, we need support for STS to make it work.
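
As a sketch of what that could look like on the client side with the AWS SDK for Java v2: the SDK already ships a web-identity (STS) credentials provider that reads the AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE variables EKS injects for IRSA, and it can be handed to a Glue client explicitly. Whether the schema registry serde itself exposes a hook to inject this provider is version-dependent and assumed here, not confirmed:

```java
import software.amazon.awssdk.auth.credentials.WebIdentityTokenFileCredentialsProvider;
import software.amazon.awssdk.services.glue.GlueClient;
import software.amazon.awssdk.services.glue.model.ListRegistriesRequest;

// Sketch: build the web-identity (STS) provider from the variables that
// EKS injects into IRSA-enabled pods, and pass it to a Glue client
// explicitly instead of relying on the default credentials chain.
public class GlueWithIrsa {
    public static void main(String[] args) {
        WebIdentityTokenFileCredentialsProvider irsa =
                WebIdentityTokenFileCredentialsProvider.create();

        try (GlueClient glue = GlueClient.builder()
                .credentialsProvider(irsa)
                .build()) {
            // Any Schema Registry call now runs under the service-account role.
            glue.listRegistries(ListRegistriesRequest.builder().build())
                .registries()
                .forEach(r -> System.out.println(r.registryName()));
        }
    }
}
```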

@ali-raza-rizvi

@dexter-mh-lee Have you found a workaround for this? I am facing a similar issue.

@singhbaljit
Contributor

I have the same issue. The serde classes try to connect with the node's user/ARN instead of the EKS service account role.

@singhbaljit
Contributor

singhbaljit commented Oct 12, 2023

This issue is probably caused by #120. Basically, WebIdentityTokenFileCredentialsProvider fails silently when both the URL-connection and Apache HTTP clients are present. The default credentials provider chain then falls through to InstanceProfileCredentialsProvider and ends up using the EC2/node role. Try enabling SDK logs to see if that is the case; it is for me.
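
One way to surface the swallowed error, as a sketch: resolve the web-identity credentials directly instead of going through the default chain, so the failure throws instead of silently falling back. WebIdentityTokenFileCredentialsProvider and its builder are standard SDK v2 APIs; reading the role ARN and token path from the IRSA environment variables is the assumption:

```java
import java.nio.file.Paths;
import software.amazon.awssdk.auth.credentials.WebIdentityTokenFileCredentialsProvider;

// Sketch: resolve the web-identity credentials directly so any error the
// default chain would swallow is thrown, instead of the chain quietly
// falling through to InstanceProfileCredentialsProvider.
public class CheckIrsaCredentials {
    public static void main(String[] args) {
        WebIdentityTokenFileCredentialsProvider provider =
                WebIdentityTokenFileCredentialsProvider.builder()
                        .roleArn(System.getenv("AWS_ROLE_ARN"))
                        .webIdentityTokenFile(
                                Paths.get(System.getenv("AWS_WEB_IDENTITY_TOKEN_FILE")))
                        .build();

        // Throws on failure (e.g. when the wrong HTTP client is picked up),
        // rather than letting the credentials chain hide the problem.
        System.out.println(provider.resolveCredentials().accessKeyId());
    }
}
```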
