From the logs, it appears that vttablet attempts to read the backup S3 bucket to see if it needs to restore from the latest backup. As such, we need to provide AWS credentials that allow the pod to read the S3 bucket.
Using IRSA, when a pod is created with the annotated service account, AWS_* environment variables are added to the pod so it can authenticate and assume the IAM role.
The issue here is that vitess-operator sees these AWS_* environment variables on the pod and attempts to $patch: delete them.
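For reference, the EKS pod identity webhook mutates pods that use an IRSA-annotated service account roughly as follows. This is a sketch: the role ARN and container name are illustrative, and exact paths can vary by EKS version, but AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE are the core injected variables.

```yaml
# Sketch of the pod spec after the EKS pod identity webhook mutation
# (role ARN and container name are illustrative).
spec:
  containers:
    - name: vttablet
      env:
        - name: AWS_ROLE_ARN
          value: arn:aws:iam::123456789012:role/vitess-backup   # from the SA annotation
        - name: AWS_WEB_IDENTITY_TOKEN_FILE
          value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
      volumeMounts:
        - name: aws-iam-token
          mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
          readOnly: true
  volumes:
    - name: aws-iam-token
      projected:
        sources:
          - serviceAccountToken:
              audience: sts.amazonaws.com
              expirationSeconds: 86400
              path: token
```

Because these variables are injected by the webhook rather than declared in the pod template the operator generates, the operator's diff treats them as unexpected.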
How are you adding the environment variables? Are you adding them to the pod directly?
Did you try adding these environment variables to the tabletPools config as extraEnv? You can specify extra environment variables there, and they are added to all the mysqld and vttablet pods.
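For illustration, an extraEnv entry on a tablet pool looks roughly like this. This is a sketch, not taken from the thread: the field paths are abbreviated, the cell name and value are hypothetical, and the exact schema should be checked against the operator's API reference for your version.

```yaml
# Hypothetical VitessCluster fragment: extraEnv on a tablet pool
# (cell name and value are made up; field paths abbreviated).
tabletPools:
  - cell: uscentral1a
    type: replica
    replicas: 2
    vttablet:
      extraEnv:
        - name: AWS_SHARED_CREDENTIALS_FILE
          value: /vt/secrets/aws/credentials   # example static value
```

This works for static values, which is the limitation raised in the next reply.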
EKS automatically injects AWS_* environment variables into pods that are using the service account.
As these AWS_* values are dynamic, I cannot add them as extraEnv.
I did try adding fake/placeholder values for these environment variables in extraEnv, but vitess-operator still saw a difference and attempted to re-create the pods.
It would be nice if we could tell vitess-operator to ignore specific environment variables from the diff.
We're trying to use vitess-operator with an S3 backup spec defined in our VitessCluster manifest. Example as follows:

Typically, we use IAM Roles for Service Accounts (IRSA), which allows annotating a Service Account with an IAM Role ARN: https://docs.aws.amazon.com/eks/latest/userguide/associate-service-account-role.html

Example log taken from vitess-operator:

Is there any way to force vitess-operator to ignore the AWS_* environment variable discrepancies?

For context, these tests were performed on an AWS-hosted EKS cluster using vitess-operator v2.9.0-rc1.
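For completeness, the IRSA setup linked above annotates the Service Account roughly like this. The names and role ARN are illustrative, but eks.amazonaws.com/role-arn is the documented annotation key.

```yaml
# Illustrative IRSA-annotated Service Account (names and ARN are made up).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vitess-tablet
  namespace: vitess
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/vitess-backup-reader
```

Pods created with this service account get the AWS_* variables injected, which is what triggers the operator's diff.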