Cannot use AWS IAM roles instead of AWS accessKey #486
Comments
I don't have direct experience with failures relating to IAM roles, but my first guess is that the issue may lie in the IAM role(s)/permissions set up and used by the cluster. Perhaps double-check that using the same role(s)/permissions works properly elsewhere.
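For example (a hedged sketch: the pod name and bucket are placeholders, and it assumes the AWS CLI is available in the image), one way to sanity-check that a pod carrying the kube2iam annotation actually gets the role:

```sh
# Which identity does the pod really have?
kubectl exec -it <some-annotated-pod> -- aws sts get-caller-identity
# Can that identity reach the bucket the builder/registry need?
kubectl exec -it <some-annotated-pod> -- aws s3 ls s3://<your-builder-bucket>
```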
I ran into this as well. I realized it's failing during the build because the builder generates a pod spec that doesn't have the IAM role annotation. The dockerbuilder/slugbuilder pod spec would need to inherit the AWS role annotation used for kube2iam. Maybe something like this would work, though it would need to be conditional, etc.:

```diff
index 8418cc6..d68771f 100644
--- a/pkg/gitreceive/k8s_util.go
+++ b/pkg/gitreceive/k8s_util.go
@@ -28,6 +28,7 @@ const (
 	builderStorage  = "BUILDER_STORAGE"
 	objectStorePath = "/var/run/secrets/deis/objectstore/creds"
 	envRoot         = "/tmp/env"
+	iamRole         = "IAM_ROLE"
 )

 func dockerBuilderPodName(appName, shortSha string) string {
@@ -166,6 +167,9 @@ func buildPod(
 			Labels: map[string]string{
 				"heritage": name,
 			},
+			Annotations: map[string]string{
+				"iam.amazonaws.com/role": iamRole,
+			},
 		},
 	}
```
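For what it's worth, a minimal sketch of how the annotation could be made conditional; the `podAnnotations` helper and its wiring are illustrative assumptions, not existing builder code:

```go
package main

import "fmt"

// podAnnotations returns the kube2iam annotation only when an IAM role has
// actually been configured; otherwise it returns nil so the pod spec stays
// unchanged and the access/secret key path keeps working.
func podAnnotations(iamRole string) map[string]string {
	if iamRole == "" {
		return nil
	}
	return map[string]string{
		// kube2iam matches this annotation to decide which role the pod may assume.
		"iam.amazonaws.com/role": iamRole,
	}
}

func main() {
	fmt.Println(podAnnotations(""))                  // map[] -> no annotation added
	fmt.Println(podAnnotations("deis-builder-role")) // annotation set for kube2iam
}
```

buildPod could then set ObjectMeta.Annotations from this, with the role coming from builder configuration or an environment variable.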
@Akshaykapoor @blakebarnett @vdice Any news on this? We are experiencing the same problem.
Sorry, we had to stop using deis workflow because of this and a few other reasons.
@blakebarnett, may I ask what the other reasons were? Just interested in learning what else you ran into that was a deal breaker for you and your team.
Here's the non-sugar-coated list.
Thank you for the list @blakebarnett. We were aware of most of these. Agreed that it would be nice to have kube2iam integration for the pods, and it seems RBAC is being added in the next version. The single app per namespace is annoying as well. What did you end up using instead, or did you just go with some kind of custom Kubernetes setup?
We're just building everything using CI and Helm charts for now, in the hope that at some later point everything will play nicely and we can provide PaaS features.
This issue was moved to teamhephy/builder#18 |
I upgraded my cluster from `workflow v2.10.0` to `2.11.0`. For this upgrade I changed the storage backend to be `off-cluster` on S3. My values.yaml looks something like below. I've also given full S3 access to the nodes. Nothing failed during installation, except that my registry and builder components are in CrashLoopBackOff with the following errors:

registry logs

builder logs

Is there a way that I can explicitly tell it not to use `accessKey` and `secretKey` in values.yaml when installing? The values.yaml mentions that if you leave them blank it will use IAM roles. I'm not sure it's actually using the IAM roles, because the registry logs show it opening the creds directory.

Am I missing something, or is the only way to go about this to provide `accessKey` and `secretKey` in values.yaml?
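For context, a minimal sketch of the off-cluster S3 section a Workflow values.yaml typically contains, with the credential fields left blank to fall back to IAM roles. The exact key names (casing of the access/secret keys, bucket keys, region) vary by chart version, so treat all of these as assumptions rather than the reporter's actual file:

```yaml
# Sketch only: off-cluster S3 storage for Workflow, credentials left empty
# so the components should fall back to IAM roles.
global:
  storage: s3

s3:
  accessKey: ""        # left blank on purpose: rely on the node/kube2iam IAM role
  secretKey: ""        # left blank on purpose
  region: "us-east-1"  # placeholder region
  registry_bucket: "my-registry-bucket"  # placeholder bucket names
  database_bucket: "my-database-bucket"
  builder_bucket: "my-builder-bucket"
```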