
Default clusterrole "admin", "view", "edit" missing? #4924

Open
khteh opened this issue Mar 7, 2025 · 0 comments

Summary

$ k get clusterrolebinding
NAME                      ROLE                                  AGE
calico-kube-controllers   ClusterRole/calico-kube-controllers   641d
calico-node               ClusterRole/calico-node               641d
coredns                   ClusterRole/coredns                   641d
microk8s-hostpath         ClusterRole/microk8s-hostpath         638d
crb-apache-spark          ClusterRole/edit                      41m
$ k get clusterrole
NAME                      CREATED AT
calico-kube-controllers   2023-06-05T06:14:41Z
calico-node               2023-06-05T06:14:41Z
coredns                   2023-06-05T06:14:43Z
microk8s-hostpath         2023-06-08T07:20:02Z
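
For comparison, the built-in roles are normally created by the API server's RBAC bootstrap and carry the kubernetes.io/bootstrapping=rbac-defaults label, so their presence can be checked directly (a minimal check, assuming kubectl points at this MicroK8s cluster):

$ kubectl get clusterroles -l kubernetes.io/bootstrapping=rbac-defaults
$ kubectl get clusterrole admin edit view cluster-admin

On a standard cluster both commands should list admin, edit and view (among others).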

I am not sure if this is the root cause, but I am not able to run spark-submit:

$ spark-submit --master k8s://https://127.0.0.1:16443 --conf spark.executor.instances=1 --conf spark.kubernetes.container.image=spark:python3 --conf spark.kubernetes.file.upload.path=/tmp --conf spark.kubernetes.authenticate.driver.serviceAccountName=sa-apache-spark --deploy-mode cluster TextFile.py 
25/03/07 16:20:57 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
25/03/07 16:20:57 INFO SparkKubernetesClientFactory: Auto-configuring K8S client using current context from users K8S config file
25/03/07 16:20:58 INFO KerberosConfDriverFeatureStep: You have not specified a krb5.conf file locally or via a ConfigMap. Make sure that you have the krb5.conf locally on the driver image.
25/03/07 16:20:58 INFO KubernetesUtils: Uploading file: /usr/src/Python/PySpark/TextFile.py to dest: /tmp/spark-upload-a551a9f4-6433-4674-bdc2-d117401c50ee/TextFile.py...
25/03/07 16:21:39 ERROR Client: Please check "kubectl auth can-i create pod" first. It should be yes.
Exception in thread "main" io.fabric8.kubernetes.client.KubernetesClientException: An error has occurred.
	at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:129)
	at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:122)
	at io.fabric8.kubernetes.client.dsl.internal.CreateOnlyResourceOperation.create(CreateOnlyResourceOperation.java:44)
	at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.create(BaseOperation.java:1108)
	at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.create(BaseOperation.java:92)
	at org.apache.spark.deploy.k8s.submit.Client.run(KubernetesClientApplication.scala:153)
	at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.$anonfun$run$6(KubernetesClientApplication.scala:256)
	at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.$anonfun$run$6$adapted(KubernetesClientApplication.scala:250)
	at org.apache.spark.util.SparkErrorUtils.tryWithResource(SparkErrorUtils.scala:48)
	at org.apache.spark.util.SparkErrorUtils.tryWithResource$(SparkErrorUtils.scala:46)
	at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:94)
	at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.run(KubernetesClientApplication.scala:250)
	at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.start(KubernetesClientApplication.scala:223)
	at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:1034)
	at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:199)
	at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:222)
	at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:91)
	at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1125)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1134)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.IOException: timeout
	at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.waitForResult(OperationSupport.java:515)
	at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.handleResponse(OperationSupport.java:535)
	at io.fabric8.kubernetes.client.dsl.internal.OperationSupport.handleCreate(OperationSupport.java:340)
	at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.handleCreate(BaseOperation.java:703)
	at io.fabric8.kubernetes.client.dsl.internal.BaseOperation.handleCreate(BaseOperation.java:92)
	at io.fabric8.kubernetes.client.dsl.internal.CreateOnlyResourceOperation.create(CreateOnlyResourceOperation.java:42)
	... 17 more
Caused by: java.io.InterruptedIOException: timeout
	at okhttp3.RealCall.timeoutExit(RealCall.java:108)
	at okhttp3.RealCall$AsyncCall.execute(RealCall.java:205)
	at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
	at java.base/java.lang.Thread.run(Thread.java:1583)
Caused by: java.net.SocketException: Socket closed
	at java.base/sun.nio.ch.NioSocketImpl.endConnect(NioSocketImpl.java:531)
	at java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:604)
	at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:327)
	at java.base/java.net.Socket.connect(Socket.java:751)
	at okhttp3.internal.platform.Platform.connectSocket(Platform.java:129)
	at okhttp3.internal.connection.RealConnection.connectSocket(RealConnection.java:247)
	at okhttp3.internal.connection.RealConnection.connect(RealConnection.java:167)
	at okhttp3.internal.connection.StreamAllocation.findConnection(StreamAllocation.java:258)
	at okhttp3.internal.connection.StreamAllocation.findHealthyConnection(StreamAllocation.java:135)
	at okhttp3.internal.connection.StreamAllocation.newStream(StreamAllocation.java:114)
	at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:42)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
	at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
	at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
	at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:127)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
	at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:257)
	at okhttp3.RealCall$AsyncCall.execute(RealCall.java:201)
	... 4 more
25/03/07 16:21:39 INFO ShutdownHookManager: Shutdown hook called
25/03/07 16:21:39 INFO ShutdownHookManager: Deleting directory /tmp/spark-36873a8d-eb37-43e5-bacc-032e639773eb
$ kubectl auth can-i create pod
yes
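
Note that the check above runs as the current kubeconfig user, not as the Spark driver's service account. A closer check of what the driver itself is allowed to do (the default namespace is an assumption here, adjust to wherever sa-apache-spark lives):

$ kubectl auth can-i create pods --as=system:serviceaccount:default:sa-apache-spark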

What Should Happen Instead?

The default ClusterRoles admin, view, and edit should show up and be ready to use with a ClusterRoleBinding. https://kubernetes.io/docs/reference/access-authn-authz/rbac/

Reproduction Steps

  1. ...
  2. ...

Introspection Report

inspection-report-20250307_170606.tar.gz

Can you suggest a fix?
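
If the built-in roles really are absent, the API server is supposed to recreate missing default ClusterRoles on start-up as part of RBAC auto-reconciliation, so restarting the MicroK8s services and re-checking might be worth a try (an assumption on my part, not a verified fix):

$ microk8s stop
$ microk8s start
$ kubectl get clusterrole admin edit view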

Are you interested in contributing with a fix?

No
