
BUG [token] creating token : timed out waiting for the condition #2394

Closed
syn-ua opened this issue Feb 19, 2021 · 10 comments
Labels
kind/support Categorizes issue or PR as a support question.

Comments

@syn-ua

syn-ua commented Feb 19, 2021

I was trying to install a Kubernetes cluster using Kubespray and got the error "timed out waiting for the condition" at the step that creates a token.

FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (5 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (5 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (5 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (4 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (4 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (4 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (3 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (3 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (3 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (2 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (2 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (2 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (1 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (1 retries left).
FAILED - RETRYING: Create kubeadm token for joining nodes with 24h expiration (default) (1 retries left).

TASK [kubernetes/control-plane : Create kubeadm token for joining nodes with 24h expiration (default)] ************************************************************************************************************************************************
fatal: [master2 -> 10.0.4.101]: FAILED! => {"attempts": 5, "changed": false, "cmd": ["/usr/local/bin/kubeadm", "--kubeconfig", "/etc/kubernetes/admin.conf", "token", "create"], "delta": "0:01:15.149534", "end": "2021-02-19 18:43:22.419216", "msg": "non-zero return code", "rc": 1, "start": "2021-02-19 18:42:07.269682", "stderr": "timed out waiting for the condition\nTo see the stack trace of this error execute with --v=5 or higher", "stderr_lines": ["timed out waiting for the condition", "To see the stack trace of this error execute with --v=5 or higher"], "stdout": "", "stdout_lines": []}
fatal: [master1 -> 10.0.4.101]: FAILED! => {"attempts": 5, "changed": false, "cmd": ["/usr/local/bin/kubeadm", "--kubeconfig", "/etc/kubernetes/admin.conf", "token", "create"], "delta": "0:01:15.177863", "end": "2021-02-19 18:43:23.093810", "msg": "non-zero return code", "rc": 1, "start": "2021-02-19 18:42:07.915947", "stderr": "timed out waiting for the condition\nTo see the stack trace of this error execute with --v=5 or higher", "stderr_lines": ["timed out waiting for the condition", "To see the stack trace of this error execute with --v=5 or higher"], "stdout": "", "stdout_lines": []}
fatal: [master3 -> 10.0.4.101]: FAILED! => {"attempts": 5, "changed": false, "cmd": ["/usr/local/bin/kubeadm", "--kubeconfig", "/etc/kubernetes/admin.conf", "token", "create"], "delta": "0:01:15.169796", "end": "2021-02-19 18:43:23.085743", "msg": "non-zero return code", "rc": 1, "start": "2021-02-19 18:42:07.915947", "stderr": "timed out waiting for the condition\nTo see the stack trace of this error execute with --v=5 or higher", "stderr_lines": ["timed out waiting for the condition", "To see the stack trace of this error execute with --v=5 or higher"], "stdout": "", "stdout_lines": []}

Versions

kubeadm version (use kubeadm version): kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:25:59Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/arm64"}

Environment:

  • Kubernetes version (use kubectl version):Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:28:09Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/arm64"}

  • Cloud provider or hardware configuration: Raspberry Pi 4 (4 GB)

  • OS (e.g. from /etc/os-release): Ubuntu 20.04.2

  • Kernel (e.g. uname -a): Linux master1 5.4.0-1028-raspi #31-Ubuntu SMP PREEMPT Wed Jan 20 11:30:45 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux

What happened?

Kubernetes fails to install on the Raspberry Pi cluster.

What you expected to happen?

Token creation to succeed; instead it times out with:

"timed out waiting for the condition"

How to reproduce it (as minimally and precisely as possible)?

On a Raspberry Pi 4 (4 GB), use kubespray and run from the console:

ansible-playbook -i ./test/inventory.ini cluster.yml

Anything else we need to know?

[two screenshots attached]

@neolit123
Member

neolit123 commented Feb 19, 2021

we have e2e tests and production usage covering tokens, which ensures they can be created and work.
please open an issue with the kubespray project first and see what they think.

/kind support

@k8s-ci-robot k8s-ci-robot added the kind/support Categorizes issue or PR as a support question. label Feb 19, 2021
@syn-ua
Author

syn-ua commented Feb 20, 2021

@neolit123

Found a solution.
Add

cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory

to the file /boot/firmware/cmdline.txt, then run

sudo apt-get update -y && sudo apt-get --with-new-pkgs upgrade -y

After a restart everything works.
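For anyone hitting the same thing, the boot-parameter change above can be applied non-interactively. This is a sketch assuming Ubuntu's Raspberry Pi image, where the kernel command line lives in /boot/firmware/cmdline.txt and must remain a single line:

```shell
# Back up the boot command line first (path assumed: Ubuntu on Raspberry Pi).
sudo cp /boot/firmware/cmdline.txt /boot/firmware/cmdline.txt.bak

# cmdline.txt must stay a single line, so append to line 1 in place
# rather than adding a new line.
sudo sed -i '1 s/$/ cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory/' /boot/firmware/cmdline.txt

# After rebooting, confirm the memory cgroup controller is enabled
# ("enabled" column should be 1 for the memory row).
grep memory /proc/cgroups
```

Without the memory cgroup enabled, the kubelet cannot start its pods, which is consistent with the token-creation timeout seen above.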

@jan-hudec

The error is very non-specific. It means something didn't start, but it does not tell you why. I had the same symptoms when trying to upgrade a cluster after the internal certificates had expired (which kubespray failed to fix, but kubeadm certs renew all eventually did).
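If expired certificates are the suspect, kubeadm can report and renew them directly. These commands assume a kubeadm-managed control plane and need to run on a control-plane node:

```shell
# List the expiry dates of all kubeadm-managed certificates.
sudo kubeadm certs check-expiration

# Renew every certificate kubeadm manages; restart the control-plane
# static pods afterwards so they pick up the new certs.
sudo kubeadm certs renew all
```

Note that older kubeadm releases exposed these subcommands under `kubeadm alpha certs` instead.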

@dalfos

dalfos commented Aug 24, 2021

I faced this issue.
I found that in my case the kubeconfig was outdated on the master:

kubectl get nodes
error: You must be logged in to the server (Unauthorized)

Updating the kubeconfig fixed the issue.

@mnasruul

I faced with this issue. I found that in my case kubeconfig is outdated on master:

kubectl get nodes
error: You must be logged in to the server (Unauthorized)

Updating kubeconfig fixed issue.

hello, how do I update this kubeconfig? I have the same issue

@wangxn2015

I have the same issue, please help

@ledroide

Found another cause for this error: check that the etcd service is running on the host as a daemon OR as a container, but not both. Check which process binds to port 2380. If etcd is configured twice, choose only one and disable the other.
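To see which process owns the etcd peer port, something like this works (ss comes from iproute2; the systemd unit name `etcd` is an assumption, adjust it to your setup):

```shell
# Show the process currently listening on etcd's peer port 2380.
sudo ss -tlnp | grep ':2380'

# If etcd runs both as a systemd service and as a static-pod container,
# keep exactly one; e.g. to stop and disable the host daemon:
sudo systemctl disable --now etcd
```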

@ffxxe

ffxxe commented Sep 12, 2022

hellow, how to Updating this kubeconfig? i have same issue

@mnasruul, you should copy the config from the Kubernetes folder to the user's home directory:
cp /etc/kubernetes/admin.conf /root/.kube/config

You can check which config kubectl and kubeadm are using with the verbose debugging flag:
kubeadm token create --v=5

I realised this when running:

kubeadm token list
failed to list bootstrap tokens: Unauthorized
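Putting the pieces of this thread together, a kubeconfig refresh for a non-root user might look like this (paths assume the standard kubeadm layout; run on the control-plane node):

```shell
# Copy the freshly generated admin kubeconfig into the user's home
# and make it readable by the current user.
mkdir -p "$HOME/.kube"
sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"

# This should now succeed instead of returning "Unauthorized".
kubectl get nodes
```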

@krishnakc1

Pretty old topic, but I want to post my experience in case someone is stuck.
If you are still facing this as part of a cluster upgrade or a control-plane addition, check the following:

  1. If you are in a corporate setting, check whether you have defined no_proxy, and include the IP addresses of the control-plane nodes in it.
  2. If the above step is validated, check whether the API server is accessible in the first place (only in the case of a cluster upgrade or control-plane addition). If it is not accessible, find out why (container runtime or kubelet failing). In my case the kubelet certs had expired on the master node; they had to be renewed, and afterwards it worked well.
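A quick way to sanity-check both points; the IP 10.0.4.101 is taken from the logs above and 6443 is kubeadm's default API server port, so adjust both as needed:

```shell
# 1. Make sure the control-plane IPs are excluded from the corporate proxy.
echo "$no_proxy"   # should list the control-plane node IPs

# 2. Check that the API server answers at all
#    (self-signed certificate, hence -k).
curl -k https://10.0.4.101:6443/healthz
```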

@btschwertfeger

Thank you a lot @krishnakc1!

I had to renew the certificates manually using /usr/local/bin/kubeadm --kubeconfig /etc/kubernetes/admin.conf token create --v=5.

This did not work until I ran kubectl uncordon node1, as the playbook had made the node unschedulable.
