I deleted a cluster/context/user from my kubeconfig, but still had it set as the current context. beibootctl version yielded the following output:
Traceback (most recent call last):
File "cli.__main__", line 29, in cli
File "kubernetes.config.kube_config", line 813, in load_kube_config
File "kubernetes.config.kube_config", line 773, in _get_kube_config_loader
File "kubernetes.config.kube_config", line 206, in __init__
File "kubernetes.config.kube_config", line 259, in set_active_context
File "kubernetes.config.kube_config", line 660, in get_with_name
kubernetes.config.config_exception.ConfigException: Invalid kube-config file. Expected object with name deleted-cluster in /home/gutschi/.kube/config/contexts list
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "cli.__main__", line 67, in main
File "click.core", line 1130, in __call__
File "click.core", line 1055, in main
File "click.core", line 1654, in invoke
File "click.core", line 1404, in invoke
File "click.core", line 760, in invoke
File "click.decorators", line 26, in new_func
File "cli.__main__", line 31, in cli
RuntimeError: Could not load KUBECONFIG: Invalid kube-config file. Expected object with name deleted-cluster in /home/gutschi/.kube/config/contexts list
It's certainly not a big deal; however, I'm a little surprised that beibootctl needs a valid kubeconfig just to print its version.
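The failure can be reproduced without beibootctl: the kubernetes client's loader looks up the current context by name in the contexts list and raises when no entry matches. A minimal stdlib-only sketch of that lookup logic (the dict layout and helper are simplified stand-ins, not the real client code):

```python
# Simplified stand-in for the lookup that kubernetes.config.kube_config
# performs in get_with_name (the real loader parses ~/.kube/config YAML).

class ConfigException(Exception):
    """Stands in for kubernetes.config.config_exception.ConfigException."""

def get_with_name(entries, name):
    # Find the entry whose "name" key matches, or fail the way the
    # real client does when the referenced context was deleted.
    for entry in entries:
        if entry.get("name") == name:
            return entry
    raise ConfigException(
        f"Invalid kube-config file. "
        f"Expected object with name {name} in contexts list"
    )

# A kubeconfig whose current-context points at a deleted context entry.
kubeconfig = {
    "current-context": "deleted-cluster",
    "contexts": [
        {"name": "still-here", "context": {"cluster": "still-here"}},
    ],
}

try:
    get_with_name(kubeconfig["contexts"], kubeconfig["current-context"])
except ConfigException as exc:
    print(f"load failed: {exc}")
```

Deleting the context entry while leaving current-context pointing at it is exactly the state described above, so the exception fires before any subcommand runs.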
tschale changed the title from "beibootctl version crashed when invalid kubeconfig is set" to "beibootctl crashes when invalid kubeconfig is set" on Apr 4, 2023.
This comes from the method cli in cli.__main__.py, which serves as a base for the other commands. Since most of the other commands need a kubeconfig, this currently stays as it is. We might think about not printing the complete stacktrace, though: in my opinion it looks like something broke, rather than like an expected error.
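One way to turn the stacktrace into an expected error would be to catch the config exception at the entry point and exit with a one-line message. A minimal sketch (the load_kubeconfig helper and the message wording are assumptions for illustration, not beiboot's actual code):

```python
import sys

class ConfigException(Exception):
    """Stands in for kubernetes.config.config_exception.ConfigException."""

def load_kubeconfig():
    # Hypothetical loader; in beibootctl this step would call
    # kubernetes.config.load_kube_config(), which raises
    # ConfigException for a dangling current-context.
    raise ConfigException(
        "Invalid kube-config file. "
        "Expected object with name deleted-cluster in contexts list"
    )

def cli():
    try:
        load_kubeconfig()
    except ConfigException as exc:
        # Print a short, expected-error message instead of re-raising
        # and letting the interpreter dump the full stacktrace.
        print(f"Error: could not load KUBECONFIG: {exc}", file=sys.stderr)
        sys.exit(1)
```

Commands like version that don't need a cluster connection could also load the kubeconfig lazily, but as noted above, most commands do need it.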