
[BUG] standalone mongo cluster vscale error: cluster role is none #5537

Closed
JashBook opened this issue Oct 19, 2023 · 0 comments · Fixed by #5588

@JashBook (Collaborator) commented:

Describe the bug
Vertical scaling (vscale) of a standalone MongoDB cluster fails: the cluster role is none, the OpsRequest never progresses past 0/1, and the cluster cannot be connected to.

To Reproduce
Steps to reproduce the behavior:

  1. Create a standalone MongoDB cluster:

     kbcli cluster create mongodb mongo-test --termination-policy WipeOut --mode standalone

     Versions (from kbcli version):

     Kubernetes: v1.26.3
     KubeBlocks: 0.7.0-beta.7
     kbcli: 0.7.0-beta.7

  2. Vertically scale the mongodb component:

     kbcli cluster vscale mongo-test --auto-approve --components mongodb --cpu 600m --memory 0.6Gi

  3. See the error:
➜  ~ kubectl get ops
NAME                               TYPE              CLUSTER      STATUS    PROGRESS   AGE
mongo-test-verticalscaling-v4w8l   VerticalScaling   mongo-test   Running   0/1        4m39s
➜  ~ kubectl get cluster
NAME         CLUSTER-DEFINITION   VERSION          TERMINATION-POLICY   STATUS    AGE
mongo-test   mongodb              mongodb-5.0.20   WipeOut              Running   35m
➜  ~ kubectl get pod
NAME                                            READY   STATUS    RESTARTS   AGE
mongo-test-mongodb-0                            3/3     Running   0          4m46s
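
The OpsRequest hangs at 0/1 even though the pod reports 3/3 Running. A hedged way to dig further (plain kubectl; the .status.components path is an assumption about the 0.7 Cluster CRD, not confirmed by this report):

➜  ~ kubectl describe ops mongo-test-verticalscaling-v4w8l
➜  ~ kubectl get cluster mongo-test -o jsonpath='{.status.components.mongodb}'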

Connect error

➜  ~ kbcli cluster connect mongo-test
error: failed to find the instance to connect, please check cluster status
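
kbcli selects the instance to connect via the pod's role label, and the Labels section in the pod description below shows no role label at all, which matches the "cluster role is none" symptom. A quick hedged cross-check (the kubeblocks.io/role key is an assumption about KubeBlocks 0.7, not taken from this report):

➜  ~ kubectl get pod mongo-test-mongodb-0 --show-labels   # a healthy member would carry e.g. kubeblocks.io/role=primary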

Pod description

 kubectl describe pod mongo-test-mongodb-0
Name:             mongo-test-mongodb-0
Namespace:        default
Priority:         0
Service Account:  kb-mongo-test
Node:             minikube/192.168.49.2
Start Time:       Thu, 19 Oct 2023 15:26:26 +0800
Labels:           app.kubernetes.io/component=mongodb
                  app.kubernetes.io/instance=mongo-test
                  app.kubernetes.io/managed-by=kubeblocks
                  app.kubernetes.io/name=mongodb
                  app.kubernetes.io/version=mongodb-5.0.20
                  apps.kubeblocks.io/component-name=mongodb
                  apps.kubeblocks.io/workload-type=Consensus
                  controller-revision-hash=mongo-test-mongodb-6d5b5ff6d9
                  statefulset.kubernetes.io/pod-name=mongo-test-mongodb-0
Annotations:      apps.kubeblocks.io/component-replicas: 1
Status:           Running
IP:               10.244.3.230
IPs:
  IP:           10.244.3.230
Controlled By:  StatefulSet/mongo-test-mongodb
Containers:
  mongodb:
    Container ID:  docker://a78153cc112639d9ee1b5d981c3cb5895f87fb7a21c5a757e7a6a090f8ffd396
    Image:         infracreate-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/mongo:5.0.20
    Image ID:      docker-pullable://infracreate-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/mongo@sha256:99b73d07d8041b529f869414bffa11e47857469e2b31fee65960ef07cf45b8ee
    Port:          27017/TCP
    Host Port:     0/TCP
    Command:
      /scripts/replicaset-setup.sh
    State:          Running
      Started:      Thu, 19 Oct 2023 15:26:37 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     600m
      memory:  644245094400m
    Requests:
      cpu:     600m
      memory:  644245094400m
    Environment Variables from:
      mongo-test-mongodb-env      ConfigMap  Optional: false
      mongo-test-mongodb-rsm-env  ConfigMap  Optional: false
    Environment:
      KB_POD_NAME:               mongo-test-mongodb-0 (v1:metadata.name)
      KB_POD_UID:                 (v1:metadata.uid)
      KB_NAMESPACE:              default (v1:metadata.namespace)
      KB_SA_NAME:                 (v1:spec.serviceAccountName)
      KB_NODENAME:                (v1:spec.nodeName)
      KB_HOST_IP:                 (v1:status.hostIP)
      KB_POD_IP:                  (v1:status.podIP)
      KB_POD_IPS:                 (v1:status.podIPs)
      KB_HOSTIP:                  (v1:status.hostIP)
      KB_PODIP:                   (v1:status.podIP)
      KB_PODIPS:                  (v1:status.podIPs)
      KB_CLUSTER_NAME:           mongo-test
      KB_COMP_NAME:              mongodb
      KB_CLUSTER_COMP_NAME:      mongo-test-mongodb
      KB_CLUSTER_UID_POSTFIX_8:  4032754e
      KB_POD_FQDN:               $(KB_POD_NAME).$(KB_CLUSTER_COMP_NAME)-headless.$(KB_NAMESPACE).svc
      MONGODB_ROOT_USER:         <set to the key 'username' in secret 'mongo-test-conn-credential'>  Optional: false
      MONGODB_ROOT_PASSWORD:     <set to the key 'password' in secret 'mongo-test-conn-credential'>  Optional: false
    Mounts:
      /data/mongodb from data (rw)
      /etc/mongodb/keyfile from mongodb-config (rw,path="keyfile")
      /etc/mongodb/mongodb.conf from mongodb-config (rw,path="mongodb.conf")
      /scripts from scripts (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s569w (ro)
  metrics:
    Container ID:  docker://75e3ce01cfcf54ce88c04431791c7541243499e3dd4ccee5231a9a09d03051bd
    Image:         infracreate-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/agamotto:0.1.2-beta.1
    Image ID:      docker-pullable://infracreate-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/agamotto@sha256:cbab349b90490807a8d5039bf01bc7e37334f20c98c7dd75bc7fc4cf9e5b10ee
    Port:          9216/TCP
    Host Port:     0/TCP
    Command:
      /bin/agamotto
      --config=/opt/conf/metrics-config.yaml
    State:          Running
      Started:      Thu, 19 Oct 2023 15:26:37 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     0
      memory:  0
    Requests:
      cpu:     0
      memory:  0
    Environment Variables from:
      mongo-test-mongodb-env      ConfigMap  Optional: false
      mongo-test-mongodb-rsm-env  ConfigMap  Optional: false
    Environment:
      KB_POD_NAME:               mongo-test-mongodb-0 (v1:metadata.name)
      KB_POD_UID:                 (v1:metadata.uid)
      KB_NAMESPACE:              default (v1:metadata.namespace)
      KB_SA_NAME:                 (v1:spec.serviceAccountName)
      KB_NODENAME:                (v1:spec.nodeName)
      KB_HOST_IP:                 (v1:status.hostIP)
      KB_POD_IP:                  (v1:status.podIP)
      KB_POD_IPS:                 (v1:status.podIPs)
      KB_HOSTIP:                  (v1:status.hostIP)
      KB_PODIP:                   (v1:status.podIP)
      KB_PODIPS:                  (v1:status.podIPs)
      KB_CLUSTER_NAME:           mongo-test
      KB_COMP_NAME:              mongodb
      KB_CLUSTER_COMP_NAME:      mongo-test-mongodb
      KB_CLUSTER_UID_POSTFIX_8:  4032754e
      KB_POD_FQDN:               $(KB_POD_NAME).$(KB_CLUSTER_COMP_NAME)-headless.$(KB_NAMESPACE).svc
      MONGODB_ROOT_USER:         <set to the key 'username' in secret 'mongo-test-conn-credential'>  Optional: false
      MONGODB_ROOT_PASSWORD:     <set to the key 'password' in secret 'mongo-test-conn-credential'>  Optional: false
    Mounts:
      /opt/conf from mongodb-metrics-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s569w (ro)
  kb-checkrole:
    Container ID:  docker://3c8e5ae497c58b5bba90f525da117af3e72dbf2e5c4fb3b204aa94cca6c9c29f
    Image:         infracreate-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/kubeblocks-tools:0.7.0-beta.7
    Image ID:      docker-pullable://infracreate-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/kubeblocks-tools@sha256:27e7c32cbe2b50aebedb33b9769d6fd6954ce75ce32b1ace42b56d8b45df35d4
    Port:          3501/TCP
    Host Port:     0/TCP
    Command:
      lorry
      --port
      3501
    State:          Running
      Started:      Thu, 19 Oct 2023 15:26:38 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     0
      memory:  0
    Requests:
      cpu:      0
      memory:   0
    Readiness:  http-get http://:3501/v1.0/bindings/mongodb%3Foperation=checkRole&workloadType=Consensus delay=0s timeout=2s period=2s #success=1 #failure=3
    Startup:    tcp-socket :3501 delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment Variables from:
      mongo-test-mongodb-env      ConfigMap  Optional: false
      mongo-test-mongodb-rsm-env  ConfigMap  Optional: false
    Environment:
      KB_POD_NAME:                mongo-test-mongodb-0 (v1:metadata.name)
      KB_POD_UID:                  (v1:metadata.uid)
      KB_NAMESPACE:               default (v1:metadata.namespace)
      KB_SA_NAME:                  (v1:spec.serviceAccountName)
      KB_NODENAME:                 (v1:spec.nodeName)
      KB_HOST_IP:                  (v1:status.hostIP)
      KB_POD_IP:                   (v1:status.podIP)
      KB_POD_IPS:                  (v1:status.podIPs)
      KB_HOSTIP:                   (v1:status.hostIP)
      KB_PODIP:                    (v1:status.podIP)
      KB_PODIPS:                   (v1:status.podIPs)
      KB_CLUSTER_NAME:            mongo-test
      KB_COMP_NAME:               mongodb
      KB_CLUSTER_COMP_NAME:       mongo-test-mongodb
      KB_CLUSTER_UID_POSTFIX_8:   4032754e
      KB_POD_FQDN:                $(KB_POD_NAME).$(KB_CLUSTER_COMP_NAME)-headless.$(KB_NAMESPACE).svc
      KB_SERVICE_USER:            <set to the key 'username' in secret 'mongo-test-conn-credential'>  Optional: false
      KB_SERVICE_PASSWORD:        <set to the key 'password' in secret 'mongo-test-conn-credential'>  Optional: false
      KB_SERVICE_PORT:            27017
      KB_DATA_PATH:               /data/mongodb
      KB_SERVICE_CHARACTER_TYPE:  mongodb
      KB_WORKLOAD_TYPE:           Consensus
      KB_SERVICE_USER:            <set to the key 'username' in secret 'mongo-test-conn-credential'>  Optional: false
      KB_SERVICE_PASSWORD:        <set to the key 'password' in secret 'mongo-test-conn-credential'>  Optional: false
    Mounts:
      /data/mongodb from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s569w (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-mongo-test-mongodb-0
    ReadOnly:   false
  mongodb-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      mongo-test-mongodb-mongodb-config
    Optional:  false
  mongodb-metrics-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      mongo-test-mongodb-mongodb-metrics-config-new
    Optional:  false
  scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      mongo-test-mongodb-mongodb-scripts
    Optional:  false
  kube-api-access-s569w:
    Type:                     Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:   3607
    ConfigMapName:            kube-root-ca.crt
    ConfigMapOptional:        <nil>
    DownwardAPI:              true
QoS Class:                    Burstable
Node-Selectors:               <none>
Tolerations:                  kb-data=true:NoSchedule
                              node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Topology Spread Constraints:  kubernetes.io/hostname:ScheduleAnyway when max skew 1 is exceeded for selector app.kubernetes.io/instance=mongo-test,apps.kubeblocks.io/component-name=mongodb
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  5m37s  default-scheduler  Successfully assigned default/mongo-test-mongodb-0 to minikube
  Normal  Pulled     5m26s  kubelet            Container image "infracreate-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/mongo:5.0.20" already present on machine
  Normal  Created    5m26s  kubelet            Created container mongodb
  Normal  Started    5m26s  kubelet            Started container mongodb
  Normal  Pulled     5m26s  kubelet            Container image "infracreate-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/agamotto:0.1.2-beta.1" already present on machine
  Normal  Created    5m26s  kubelet            Created container metrics
  Normal  Started    5m26s  kubelet            Started container metrics
  Normal  Pulled     5m26s  kubelet            Container image "infracreate-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/kubeblocks-tools:0.7.0-beta.7" already present on machine
  Normal  Created    5m25s  kubelet            Created container kb-checkrole
  Normal  Started    5m25s  kubelet            Started container kb-checkrole
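
Note the kb-checkrole readiness probe above: role detection is driven by lorry's checkRole binding on port 3501, polled every 2s. The same endpoint can be exercised by hand; a hedged sketch (the URL is copied from the probe spec; curl being available in the tools image is an assumption):

➜  ~ kubectl exec mongo-test-mongodb-0 -c kb-checkrole -- \
      curl -s 'http://localhost:3501/v1.0/bindings/mongodb?operation=checkRole&workloadType=Consensus'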

kb-checkrole container logs

kubectl logs mongo-test-mongodb-0 kb-checkrole
{"level":"error","ts":1697700398.1279008,"caller":"binding/operation_volume_protection.go:182","msg":"unmarshal volume protection spec error","raw spec":"","error":"unexpected end of JSON input","stacktrace":"github.com/apecloud/kubeblocks/lorry/binding.(*operationVolumeProtection).initVolumes\n\t/src/lorry/binding/operation_volume_protection.go:182\ngithub.com/apecloud/kubeblocks/lorry/binding.init.1\n\t/src/lorry/binding/operation_volume_protection.go:103\nruntime.doInit1\n\t/usr/local/go/src/runtime/proc.go:6740\nruntime.doInit\n\t/usr/local/go/src/runtime/proc.go:6707\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:249"}
{"level":"info","ts":1697700398.1281536,"caller":"binding/operation_volume_protection.go:104","msg":"init volumes to monitor failed","error":"unexpected end of JSON input"}
{"level":"info","ts":1697700398.1281798,"caller":"binding/operation_volume_protection.go:106","msg":"succeed to init volume protection, pod: mongo-test-mongodb-0, spec: {\"highWatermark\":\"0\",\"volumes\":[]}"}
2023-10-19T07:26:38Z	INFO	Mongo	Initializing MongoDB binding
2023-10-19T07:26:38Z	INFO	HA	HA starting
2023-10-19T07:26:38Z	INFO	HA	pod selector: app.kubernetes.io/instance=mongo-test,app.kubernetes.io/managed-by=kubeblocks,apps.kubeblocks.io/component-name=mongodb
2023-10-19T07:26:38Z	INFO	HA	podlist: 1
2023-10-19T07:26:38Z	INFO	HA	lock expired: map[acquire-time:1697698895 extra: leader:mongo-test-mongodb-0 renew-time:1697698895 ttl:5], now: 1697700398
2023-10-19T07:26:38Z	INFO	HA	no switchOver [mongo-test-mongodb-switchover] setting
2023-10-19T07:26:38Z	ERROR	HA	new pinger failed	{"error": "lookup mongo-test-mongodb-0.mongo-test-mongodb-headless.default.svc on 10.96.0.10:53: no such host"}
github.com/apecloud/kubeblocks/lorry/highavailability.(*Ha).IsPodReady
	/src/lorry/highavailability/ha.go:417
github.com/apecloud/kubeblocks/lorry/highavailability.(*Ha).Start
	/src/lorry/highavailability/ha.go:216
2023-10-19T07:26:38Z	INFO	HA	Waiting for dns resolution to be ready
2023-10-19T07:26:41Z	ERROR	HA	new pinger failed	{"error": "lookup mongo-test-mongodb-0.mongo-test-mongodb-headless.default.svc on 10.96.0.10:53: no such host"}
github.com/apecloud/kubeblocks/lorry/highavailability.(*Ha).IsPodReady
	/src/lorry/highavailability/ha.go:417
github.com/apecloud/kubeblocks/lorry/highavailability.(*Ha).Start
	/src/lorry/highavailability/ha.go:220
2023-10-19T07:26:41Z	INFO	HA	Waiting for dns resolution to be ready
2023-10-19T07:26:44Z	ERROR	HA	new pinger failed	{"error": "lookup mongo-test-mongodb-0.mongo-test-mongodb-headless.default.svc on 10.96.0.10:53: no such host"}
github.com/apecloud/kubeblocks/lorry/highavailability.(*Ha).IsPodReady
	/src/lorry/highavailability/ha.go:417
github.com/apecloud/kubeblocks/lorry/highavailability.(*Ha).Start
	/src/lorry/highavailability/ha.go:220
2023-10-19T07:26:44Z	INFO	HA	Waiting for dns resolution to be ready
2023-10-19T07:26:47Z	ERROR	HA	new pinger failed	{"error": "lookup mongo-test-mongodb-0.mongo-test-mongodb-headless.default.svc on 10.96.0.10:53: no such host"}
github.com/apecloud/kubeblocks/lorry/highavailability.(*Ha).IsPodReady
	/src/lorry/highavailability/ha.go:417
github.com/apecloud/kubeblocks/lorry/highavailability.(*Ha).Start
	/src/lorry/highavailability/ha.go:220
2023-10-19T07:26:47Z	INFO	HA	Waiting for dns resolution to be ready
2023-10-19T07:26:47Z	INFO	middleware	receive request	{"request": "/v1.0/bindings/mongodb?operation=checkRole&workloadType=Consensus"}
2023-10-19T07:26:47Z	ERROR	Mongo	DB is not ready	{"error": "server selection error: context deadline exceeded, current topology: { Type: Single, Servers: [{ Addr: localhost:27017, Type: Unknown }, ] }"}
github.com/apecloud/kubeblocks/lorry/component/mongodb.(*Manager).IsDBStartupReady
	/src/lorry/component/mongodb/manager.go:273
github.com/apecloud/kubeblocks/lorry/binding/mongodb.(*MongoDBOperations).Init.(*BaseOperations).RegisterOperationOnDBReady.StartupCheckWraper.func2
	/src/lorry/binding/utils.go:333
github.com/apecloud/kubeblocks/lorry/binding.(*BaseOperations).Invoke
	/src/lorry/binding/base.go:194
github.com/apecloud/kubeblocks/lorry/middleware/probe.route
	/src/lorry/middleware/probe/router.go:161
main.main.GetRouter.func4
	/src/lorry/middleware/probe/router.go:128
main.main.SetMiddleware.func5
	/src/lorry/middleware/probe/checks_middleware.go:105
net/http.HandlerFunc.ServeHTTP
	/usr/local/go/src/net/http/server.go:2136
net/http.(*ServeMux).ServeHTTP
	/usr/local/go/src/net/http/server.go:2514
net/http.serverHandler.ServeHTTP
	/usr/local/go/src/net/http/server.go:2938
net/http.(*conn).serve
	/usr/local/go/src/net/http/server.go:2009
2023-10-19T07:26:47Z	INFO	Mongo	operation called	{"operation": "checkRole", "result": {"event":"Failed","message":"db not ready"}}
2023-10-19T07:26:47Z	INFO	middleware	request routed	{"request": {"data":null,"metadata":{"workloadType":"Consensus"},"operation":"checkRole"}, "response": {"data":"eyJldmVudCI6IkZhaWxlZCIsIm1lc3NhZ2UiOiJkYiBub3QgcmVhZHkifQ==","metadata":{"duration":"500.30975ms","end-time":"2023-10-19T07:26:47.973227679Z","operation":"checkRole","start-time":"2023-10-19T07:26:47.472917929Z"}}}
2023-10-19T07:26:47Z	INFO	middleware	response has no statusCodeHeader
2023-10-19T07:26:49Z	INFO	middleware	receive request	{"request": "/v1.0/bindings/mongodb?operation=checkRole&workloadType=Consensus"}
2023-10-19T07:26:49Z	ERROR	Mongo	DB is not ready	{"error": "server selection error: context deadline exceeded, current topology: { Type: Single, Servers: [{ Addr: localhost:27017, Type: Unknown }, ] }"}
github.com/apecloud/kubeblocks/lorry/component/mongodb.(*Manager).IsDBStartupReady
	/src/lorry/component/mongodb/manager.go:273
github.com/apecloud/kubeblocks/lorry/binding/mongodb.(*MongoDBOperations).Init.(*BaseOperations).RegisterOperationOnDBReady.StartupCheckWraper.func2
	/src/lorry/binding/utils.go:333
github.com/apecloud/kubeblocks/lorry/binding.(*BaseOperations).Invoke
	/src/lorry/binding/base.go:194
github.com/apecloud/kubeblocks/lorry/middleware/probe.route
	/src/lorry/middleware/probe/router.go:161
main.main.GetRouter.func4
	/src/lorry/middleware/probe/router.go:128
main.main.SetMiddleware.func5
	/src/lorry/middleware/probe/checks_middleware.go:105
net/http.HandlerFunc.ServeHTTP
	/usr/local/go/src/net/http/server.go:2136
net/http.(*ServeMux).ServeHTTP
	/usr/local/go/src/net/http/server.go:2514
net/http.serverHandler.ServeHTTP
	/usr/local/go/src/net/http/server.go:2938
net/http.(*conn).serve
	/usr/local/go/src/net/http/server.go:2009
...
2023-10-19T07:32:29Z	INFO	Mongo	operation called	{"operation": "checkRole", "result": {"event":"Failed","message":"db not ready"}}
2023-10-19T07:32:29Z	INFO	middleware	request routed	{"request": {"data":null,"metadata":{"workloadType":"Consensus"},"operation":"checkRole"}, "response": {"data":"eyJldmVudCI6IkZhaWxlZCIsIm1lc3NhZ2UiOiJkYiBub3QgcmVhZHkifQ==","metadata":{"duration":"501.284334ms","end-time":"2023-10-19T07:32:29.933460254Z","operation":"checkRole","start-time":"2023-10-19T07:32:29.432175962Z"}}}
2023-10-19T07:32:29Z	INFO	middleware	response has no statusCodeHeader
2023-10-19T07:32:31Z	INFO	middleware	receive request	{"request": "/v1.0/bindings/mongodb?operation=checkRole&workloadType=Consensus"}
2023-10-19T07:32:31Z	ERROR	Mongo	DB is not ready	{"error": "server selection error: context deadline exceeded, current topology: { Type: Single, Servers: [{ Addr: localhost:27017, Type: Unknown }, ] }"}
github.com/apecloud/kubeblocks/lorry/component/mongodb.(*Manager).IsDBStartupReady
	/src/lorry/component/mongodb/manager.go:273
github.com/apecloud/kubeblocks/lorry/binding/mongodb.(*MongoDBOperations).Init.(*BaseOperations).RegisterOperationOnDBReady.StartupCheckWraper.func2
	/src/lorry/binding/utils.go:333
github.com/apecloud/kubeblocks/lorry/binding.(*BaseOperations).Invoke
	/src/lorry/binding/base.go:194
github.com/apecloud/kubeblocks/lorry/middleware/probe.route
	/src/lorry/middleware/probe/router.go:161
main.main.GetRouter.func4
	/src/lorry/middleware/probe/router.go:128
main.main.SetMiddleware.func5
	/src/lorry/middleware/probe/checks_middleware.go:105
net/http.HandlerFunc.ServeHTTP
	/usr/local/go/src/net/http/server.go:2136
net/http.(*ServeMux).ServeHTTP
	/usr/local/go/src/net/http/server.go:2514
net/http.serverHandler.ServeHTTP
	/usr/local/go/src/net/http/server.go:2938
net/http.(*conn).serve
	/usr/local/go/src/net/http/server.go:2009
2023-10-19T07:32:31Z	INFO	Mongo	operation called	{"operation": "checkRole", "result": {"event":"Failed","message":"db not ready"}}
2023-10-19T07:32:31Z	INFO	middleware	request routed	{"request": {"data":null,"metadata":{"workloadType":"Consensus"},"operation":"checkRole"}, "response": {"data":"eyJldmVudCI6IkZhaWxlZCIsIm1lc3NhZ2UiOiJkYiBub3QgcmVhZHkifQ==","metadata":{"duration":"501.007375ms","end-time":"2023-10-19T07:32:31.933505588Z","operation":"checkRole","start-time":"2023-10-19T07:32:31.432498213Z"}}}
2023-10-19T07:32:31Z	INFO	middleware	response has no statusCodeHeader
2023-10-19T07:32:33Z	INFO	middleware	receive request	{"request": "/v1.0/bindings/mongodb?operation=checkRole&workloadType=Consensus"}
2023-10-19T07:32:33Z	ERROR	Mongo	DB is not ready	{"error": "server selection error: context deadline exceeded, current topology: { Type: Single, Servers: [{ Addr: localhost:27017, Type: Unknown }, ] }"}
github.com/apecloud/kubeblocks/lorry/component/mongodb.(*Manager).IsDBStartupReady
	/src/lorry/component/mongodb/manager.go:273
github.com/apecloud/kubeblocks/lorry/binding/mongodb.(*MongoDBOperations).Init.(*BaseOperations).RegisterOperationOnDBReady.StartupCheckWraper.func2
	/src/lorry/binding/utils.go:333
github.com/apecloud/kubeblocks/lorry/binding.(*BaseOperations).Invoke
	/src/lorry/binding/base.go:194
github.com/apecloud/kubeblocks/lorry/middleware/probe.route
	/src/lorry/middleware/probe/router.go:161
main.main.GetRouter.func4
	/src/lorry/middleware/probe/router.go:128
main.main.SetMiddleware.func5
	/src/lorry/middleware/probe/checks_middleware.go:105
net/http.HandlerFunc.ServeHTTP
	/usr/local/go/src/net/http/server.go:2136
net/http.(*ServeMux).ServeHTTP
	/usr/local/go/src/net/http/server.go:2514
net/http.serverHandler.ServeHTTP
	/usr/local/go/src/net/http/server.go:2938
net/http.(*conn).serve
	/usr/local/go/src/net/http/server.go:2009
2023-10-19T07:32:33Z	INFO	Mongo	operation called	{"operation": "checkRole", "result": {"event":"Failed","message":"db not ready"}}
2023-10-19T07:32:33Z	INFO	middleware	request routed	{"request": {"data":null,"metadata":{"workloadType":"Consensus"},"operation":"checkRole"}, "response": {"data":"eyJldmVudCI6IkZhaWxlZCIsIm1lc3NhZ2UiOiJkYiBub3QgcmVhZHkifQ==","metadata":{"duration":"500.398417ms","end-time":"2023-10-19T07:32:33.937139631Z","operation":"checkRole","start-time":"2023-10-19T07:32:33.436741131Z"}}}
2023-10-19T07:32:33Z	INFO	middleware	response has no statusCodeHeader
2023-10-19T07:32:34Z	ERROR	Mongo	Get replSet status failed	{"error": "replSetGetStatus: server selection error: context deadline exceeded, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: mongo-test-mongodb-0.mongo-test-mongodb-headless:27017, Type: RSGhost, Average RTT: 737750 }, ] }", "errorVerbose": "server selection error: context deadline exceeded, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: mongo-test-mongodb-0.mongo-test-mongodb-headless:27017, Type: RSGhost, Average RTT: 737750 }, ] }\nreplSetGetStatus\ngithub.com/apecloud/kubeblocks/lorry/component/mongodb.GetReplSetStatus\n\t/src/lorry/component/mongodb/replset.go:35\ngithub.com/apecloud/kubeblocks/lorry/component/mongodb.(*Manager).IsClusterInitialized\n\t/src/lorry/component/mongodb/manager.go:151\ngithub.com/apecloud/kubeblocks/lorry/highavailability.(*Ha).Start\n\t/src/lorry/highavailability/ha.go:238\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_arm64.s:1197"}
github.com/apecloud/kubeblocks/lorry/component/mongodb.(*Manager).IsClusterInitialized
	/src/lorry/component/mongodb/manager.go:155
github.com/apecloud/kubeblocks/lorry/highavailability.(*Ha).Start
	/src/lorry/highavailability/ha.go:238
2023-10-19T07:32:34Z	ERROR	Mongo	Get replSet status with local unauth client failed	{"error": "(Unauthorized) command replSetGetStatus requires authentication"}
github.com/apecloud/kubeblocks/lorry/component/mongodb.(*Manager).IsClusterInitialized
	/src/lorry/component/mongodb/manager.go:177
github.com/apecloud/kubeblocks/lorry/highavailability.(*Ha).Start
	/src/lorry/highavailability/ha.go:238
2023-10-19T07:32:35Z	INFO	middleware	receive request	{"request": "/v1.0/bindings/mongodb?operation=checkRole&workloadType=Consensus"}
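
Taken together, the logs show the failure chain: DNS for the pod's headless-service FQDN does not resolve at startup, mongod is first reported "db not ready" and is later only visible as an uninitialized replica-set member (RSGhost, ReplicaSetNoPrimary), so checkRole never yields a role and the cluster role stays none. Two hedged spot checks (standard tooling; getent/mongosh being present in the mongo image is an assumption):

➜  ~ kubectl exec mongo-test-mongodb-0 -c mongodb -- \
      getent hosts mongo-test-mongodb-0.mongo-test-mongodb-headless.default.svc
➜  ~ kubectl exec mongo-test-mongodb-0 -c mongodb -- \
      mongosh --quiet --eval 'db.runCommand({ ping: 1 })'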

Expected behavior
Vertical scaling of the standalone MongoDB cluster succeeds.


@JashBook added labels kind/bug (Something isn't working) and severity/major (Great chance user will encounter the same problem) on Oct 19, 2023
@JashBook JashBook added this to the Release 0.7.0 milestone Oct 19, 2023
@xuriwuyun xuriwuyun linked a pull request Oct 23, 2023 that will close this issue