Inconsistent Volume Status showing in UI and Grafana dashboards (vs. CLI) #1004

Open
julienlim opened this issue Jul 3, 2018 · 2 comments

julienlim commented Jul 3, 2018

Here's the scenario I went through:

  1. Created a cluster (ju_cluster) with no volumes
  2. Added tendrl
  3. Imported the cluster (with no volumes)
  4. Created a volume (vol1) in the cluster (see the sketch below)
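
For reference, a replica-3 volume matching the gluster volume info output below could have been created roughly as follows; the exact commands used are not recorded in this report, so treat this as an illustrative sketch (brick paths taken from the output further down):

# gluster volume create vol1 replica 3 \
    tendrl-node-1:/gluster/brick1/brick1 \
    tendrl-node-2:/gluster/brick1/brick1 \
    tendrl-node-3:/gluster/brick1/brick1
# gluster volume start vol1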

volume is up and running just fine:

# gstatus -al

     Product: Community          Capacity: 728.00 MiB(raw bricks)
      Status: HEALTHY                       39.00 MiB(raw used)
   Glusterfs: 3.12.9                       243.00 MiB(usable from volumes)
  OverCommit: No                Snapshots:   0

   Nodes       :  3/  3		  Volumes:   1 Up
   Self Heal   :  3/  3		             0 Up(Degraded)
   Bricks      :  3/  3		             0 Up(Partial)
   Connections :  3/   9                     0 Down

Volume Information
	vol1             UP - 3/3 bricks up - Replicate
	                 Capacity: (5% used) 13.00 MiB/243.00 MiB (used/total)
	                 Snapshots: 0
	                 Self Heal:  3/ 3
	                 Tasks Active: None
	                 Protocols: glusterfs:on  NFS:off  SMB:on
	                 Gluster Connectivty: 3 hosts, 9 tcp connections

	vol1------------ +
	                 |
                Replicated (afr)
                         |
                         +-- Replica Set0 (afr)
                               |
                               +--tendrl-node-1:/gluster/brick1/brick1(UP) 13.00 MiB/243.00 MiB 
                               |
                               +--tendrl-node-2:/gluster/brick1/brick1(UP) 13.00 MiB/243.00 MiB 
                               |
                               +--tendrl-node-3:/gluster/brick1/brick1(UP) 13.00 MiB/243.00 MiB 

# gluster volume info vol1

Volume Name: vol1
Type: Replicate
Volume ID: 63c33318-5789-46c8-9cb7-9f96bafcba8f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: tendrl-node-1:/gluster/brick1/brick1
Brick2: tendrl-node-2:/gluster/brick1/brick1
Brick3: tendrl-node-3:/gluster/brick1/brick1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

tendrl UI (Volumes page) shows the status of the volume as Unknown (?) even though it is active and running:
[screenshot: screen shot 2018-07-03 at 12 58 09 pm]

The cluster dashboard shows that my volume is down:
[screenshot: screen shot 2018-07-03 at 12 56 12 pm]

The volume dashboard shows N/A for the volume:
[screenshot: screen shot 2018-07-03 at 12 56 38 pm]

Release information:
# rpm -qa | grep tendrl | sort

tendrl-ansible-1.6.3-2.el7.centos.noarch
tendrl-api-1.6.3-20180626T110501.5a1c79e.noarch
tendrl-api-httpd-1.6.3-20180626T110501.5a1c79e.noarch
tendrl-commons-1.6.3-20180628T114340.d094568.noarch
tendrl-grafana-plugins-1.6.3-20180622T070617.1f84bc8.noarch
tendrl-grafana-selinux-1.5.4-20180227T085901.984600c.noarch
tendrl-monitoring-integration-1.6.3-20180622T070617.1f84bc8.noarch
tendrl-node-agent-1.6.3-20180618T083110.ba580e6.noarch
tendrl-notifier-1.6.3-20180618T083117.fd7bddb.noarch
tendrl-selinux-1.5.4-20180227T085901.984600c.noarch
tendrl-ui-1.6.3-20180625T085228.23f862a.noarch

@Tendrl/qe @nthomas-redhat @gnehapk @cloudbehl @shirshendu

julienlim changed the title from "Inconsistent Volume Status showing in UI and Grafana dashboards" to "Inconsistent Volume Status showing in UI and Grafana dashboards (vs. CLI)" on Jul 3, 2018

gnehapk commented Jul 4, 2018

@julienlim Can you please share the API response for /volumes?
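
For anyone reproducing this, the response could be captured along these lines; the login route, volumes route, and bearer-token header are assumptions based on tendrl-api conventions and may differ by version, so substitute your own server, credentials, and cluster id:

# obtain an access token (assumed login endpoint)
# curl -s -X POST -d '{"username": "admin", "password": "<password>"}' http://<tendrl-server>/api/1.0/login
# fetch the volumes list for the imported cluster (assumed route)
# curl -s -H "Authorization: Bearer <access_token>" http://<tendrl-server>/api/1.0/clusters/<cluster_id>/volumes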

julienlim commented

@gnehapk I unfortunately deleted my environment, so I can't get the API response for /volumes. That being said, if you follow the sequence I provided (create a cluster without a volume, install WA, then create a volume), you should be able to see the same results.
