This repository was archived by the owner on Jun 1, 2020. It is now read-only.

Ring ownership after start #36

Open
kakoni opened this issue Jan 10, 2015 · 1 comment

kakoni commented Jan 10, 2015

So I started a Riak cluster as per the documentation:
`DOCKER_RIAK_AUTOMATIC_CLUSTERING=1 DOCKER_RIAK_CLUSTER_SIZE=5 DOCKER_RIAK_BACKEND=leveldb make start-cluster`

After stabilization I check the ring ownership:
"ring_ownership": "[{'[email protected]',64}]",

That's uncool.

So I docker-enter into one of the nodes to see what `riak-admin cluster plan` shows:

================================= Membership ==================================
Status     Ring    Pending    Node
-------------------------------------------------------------------------------
valid      20.3%     20.3%    '[email protected]'
valid      20.3%     20.3%    '[email protected]'
valid      20.3%     20.3%    '[email protected]'
valid      20.3%     20.3%    '[email protected]'
valid      18.8%     18.8%    '[email protected]'
-------------------------------------------------------------------------------
Valid:5 / Leaving:0 / Exiting:0 / Joining:0 / Down:0

Outstanding changes; I'll commit them and check the status again:

root@5ec57bbf8295:~# riak-admin cluster commit
Cluster changes committed
root@5ec57bbf8295:~# riak-admin cluster plan
There are no staged changes
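In a script, the commit step could be guarded on the plan output instead of eyeballing it; a minimal sketch (the helper name is hypothetical, not part of docker-riak):

```shell
# has_staged_changes: succeed (exit 0) when `riak-admin cluster plan` output
# indicates staged changes, fail when the plan is empty.
# Hypothetical helper, not part of docker-riak.
has_staged_changes() {
  ! printf '%s' "$1" | grep -q "There are no staged changes"
}

# Usage inside the container (assumes riak-admin is on PATH):
# plan=$(riak-admin cluster plan)
# if has_staged_changes "$plan"; then riak-admin cluster commit; fi
```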

All good. And obviously ring ownership is also cool now:
"ring_ownership": "[{'[email protected]',13},\n {'[email protected]',13},\n {'[email protected]',13},\n {'[email protected]',13},\n {'[email protected]',12}]",

So something with automatic_clustering fails here.

kazarena commented May 4, 2015

@kakoni, I had similar issues with unfinished cluster configuration. After digging into it, I figured out that sometimes automatic_clustering.sh is executed too soon and the `cluster join` command returns a "Node not found!" message. I wasn't able to come up with a quick fix for this issue and chose an alternative option: instead of joining the cluster from inside the container, I'm doing it explicitly in start-cluster.sh, see bdb49dd and 83ff81f.

These changes give me consistently stable behaviour.

Hope this helps.
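For anyone who wants to keep the join inside the container, a retry loop around the too-early `cluster join` is one possible shape of a fix; a sketch (the `retry` helper and the usage line are assumptions, not docker-riak code):

```shell
#!/bin/sh
# retry: run a command until it succeeds, up to a maximum number of attempts,
# sleeping between tries. Hypothetical workaround for "Node not found!" when
# automatic_clustering.sh fires before the seed node is reachable.
retry() {
  max="$1"; shift
  n=1
  while ! "$@"; do
    if [ "$n" -ge "$max" ]; then
      echo "retry: giving up after $max attempts" >&2
      return 1
    fi
    n=$((n + 1))
    sleep 1
  done
}

# Usage inside the container (SEED_IP is an assumed variable):
# retry 10 riak-admin cluster join "riak@$SEED_IP"
```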
