USING THE HIBARI "CLUSTER" TOOL
"Cluster" is a simple tool for installing, configuring, and
bootstrapping a cluster of Hibari nodes. Before using this tool, you
must:
- Download and build a Hibari package from source.
- Ensure that you have the required third-party software on your "installer"
node and on your target Hibari nodes.
- Set up proper user privileges on the "installer" node and on your target
Hibari nodes.
For guidance on these tasks, see the "Getting Started" chapter in your Hibari
Application Developer's Guide.
(http://hibari.github.com/hibari-doc/hibari-app-developer-guide.en.html)
This README describes how to configure the Cluster tool, and how to use
it to install, start, and stop a Hibari cluster. The information in this
README is also available in the "Getting Started" chapter in your Hibari
Application Developer's Guide.
NOTE: The Cluster tool should meet the needs of most users. However,
the tool's "target node" recipe is currently Linux-centric
(e.g. useradd, userdel, ...). Patches and contributions for other OSes
and platforms are welcome. In the meantime, the recipe itself is
simple, so non-Linux installations can be performed manually by
following its steps.
==== Configuring the Cluster Installer Tool
The Cluster tool requires some basic configuration information that
indicates how you want your Hibari cluster to be set up. You will
create a simple text file that specifies your desired configuration,
and then later use the file as input when you run the Cluster tool.
It's simplest to create the file in the same working directory in
which you downloaded the Cluster tool. You can give the file any name
that you want; for purposes of these instructions we will use the file
name hibari.config.
Below is a sample hibari.config file. The file that you create must
include all of these parameters, and the values must be formatted in
the same way as in this example (with parentheses and quotation marks
as shown). Parameter descriptions follow the example file.
------
ADMIN_NODES=(dev1 dev2 dev3)
BRICK_NODES=(dev1 dev2 dev3)
BRICKS_PER_CHAIN=2
ALL_NODES=(dev1 dev2 dev3)
ALL_NETA_ADDRS=("10.181.165.230" "10.181.165.231" "10.181.165.232")
ALL_NETB_ADDRS=("10.181.165.230" "10.181.165.231" "10.181.165.232")
ALL_NETA_BCAST="10.181.165.255"
ALL_NETB_BCAST="10.181.165.255"
ALL_NETA_TIEBREAKER="10.181.165.1"
ALL_HEART_UDP_PORT="63099"
ALL_HEART_XMIT_UDP_PORT="63100"
------
- ADMIN_NODES
* Host names of the nodes that will be eligible to run the Hibari
Admin Server. For complete information on the Admin Server, see
"The Admin Server Application" in the Hibari System Administrator's Guide.
- BRICK_NODES
* Host names of the nodes that will serve as Hibari storage
bricks. Note that in the sample configuration file above there are
three storage brick nodes (dev1, dev2, and dev3), and these three
nodes are each eligible to run the Admin Server.
- BRICKS_PER_CHAIN
* Number of bricks per replication chain. For example, with two
bricks per chain there will be two copies of the data stored in
the chain (one copy on each brick); with three bricks per chain
there will be three copies, and so on. For an overview of chain
replication, see "Chain Replication for High Availability and Strong
Consistency" in the Hibari Application Developer's Guide. For
chain replication detail, see the Hibari System Administrator's
Guide.
- ALL_NODES
* This list of all Hibari nodes is the union of ADMIN_NODES and
BRICK_NODES.
- ALL_NETA_ADDRS
* As described in "The Partition Detector Application" in the Hibari System
Administrator's Guide, the nodes in a multi-node Hibari cluster
should be connected by two networks, Network A and Network B, in
order to detect and manage network partitions. The
ALL_NETA_ADDRS parameter specifies the IP addresses of each
Hibari node within Network A, which is the network through which
data replication and other Erlang communications will take
place. The list of IP addresses should correspond, in order, to the
host names you listed in the ALL_NODES setting.
- ALL_NETB_ADDRS
* IP addresses of each Hibari node within Network B. Network B is
used only for heartbeat broadcasts that help to detect network
partitions. The list of IP addresses should correspond, in order, to
the host names you listed in the ALL_NODES setting.
- ALL_NETA_BCAST
* IP broadcast address for Network A.
- ALL_NETB_BCAST
* IP broadcast address for Network B.
- ALL_NETA_TIEBREAKER
* Within Network A, the IP address for the network monitoring
application to use as a "tiebreaker" in the event of a
partition. If the network monitoring application on a Hibari node
determines that Network A is partitioned while Network B is not,
and the Network A tiebreaker IP address responds to a ping, then the
local node is on the "correct" side of the partition. Ideally the
tiebreaker should be the address of the
Layer 2 switch or Layer 3 router that all Erlang network
distribution communications flow through.
- ALL_HEART_UDP_PORT
* UDP port for heartbeat listener.
- ALL_HEART_XMIT_UDP_PORT
* UDP port for heartbeat transmitter.
For more detail on network monitoring configuration settings, see the
partition-detector's OTP application source file
(https://github.com/hibari/partition-detector/raw/master/src/partition_detector.app.src).
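Because hibari.config uses shell array syntax, a quick way to
sanity-check the file before running the tool is to parse it with bash.
This is a minimal sketch; it assumes the file is plain bash, as the
sample above is:
------
$ bash -n hibari.config   # report shell syntax errors, if any
$ bash -c 'source ./hibari.config && echo "${ALL_NODES[@]}"'
dev1 dev2 dev3
------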
CAUTION: In a production setting, Network A and Network B should be
physically different networks and network interfaces. However, for
testing and development purposes the same physical network can be used
for Network A and Network B (as in the sample configuration file
above).
As final configuration steps, on *each Hibari node*:
- Make sure that the /etc/hosts file has entries for all Hibari nodes
in the cluster. For example:
------
10.181.165.230 dev1.your-domain.com dev1
10.181.165.231 dev2.your-domain.com dev2
10.181.165.232 dev3.your-domain.com dev3
------
- In the system's /etc/sysctl.conf file, set vm.swappiness=0, since
swapping is undesirable for an Erlang VM.
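For example, as a minimal sketch on a typical Linux system (sysctl -p
reloads the file without a reboot):
------
$ sudo sh -c "echo 'vm.swappiness=0' >> /etc/sysctl.conf"   # persist the setting
$ sudo sysctl -p                                            # apply it now
------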
==== Installing Hibari
From your installer node, logged in as the installer user, take these
steps to create your Hibari cluster:
1. In the working directory in which you downloaded the Cluster tool and
created your cluster configuration file, place
a copy of the Hibari tarball package and md5sum file:
------
$ cd working-directory
$ ls -1
clus
hibari-X.Y.Z-DIST-ARCH-WORDSIZE-md5sum.txt
hibari-X.Y.Z-DIST-ARCH-WORDSIZE.tgz
hibari.config
$
------
2. Create the "hibari" user on all Hibari nodes:
------
$ for i in dev1 dev2 dev3 ; do ./clus/priv/clus.sh -f init hibari $i ; done
hibari@dev1
hibari@dev2
hibari@dev3
------
NOTE: If the "hibari" user already exists on the target nodes, the -f
option will forcefully delete and then re-create the "hibari" user.
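If you would rather check beforehand whether the user already exists on
a target node, one simple probe (assuming you have SSH access to the
node, which the tool itself relies on) is:
------
$ ssh dev1 id hibari    # prints uid/gid if the user exists; errors otherwise
------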
3. Install the Hibari package on all Hibari nodes, via the newly
created "hibari" user:
------
$ ./clus/priv/clus-hibari.sh -f init hibari hibari.config hibari-X.Y.Z-DIST-ARCH-WORDSIZE.tgz
hibari@dev1
hibari@dev2
hibari@dev3
------
NOTE: By default the Cluster tool installs Hibari into
/usr/local/var/lib on the target nodes. If you prefer a different
location, before doing the install open the clus.sh script (in your
working directory, under clus/priv/) and edit the CT_HOMEBASEDIR
variable.
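For example, a hypothetical one-liner with GNU sed (the exact
assignment line inside clus.sh may differ, so verify with grep
afterwards):
------
$ sed -i 's|^CT_HOMEBASEDIR=.*|CT_HOMEBASEDIR="/opt/hibari"|' ./clus/priv/clus.sh
$ grep '^CT_HOMEBASEDIR=' ./clus/priv/clus.sh   # confirm the new value
------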
=== Starting and Stopping a Multi-Node Hibari Cluster
You can use the Cluster installer tool to start and stop your
multi-node Hibari cluster, working from the same node from which you
managed the installation process. Note that in each of the Hibari
commands in this section you'll be referencing the name of the
Cluster tool configuration file that you created during the installation
procedure.
==== Starting and Bootstrapping the Hibari Cluster
1. Change to the working directory in which you downloaded the Cluster
tool, then start Hibari on all Hibari nodes via the "hibari" user:
------
$ cd working-directory
$ ./clus/priv/clus-hibari.sh -f start hibari hibari.config
hibari@dev1
hibari@dev2
hibari@dev3
------
2. If this is the first time you've started Hibari, bootstrap the
system via the "hibari" user:
------
$ ./clus/priv/clus-hibari.sh -f bootstrap hibari hibari.config
hibari@dev1 => hibari@dev1 hibari@dev2 hibari@dev3
------
The Hibari bootstrap process starts Hibari's Admin Server on the first
eligible admin node and creates a single table, "tab1", which serves as
Hibari's default table. For information about creating additional
tables, see "Creating New Tables" in the Hibari Application Developer's
Guide.
NOTE: If bootstrapping fails with an "another_admin_server_running"
error, stop the other Hibari cluster(s) running on the network, or
reconfigure the Cluster tool to assign Hibari heartbeat listener ports
that are not in use by another Hibari cluster or other applications,
and then repeat the cluster installation procedure.
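To see whether the heartbeat UDP ports are already taken on a node,
here is a minimal check (using common Linux netstat flags; your
platform's equivalent may differ):
------
$ netstat -uan | grep -E '63099|63100'   # no output means the ports are free
------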
==== Verifying the Hibari Cluster
Do these simple checks to verify that Hibari is up and running.
1. Confirm that you can open the "Hibari Web Administration" page:
------
$ your-favorite-browser http://dev1:23080
------
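Alternatively, a command-line check (assuming curl is installed) should
show a successful HTTP status line:
------
$ curl -sI http://dev1:23080 | head -n 1   # expect an HTTP 200 status line
------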
2. Confirm that you can successfully ping each of your Hibari nodes:
------
$ ./clus/priv/clus-hibari.sh -f ping hibari hibari.config
hibari@dev1 ... pong
hibari@dev2 ... pong
hibari@dev3 ... pong
------
==== Stopping the Hibari Cluster
Stop Hibari on all Hibari nodes via the "hibari" user:
------
$ cd working-directory
$ ./clus/priv/clus-hibari.sh -f stop hibari hibari.config
ok
ok
ok
hibari@dev1
hibari@dev2
hibari@dev3
------