The environment for this exercise consists of three Cassandra nodes: node1, node2, and node3.
✅ Open /workspace/ds201-lab11/node1/conf/cassandra.yaml in nano or the text editor of your choice and view the endpoint_snitch setting:
nano /workspace/ds201-lab11/node1/conf/cassandra.yaml
You should see GossipingPropertyFileSnitch. The default snitch, SimpleSnitch, is appropriate only for single-datacenter deployments.
Note: GossipingPropertyFileSnitch should be your go-to snitch for production use. The rack and datacenter for the local node are defined in cassandra-rackdc.properties and propagated to other nodes via gossip.
Verify that the cassandra.yaml files for node2 and node3 both use the same snitch.
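Rather than opening each file, you can grep all three at once. The sketch below recreates the lab's three-node layout in a temporary directory so it runs anywhere; in the lab itself you would point the loop at /workspace/ds201-lab11/$node/conf/cassandra.yaml instead.

```shell
# Self-contained sketch: stand in three cassandra.yaml files, then grep
# each one for its endpoint_snitch setting. All three nodes should
# report GossipingPropertyFileSnitch.
base=$(mktemp -d)
for node in node1 node2 node3; do
  mkdir -p "$base/$node/conf"
  echo 'endpoint_snitch: GossipingPropertyFileSnitch' > "$base/$node/conf/cassandra.yaml"
done
for node in node1 node2 node3; do
  echo "$node: $(grep '^endpoint_snitch' "$base/$node/conf/cassandra.yaml")"
done
```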
✅ Open /workspace/ds201-lab11/node1/conf/cassandra-rackdc.properties in nano or the text editor of your choice and find the dc and rack settings:
nano /workspace/ds201-lab11/node1/conf/cassandra-rackdc.properties
✅ You should see these values:
dc=dc-seattle
rack=rack-red
This is the file the GossipingPropertyFileSnitch uses to determine the rack and datacenter this particular node belongs to.
Racks and datacenters are purely logical assignments in Cassandra. Ensure that your logical racks and datacenters align with your physical failure zones.
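The snitch's read of this file amounts to a simple key=value lookup. The sketch below uses node1's values from this lab, with a temporary file standing in for the real cassandra-rackdc.properties:

```shell
# Self-contained sketch: cassandra-rackdc.properties is plain key=value;
# the snitch picks out the dc and rack keys. Values match node1 above.
f=$(mktemp)
printf 'dc=dc-seattle\nrack=rack-red\n' > "$f"
dc=$(grep '^dc=' "$f" | cut -d= -f2)
rack=$(grep '^rack=' "$f" | cut -d= -f2)
echo "node reports dc=$dc rack=$rack"
```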
Examine cassandra-rackdc.properties for node2 and node3.
Properties for node2 should be the same as for node1, since they are in the same datacenter and on the same rack:
dc=dc-seattle
rack=rack-red
Properties for node3 should be different since it is in a different datacenter:
dc=dc-atlanta
rack=rack-green
✅ Check on the cluster status:
nodetool status
You should now see that the nodes are in different datacenters.
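The output should look roughly like this; the addresses, loads, tokens, and host IDs are illustrative placeholders, and what matters here are the two datacenter headings and the rack column:

```
Datacenter: dc-seattle
======================
--  Address    Load  Tokens  Owns  Host ID  Rack
UN  ...        ...   ...     ...   ...      rack-red
UN  ...        ...   ...     ...   ...      rack-red

Datacenter: dc-atlanta
======================
--  Address    Load  Tokens  Owns  Host ID  Rack
UN  ...        ...   ...     ...   ...      rack-green
```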