bumara5 edited this page Feb 14, 2020 · 3 revisions

Docker Swarm

Following https://docs.docker.com/engine/swarm/swarm-tutorial/ to deploy Kafka in a Swarm cluster on a single machine.

Multi-node Swarm cluster on single Machine

This has been tested on macOS High Sierra with Docker 17.x.

NOTE: It is not possible to use Docker for Mac with multi-node swarm clusters. You need to use docker-machine.

Create n nodes:

docker-machine create <name>

# i.e.
docker-machine create manager
docker-machine create worker

To switch to that node's context:

eval $(docker-machine env <name>)
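The eval points your local Docker client at the daemon running inside that VM. Under the hood, docker-machine env prints a few export statements along these lines (the IP and cert path below are illustrative for a machine named manager; your values will differ):

```shell
# Typical shape of `docker-machine env manager` output (illustrative values):
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="$HOME/.docker/machine/machines/manager"
export DOCKER_MACHINE_NAME="manager"
# After eval'ing these, plain `docker` commands target the manager VM
```

To switch back to your local daemon, run eval $(docker-machine env -u).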

Start the manager

docker-machine ssh manager
docker@manager:~$ docker swarm init --advertise-addr=192.168.99.100

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-3fzauu1a999bid7z7rvteujkfedg9t5fbzr3ivjanuaroxs6vm-4gtl6ban1x2nvad6wsucmlxgv 192.168.99.100:2377

NOTE: it is assumed that your manager node has the IP 192.168.99.100. The actual IP can be determined with docker-machine ip manager.

Join from other nodes

Use the join command printed by docker swarm init to join the cluster as a worker:

docker-machine ssh worker
docker@worker:~$ docker swarm join --token <token> 192.168.99.100:2377

Deploy the Swarm stack

docker-machine ssh manager
docker stack deploy -c docker-compose-swarm.yml kafka

Check that the service is running:

docker service ls

Scale the service

docker service scale kafka_kafka=2

NOTE: We can only scale to two instances because there are only two nodes and the Docker Compose YAML publishes ports in host mode, so at most one instance can bind the published port on each node.
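For reference, host-mode publishing requires the long ports syntax in the stack file. The fragment below is a hypothetical sketch, not the repository's actual docker-compose-swarm.yml; the image name and HOSTNAME_COMMAND are assumptions and may differ from the real file:

```yaml
# Hypothetical fragment of docker-compose-swarm.yml - illustrative only
version: '3.2'          # long ports syntax needs compose file format 3.2+
services:
  kafka:
    image: wurstmeister/kafka       # assumed image; check the repo's actual file
    ports:
      - target: 9094                # container port
        published: 9094             # port bound on the node itself
        protocol: tcp
        mode: host                  # bind directly on each node's interface
    environment:
      # assumed: advertise each node's hostname to clients
      HOSTNAME_COMMAND: "docker info | grep ^Name: | cut -d' ' -f 2"
```

Because mode: host binds port 9094 directly on the node rather than going through Swarm's routing mesh, only one replica can run per node.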

Validating connectivity

As explained in the Connectivity Guide, you will need to be able to talk to the running brokers. If you have followed the instructions above, they will be advertising themselves as worker:9094 and manager:9094. To test connectivity from an application outside the Docker network, you will need to configure your local DNS or /etc/hosts file to resolve those addresses to where Swarm has bound the published ports, e.g.

# /etc/hosts
192.168.99.100  manager
192.168.99.101  worker

Then, using kafkacat, you should be able to produce to and consume from these nodes:

$ kafkacat -b manager:9094,worker:9094 -P -t test
foo
bar
^C

$ kafkacat -b manager:9094,worker:9094 -C -t test
foo
bar
^C