A project that provides the plumbing of an Apache Kafka cluster, along with empty projects for writing your own producer and consumer.

The provided cluster is a 3-broker / 1-ZooKeeper Apache Kafka cluster built from Confluent's Docker images. In addition to the core cluster, a schema registry and monitoring tools are also provided.
- Starting the Apache Kafka Cluster

```
./gradlew cluster:composeUp
```

or `docker-compose up -d`
- Stopping the Apache Kafka Cluster

```
./gradlew cluster:composeDown
```

or `docker-compose down`
- Monitoring the Cluster through Grafana

```
./gradlew cluster:grafana
```

or open http://localhost:3000

- Dashboard: shows the health of the cluster.

![Dashboard](./doc/grafana_dashboard.png)

- Topic: shows the activity of a given topic.
- Kafka Visualization through a Web-Based Tool (AKHQ)

```
./gradlew cluster:akhq
```

or open http://localhost:8080

If port 8080 conflicts with other services running on your machine, change the port mapping (e.g. `"9091:8080"`).
- Create Topic

```
kafka-topics \
  --bootstrap-server localhost:19092 \
  --replication-factor 3 \
  --partitions 10 \
  --create \
  --topic isr2
```
- Create Topic with different settings

```
kafka-topics \
  --bootstrap-server localhost:19092 \
  --replication-factor 3 \
  --partitions 10 \
  --create \
  --config min.insync.replicas=3 \
  --topic isr3
```
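These topics can also be created programmatically. Below is a minimal sketch using Kafka's Java `AdminClient` against the same brokers; the class name is illustrative, and the topic names mirror the commands above.

```
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

// Illustrative sketch: creates the same topics as the kafka-topics commands above.
public class CreateTopics {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:19092");

        try (AdminClient admin = AdminClient.create(props)) {
            // 10 partitions, replication factor 3
            NewTopic isr2 = new NewTopic("isr2", 10, (short) 3);
            // same layout, plus a per-topic override of min.insync.replicas
            NewTopic isr3 = new NewTopic("isr3", 10, (short) 3)
                    .configs(Map.of("min.insync.replicas", "3"));
            admin.createTopics(List.of(isr2, isr3)).all().get(); // block until both exist
        }
    }
}
```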
- Producer

```
kafka-console-producer \
  --bootstrap-server localhost:19092 \
  --property parse.key=true \
  --property key.separator=\| \
  --topic foo
```
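The empty producer project is where the equivalent Java code would live. Here is a minimal sketch of a producer matching the console command above; the class name and sample record are illustrative.

```
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Illustrative sketch: sends one keyed String record to the topic "foo".
public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:19092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // equivalent of typing "key|value" into the console producer
            producer.send(new ProducerRecord<>("foo", "key", "value"));
            producer.flush();
        }
    }
}
```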
- Consumer

```
kafka-console-consumer \
  --bootstrap-server localhost:19092 \
  --from-beginning \
  --property print.key=true \
  --property key.separator=" | " \
  --topic foo
```
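Likewise, a minimal sketch of the matching Java consumer; the class name and group id are illustrative, and `auto.offset.reset=earliest` plays the role of `--from-beginning` for a new group.

```
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

// Illustrative sketch: prints "key | value" for every record on the topic "foo".
public class SimpleConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:19092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "console-example"); // illustrative group id
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("foo"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("%s | %s%n", record.key(), record.value());
                }
            }
        }
    }
}
```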
- Produce

```
kafka-console-producer \
  --broker-list localhost:19092 \
  --property parse.key=true \
  --property key.separator=\| \
  --topic ${topic}
```
- Consume

```
kafka-console-consumer \
  --bootstrap-server localhost:19092 \
  --property print.key=true \
  --property key.separator=\| \
  --from-beginning \
  --topic ${topic}
```
- This is out of scope for the currently provided content.
- Produce (Avro)

Producing is tricky, in that you have to provide the Avro schema, and the message must be in a JSON format that the command-line tool can convert into Avro. Here is an example as a Bourne shell script.
```
#!/bin/sh

# Avro schema for the message value
value_schema=$(cat <<EOF
{
  "type": "record",
  "name": "IdAndName",
  "namespace": "com.buesing",
  "fields": [
    { "name": "id", "type": "string" },
    { "name": "name", "type": "string" }
  ]
}
EOF
)

# key|value pair; the key is a plain string, the value is JSON matching the schema
PAYLOAD=$(cat <<EOF
ABC|{"id" : "ABC", "name" : "John Doe"}
EOF
)

echo "$PAYLOAD" |
  kafka-avro-console-producer \
    --broker-list localhost:19092 \
    --property schema.registry.url="http://localhost:8081" \
    --property key.serializer=org.apache.kafka.common.serialization.StringSerializer \
    --property value.schema="${value_schema}" \
    --property parse.key=true \
    --property key.separator=\| \
    --topic "$1"
```
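For comparison, here is a minimal sketch of the same produce step in Java, building the value as an Avro `GenericRecord` and serializing it with Confluent's `KafkaAvroSerializer`. The class name is illustrative, and the topic is taken from the command line like `$1` in the script above.

```
import java.util.Properties;

import io.confluent.kafka.serializers.KafkaAvroSerializer;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Illustrative sketch: produces the same IdAndName record as the shell script above.
public class AvroProducer {
    private static final String VALUE_SCHEMA = "{"
            + "\"type\":\"record\",\"name\":\"IdAndName\",\"namespace\":\"com.buesing\","
            + "\"fields\":["
            + "{\"name\":\"id\",\"type\":\"string\"},"
            + "{\"name\":\"name\",\"type\":\"string\"}"
            + "]}";

    public static void main(String[] args) {
        String topic = args[0]; // topic name, as $1 in the script

        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:19092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class.getName());
        props.put("schema.registry.url", "http://localhost:8081");

        Schema schema = new Schema.Parser().parse(VALUE_SCHEMA);
        GenericRecord value = new GenericData.Record(schema);
        value.put("id", "ABC");
        value.put("name", "John Doe");

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>(topic, "ABC", value)); // key "ABC", as in the script
            producer.flush();
        }
    }
}
```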
- Consume (Avro)

```
kafka-avro-console-consumer \
  --bootstrap-server localhost:19092 \
  --property schema.registry.url="http://localhost:8081" \
  --from-beginning \
  --property print.key=true \
  --property key.separator=\| \
  --key-deserializer=org.apache.kafka.common.serialization.BytesDeserializer \
  --topic ${topic}
```
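And a minimal sketch of the matching Java consumer, deserializing values back into `GenericRecord` with Confluent's `KafkaAvroDeserializer`; the class name and group id are illustrative.

```
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import io.confluent.kafka.serializers.KafkaAvroDeserializer;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

// Illustrative sketch: prints "key|IdAndName" for every record on a topic passed as args[0].
public class AvroConsumer {
    public static void main(String[] args) {
        String topic = args[0]; // topic name, as ${topic} above

        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:19092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "avro-example"); // illustrative group id
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); // like --from-beginning
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class.getName());
        props.put("schema.registry.url", "http://localhost:8081");

        try (KafkaConsumer<String, GenericRecord> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of(topic));
            while (true) {
                ConsumerRecords<String, GenericRecord> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, GenericRecord> record : records) {
                    System.out.printf("%s|%s%n", record.key(), record.value());
                }
            }
        }
    }
}
```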