
Use Kafdrop to Manage AutoMQ

lyx edited this page Jan 17, 2025 · 1 revision

Preface

Kafdrop-ui [1] is a simple, intuitive, and powerful web UI tool designed for Kafka. It allows developers and administrators to easily view and manage key metadata of Kafka clusters, including Topics, partitions, Consumer Groups, and their offsets. By providing a user-friendly interface, Kafdrop greatly simplifies the monitoring and management of Kafka clusters, enabling users to quickly obtain cluster status information without relying on complex command-line tools.

Thanks to AutoMQ's full compatibility with Kafka, it can seamlessly integrate with Kafdrop. By utilizing Kafdrop, AutoMQ users can also benefit from an intuitive user interface for real-time monitoring of Kafka cluster status, including Topics, partitions, Consumer Groups, and their offsets. This monitoring capability not only enhances problem diagnosis efficiency but also helps optimize cluster performance and resource utilization.

This tutorial will teach you how to start the Kafdrop service and integrate it with an AutoMQ cluster to monitor and manage the cluster state.

Prerequisites

  • Kafdrop environment: an AutoMQ cluster, JDK 17, and Maven 3.6.3 or above.

  • Kafdrop can be run from a JAR file, via Docker, or on Kubernetes via its Helm chart. Refer to Kafdrop-UI [3].

  • Prepare 5 hosts to deploy the AutoMQ cluster. Linux amd64 hosts with 2 CPUs and 16 GB of RAM are recommended, each with two virtual storage volumes. An example is as follows:

    | Role | IP | Node ID | System Volume | Data Volume |
    | --- | --- | --- | --- | --- |
    | CONTROLLER | 192.168.0.1 | 0 | EBS 20GB | EBS 20GB |
    | CONTROLLER | 192.168.0.2 | 1 | EBS 20GB | EBS 20GB |
    | CONTROLLER | 192.168.0.3 | 2 | EBS 20GB | EBS 20GB |
    | BROKER | 192.168.0.4 | 3 | EBS 20GB | EBS 20GB |
    | BROKER | 192.168.0.5 | 4 | EBS 20GB | EBS 20GB |

    Tips:

    • Ensure that these machines are in the same subnet and can communicate with each other.

    • For non-production environments, you can deploy only one Controller. By default, this Controller also serves as a broker.

  • Download the latest official AutoMQ binary package [2] to install AutoMQ.
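The subnet tip above can be sanity-checked before deployment. The sketch below assumes the example addresses from the table and that netcat (`nc`) is available on the host running the check:

```shell
# Example host lists from the table above; replace with your real addresses.
CONTROLLERS="192.168.0.1:9093 192.168.0.2:9093 192.168.0.3:9093"
BROKERS="192.168.0.4:9092 192.168.0.5:9092"

for ep in $CONTROLLERS $BROKERS; do
  host=${ep%:*}   # part before the colon
  port=${ep#*:}   # part after the colon
  if command -v nc >/dev/null 2>&1; then
    # -z: connect scan only, -w 1: one-second connect timeout
    nc -z -w 1 "$host" "$port" && echo "$ep reachable" || echo "$ep unreachable"
  else
    echo "nc not installed; skipping probe of $ep"
  fi
done
```

Run it from each machine in turn; every endpoint should report reachable before you start the cluster.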

Below, I will first set up the AutoMQ cluster and then start Kafdrop.

Install and start the AutoMQ cluster.

Configure S3URL.

Step 1: Generate the S3 URL.

AutoMQ provides the automq-kafka-admin.sh tool to quickly start AutoMQ. Simply provide the S3 URL containing the required S3 access points and authentication information to start AutoMQ with one click, without the need to manually generate cluster IDs or perform storage formatting.


### Command-line usage example
bin/automq-kafka-admin.sh generate-s3-url \
--s3-access-key=xxx \
--s3-secret-key=yyy \
--s3-region=cn-northwest-1 \
--s3-endpoint=s3.cn-northwest-1.amazonaws.com.cn \
--s3-data-bucket=automq-data \
--s3-ops-bucket=automq-ops

Note: You need to pre-configure an AWS S3 bucket. If you encounter errors, please ensure the parameters and format are correct.
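If the buckets do not exist yet, they can be created ahead of time. This is a sketch only, assuming the AWS CLI is installed and credentialed, and reusing the example bucket names and region from the command above:

```shell
# Create the data and ops buckets used in the example (hypothetical setup).
# LocationConstraint is required for regions other than us-east-1.
if command -v aws >/dev/null 2>&1; then
  for bucket in automq-data automq-ops; do
    aws s3api create-bucket \
      --bucket "$bucket" \
      --region cn-northwest-1 \
      --create-bucket-configuration LocationConstraint=cn-northwest-1 \
      || echo "could not create $bucket (it may already exist)"
  done
else
  echo "aws CLI not installed"
fi
```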

Output Result

After executing this command, the process will automatically proceed through the following stages:

  1. Based on the provided accessKey and secretKey, test the core features of S3 to verify compatibility between AutoMQ and S3.

  2. Generate the s3url based on identity information and access point information.

  3. Obtain the startup command for AutoMQ using the s3url. In the command, replace --controller-list and --broker-list with the addresses of the CONTROLLER and BROKER hosts actually being deployed.

Example results are as follows:


############  Ping s3 ########################

[ OK ] Write s3 object
[ OK ] Read s3 object
[ OK ] Delete s3 object
[ OK ] Write s3 object
[ OK ] Upload s3 multipart object
[ OK ] Read s3 multipart object
[ OK ] Delete s3 object
############  String of s3url ################

Your s3url is:

s3://s3.cn-northwest-1.amazonaws.com.cn?s3-access-key=xxx&s3-secret-key=yyy&s3-region=cn-northwest-1&s3-endpoint-protocol=https&s3-data-bucket=automq-data&s3-path-style=false&s3-ops-bucket=automq-ops&cluster-id=40ErA_nGQ_qNPDz0uodTEA


############  Usage of s3url  ################
To start AutoMQ, generate the start commandline using s3url.
bin/automq-kafka-admin.sh generate-start-command \
--s3-url="s3://s3.cn-northwest-1.amazonaws.com.cn?s3-access-key=XXX&s3-secret-key=YYY&s3-region=cn-northwest-1&s3-endpoint-protocol=https&s3-data-bucket=automq-data&s3-path-style=false&s3-ops-bucket=automq-ops&cluster-id=40ErA_nGQ_qNPDz0uodTEA" \
--controller-list="192.168.0.1:9093;192.168.0.2:9093;192.168.0.3:9093"  \
--broker-list="192.168.0.4:9092;192.168.0.5:9092"

TIPS: Please replace the controller-list and broker-list with your actual IP addresses.
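The s3url itself is just a URL whose query string carries each parameter. A quick shell sketch, using the placeholder credentials from the example output above, shows how to inspect it:

```shell
S3URL="s3://s3.cn-northwest-1.amazonaws.com.cn?s3-access-key=xxx&s3-secret-key=yyy&s3-region=cn-northwest-1&s3-endpoint-protocol=https&s3-data-bucket=automq-data&s3-path-style=false&s3-ops-bucket=automq-ops&cluster-id=40ErA_nGQ_qNPDz0uodTEA"

# Everything after '?' is an ordinary key=value query string.
query=${S3URL#*\?}
echo "$query" | tr '&' '\n'

# Pull out a single field, e.g. the generated cluster id.
cluster_id=$(echo "$query" | tr '&' '\n' | sed -n 's/^cluster-id=//p')
echo "cluster-id: $cluster_id"
```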

Step 2: Generate a list of startup commands

Replace the --controller-list and --broker-list in the commands generated in the previous step with your host information. Specifically, substitute them with the IP addresses of the 3 CONTROLLERs and 2 BROKERs mentioned in the environment setup, and use the default ports 9092 and 9093.


bin/automq-kafka-admin.sh generate-start-command \
--s3-url="s3://s3.cn-northwest-1.amazonaws.com.cn?s3-access-key=XXX&s3-secret-key=YYY&s3-region=cn-northwest-1&s3-endpoint-protocol=https&s3-data-bucket=automq-data&s3-path-style=false&s3-ops-bucket=automq-ops&cluster-id=40ErA_nGQ_qNPDz0uodTEA" \
--controller-list="192.168.0.1:9093;192.168.0.2:9093;192.168.0.3:9093"  \
--broker-list="192.168.0.4:9092;192.168.0.5:9092"

Parameter Description

| Parameter Name | Required | Description |
| --- | --- | --- |
| --s3-url | Yes | Generated by the `bin/automq-kafka-admin.sh generate-s3-url` command-line tool; includes authentication, the cluster ID, and other information. |
| --controller-list | Yes | The IP and port list of the CONTROLLER hosts; at least one address is required. Format: IP1:PORT1;IP2:PORT2;IP3:PORT3. |
| --broker-list | Yes | The IP and port list of the BROKER hosts; at least one address is required. Format: IP1:PORT1;IP2:PORT2. |
| --controller-only-mode | No | Whether a CONTROLLER node assumes only the CONTROLLER role. Defaults to false, meaning each deployed CONTROLLER node also acts as a BROKER. |

Output Result

Upon executing the command, a startup command for AutoMQ will be generated.


############  Start Commandline ##############
To start an AutoMQ Kafka server, please navigate to the directory where your AutoMQ tgz file is located and run the following command.

Before running the command, make sure that Java 17 is installed on your host. You can verify the Java version by executing 'java -version'.

bin/kafka-server-start.sh --s3-url="s3://s3.cn-northwest-1.amazonaws.com.cn?s3-access-key=XXX&s3-secret-key=YYY&s3-region=cn-northwest-1&s3-endpoint-protocol=https&s3-data-bucket=automq-data&s3-path-style=false&s3-ops-bucket=automq-ops&cluster-id=40ErA_nGQ_qNPDz0uodTEA" --override process.roles=broker,controller --override node.id=0 --override controller.quorum.voters=0@192.168.0.1:9093,1@192.168.0.2:9093,2@192.168.0.3:9093 --override listeners=PLAINTEXT://192.168.0.1:9092,CONTROLLER://192.168.0.1:9093 --override advertised.listeners=PLAINTEXT://192.168.0.1:9092

bin/kafka-server-start.sh --s3-url="s3://s3.cn-northwest-1.amazonaws.com.cn?s3-access-key=XXX&s3-secret-key=YYY&s3-region=cn-northwest-1&s3-endpoint-protocol=https&s3-data-bucket=automq-data&s3-path-style=false&s3-ops-bucket=automq-ops&cluster-id=40ErA_nGQ_qNPDz0uodTEA" --override process.roles=broker,controller --override node.id=1 --override controller.quorum.voters=0@192.168.0.1:9093,1@192.168.0.2:9093,2@192.168.0.3:9093 --override listeners=PLAINTEXT://192.168.0.2:9092,CONTROLLER://192.168.0.2:9093 --override advertised.listeners=PLAINTEXT://192.168.0.2:9092

bin/kafka-server-start.sh --s3-url="s3://s3.cn-northwest-1.amazonaws.com.cn?s3-access-key=XXX&s3-secret-key=YYY&s3-region=cn-northwest-1&s3-endpoint-protocol=https&s3-data-bucket=automq-data&s3-path-style=false&s3-ops-bucket=automq-ops&cluster-id=40ErA_nGQ_qNPDz0uodTEA" --override process.roles=broker,controller --override node.id=2 --override controller.quorum.voters=0@192.168.0.1:9093,1@192.168.0.2:9093,2@192.168.0.3:9093 --override listeners=PLAINTEXT://192.168.0.3:9092,CONTROLLER://192.168.0.3:9093 --override advertised.listeners=PLAINTEXT://192.168.0.3:9092

bin/kafka-server-start.sh --s3-url="s3://s3.cn-northwest-1.amazonaws.com.cn?s3-access-key=XXX&s3-secret-key=YYY&s3-region=cn-northwest-1&s3-endpoint-protocol=https&s3-data-bucket=automq-data&s3-path-style=false&s3-ops-bucket=automq-ops&cluster-id=40ErA_nGQ_qNPDz0uodTEA" --override process.roles=broker --override node.id=3 --override controller.quorum.voters=0@192.168.0.1:9093,1@192.168.0.2:9093,2@192.168.0.3:9093 --override listeners=PLAINTEXT://192.168.0.4:9092 --override advertised.listeners=PLAINTEXT://192.168.0.4:9092

bin/kafka-server-start.sh --s3-url="s3://s3.cn-northwest-1.amazonaws.com.cn?s3-access-key=XXX&s3-secret-key=YYY&s3-region=cn-northwest-1&s3-endpoint-protocol=https&s3-data-bucket=automq-data&s3-path-style=false&s3-ops-bucket=automq-ops&cluster-id=40ErA_nGQ_qNPDz0uodTEA" --override process.roles=broker --override node.id=4 --override controller.quorum.voters=0@192.168.0.1:9093,1@192.168.0.2:9093,2@192.168.0.3:9093 --override listeners=PLAINTEXT://192.168.0.5:9092 --override advertised.listeners=PLAINTEXT://192.168.0.5:9092


TIPS: Start controllers first and then the brokers.

Note: The node.id is automatically generated starting from 0 by default.

Step 3: Start AutoMQ

To start the cluster, sequentially execute the list of commands generated in the previous step on the pre-specified CONTROLLER or BROKER hosts. For instance, to start the first CONTROLLER process on 192.168.0.1, execute the first command template from the generated startup command list.


bin/kafka-server-start.sh --s3-url="s3://s3.cn-northwest-1.amazonaws.com.cn?s3-access-key=XXX&s3-secret-key=YYY&s3-region=cn-northwest-1&s3-endpoint-protocol=https&s3-data-bucket=automq-data&s3-path-style=false&s3-ops-bucket=automq-ops&cluster-id=40ErA_nGQ_qNPDz0uodTEA" --override process.roles=broker,controller --override node.id=0 --override controller.quorum.voters=0@192.168.0.1:9093,1@192.168.0.2:9093,2@192.168.0.3:9093 --override listeners=PLAINTEXT://192.168.0.1:9092,CONTROLLER://192.168.0.1:9093 --override advertised.listeners=PLAINTEXT://192.168.0.1:9092
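Starting all five nodes by hand is repetitive, so the rollout can be scripted. The sketch below is hypothetical: it assumes passwordless SSH and that the AutoMQ package is unpacked at the same path (here /opt/automq) on every host; the actual per-node commands are the ones generated in Step 2.

```shell
# Controllers first, then brokers, matching the recommended start order.
CONTROLLER_HOSTS="192.168.0.1 192.168.0.2 192.168.0.3"
BROKER_HOSTS="192.168.0.4 192.168.0.5"

for h in $CONTROLLER_HOSTS $BROKER_HOSTS; do
  echo "starting AutoMQ node on $h"
  # Uncomment and substitute the generated command for this node:
  # ssh "$h" 'cd /opt/automq && nohup bin/kafka-server-start.sh <generated args> > node.log 2>&1 &'
done
```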

Parameter Description

When using the startup command, unspecified parameters will adopt Apache Kafka®'s default configurations. For parameters added by AutoMQ, the default values provided by AutoMQ will be used. To override default configurations, you can append additional --override key=value parameters at the end of the command.

| Parameter Name | Required | Description |
| --- | --- | --- |
| s3-url | Yes | Generated by the `bin/automq-kafka-admin.sh generate-s3-url` command-line tool; includes authentication, the cluster ID, and other information. |
| process.roles | Yes | The options are `broker` or `controller`. If a host is both CONTROLLER and BROKER, set the value to `broker,controller`. |
| node.id | Yes | An integer that uniquely identifies a BROKER or CONTROLLER within the Kafka cluster. |
| controller.quorum.voters | Yes | The hosts participating in the KRaft election, given as node id, IP, and port, e.g., 0@192.168.0.1:9093,1@192.168.0.2:9093 |
| listeners | Yes | The IP address and port the server listens on. |
| advertised.listeners | Yes | The address the BROKER advertises for clients to connect to. |
| log.dirs | No | Directory for storing KRaft and BROKER metadata. |
| s3.wal.path | No | In production, it is recommended to place AutoMQ WAL data on a newly mounted, standalone data volume or raw device. AutoMQ supports writing directly to raw devices, which reduces latency; make sure the configured path is correct. |
| autobalancer.controller.enable | No | Defaults to false (self-balancing disabled). When enabled, AutoMQ's auto balancer component automatically reassigns partitions to keep overall traffic balanced. |

Tips:

Run in the background

If you need to run in the background, append the following to the end of the command:


command > /dev/null 2>&1 &

Start the Kafdrop service

In the previous process, we set up the AutoMQ cluster and obtained the addresses and ports of all broker nodes. Next, we will start the Kafdrop service.

Note: Ensure that the address where the Kafdrop service is located can access the AutoMQ cluster; otherwise, it will result in connection timeout issues.

In this example, I use the JAR package method to start the Kafdrop service. The steps are as follows:

  • Pull the Kafdrop repository source code: Kafdrop-UI [4]

git clone https://github.com/obsidiandynamics/kafdrop.git

  • Use Maven to locally compile and package Kafdrop to generate the JAR file. Execute the following in the root directory:

mvn clean compile package

  • Start the service, specifying the addresses and ports of the AutoMQ cluster brokers:

java --add-opens=java.base/sun.nio.ch=ALL-UNNAMED \
    -jar target/kafdrop-<version>.jar \
    --kafka.brokerConnect=<host:port,host:port>

  1. Replace `kafdrop-<version>.jar` with the specific version, such as `kafdrop-4.0.2-SNAPSHOT.jar`.

  2. --kafka.brokerConnect=<host:port,host:port> requires you to specify the host and port for the specific broker nodes in the cluster.

The console startup output is as follows:

If not specified, kafka.brokerConnect defaults to localhost:9092.

Note: Starting from Kafdrop 3.10.0, a ZooKeeper connection is no longer required. All necessary cluster information is retrieved through the Kafka management API.

Open your browser and navigate to http://localhost:9000. You can override the port by adding the following configuration:


--server.port=<port> --management.server.port=<port>
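Once Kafdrop is running, you can sanity-check it from the shell before opening the browser. This sketch assumes the default port 9000 and that curl is installed:

```shell
# Probe the Kafdrop UI; reports "up" once the service is listening,
# or "unreachable" while it is still starting.
if ! command -v curl >/dev/null 2>&1; then
  STATUS="curl-not-installed"
elif curl -fsS -m 3 -o /dev/null http://localhost:9000/; then
  STATUS="up"
else
  STATUS="unreachable"
fi
echo "Kafdrop status: $STATUS"
```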

Final effect

  1. Complete interface

Displays the number of partitions, number of topics, and other cluster state information.

  2. Creating a Topic

  3. Detailed Broker Node Information

  4. Detailed Topic Information

  5. Message Information Under the Topic

Summary

Through this tutorial, we explored the key features and functionalities of Kafdrop, as well as the methods to integrate it with AutoMQ clusters. We demonstrated how to easily monitor and manage AutoMQ clusters. The use of Kafdrop not only helps teams better understand and control their data flow but also enhances development and operational efficiency, ensuring a highly efficient and stable data processing workflow. We hope this tutorial provides you with valuable insights and assistance when using Kafdrop with AutoMQ clusters.

References

[1] Kafdrop: https://github.com/obsidiandynamics/kafdrop

[2] AutoMQ: https://www.automq.com/zh

[3] Kafdrop Deployment: https://github.com/obsidiandynamics/kafdrop/blob/master/README.md#getting-started

[4] Kafdrop project repository: https://github.com/obsidiandynamics/kafdrop
