The scripts in this directory provide various examples of using Confluent's Python client for Kafka:
- adminapi.py: Various AdminClient operations.
- asyncio_example.py: AsyncIO webserver with Kafka producer.
- consumer.py: Read messages from a Kafka topic.
- producer.py: Read lines from stdin and send them to a Kafka topic.
- eos-transactions.py: Transactional producer with exactly-once semantics (EOS).
- avro_producer.py: Produce Avro serialized data using AvroSerializer.
- avro_consumer.py: Read Avro serialized data using AvroDeserializer.
- json_producer.py: Produce JSON serialized data using JSONSerializer.
- json_consumer.py: Read JSON serialized data using JSONDeserializer.
- protobuf_producer.py: Produce Protobuf serialized data using ProtobufSerializer.
- protobuf_consumer.py: Read Protobuf serialized data using ProtobufDeserializer.
- sasl_producer.py: Demonstrates SASL authentication.
- get_watermark_offsets.py: Uses Consumer.get_watermark_offsets() together with committed offsets to report consumer lag for a group and topics.
- oauth_producer.py: Demonstrates OAuth authentication (client credentials).
Additional examples for Confluent Cloud:
- confluent_cloud.py: Produce messages to Confluent Cloud and then read them back again.
- confluentinc/examples: Integration with Confluent Cloud and Confluent Cloud Schema Registry
It's usually a good idea to install Python dependencies in a virtual environment to avoid conflicts between projects.
To set up a venv with the latest released version of confluent-kafka and the dependencies of all the examples installed:
$ python3 -m venv venv_examples
$ source venv_examples/bin/activate
$ pip install confluent_kafka
$ pip install -r requirements/requirements-examples.txt
To set up a venv that uses the current source tree version of confluent_kafka, you need a C compiler and librdkafka installed (from a package, or from source). Then:
$ python3 -m venv venv_examples
$ source venv_examples/bin/activate
$ pip install .[examples]
When you're finished with the venv:
$ deactivate