docs: Update README with features, links, demo, support status (#173)
astubbs committed Oct 13, 2021
1 parent 20b494b commit 84cf11a
Showing 3 changed files with 77 additions and 36 deletions.
27 changes: 27 additions & 0 deletions .idea/runConfigurations/README.xml


43 changes: 25 additions & 18 deletions README.adoc
@@ -10,7 +10,7 @@
 //
 
 
-= parallel-consumer (beta)
+= parallel-consumer
 :icons:
 :toc: macro
 :toclevels: 3
@@ -52,10 +52,12 @@ Parallel Apache Kafka client wrapper with client side queueing, a simpler consum
 
 Confluent's https://www.confluent.io/confluent-accelerators/#parallel-consumer[product page for the project].
 
-WARNING: This is not a Confluent supported product.
-It is an experimental alpha stage accelerator.
+CAUTION: This is not a Confluent supported product.
 See the <<Support and Issues>> section for more information.
 
+IMPORTANT: This project has been stable and reached its initial target feature set in Q1 2021.
+It is actively maintained by the CSID team at Confluent.
+
 [[intro]]
 This library lets you process messages in parallel via a single Kafka Consumer meaning you can increase consumer parallelism without increasing the number of partitions in the topic you intend to process.
 For many use cases this improves both throughput and latency by reducing load on your brokers.
@@ -71,6 +73,15 @@ It also opens up new use cases like extreme parallelism, external data enrichmen
 
 An overview article to the library can also be found on Confluent's https://www.confluent.io/blog/[blog]: https://www.confluent.io/blog/introducing-confluent-parallel-message-processing-client/[Introducing the Confluent Parallel Consumer].
 
+[[demo]]
+.Relative speed demonstration
+--
+.Click on the animated SVG image to open the https://asciinema.org/a/404299[Asciinema.org player].
+image::https://gist.githubusercontent.com/astubbs/26cccaf8b624a53ae26a52dbc00148b1/raw/cbf558b38b0aa624bd7637406579d2a8f00f51db/demo.svg[link="https://asciinema.org/a/404299"]
+--
+
+'''
+
 toc::[]
 
 == Motivation
@@ -197,7 +208,7 @@ The end-to-end latency of the responses to these answers needs to be as low as t
 We can break out as described below into the tool for processing that step, then return to the Kafka Streams context.
 * Provisioning extra machines (either virtual machines or real machines) to run multiple clients has a cost, using this library instead avoids the need for extra instances to be deployed in any respect.
 
-== Feature List
+== Features List
 
 * Have massively parallel consumption processing without running hundreds or thousands of:
 ** Kafka consumer clients,
@@ -209,7 +220,10 @@ without operational burden or harming the cluster's performance
 * Solution for the https://en.wikipedia.org/wiki/Head-of-line_blocking["head of line"] blocking problem where continued failure of a single message, prevents progress for messages behind it in the queue
 * Per `key` concurrent processing, per partition and unordered message processing
 * Offsets committed correctly, in order, of only processed messages, regardless of concurrency level or retries
-* Vert.x non-blocking library integration (HTTP and any Vert.x future support)
+* Vert.x and Reactor.io non-blocking library integration
+** Non-blocking I/O work management
+** Vert.x's WebClient and general Vert.x Future support
+** Reactor.io Publisher (Mono/Flux) and Java's CompletableFuture (through `Mono#fromFuture`)
 * Reactor non-blocking library integration
 * Fair partition traversal
 * Zero~ dependencies (`Slf4j` and `Lombok`) for the core module
@@ -689,28 +703,21 @@ image::https://lucid.app/publicSegments/view/43f2740c-2a7f-4b7f-909e-434a5bbe3fb
 == Roadmap
 
 For released changes, see the link:CHANGELOG.adoc[CHANGELOG].
-
-=== Short Term - What we're working on nowish ⏰
-
-* Depth~ first or breadth first partition traversal
-* JavaRX and other streaming modules
+For features in development, have a look at the https://github.com/confluentinc/parallel-consumer/issues[GitHub issues].
 
 === Medium Term - What's up next ⏲
 
-* https://github.com/confluentinc/parallel-consumer/issues/21[Automatic fanout] (automatic selection of concurrency level based on downstream back pressure) (https://github.com/confluentinc/parallel-consumer/pull/22[draft PR])
-* Support for general Vert.x Verticles (non-blocking libraries)
-* Dead Letter Queue (DLQ) handling
-* Non-blocking I/O work management
-** More customisable handling of HTTP interactions
-** Chance to batch multiple consumer records into a single or multiple http request objects
 * https://github.com/confluentinc/parallel-consumer/issues/28[Distributed tracing integration]
 * https://github.com/confluentinc/parallel-consumer/issues/24[Distributed rate limiting]
 * https://github.com/confluentinc/parallel-consumer/issues/27[Metrics]
+* More customisable handling[https://github.com/confluentinc/parallel-consumer/issues/65] of HTTP interactions
+* Chance to https://github.com/confluentinc/parallel-consumer/issues/18[batch multiple consumer records] into a single or multiple http request objects
 
 === Long Term - The future ☁️
 
 * Apache Kafka KIP?
-* Call backs only offset has been committed
+* https://github.com/confluentinc/parallel-consumer/issues/21[Automatic fanout] (automatic selection of concurrency level based on downstream back pressure) (https://github.com/confluentinc/parallel-consumer/pull/22[draft PR])
+* Dead Letter Queue (DLQ) handling
+* Call backs only once offset has been committed
 
 == Usage Requirements
 
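The "per `key` concurrent processing" feature added to the list above can be illustrated with plain JDK executors. This is only a minimal sketch of the idea, not the parallel-consumer API: the class name, lane count, and routing scheme are all hypothetical, chosen just to show how same-key records keep their order while distinct keys could run in parallel.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative sketch only -- NOT the parallel-consumer API.
// Records with the same key are routed to the same single-threaded
// "lane", so they are processed in submission order; records with
// different keys would land on other lanes and run concurrently.
public class PerKeyParallelSketch {

    static List<String> demo() throws Exception {
        ExecutorService[] lanes = new ExecutorService[4];
        for (int i = 0; i < lanes.length; i++) {
            lanes[i] = Executors.newSingleThreadExecutor();
        }
        List<String> seen = Collections.synchronizedList(new ArrayList<>());
        List<Future<?>> pending = new ArrayList<>();
        for (int i = 0; i < 3; i++) {
            final int n = i;
            // Hash the key to pick a lane: one key always maps to one lane.
            int lane = Math.floorMod("user-a".hashCode(), lanes.length);
            pending.add(lanes[lane].submit(() -> seen.add("a" + n)));
        }
        for (Future<?> f : pending) {
            f.get(); // wait for all work to finish
        }
        for (ExecutorService e : lanes) {
            e.shutdown();
        }
        return seen;
    }

    public static void main(String[] args) throws Exception {
        // Same-key records retain submission order: prints [a0, a1, a2]
        System.out.println(demo());
    }
}
```

A real deployment would also need the offset-commit bookkeeping the feature list describes; this sketch covers only the ordering/parallelism routing.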
43 changes: 25 additions & 18 deletions src/docs/README.adoc
