diff --git a/.idea/runConfigurations/README.xml b/.idea/runConfigurations/README.xml
new file mode 100644
index 000000000..cb6986814
--- /dev/null
+++ b/.idea/runConfigurations/README.xml
@@ -0,0 +1,27 @@
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/README.adoc b/README.adoc
index 810dbb1cd..aaa127663 100644
--- a/README.adoc
+++ b/README.adoc
@@ -10,7 +10,7 @@
//
-= parallel-consumer (beta)
+= parallel-consumer
:icons:
:toc: macro
:toclevels: 3
@@ -52,10 +52,12 @@ Parallel Apache Kafka client wrapper with client side queueing, a simpler consum
Confluent's https://www.confluent.io/confluent-accelerators/#parallel-consumer[product page for the project].
-WARNING: This is not a Confluent supported product.
-It is an experimental alpha stage accelerator.
+CAUTION: This is not a Confluent supported product.
See the <> section for more information.
+IMPORTANT: This project reached its initial target feature set in Q1 2021 and has been stable since.
+It is actively maintained by the CSID team at Confluent.
+
[[intro]]
This library lets you process messages in parallel via a single Kafka Consumer, meaning you can increase consumer parallelism without increasing the number of partitions in the topic you intend to process.
For many use cases this improves both throughput and latency by reducing load on your brokers.
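As a sketch of the idea above (plain JDK code with hypothetical names, not this library's API): records that share a key are routed to the same single-threaded worker, so per-key order is preserved while distinct keys are processed in parallel, independent of the topic's partition count.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class KeyedFanOutSketch {
    // Route a record key to one of N single-threaded workers: records with the
    // same key land on the same worker (preserving key order), while different
    // keys run concurrently -- regardless of how many partitions exist.
    static int route(String key, int workers) {
        return Math.floorMod(key.hashCode(), workers);
    }

    public static void main(String[] args) throws Exception {
        int workers = 4;
        ExecutorService[] pool = new ExecutorService[workers];
        for (int i = 0; i < workers; i++) {
            pool[i] = Executors.newSingleThreadExecutor();
        }

        AtomicInteger processed = new AtomicInteger();
        // Six records from a single partition, fanned out by key.
        for (String key : List.of("a", "b", "c", "a", "b", "a")) {
            pool[route(key, workers)].submit(processed::incrementAndGet);
        }

        for (ExecutorService e : pool) {
            e.shutdown();
            e.awaitTermination(5, TimeUnit.SECONDS);
        }
        System.out.println(processed.get()); // prints 6
    }
}
```

The library manages this fan-out (plus queueing, retries, and offset tracking) for you; the sketch only shows why key-hashed routing decouples consumer parallelism from partition count.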
@@ -71,6 +73,15 @@ It also opens up new use cases like extreme parallelism, external data enrichmen
An overview article to the library can also be found on Confluent's https://www.confluent.io/blog/[blog]: https://www.confluent.io/blog/introducing-confluent-parallel-message-processing-client/[Introducing the Confluent Parallel Consumer].
+[[demo]]
+.Relative speed demonstration
+--
+.Click on the animated SVG image to open the https://asciinema.org/a/404299[Asciinema.org player].
+image::https://gist.githubusercontent.com/astubbs/26cccaf8b624a53ae26a52dbc00148b1/raw/cbf558b38b0aa624bd7637406579d2a8f00f51db/demo.svg[link="https://asciinema.org/a/404299"]
+--
+
+'''
+
toc::[]
== Motivation
@@ -197,7 +208,7 @@ The end-to-end latency of the responses to these answers needs to be as low as t
We can break out as described below into the tool for processing that step, then return to the Kafka Streams context.
* Provisioning extra machines (virtual or physical) to run multiple clients has a cost; using this library instead avoids the need to deploy any extra instances.
-== Feature List
+== Features List
* Have massively parallel consumption processing without running hundreds or thousands of:
** Kafka consumer clients,
@@ -209,7 +220,10 @@ without operational burden or harming the cluster's performance
* Solution for the https://en.wikipedia.org/wiki/Head-of-line_blocking["head of line"] blocking problem, where continued failure of a single message prevents progress for messages behind it in the queue
* Per `key` concurrent processing, per partition and unordered message processing
* Offsets committed correctly, in order, of only processed messages, regardless of concurrency level or retries
-* Vert.x non-blocking library integration (HTTP and any Vert.x future support)
+* Vert.x and Reactor.io non-blocking library integration
+** Non-blocking I/O work management
+** Vert.x's WebClient and general Vert.x Future support
+** Reactor.io Publisher (Mono/Flux) and Java's CompletableFuture (through `Mono#fromFuture`)
* Reactor non-blocking library integration
* Fair partition traversal
* Zero~ dependencies (`Slf4j` and `Lombok`) for the core module
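The offset-commit guarantee in the list above can be illustrated with a small self-contained sketch (not the library's actual implementation): when records complete out of order, only the end of the contiguous run of completed offsets is safe to commit.

```java
import java.util.Set;
import java.util.TreeSet;

// Illustrative sketch only: given the last committed offset and the set of
// offsets whose processing has completed (possibly out of order), the highest
// offset that is safe to commit is the end of the contiguous completed run.
public class OffsetCommitSketch {
    static long highestCommittable(long lastCommitted, Set<Long> completed) {
        TreeSet<Long> sorted = new TreeSet<>(completed);
        long next = lastCommitted + 1;
        while (sorted.contains(next)) {
            next++; // extend the contiguous run of completed offsets
        }
        return next - 1; // equals lastCommitted when no progress can be made
    }

    public static void main(String[] args) {
        // Offsets 1, 2, 3 and 5 have completed, but 4 is still in flight, so
        // committing past 3 would risk losing record 4 on restart.
        System.out.println(highestCommittable(0, Set.of(1L, 2L, 3L, 5L))); // prints 3
    }
}
```

The library itself goes further, encoding which offsets beyond that point have already succeeded so they are not reprocessed, but the contiguity rule above is the core of the guarantee.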
@@ -689,28 +703,21 @@ image::https://lucid.app/publicSegments/view/43f2740c-2a7f-4b7f-909e-434a5bbe3fb
== Roadmap
For released changes, see the link:CHANGELOG.adoc[CHANGELOG].
-
-=== Short Term - What we're working on nowish ⏰
-
-* Depth~ first or breadth first partition traversal
-* JavaRX and other streaming modules
+For features in development, have a look at the https://github.com/confluentinc/parallel-consumer/issues[GitHub issues].
=== Medium Term - What's up next ⏲
-* https://github.com/confluentinc/parallel-consumer/issues/21[Automatic fanout] (automatic selection of concurrency level based on downstream back pressure) (https://github.com/confluentinc/parallel-consumer/pull/22[draft PR])
-* Support for general Vert.x Verticles (non-blocking libraries)
-* Dead Letter Queue (DLQ) handling
-* Non-blocking I/O work management
-** More customisable handling of HTTP interactions
-** Chance to batch multiple consumer records into a single or multiple http request objects
* https://github.com/confluentinc/parallel-consumer/issues/28[Distributed tracing integration]
* https://github.com/confluentinc/parallel-consumer/issues/24[Distributed rate limiting]
* https://github.com/confluentinc/parallel-consumer/issues/27[Metrics]
+* https://github.com/confluentinc/parallel-consumer/issues/65[More customisable handling] of HTTP interactions
+* Chance to https://github.com/confluentinc/parallel-consumer/issues/18[batch multiple consumer records] into one or more HTTP request objects
=== Long Term - The future ☁️
-* Apache Kafka KIP?
-* Call backs only offset has been committed
+* https://github.com/confluentinc/parallel-consumer/issues/21[Automatic fanout] (automatic selection of concurrency level based on downstream back pressure) (https://github.com/confluentinc/parallel-consumer/pull/22[draft PR])
+* Dead Letter Queue (DLQ) handling
+* Callbacks only once the offset has been committed
== Usage Requirements
diff --git a/src/docs/README.adoc b/src/docs/README.adoc
index 05542d27d..dcb2de8f8 100644
--- a/src/docs/README.adoc
+++ b/src/docs/README.adoc
@@ -10,7 +10,7 @@
//
-= parallel-consumer (beta)
+= parallel-consumer
:icons:
:toc: macro
:toclevels: 3
@@ -52,10 +52,12 @@ Parallel Apache Kafka client wrapper with client side queueing, a simpler consum
Confluent's https://www.confluent.io/confluent-accelerators/#parallel-consumer[product page for the project].
-WARNING: This is not a Confluent supported product.
-It is an experimental alpha stage accelerator.
+CAUTION: This is not a Confluent supported product.
See the <> section for more information.
+IMPORTANT: This project reached its initial target feature set in Q1 2021 and has been stable since.
+It is actively maintained by the CSID team at Confluent.
+
[[intro]]
This library lets you process messages in parallel via a single Kafka Consumer, meaning you can increase consumer parallelism without increasing the number of partitions in the topic you intend to process.
For many use cases this improves both throughput and latency by reducing load on your brokers.
@@ -69,6 +71,15 @@ include::{project_root}/parallel-consumer-examples/parallel-consumer-example-cor
An overview article to the library can also be found on Confluent's https://www.confluent.io/blog/[blog]: https://www.confluent.io/blog/introducing-confluent-parallel-message-processing-client/[Introducing the Confluent Parallel Consumer].
+[[demo]]
+.Relative speed demonstration
+--
+.Click on the animated SVG image to open the https://asciinema.org/a/404299[Asciinema.org player].
+image::https://gist.githubusercontent.com/astubbs/26cccaf8b624a53ae26a52dbc00148b1/raw/cbf558b38b0aa624bd7637406579d2a8f00f51db/demo.svg[link="https://asciinema.org/a/404299"]
+--
+
+'''
+
toc::[]
== Motivation
@@ -195,7 +206,7 @@ The end-to-end latency of the responses to these answers needs to be as low as t
We can break out as described below into the tool for processing that step, then return to the Kafka Streams context.
* Provisioning extra machines (virtual or physical) to run multiple clients has a cost; using this library instead avoids the need to deploy any extra instances.
-== Feature List
+== Features List
* Have massively parallel consumption processing without running hundreds or thousands of:
** Kafka consumer clients,
@@ -207,7 +218,10 @@ without operational burden or harming the cluster's performance
* Solution for the https://en.wikipedia.org/wiki/Head-of-line_blocking["head of line"] blocking problem, where continued failure of a single message prevents progress for messages behind it in the queue
* Per `key` concurrent processing, per partition and unordered message processing
* Offsets committed correctly, in order, of only processed messages, regardless of concurrency level or retries
-* Vert.x non-blocking library integration (HTTP and any Vert.x future support)
+* Vert.x and Reactor.io non-blocking library integration
+** Non-blocking I/O work management
+** Vert.x's WebClient and general Vert.x Future support
+** Reactor.io Publisher (Mono/Flux) and Java's CompletableFuture (through `Mono#fromFuture`)
* Reactor non-blocking library integration
* Fair partition traversal
* Zero~ dependencies (`Slf4j` and `Lombok`) for the core module
@@ -610,28 +624,21 @@ image::https://lucid.app/publicSegments/view/43f2740c-2a7f-4b7f-909e-434a5bbe3fb
== Roadmap
For released changes, see the link:CHANGELOG.adoc[CHANGELOG].
-
-=== Short Term - What we're working on nowish ⏰
-
-* Depth~ first or breadth first partition traversal
-* JavaRX and other streaming modules
+For features in development, have a look at the https://github.com/confluentinc/parallel-consumer/issues[GitHub issues].
=== Medium Term - What's up next ⏲
-* https://github.com/confluentinc/parallel-consumer/issues/21[Automatic fanout] (automatic selection of concurrency level based on downstream back pressure) (https://github.com/confluentinc/parallel-consumer/pull/22[draft PR])
-* Support for general Vert.x Verticles (non-blocking libraries)
-* Dead Letter Queue (DLQ) handling
-* Non-blocking I/O work management
-** More customisable handling of HTTP interactions
-** Chance to batch multiple consumer records into a single or multiple http request objects
* https://github.com/confluentinc/parallel-consumer/issues/28[Distributed tracing integration]
* https://github.com/confluentinc/parallel-consumer/issues/24[Distributed rate limiting]
* https://github.com/confluentinc/parallel-consumer/issues/27[Metrics]
+* https://github.com/confluentinc/parallel-consumer/issues/65[More customisable handling] of HTTP interactions
+* Chance to https://github.com/confluentinc/parallel-consumer/issues/18[batch multiple consumer records] into one or more HTTP request objects
=== Long Term - The future ☁️
-* Apache Kafka KIP?
-* Call backs only offset has been committed
+* https://github.com/confluentinc/parallel-consumer/issues/21[Automatic fanout] (automatic selection of concurrency level based on downstream back pressure) (https://github.com/confluentinc/parallel-consumer/pull/22[draft PR])
+* Dead Letter Queue (DLQ) handling
+* Callbacks only once the offset has been committed
== Usage Requirements