
Commit 065beae

Merge pull request #250 from splunk/prep-for-release
release prep for 2.0
2 parents: a9761a0 + 43b6612 · commit: 065beae

File tree

3 files changed: +9 −8 lines changed


README.md

+5 −4
```diff
@@ -27,7 +27,7 @@ Splunk Connect for Kafka is a Kafka Connect Sink for Splunk with the following f
 1. [Start](https://kafka.apache.org/quickstart) your Kafka Cluster and confirm it is running.
 2. If this is a new install, create a test topic (e.g. `perf`). Inject events into the topic. This can be done using [Kafka data-gen-app](https://github.com/dtregonning/kafka-data-gen) or the Kafka-bundled [kafka-console-producer](https://kafka.apache.org/quickstart#quickstart_send).
 3. Within your Kafka Connect deployment, adjust the values for `bootstrap.servers` and `plugin.path` inside the `$KAFKA_HOME/config/connect-distributed.properties` file. `bootstrap.servers` should be configured to point to your Kafka brokers; `plugin.path` should be configured to point to the install directory of your Kafka Connect Sink and Source Connectors. For more information on installing Kafka Connect plugins, refer to the [Confluent documentation](https://docs.confluent.io/current/connect/userguide.html#id3).
-4. Place the jar file created by `mvn package` (`splunk-kafka-connect-[VERSION].jar`) in or under the location specified in `plugin.path`.
+4. Place the jar file created by `mvn package` (`splunk-kafka-connect-[VERSION].jar`) in or under the location specified in `plugin.path`.
 5. Run `$KAFKA_HOME/bin/connect-distributed.sh $KAFKA_HOME/config/connect-distributed.properties` to start Kafka Connect.
 6. Run the following command to create connector tasks. Adjust `topics` to configure the Kafka topic to be ingested, `splunk.indexes` to set the destination Splunk indexes, `splunk.hec.token` to set your HTTP Event Collector (HEC) token, and `splunk.hec.uri` to the URI of your destination Splunk HEC endpoint. For more information on Splunk HEC configuration, refer to the [Splunk documentation](http://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsetheHTTPEventCollector).
```
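Steps 2 and 5 lean on stock Kafka tooling. A minimal sketch of the topic setup from step 2, assuming `$KAFKA_HOME` is set and a broker on `localhost:9092` (older Kafka releases use `--zookeeper`/`--broker-list` in place of `--bootstrap-server`):

```sh
# Create the test topic from step 2 (flags assume a recent Kafka release).
$KAFKA_HOME/bin/kafka-topics.sh --create --topic perf --bootstrap-server localhost:9092

# kafka-console-producer reads one event per line from stdin.
echo 'test event one' | $KAFKA_HOME/bin/kafka-console-producer.sh --topic perf --bootstrap-server localhost:9092
```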

````diff
@@ -42,7 +42,7 @@ Splunk Connect for Kafka is a Kafka Connect Sink for Splunk with the following f
     "splunk.hec.uri": "<SPLUNK_HEC_URI:SPLUNK_HEC_PORT>",
     "splunk.hec.token": "<YOUR_TOKEN>"
   }
-}'
+}'
 ```
 
 7. Verify that data is flowing into your Splunk platform instance by searching using the index specified in the configuration.
````
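The hunk above shows only the tail of the step-6 request. For orientation, a minimal sketch of the full connector-creation call, assuming Kafka Connect's REST API is listening on `localhost:8083`; the connector name, `tasks.max`, topic, and index values are illustrative placeholders, not part of this commit:

```sh
# Register the sink connector with Kafka Connect (values are placeholders).
curl localhost:8083/connectors -X POST -H "Content-Type: application/json" -d '{
  "name": "kafka-connect-splunk",
  "config": {
    "connector.class": "com.splunk.kafka.connect.SplunkSinkConnector",
    "tasks.max": "3",
    "topics": "perf",
    "splunk.indexes": "main",
    "splunk.hec.uri": "<SPLUNK_HEC_URI:SPLUNK_HEC_PORT>",
    "splunk.hec.token": "<YOUR_TOKEN>"
  }
}'
```

A successful POST echoes the connector's configuration back as JSON; step 7's index search then confirms the end-to-end flow.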
```diff
@@ -111,7 +111,7 @@ Use the below schema to configure Splunk Connect for Kafka
    "splunk.hec.socket.timeout": "<timeout in seconds>",
    "splunk.hec.track.data": "<true|false, tracking data loss and latency, for debugging lagging and data loss>",
    "splunk.header.support": "<true|false>",
-   "splunk.header.custom": "<list-of-custom-headers-to-be-used-from-kafka-headers-separated-by-comma>",
+   "splunk.header.custom": "<list-of-custom-headers-to-be-used-from-kafka-headers-separated-by-comma>",
    "splunk.header.index": "<header-value-to-be-used-as-splunk-index>",
    "splunk.header.source": "<header-value-to-be-used-as-splunk-source>",
    "splunk.header.sourcetype": "<header-value-to-be-used-as-splunk-sourcetype>",
```
```diff
@@ -154,6 +154,7 @@ Use the below schema to configure Splunk Connect for Kafka
 | `splunk.hec.max.outstanding.events` | Maximum number of unacknowledged events kept in memory by the connector. Triggers a back-pressure event to slow down collection when reached. | `1000000` |
 | `splunk.hec.max.retries` | Number of times a failed batch will attempt to resend before dropping its events completely. Warning: this will result in data loss; the default of `-1` retries indefinitely. | `-1` |
 | `splunk.hec.backoff.threshhold.seconds` | The amount of time Splunk Connect for Kafka waits before attempting to resend after errors from a HEC endpoint. | `60` |
+| `splunk.hec.lb.poll.interval` | Polling interval, in seconds (increase for less frequent polling, decrease for more frequent polling). | `120` |
 ### Acknowledgement Parameters
 #### Use Ack
 | Name | Description | Default Value |
```
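Taken together, these rows plus the new `splunk.hec.lb.poll.interval` are the connector's delivery-tuning knobs. Spelled out as a config fragment, with each key set to its documented default:

```json
{
  "splunk.hec.max.outstanding.events": "1000000",
  "splunk.hec.max.retries": "-1",
  "splunk.hec.backoff.threshhold.seconds": "60",
  "splunk.hec.lb.poll.interval": "120"
}
```

Bounding `splunk.hec.max.retries` trades durability for liveness; the documented data-loss warning applies to any value other than `-1`.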
```diff
@@ -193,7 +194,7 @@ See [Splunk Docs](https://docs.splunk.com/Documentation/KafkaConnect/latest/User
 
 ## Benchmark Results
 
-See [Splunk Docs](https://docs.splunk.com/Documentation/KafkaConnect/latest/User/Planyourdeployment) for benchmarking results.
+See [Splunk Docs](https://docs.splunk.com/Documentation/KafkaConnect/latest/User/Planyourdeployment) for benchmarking results.
 
 ## Scale out your environment
 
```
pom.xml

+2 −2
```diff
@@ -6,7 +6,7 @@
 
 <groupId>com.github.splunk.kafka.connect</groupId>
 <artifactId>splunk-kafka-connect</artifactId>
-<version>v1.3.0-SNAPSHOT</version>
+<version>v2.0</version>
 <name>splunk-kafka-connect</name>
 
 <properties>
```
```diff
@@ -308,4 +308,4 @@
 
 </plugins>
 </build>
-</project>
+</project>
```
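Maven names the packaged jar `<artifactId>-<version>.jar`, so this bump changes the artifact that README step 4 places on `plugin.path`. A quick sketch, assuming the default `target/` output directory and no classifier:

```sh
# Rebuild and confirm the renamed artifact (paths assume Maven defaults).
mvn package
ls target/splunk-kafka-connect-v2.0.jar
```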

src/main/resources/version.properties

+2 −2
```diff
@@ -1,3 +1,3 @@
 githash=
-gitbranch=release/1.3.x
-gitversion=v1.3.0-SNAPSHOT
+gitbranch=release/2.0.x
+gitversion=v2.0
```
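Maven packages `src/main/resources` at the jar root, so the new branch and version strings can be spot-checked in the built artifact. A sketch, reusing the jar path assumed above:

```sh
# Print the version metadata baked into the jar (name assumes Maven defaults).
unzip -p target/splunk-kafka-connect-v2.0.jar version.properties
# githash=
# gitbranch=release/2.0.x
# gitversion=v2.0
```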
