Getting duplicate events on Kafka connector restart #1682
The idempotent repository is in memory, so this is expected: its state is lost on restart. You need to set deleteAfterRead to true.
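If deleting after read is acceptable, the change is a single connector property (a sketch; the rest of the connector config stays as posted below):

```properties
# Delete each S3 object once it has been consumed, so a restart
# cannot re-read it. Only viable if nothing else needs the object.
camel.kamelet.aws-s3-source.deleteAfterRead = true
```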
Is there any other workaround? We don't want to delete the message in the S3 bucket once it's consumed, because the payload/message in the S3 bucket is also used by other functionality.
Use a Kafka topic:
https://camel.apache.org/camel-kafka-connector/4.8.x/user-guide/idempotency.html
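Based on the idempotency options already visible in the config dump below, switching from the in-memory repository to a Kafka-topic-backed one would look roughly like this (a sketch; the bootstrap server address and topic name are placeholders to adapt to your cluster):

```properties
# Persist processed-message state in a Kafka topic instead of memory,
# so deduplication survives connector restarts.
camel.idempotency.enabled = true
camel.idempotency.repository.type = kafka
camel.idempotency.kafka.bootstrap.servers = my-cluster-kafka-bootstrap:9092
camel.idempotency.kafka.topic = kafka_idempotent_repository
```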
Tried the Kafka topic approach, but got an exception while deploying the connector.
As far as I remember, the Kafka topic used for this purpose must be accessible without any authentication. That's why it's failing.
Hi,
I am using the Apache Camel AWS S3 source connector with the Strimzi operator. Whenever the Kafka connector is restarted after a config update, duplicate events appear in Kafka: older payloads already present in the S3 bucket are delivered again.
Below is the full connector configuration:
```
CamelAwss3sourceSourceConnectorConfig values:
camel.aggregation.size = 10
camel.aggregation.timeout = 500
camel.beans.aggregate = null
camel.error.handler = default
camel.error.handler.max.redeliveries = 0
camel.error.handler.redelivery.delay = 1000
camel.idempotency.enabled = true
camel.idempotency.expression.header = null
camel.idempotency.expression.type = body
camel.idempotency.kafka.bootstrap.servers = localhost:9092
camel.idempotency.kafka.max.cache.size = 1000
camel.idempotency.kafka.poll.duration.ms = 100
camel.idempotency.kafka.topic = kafka_idempotent_repository
camel.idempotency.memory.dimension = 100
camel.idempotency.repository.type = memory
camel.kamelet.aws-s3-source.accessKey = [hidden]
camel.kamelet.aws-s3-source.autoCreateBucket = false
camel.kamelet.aws-s3-source.bucketNameOrArn = test-bucket
camel.kamelet.aws-s3-source.delay = 500
camel.kamelet.aws-s3-source.deleteAfterRead = false
camel.kamelet.aws-s3-source.forcePathStyle = false
camel.kamelet.aws-s3-source.ignoreBody = false
camel.kamelet.aws-s3-source.maxMessagesPerPoll = 10
camel.kamelet.aws-s3-source.overrideEndpoint = false
camel.kamelet.aws-s3-source.prefix = schema/feature
camel.kamelet.aws-s3-source.region = ap-southeast-1
camel.kamelet.aws-s3-source.secretKey = [hidden]
camel.kamelet.aws-s3-source.uriEndpointOverride = null
camel.kamelet.aws-s3-source.useDefaultCredentialsProvider = false
camel.map.headers = true
camel.map.properties = true
camel.remove.headers.pattern =
camel.source.camelMessageHeaderKey = CamelAwsS3Key
camel.source.component = null
camel.source.contentLogLevel = OFF
camel.source.marshal = null
camel.source.maxBatchPollSize = 1000
camel.source.maxNotCommittedRecords = 1024
camel.source.maxPollDuration = 1000
camel.source.pollingConsumerBlockTimeout = 0
camel.source.pollingConsumerBlockWhenFull = true
camel.source.pollingConsumerQueueSize = 1000
camel.source.unmarshal = null
camel.source.url = null
topics = commerce-test
```
camel.idempotency.enabled = true is already set, but duplicate events still appear on every Kafka connector restart.
Using Apache Camel library version 4.4.2.
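The behavior described here can be illustrated with a minimal sketch (not the Camel implementation; class and object names are made up for illustration): a memory-backed idempotent repository forgets every processed key when the process restarts, so the next poll of the bucket re-emits everything, whereas a repository persisted outside the process would not.

```python
class MemoryIdempotentRepository:
    """Deduplication keys live only in this process's memory.

    A connector restart creates a fresh instance, so all previously
    seen keys are forgotten -- the root cause of the duplicates.
    """

    def __init__(self):
        self.seen = set()

    def add(self, key):
        """Return True if the key is new (process it), False if duplicate."""
        if key in self.seen:
            return False
        self.seen.add(key)
        return True


def poll_bucket(repo, keys):
    """Emit only the keys the repository has not seen yet."""
    return [k for k in keys if repo.add(k)]


# Hypothetical object keys matching the configured prefix schema/feature.
s3_keys = ["schema/feature/a.json", "schema/feature/b.json"]

repo = MemoryIdempotentRepository()
first_run = poll_bucket(repo, s3_keys)   # both keys are new, both emitted

# Simulate a connector restart: the in-memory state is gone.
repo = MemoryIdempotentRepository()
second_run = poll_bucket(repo, s3_keys)  # same keys emitted again

print(first_run == second_run)  # True: every object is duplicated
```

This is why camel.idempotency.enabled = true alone does not help here: with camel.idempotency.repository.type = memory the dedup state does not outlive the connector process.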