com.hazelcast.sql.HazelcastSqlException: com.hazelcast.jet.JetException: Exception in ProcessorTasklet{0639-28cb-55ce-0001/Kafka[public.t_9019ff9c_358b_4fb2_a42f_f25154bffd1b]#0}: org.apache.kafka.common.errors.SerializationException: Error registering Avro schema: {"type":"record","name":"sql","namespace":"jet","fields":[{"name":"name","type":["null","string"],"default":null}]}
at com.hazelcast.sql.impl.QueryUtils.toPublicException(QueryUtils.java:78)
at com.hazelcast.sql.impl.SqlServiceImpl.execute(SqlServiceImpl.java:232)
at com.hazelcast.sql.impl.SqlServiceImpl.execute(SqlServiceImpl.java:198)
at com.hazelcast.sql.impl.SqlServiceImpl.execute(SqlServiceImpl.java:194)
at com.hazelcast.sql.SqlService.execute(SqlService.java:144)
at com.hazelcast.jet.sql.impl.connector.kafka.SqlAvroTest.assertTopicEventually(SqlAvroTest.java:454)
at com.hazelcast.jet.sql.impl.connector.kafka.SqlAvroTest.test_explicitKeyAndValueSerializers(SqlAvroTest.java:397)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:115)
at com.hazelcast.test.FailOnTimeoutStatement$CallableStatement.call(FailOnTimeoutStatement.java:107)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.CompletionException: com.hazelcast.jet.JetException: Exception in ProcessorTasklet{0639-28cb-55ce-0001/Kafka[public.t_9019ff9c_358b_4fb2_a42f_f25154bffd1b]#0}: org.apache.kafka.common.errors.SerializationException: Error registering Avro schema: {"type":"record","name":"sql","namespace":"jet","fields":[{"name":"name","type":["null","string"],"default":null}]}
at java.util.concurrent.CompletableFuture.reportJoin(CompletableFuture.java:375)
at java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1947)
at com.hazelcast.jet.Job.join(Job.java:148)
at com.hazelcast.jet.sql.impl.JetPlanExecutor.execute(JetPlanExecutor.java:164)
at com.hazelcast.jet.sql.impl.JetPlan$SelectOrSinkPlan.execute(JetPlan.java:341)
at com.hazelcast.jet.sql.impl.JetSqlCoreBackendImpl.execute(JetSqlCoreBackendImpl.java:90)
at com.hazelcast.sql.impl.SqlServiceImpl.executeJet(SqlServiceImpl.java:343)
at com.hazelcast.sql.impl.SqlServiceImpl.execute(SqlServiceImpl.java:332)
at com.hazelcast.sql.impl.SqlServiceImpl.query0(SqlServiceImpl.java:273)
at com.hazelcast.sql.impl.SqlServiceImpl.execute(SqlServiceImpl.java:219)
... 17 more
Caused by: com.hazelcast.jet.JetException: Exception in ProcessorTasklet{0639-28cb-55ce-0001/Kafka[public.t_9019ff9c_358b_4fb2_a42f_f25154bffd1b]#0}: org.apache.kafka.common.errors.SerializationException: Error registering Avro schema: {"type":"record","name":"sql","namespace":"jet","fields":[{"name":"name","type":["null","string"],"default":null}]}
at com.hazelcast.jet.impl.execution.TaskletExecutionService$BlockingWorker.run(TaskletExecutionService.java:305)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
... 1 more
Caused by: org.apache.kafka.common.errors.SerializationException: Error registering Avro schema: {"type":"record","name":"sql","namespace":"jet","fields":[{"name":"name","type":["null","string"],"default":null}]}
Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Register operation timed out; error code: 50002
at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:208)
at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:235)
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:326)
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:318)
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:313)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.registerAndGetId(CachedSchemaRegistryClient.java:115)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:154)
at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:79)
at io.confluent.kafka.serializers.KafkaAvroSerializer.serialize(KafkaAvroSerializer.java:53)
at org.apache.kafka.common.serialization.Serializer.serialize(Serializer.java:60)
at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:876)
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:846)
at com.hazelcast.jet.kafka.impl.WriteKafkaP.process(WriteKafkaP.java:128)
at com.hazelcast.jet.impl.execution.ProcessorTasklet.processInbox(ProcessorTasklet.java:421)
at com.hazelcast.jet.impl.execution.ProcessorTasklet.stateMachineStep(ProcessorTasklet.java:296)
at com.hazelcast.jet.impl.execution.ProcessorTasklet.stateMachineStep(ProcessorTasklet.java:291)
at com.hazelcast.jet.impl.execution.ProcessorTasklet.stateMachineStep(ProcessorTasklet.java:273)
at com.hazelcast.jet.impl.execution.ProcessorTasklet.call(ProcessorTasklet.java:247)
at com.hazelcast.jet.impl.execution.TaskletExecutionService$BlockingWorker.run(TaskletExecutionService.java:294)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Standard output:
2021-05-22 10:38:53,185 INFO [test_explicitKeyAndValueSerializers] admin.AdminUtils$ (Logging.scala:info(66)) - Topic creation {"version":1,"partitions":{"2":[0],"1":[0],"3":[0],"0":[0]}}
2021-05-22 10:38:53,264 INFO [controller-event-thread] controller.KafkaController (Logging.scala:info(66)) - [Controller id=0] New topics: [Set(t_9019ff9c_358b_4fb2_a42f_f25154bffd1b)], deleted topics: [Set()], new partition replica assignment [Map(t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-2 -> Vector(0), t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-1 -> Vector(0), t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-3 -> Vector(0), t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-0 -> Vector(0))]
2021-05-22 10:38:53,286 INFO [controller-event-thread] controller.KafkaController (Logging.scala:info(66)) - [Controller id=0] New partition creation callback for t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-2,t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-1,t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-3,t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-0
2021-05-22 10:38:53,291 [DEBUG] [test_explicitKeyAndValueSerializers] [c.h.j.s.i.JetSqlBackend]: [127.0.0.1]:5701 [jet] [4.5.1-SNAPSHOT] Before logical opt:
LogicalTableModify(table=[[hazelcast, public, t_9019ff9c_358b_4fb2_a42f_f25154bffd1b[projects=[0, 1, 2, 3]]]], operation=[INSERT], flattened=[false])
LogicalProject(key_name=[$1], __key=[null:OBJECT], value_name=[$0], this=[null:OBJECT])
LogicalValues(tuples=[[{ _UTF-16LE'Bob', _UTF-16LE'Alice' }]])
2021-05-22 10:38:53,317 [DEBUG] [test_explicitKeyAndValueSerializers] [c.h.j.s.i.JetSqlBackend]: [127.0.0.1]:5701 [jet] [4.5.1-SNAPSHOT] After logical opt:
InsertLogicalRel(table=[[hazelcast, public, t_9019ff9c_358b_4fb2_a42f_f25154bffd1b[projects=[0, 1, 2, 3]]]], operation=[INSERT], flattened=[false])
ProjectLogicalRel(key_name=[$1], __key=[null:OBJECT], value_name=[$0], this=[null:OBJECT])
ValuesLogicalRel(tuples=[[{ Bob, Alice }]])
2021-05-22 10:38:53,408 [DEBUG] [test_explicitKeyAndValueSerializers] [c.h.j.s.i.JetSqlBackend]: [127.0.0.1]:5701 [jet] [4.5.1-SNAPSHOT] After physical opt:
InsertPhysicalRel(table=[[hazelcast, public, t_9019ff9c_358b_4fb2_a42f_f25154bffd1b[projects=[0, 1, 2, 3]]]], operation=[INSERT], flattened=[false])
ProjectPhysicalRel(key_name=[$1], __key=[null:OBJECT], value_name=[$0], this=[null:OBJECT])
ValuesPhysicalRel(tuples=[[{ Bob, Alice }]])
2021-05-22 10:38:53,421 [ INFO] [hz.naughty_borg.cached.thread-4] [c.h.j.i.JobCoordinationService]: [127.0.0.1]:5701 [jet] [4.5.1-SNAPSHOT] Starting job 0639-28cb-55ce-0001 based on submit request
2021-05-22 10:38:53,423 [ INFO] [hz.naughty_borg.cached.thread-4] [c.h.j.i.MasterJobContext]: [127.0.0.1]:5701 [jet] [4.5.1-SNAPSHOT] Didn't find any snapshot to restore for job '0639-28cb-55ce-0001', execution 0639-28cb-55cf-0001
2021-05-22 10:38:53,423 [ INFO] [hz.naughty_borg.cached.thread-4] [c.h.j.i.MasterJobContext]: [127.0.0.1]:5701 [jet] [4.5.1-SNAPSHOT] Start executing job '0639-28cb-55ce-0001', execution 0639-28cb-55cf-0001, execution graph in DOT format:
digraph DAG {
"Values" [localParallelism=1];
"Project" [localParallelism=72];
"Project(Kafka[public.t_9019ff9c_358b_4fb2_a42f_f25154bffd1b])" [localParallelism=72];
"Kafka[public.t_9019ff9c_358b_4fb2_a42f_f25154bffd1b]" [localParallelism=1];
"Values" -> "Project" [queueSize=1024];
"Project" -> "Project(Kafka[public.t_9019ff9c_358b_4fb2_a42f_f25154bffd1b])" [queueSize=1024];
"Project(Kafka[public.t_9019ff9c_358b_4fb2_a42f_f25154bffd1b])" -> "Kafka[public.t_9019ff9c_358b_4fb2_a42f_f25154bffd1b]" [queueSize=1024];
}
HINT: You can use graphviz or http://viz-js.com to visualize the printed graph.
2021-05-22 10:38:53,423 [DEBUG] [hz.naughty_borg.cached.thread-4] [c.h.j.i.MasterJobContext]: [127.0.0.1]:5701 [jet] [4.5.1-SNAPSHOT] Building execution plan for job '0639-28cb-55ce-0001', execution 0639-28cb-55cf-0001
2021-05-22 10:38:53,424 [DEBUG] [hz.naughty_borg.cached.thread-4] [c.h.j.i.MasterJobContext]: [127.0.0.1]:5701 [jet] [4.5.1-SNAPSHOT] Built execution plans for job '0639-28cb-55ce-0001', execution 0639-28cb-55cf-0001
2021-05-22 10:38:53,426 [DEBUG] [hz.naughty_borg.cached.thread-4] [c.h.j.i.o.InitExecutionOperation]: [127.0.0.1]:5701 [jet] [4.5.1-SNAPSHOT] Initializing execution plan for job 0639-28cb-55ce-0001, execution 0639-28cb-55cf-0001 from [127.0.0.1]:5701
2021-05-22 10:38:53,536 [ INFO] [hz.naughty_borg.cached.thread-4] [c.h.j.i.JobExecutionService]: [127.0.0.1]:5701 [jet] [4.5.1-SNAPSHOT] Execution plan for jobId=0639-28cb-55ce-0001, jobName='0639-28cb-55ce-0001', executionId=0639-28cb-55cf-0001 initialized
2021-05-22 10:38:53,536 [DEBUG] [hz.naughty_borg.cached.thread-4] [c.h.j.i.MasterJobContext]: [127.0.0.1]:5701 [jet] [4.5.1-SNAPSHOT] Init of job '0639-28cb-55ce-0001', execution 0639-28cb-55cf-0001 was successful
2021-05-22 10:38:53,536 [DEBUG] [hz.naughty_borg.cached.thread-4] [c.h.j.i.MasterJobContext]: [127.0.0.1]:5701 [jet] [4.5.1-SNAPSHOT] Executing job '0639-28cb-55ce-0001', execution 0639-28cb-55cf-0001
2021-05-22 10:38:53,536 [ INFO] [hz.naughty_borg.cached.thread-4] [c.h.j.i.JobExecutionService]: [127.0.0.1]:5701 [jet] [4.5.1-SNAPSHOT] Start execution of job '0639-28cb-55ce-0001', execution 0639-28cb-55cf-0001 from coordinator [127.0.0.1]:5701
2021-05-22 10:38:53,547 [DEBUG] [hz.naughty_borg.jet.blocking.thread-1] [c.h.j.k.i.WriteKafkaP]: [127.0.0.1]:5701 [jet] [4.5.1-SNAPSHOT] [0639-28cb-55ce-0001/Kafka[public.t_9019ff9c_358b_4fb2_a42f_f25154bffd1b]#0] Actual pool size used: 1
2021-05-22 10:38:53,548 INFO [hz.naughty_borg.jet.blocking.thread-1] producer.ProducerConfig (AbstractConfig.java:logAll(279)) - ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [127.0.0.1:34342]
buffer.memory = 33554432
client.dns.lookup = default
client.id =
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = false
interceptor.classes = []
key.serializer = class io.confluent.kafka.serializers.KafkaAvroSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class io.confluent.kafka.serializers.KafkaAvroSerializer
2021-05-22 10:38:53,549 INFO [hz.naughty_borg.jet.blocking.thread-1] serializers.KafkaAvroSerializerConfig (AbstractConfig.java:logAll(179)) - KafkaAvroSerializerConfig values:
schema.registry.url = [http://0.0.0.0:37400/]
auto.register.schemas = true
max.schemas.per.subject = 1000
basic.auth.credentials.source = URL
schema.registry.basic.auth.user.info = [hidden]
value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
2021-05-22 10:38:53,549 INFO [hz.naughty_borg.jet.blocking.thread-1] serializers.KafkaAvroSerializerConfig (AbstractConfig.java:logAll(179)) - KafkaAvroSerializerConfig values:
schema.registry.url = [http://0.0.0.0:37400/]
auto.register.schemas = true
max.schemas.per.subject = 1000
basic.auth.credentials.source = URL
schema.registry.basic.auth.user.info = [hidden]
value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
2021-05-22 10:38:53,566 WARN [hz.naughty_borg.jet.blocking.thread-1] producer.ProducerConfig (AbstractConfig.java:logUnused(287)) - The configuration 'key.deserializer' was supplied but isn't a known config.
2021-05-22 10:38:53,566 WARN [hz.naughty_borg.jet.blocking.thread-1] producer.ProducerConfig (AbstractConfig.java:logUnused(287)) - The configuration 'value.deserializer' was supplied but isn't a known config.
2021-05-22 10:38:53,566 WARN [hz.naughty_borg.jet.blocking.thread-1] producer.ProducerConfig (AbstractConfig.java:logUnused(287)) - The configuration 'auto.offset.reset' was supplied but isn't a known config.
2021-05-22 10:38:53,566 INFO [hz.naughty_borg.jet.blocking.thread-1] utils.AppInfoParser (AppInfoParser.java:<init>(109)) - Kafka version: 2.2.2
2021-05-22 10:38:53,566 INFO [hz.naughty_borg.jet.blocking.thread-1] utils.AppInfoParser (AppInfoParser.java:<init>(110)) - Kafka commitId: 1d348535a0a747d1
2021-05-22 10:38:53,724 INFO [data-plane-kafka-request-handler-3] server.ReplicaFetcherManager (Logging.scala:info(66)) - [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-3, t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-0, t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-1, t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-2)
2021-05-22 10:38:53,859 INFO [data-plane-kafka-request-handler-3] log.Log (Logging.scala:info(66)) - [Log partition=t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-3, dir=/tmp/kafka-2941044051333788778] Loading producer state till offset 0 with message format version 2
2021-05-22 10:38:53,859 INFO [data-plane-kafka-request-handler-3] log.Log (Logging.scala:info(66)) - [Log partition=t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-3, dir=/tmp/kafka-2941044051333788778] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 0 ms
2021-05-22 10:38:53,861 INFO [data-plane-kafka-request-handler-3] log.LogManager (Logging.scala:info(66)) - Created log for partition t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-3 in /tmp/kafka-2941044051333788778 with properties {compression.type -> producer, message.format.version -> 2.2-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}.
2021-05-22 10:38:53,862 INFO [data-plane-kafka-request-handler-3] cluster.Partition (Logging.scala:info(66)) - [Partition t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-3 broker=0] No checkpointed highwatermark is found for partition t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-3
2021-05-22 10:38:53,862 INFO [data-plane-kafka-request-handler-3] cluster.Replica (Logging.scala:info(66)) - Replica loaded for partition t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-3 with initial high watermark 0
2021-05-22 10:38:53,862 INFO [data-plane-kafka-request-handler-3] cluster.Partition (Logging.scala:info(66)) - [Partition t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-3 broker=0] t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-3 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1
2021-05-22 10:38:53,917 WARN [kafka-producer-network-thread | producer-6] clients.NetworkClient (NetworkClient.java:handleCompletedMetadataResponse(1031)) - [Producer clientId=producer-6] Error while fetching metadata with correlation id 1 : {t_9019ff9c_358b_4fb2_a42f_f25154bffd1b=LEADER_NOT_AVAILABLE}
2021-05-22 10:38:53,917 INFO [kafka-producer-network-thread | producer-6] clients.Metadata (Metadata.java:update(379)) - Cluster ID: JS3oSsq1QlafyXqblpJeKQ
2021-05-22 10:38:53,932 INFO [data-plane-kafka-request-handler-3] log.Log (Logging.scala:info(66)) - [Log partition=t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-0, dir=/tmp/kafka-2941044051333788778] Loading producer state till offset 0 with message format version 2
2021-05-22 10:38:53,933 INFO [data-plane-kafka-request-handler-3] log.Log (Logging.scala:info(66)) - [Log partition=t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-0, dir=/tmp/kafka-2941044051333788778] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 0 ms
2021-05-22 10:38:53,937 INFO [data-plane-kafka-request-handler-3] log.LogManager (Logging.scala:info(66)) - Created log for partition t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-0 in /tmp/kafka-2941044051333788778 with properties {compression.type -> producer, message.format.version -> 2.2-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}.
2021-05-22 10:38:53,937 INFO [data-plane-kafka-request-handler-3] cluster.Partition (Logging.scala:info(66)) - [Partition t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-0 broker=0] No checkpointed highwatermark is found for partition t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-0
2021-05-22 10:38:53,938 INFO [data-plane-kafka-request-handler-3] cluster.Replica (Logging.scala:info(66)) - Replica loaded for partition t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-0 with initial high watermark 0
2021-05-22 10:38:53,938 INFO [data-plane-kafka-request-handler-3] cluster.Partition (Logging.scala:info(66)) - [Partition t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-0 broker=0] t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1
2021-05-22 10:38:53,987 INFO [data-plane-kafka-request-handler-3] log.Log (Logging.scala:info(66)) - [Log partition=t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-1, dir=/tmp/kafka-2941044051333788778] Loading producer state till offset 0 with message format version 2
2021-05-22 10:38:53,988 INFO [data-plane-kafka-request-handler-3] log.Log (Logging.scala:info(66)) - [Log partition=t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-1, dir=/tmp/kafka-2941044051333788778] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 0 ms
2021-05-22 10:38:53,989 INFO [data-plane-kafka-request-handler-3] log.LogManager (Logging.scala:info(66)) - Created log for partition t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-1 in /tmp/kafka-2941044051333788778 with properties {compression.type -> producer, message.format.version -> 2.2-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}.
2021-05-22 10:38:53,991 INFO [data-plane-kafka-request-handler-3] cluster.Partition (Logging.scala:info(66)) - [Partition t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-1 broker=0] No checkpointed highwatermark is found for partition t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-1
2021-05-22 10:38:53,991 INFO [data-plane-kafka-request-handler-3] cluster.Replica (Logging.scala:info(66)) - Replica loaded for partition t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-1 with initial high watermark 0
2021-05-22 10:38:53,992 INFO [data-plane-kafka-request-handler-3] cluster.Partition (Logging.scala:info(66)) - [Partition t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-1 broker=0] t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-1 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1
2021-05-22 10:38:54,014 INFO [data-plane-kafka-request-handler-3] log.Log (Logging.scala:info(66)) - [Log partition=t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-2, dir=/tmp/kafka-2941044051333788778] Loading producer state till offset 0 with message format version 2
2021-05-22 10:38:54,015 INFO [data-plane-kafka-request-handler-3] log.Log (Logging.scala:info(66)) - [Log partition=t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-2, dir=/tmp/kafka-2941044051333788778] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 0 ms
2021-05-22 10:38:54,016 INFO [data-plane-kafka-request-handler-3] log.LogManager (Logging.scala:info(66)) - Created log for partition t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-2 in /tmp/kafka-2941044051333788778 with properties {compression.type -> producer, message.format.version -> 2.2-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}.
2021-05-22 10:38:54,018 INFO [data-plane-kafka-request-handler-3] cluster.Partition (Logging.scala:info(66)) - [Partition t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-2 broker=0] No checkpointed highwatermark is found for partition t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-2
2021-05-22 10:38:54,018 INFO [data-plane-kafka-request-handler-3] cluster.Replica (Logging.scala:info(66)) - Replica loaded for partition t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-2 with initial high watermark 0
2021-05-22 10:38:54,018 INFO [data-plane-kafka-request-handler-3] cluster.Partition (Logging.scala:info(66)) - [Partition t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-2 broker=0] t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-2 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1
2021-05-22 10:38:54,124 WARN [kafka-producer-network-thread | producer-6] clients.NetworkClient (NetworkClient.java:handleCompletedMetadataResponse(1031)) - [Producer clientId=producer-6] Error while fetching metadata with correlation id 3 : {t_9019ff9c_358b_4fb2_a42f_f25154bffd1b=LEADER_NOT_AVAILABLE}
2021-05-22 10:38:54,380 INFO [qtp1270743145-2045] storage.KafkaStore (KafkaStore.java:waitUntilKafkaReaderReachesLastOffset(277)) - Wait to catch up until the offset of the last message at 7
2021-05-22 10:38:54,922 [DEBUG] [hz.naughty_borg.cached.thread-25] [c.h.j.i.JobRepository]: [127.0.0.1]:5701 [jet] [4.5.1-SNAPSHOT] Job cleanup took 69ms
2021-05-22 10:38:54,941 WARN [qtp1270743145-2045] server.HttpChannel (HttpChannel.java:handleException(677)) - /subjects/t_9019ff9c_358b_4fb2_a42f_f25154bffd1b-key/versions
java.lang.NoSuchMethodError: org.eclipse.jetty.server.Request.getUri()Lorg/eclipse/jetty/http/HttpURI;
at io.confluent.rest.logging.AbstractNCSARequestLog.log(AbstractNCSARequestLog.java:159)
at org.eclipse.jetty.server.HttpChannel.onCompleted(HttpChannel.java:812)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:368)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:279)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129)
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:383)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:882)
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1036)
at java.lang.Thread.run(Thread.java:748)
2021-05-22 10:38:54,942 WARN [qtp1270743145-2045] strategy.EatWhatYouKill (EatWhatYouKill.java:runTask(340)) -
java.lang.IllegalStateException: s=IDLE rs=COMPLETED os=ABORTED is=IDLE awp=false se=false i=false al=1
at org.eclipse.jetty.server.HttpChannelState.unhandle(HttpChannelState.java:416)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:555)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:279)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129)
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:383)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:882)
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1036)
at java.lang.Thread.run(Thread.java:748)
2021-05-22 10:38:54,978 [ WARN] [hz.naughty_borg.jet.blocking.thread-1] [c.h.j.i.e.TaskletExecutionService]: [127.0.0.1]:5701 [jet] [4.5.1-SNAPSHOT] Exception in ProcessorTasklet{0639-28cb-55ce-0001/Kafka[public.t_9019ff9c_358b_4fb2_a42f_f25154bffd1b]#0}
org.apache.kafka.common.errors.SerializationException: Error registering Avro schema: {"type":"record","name":"sql","namespace":"jet","fields":[{"name":"name","type":["null","string"],"default":null}]}
Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Register operation timed out; error code: 50002
at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:208) ~[kafka-schema-registry-client-4.1.4.jar:?]
at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:235) ~[kafka-schema-registry-client-4.1.4.jar:?]
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:326) ~[kafka-schema-registry-client-4.1.4.jar:?]
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:318) ~[kafka-schema-registry-client-4.1.4.jar:?]
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:313) ~[kafka-schema-registry-client-4.1.4.jar:?]
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.registerAndGetId(CachedSchemaRegistryClient.java:115) ~[kafka-schema-registry-client-4.1.4.jar:?]
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:154) ~[kafka-schema-registry-client-4.1.4.jar:?]
at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:79) ~[kafka-avro-serializer-4.1.4.jar:?]
at io.confluent.kafka.serializers.KafkaAvroSerializer.serialize(KafkaAvroSerializer.java:53) ~[kafka-avro-serializer-4.1.4.jar:?]
at org.apache.kafka.common.serialization.Serializer.serialize(Serializer.java:60) ~[kafka-clients-2.2.2.jar:?]
at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:876) ~[kafka-clients-2.2.2.jar:?]
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:846) ~[kafka-clients-2.2.2.jar:?]
at com.hazelcast.jet.kafka.impl.WriteKafkaP.process(WriteKafkaP.java:128) ~[hazelcast-jet-kafka-4.5.1-SNAPSHOT.jar:4.5.1-SNAPSHOT]
at com.hazelcast.jet.impl.execution.ProcessorTasklet.processInbox(ProcessorTasklet.java:421) ~[hazelcast-jet-core-4.5.1-SNAPSHOT.jar:4.5.1-SNAPSHOT]
at com.hazelcast.jet.impl.execution.ProcessorTasklet.stateMachineStep(ProcessorTasklet.java:296) ~[hazelcast-jet-core-4.5.1-SNAPSHOT.jar:4.5.1-SNAPSHOT]
at com.hazelcast.jet.impl.execution.ProcessorTasklet.stateMachineStep(ProcessorTasklet.java:291) ~[hazelcast-jet-core-4.5.1-SNAPSHOT.jar:4.5.1-SNAPSHOT]
at com.hazelcast.jet.impl.execution.ProcessorTasklet.stateMachineStep(ProcessorTasklet.java:273) ~[hazelcast-jet-core-4.5.1-SNAPSHOT.jar:4.5.1-SNAPSHOT]
at com.hazelcast.jet.impl.execution.ProcessorTasklet.call(ProcessorTasklet.java:247) ~[hazelcast-jet-core-4.5.1-SNAPSHOT.jar:4.5.1-SNAPSHOT]
at com.hazelcast.jet.impl.execution.TaskletExecutionService$BlockingWorker.run(TaskletExecutionService.java:294) [hazelcast-jet-core-4.5.1-SNAPSHOT.jar:4.5.1-SNAPSHOT]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_251]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_251]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_251]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_251]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_251]
2021-05-22 10:38:54,980 [DEBUG] [hz.naughty_borg.jet.cooperative.thread-31] [c.h.j.i.JobExecutionService]: [127.0.0.1]:5701 [jet] [4.5.1-SNAPSHOT] Execution of job '0639-28cb-55ce-0001', execution 0639-28cb-55cf-0001 completed with failure
java.util.concurrent.CompletionException: com.hazelcast.jet.JetException: Exception in ProcessorTasklet{0639-28cb-55ce-0001/Kafka[public.t_9019ff9c_358b_4fb2_a42f_f25154bffd1b]#0}: org.apache.kafka.common.errors.SerializationException: Error registering Avro schema: {"type":"record","name":"sql","namespace":"jet","fields":[{"name":"name","type":["null","string"],"default":null}]}
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292) ~[?:1.8.0_251]
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308) ~[?:1.8.0_251]
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:607) ~[?:1.8.0_251]
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591) ~[?:1.8.0_251]
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) ~[?:1.8.0_251]
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990) ~[?:1.8.0_251]
at com.hazelcast.jet.impl.util.NonCompletableFuture.internalCompleteExceptionally(NonCompletableFuture.java:59) ~[hazelcast-jet-core-4.5.1-SNAPSHOT.jar:4.5.1-SNAPSHOT]
at com.hazelcast.jet.impl.execution.TaskletExecutionService$ExecutionTracker.taskletDone(TaskletExecutionService.java:468) ~[hazelcast-jet-core-4.5.1-SNAPSHOT.jar:4.5.1-SNAPSHOT]
at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.dismissTasklet(TaskletExecutionService.java:399) ~[hazelcast-jet-core-4.5.1-SNAPSHOT.jar:4.5.1-SNAPSHOT]
at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.runTasklet(TaskletExecutionService.java:385) ~[hazelcast-jet-core-4.5.1-SNAPSHOT.jar:4.5.1-SNAPSHOT]
at java.util.concurrent.CopyOnWriteArrayList.forEach(CopyOnWriteArrayList.java:891) [?:1.8.0_251]
at com.hazelcast.jet.impl.execution.TaskletExecutionService$CooperativeWorker.run(TaskletExecutionService.java:347) [hazelcast-jet-core-4.5.1-SNAPSHOT.jar:4.5.1-SNAPSHOT]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_251]
Caused by: com.hazelcast.jet.JetException: Exception in ProcessorTasklet{0639-28cb-55ce-0001/Kafka[public.t_9019ff9c_358b_4fb2_a42f_f25154bffd1b]#0}: org.apache.kafka.common.errors.SerializationException: Error registering Avro schema: {"type":"record","name":"sql","namespace":"jet","fields":[{"name":"name","type":["null","string"],"default":null}]}
at com.hazelcast.jet.impl.execution.TaskletExecutionService$BlockingWorker.run(TaskletExecutionService.java:305) ~[hazelcast-jet-core-4.5.1-SNAPSHOT.jar:4.5.1-SNAPSHOT]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_251]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_251]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_251]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_251]
... 1 more
Caused by: org.apache.kafka.common.errors.SerializationException: Error registering Avro schema: {"type":"record","name":"sql","namespace":"jet","fields":[{"name":"name","type":["null","string"],"default":null}]}
Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Register operation timed out; error code: 50002
at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:208) ~[kafka-schema-registry-client-4.1.4.jar:?]
at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:235) ~[kafka-schema-registry-client-4.1.4.jar:?]
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:326) ~[kafka-schema-registry-client-4.1.4.jar:?]
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:318) ~[kafka-schema-registry-client-4.1.4.jar:?]
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:313) ~[kafka-schema-registry-client-4.1.4.jar:?]
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.registerAndGetId(CachedSchemaRegistryClient.java:115) ~[kafka-schema-registry-client-4.1.4.jar:?]
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:154) ~[kafka-schema-registry-client-4.1.4.jar:?]
at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:79) ~[kafka-avro-serializer-4.1.4.jar:?]
at io.confluent.kafka.serializers.KafkaAvroSerializer.serialize(KafkaAvroSerializer.java:53) ~[kafka-avro-serializer-4.1.4.jar:?]
at org.apache.kafka.common.serialization.Serializer.serialize(Serializer.java:60) ~[kafka-clients-2.2.2.jar:?]
at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:876) ~[kafka-clients-2.2.2.jar:?]
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:846) ~[kafka-clients-2.2.2.jar:?]
at com.hazelcast.jet.kafka.impl.WriteKafkaP.process(WriteKafkaP.java:128) ~[hazelcast-jet-kafka-4.5.1-SNAPSHOT.jar:4.5.1-SNAPSHOT]
at com.hazelcast.jet.impl.execution.ProcessorTasklet.processInbox(ProcessorTasklet.java:421) ~[hazelcast-jet-core-4.5.1-SNAPSHOT.jar:4.5.1-SNAPSHOT]
at com.hazelcast.jet.impl.execution.ProcessorTasklet.stateMachineStep(ProcessorTasklet.java:296) ~[hazelcast-jet-core-4.5.1-SNAPSHOT.jar:4.5.1-SNAPSHOT]
at com.hazelcast.jet.impl.execution.ProcessorTasklet.stateMachineStep(ProcessorTasklet.java:291) ~[hazelcast-jet-core-4.5.1-SNAPSHOT.jar:4.5.1-SNAPSHOT]
at com.hazelcast.jet.impl.execution.ProcessorTasklet.stateMachineStep(ProcessorTasklet.java:273) ~[hazelcast-jet-core-4.5.1-SNAPSHOT.jar:4.5.1-SNAPSHOT]
at com.hazelcast.jet.impl.execution.ProcessorTasklet.call(ProcessorTasklet.java:247) ~[hazelcast-jet-core-4.5.1-SNAPSHOT.jar:4.5.1-SNAPSHOT]
at com.hazelcast.jet.impl.execution.TaskletExecutionService$BlockingWorker.run(TaskletExecutionService.java:294) ~[hazelcast-jet-core-4.5.1-SNAPSHOT.jar:4.5.1-SNAPSHOT]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_251]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_251]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_251]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_251]
... 1 more
2021-05-22 10:38:55,008 [DEBUG] [ForkJoinPool.commonPool-worker-33] [c.h.j.i.MasterJobContext]: [127.0.0.1]:5701 [jet] [4.5.1-SNAPSHOT] Execution of job '0639-28cb-55ce-0001', execution 0639-28cb-55cf-0001 has failures: [[127.0.0.1]:5701=com.hazelcast.jet.JetException: Exception in ProcessorTasklet{0639-28cb-55ce-0001/Kafka[public.t_9019ff9c_358b_4fb2_a42f_f25154bffd1b]#0}: org.apache.kafka.common.errors.SerializationException: Error registering Avro schema: {"type":"record","name":"sql","namespace":"jet","fields":[{"name":"name","type":["null","string"],"default":null}]}]
2021-05-22 10:38:55,009 [DEBUG] [hz.naughty_borg.cached.thread-37] [c.h.j.i.MasterJobContext]: [127.0.0.1]:5701 [jet] [4.5.1-SNAPSHOT] Sending CompleteExecutionOperation for job '0639-28cb-55ce-0001', execution 0639-28cb-55cf-0001
2021-05-22 10:38:55,009 [DEBUG] [hz.naughty_borg.cached.thread-37] [c.h.j.i.o.CompleteExecutionOperation]: [127.0.0.1]:5701 [jet] [4.5.1-SNAPSHOT] Completing execution 0639-28cb-55cf-0001 from caller [127.0.0.1]:5701, error=com.hazelcast.jet.JetException: Exception in ProcessorTasklet{0639-28cb-55ce-0001/Kafka[public.t_9019ff9c_358b_4fb2_a42f_f25154bffd1b]#0}: org.apache.kafka.common.errors.SerializationException: Error registering Avro schema: {"type":"record","name":"sql","namespace":"jet","fields":[{"name":"name","type":["null","string"],"default":null}]}
2021-05-22 10:38:55,009 [DEBUG] [hz.naughty_borg.cached.thread-37] [c.h.j.k.i.WriteKafkaP]: [127.0.0.1]:5701 [jet] [4.5.1-SNAPSHOT] [0639-28cb-55ce-0001/Kafka[public.t_9019ff9c_358b_4fb2_a42f_f25154bffd1b]#0] release null
2021-05-22 10:38:55,009 INFO [hz.naughty_borg.cached.thread-37] producer.KafkaProducer (KafkaProducer.java:close(1145)) - [Producer clientId=producer-6] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
2021-05-22 10:38:55,055 [DEBUG] [hz.naughty_borg.cached.thread-37] [c.h.j.i.JobExecutionService]: [127.0.0.1]:5701 [jet] [4.5.1-SNAPSHOT] Completed execution of job '0639-28cb-55ce-0001', execution 0639-28cb-55cf-0001
2021-05-22 10:38:55,056 [ERROR] [hz.naughty_borg.cached.thread-25] [c.h.j.i.MasterJobContext]: [127.0.0.1]:5701 [jet] [4.5.1-SNAPSHOT] Execution of job '0639-28cb-55ce-0001', execution 0639-28cb-55cf-0001 failed
Start time: 2021-05-22T10:38:53.421
Duration: 00:00:01.635
To see additional job metrics enable JobConfig.storeMetricsAfterJobCompletion
com.hazelcast.jet.JetException: Exception in ProcessorTasklet{0639-28cb-55ce-0001/Kafka[public.t_9019ff9c_358b_4fb2_a42f_f25154bffd1b]#0}: org.apache.kafka.common.errors.SerializationException: Error registering Avro schema: {"type":"record","name":"sql","namespace":"jet","fields":[{"name":"name","type":["null","string"],"default":null}]}
at com.hazelcast.jet.impl.execution.TaskletExecutionService$BlockingWorker.run(TaskletExecutionService.java:305) ~[hazelcast-jet-core-4.5.1-SNAPSHOT.jar:4.5.1-SNAPSHOT]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_251]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_251]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_251]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_251]
at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_251]
Caused by: org.apache.kafka.common.errors.SerializationException: Error registering Avro schema: {"type":"record","name":"sql","namespace":"jet","fields":[{"name":"name","type":["null","string"],"default":null}]}
Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Register operation timed out; error code: 50002
at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:208) ~[kafka-schema-registry-client-4.1.4.jar:?]
at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:235) ~[kafka-schema-registry-client-4.1.4.jar:?]
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:326) ~[kafka-schema-registry-client-4.1.4.jar:?]
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:318) ~[kafka-schema-registry-client-4.1.4.jar:?]
at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:313) ~[kafka-schema-registry-client-4.1.4.jar:?]
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.registerAndGetId(CachedSchemaRegistryClient.java:115) ~[kafka-schema-registry-client-4.1.4.jar:?]
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:154) ~[kafka-schema-registry-client-4.1.4.jar:?]
at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:79) ~[kafka-avro-serializer-4.1.4.jar:?]
at io.confluent.kafka.serializers.KafkaAvroSerializer.serialize(KafkaAvroSerializer.java:53) ~[kafka-avro-serializer-4.1.4.jar:?]
at org.apache.kafka.common.serialization.Serializer.serialize(Serializer.java:60) ~[kafka-clients-2.2.2.jar:?]
at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:876) ~[kafka-clients-2.2.2.jar:?]
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:846) ~[kafka-clients-2.2.2.jar:?]
at com.hazelcast.jet.kafka.impl.WriteKafkaP.process(WriteKafkaP.java:128) ~[hazelcast-jet-kafka-4.5.1-SNAPSHOT.jar:4.5.1-SNAPSHOT]
at com.hazelcast.jet.impl.execution.ProcessorTasklet.processInbox(ProcessorTasklet.java:421) ~[hazelcast-jet-core-4.5.1-SNAPSHOT.jar:4.5.1-SNAPSHOT]
at com.hazelcast.jet.impl.execution.ProcessorTasklet.stateMachineStep(ProcessorTasklet.java:296) ~[hazelcast-jet-core-4.5.1-SNAPSHOT.jar:4.5.1-SNAPSHOT]
at com.hazelcast.jet.impl.execution.ProcessorTasklet.stateMachineStep(ProcessorTasklet.java:291) ~[hazelcast-jet-core-4.5.1-SNAPSHOT.jar:4.5.1-SNAPSHOT]
at com.hazelcast.jet.impl.execution.ProcessorTasklet.stateMachineStep(ProcessorTasklet.java:273) ~[hazelcast-jet-core-4.5.1-SNAPSHOT.jar:4.5.1-SNAPSHOT]
at com.hazelcast.jet.impl.execution.ProcessorTasklet.call(ProcessorTasklet.java:247) ~[hazelcast-jet-core-4.5.1-SNAPSHOT.jar:4.5.1-SNAPSHOT]
at com.hazelcast.jet.impl.execution.TaskletExecutionService$BlockingWorker.run(TaskletExecutionService.java:294) ~[hazelcast-jet-core-4.5.1-SNAPSHOT.jar:4.5.1-SNAPSHOT]
... 5 more
2021-05-22 10:38:55,058 [DEBUG] [hz.naughty_borg.cached.thread-25] [c.h.j.i.JobCoordinationService]: [127.0.0.1]:5701 [jet] [4.5.1-SNAPSHOT] job '0639-28cb-55ce-0001', execution 0639-28cb-55cf-0001 is completed
2021-05-22 10:38:55,734 [ INFO] [Time-limited test] [c.h.j.SimpleTestInClusterSupport]: Ditching 6 jobs in SimpleTestInClusterSupport.@After: [0639-28cb-55c6-0001, 0639-28cb-55c4-0001, 0639-28cb-55ca-0001, 0639-28cb-55c8-0001, 0639-28cb-55ce-0001, 0639-28cb-55cc-0001]
2021-05-22 10:38:55,739 [ INFO] [Time-limited test] [c.h.j.SimpleTestInClusterSupport]: Destroying 1 distributed objects in SimpleTestInClusterSupport.@After: [hz:impl:replicatedMapService/__sql.catalog]
BuildInfo right after test_explicitKeyAndValueSerializers(com.hazelcast.jet.sql.impl.connector.kafka.SqlAvroTest): BuildInfo{version='4.2', build='20210324', buildNumber=20210324, revision=405cfd1, enterprise=false, serializationVersion=1, jet=JetBuildInfo{version='4.5.1-SNAPSHOT', build='20210522', revision='48d9e46'}}
Hiccups measured while running test 'test_explicitKeyAndValueSerializers(com.hazelcast.jet.sql.impl.connector.kafka.SqlAvroTest):'
10:38:50, accumulated pauses: 2383 ms, max pause: 460 ms, pauses over 1000 ms: 0
10:38:55, accumulated pauses: 339 ms, max pause: 176 ms, pauses over 1000 ms: 0
4.5-maintenance (commit 48d9e46)
Failed on Oracle JDK 8: http://jenkins.hazelcast.com/view/Jet%20-%20Maintenance/job/jet-oss-maintenance-oracle-jdk8/285/testReport/com.hazelcast.jet.sql.impl.connector.kafka/SqlAvroTest/test_explicitKeyAndValueSerializers/
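For context, the failing test (`test_explicitKeyAndValueSerializers`) configures the Confluent `KafkaAvroSerializer` explicitly for both key and value; schema registration happens lazily inside `KafkaProducer.send()` on the first record, which is why the registry timeout (error code 50002, the registry's internal "request timed out") surfaces as a `SerializationException` from the producer rather than from an explicit registry call. A minimal producer configuration that exercises the same registration path might look like the following sketch — the broker and registry addresses are placeholders, not values from this test setup:

```properties
# Hypothetical endpoints; the test uses an embedded broker and registry
bootstrap.servers=localhost:9092
schema.registry.url=http://localhost:8081
# Explicit key/value serializers, as in test_explicitKeyAndValueSerializers;
# each triggers a schema-register call on first use
key.serializer=io.confluent.kafka.serializers.KafkaAvroSerializer
value.serializer=io.confluent.kafka.serializers.KafkaAvroSerializer
```

Given that the register call timed out against an embedded registry started in the same JVM, and the hiccup monitor recorded a 460 ms pause during the test window, this looks like a timing-sensitive (flaky) failure rather than a functional regression.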