Remove redundant escape characters, remove unrelated changes #331

Triggered via push on November 20, 2023 12:17
Status: Failure
Total duration: 1h 27m 44s
Artifacts: 23

build_main.yml

on: push
Run / Check changes (34s)
Run / Breaking change detection with Buf (branch-3.5) (58s)
Run / Run TPC-DS queries with SF=1 (50m 7s)
Run / Run Docker integration tests (17m 23s)
Run / Run Spark on Kubernetes Integration test (54m 48s)
Matrix: Run / build
Matrix: Run / java-other-versions
Run / Build modules: sparkr (25m 21s)
Run / Linters, licenses, dependencies and documentation generation (1h 24m)
Matrix: Run / pyspark

Annotations

13 errors and 3 warnings
Run / Run Docker integration tests
Process completed with exit code 15.
Run / Run Spark on Kubernetes Integration test
HashSet() did not contain "decomtest-895f828becc0a282-exec-1".
Run / Run Spark on Kubernetes Integration test
HashSet() did not contain "decomtest-7cd7b98becc17dbd-exec-1".
Run / Run Spark on Kubernetes Integration test
sleep interrupted
Run / Run Spark on Kubernetes Integration test
Task io.fabric8.kubernetes.client.utils.internal.SerialExecutor$$Lambda$679/0x00007fd34c5c3cc8@5a237388 rejected from java.util.concurrent.ThreadPoolExecutor@78ade828[Shutting down, pool size = 2, active threads = 2, queued tasks = 0, completed tasks = 295]
Run / Run Spark on Kubernetes Integration test
sleep interrupted
Run / Run Spark on Kubernetes Integration test
Task io.fabric8.kubernetes.client.utils.internal.SerialExecutor$$Lambda$679/0x00007fd34c5c3cc8@2b51782c rejected from java.util.concurrent.ThreadPoolExecutor@78ade828[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 296]
Run / Run Spark on Kubernetes Integration test
HashSet() did not contain "decomtest-bc493d8becd24e1e-exec-1".
Run / Run Spark on Kubernetes Integration test
HashSet() did not contain "decomtest-275df58becd331c2-exec-1".
Run / Run Spark on Kubernetes Integration test
HashSet() did not contain "decomtest-a3bdaf8becd6c7c3-exec-1".
Run / Run Spark on Kubernetes Integration test
Status(apiVersion=v1, code=404, details=StatusDetails(causes=[], group=null, kind=pods, name=spark-test-app-c1307bc7263c463f80fda000a303b9c4-driver, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=pods "spark-test-app-c1307bc7263c463f80fda000a303b9c4-driver" not found, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=NotFound, status=Failure, additionalProperties={})..
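The repeated "HashSet() did not contain ..." annotations above are the failure message of an eventually-style set assertion in the Kubernetes decommissioning integration tests. A minimal, hypothetical sketch of that assertion shape, assuming ScalaTest's Eventually and Matchers; the pod name, timeout, and class name here are placeholders, not the suite's actual values:

```scala
import scala.collection.mutable

import org.scalatest.concurrent.Eventually.eventually
import org.scalatest.concurrent.PatienceConfiguration.{Interval, Timeout}
import org.scalatest.funsuite.AnyFunSuite
import org.scalatest.matchers.should.Matchers
import org.scalatest.time.{Seconds, Span}

class DecommissionAssertionSketch extends AnyFunSuite with Matchers {
  // Hypothetical stand-in for the set of executor pods a listener has recorded as decommissioned.
  private val decommissionedExecs = mutable.HashSet.empty[String]

  test("executor pod is eventually decommissioned") {
    // Poll until the expected pod name shows up; if it never does, ScalaTest reports
    // a "HashSet() did not contain ..." message like the ones in the annotations above.
    eventually(Timeout(Span(3, Seconds)), Interval(Span(1, Seconds))) {
      decommissionedExecs should contain("decomtest-example-exec-1")
    }
  }
}
```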
Run / Build modules: sql - slow tests
Process completed with exit code 18.
RocksDBStateStoreStreamingAggregationSuite.changing schema of state when restarting query - state format version 1 (RocksDBStateStore): RocksDBStateStoreStreamingAggregationSuite#L1
org.scalatest.exceptions.TestFailedException: Timed out waiting for stream: The code passed to failAfter did not complete within 120 seconds. java.base/java.lang.Thread.getStackTrace(Thread.java:1619) org.scalatest.concurrent.TimeLimits$.failAfterImpl(TimeLimits.scala:277) org.scalatest.concurrent.TimeLimits.failAfter(TimeLimits.scala:231) org.scalatest.concurrent.TimeLimits.failAfter$(TimeLimits.scala:230) org.apache.spark.SparkFunSuite.failAfter(SparkFunSuite.scala:69) org.apache.spark.sql.streaming.StreamTest.$anonfun$testStream$7(StreamTest.scala:481) org.apache.spark.sql.streaming.StreamTest.$anonfun$testStream$7$adapted(StreamTest.scala:480) scala.collection.mutable.HashMap$Node.foreach(HashMap.scala:642) scala.collection.mutable.HashMap.foreach(HashMap.scala:504) org.apache.spark.sql.streaming.StreamTest.fetchStreamAnswer$1(StreamTest.scala:480) Caused by: null java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1764) org.apache.spark.sql.execution.streaming.StreamExecution.awaitOffset(StreamExecution.scala:481) org.apache.spark.sql.streaming.StreamTest.$anonfun$testStream$8(StreamTest.scala:482) scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) org.scalatest.enablers.Timed$$anon$1.timeoutAfter(Timed.scala:127) org.scalatest.concurrent.TimeLimits$.failAfterImpl(TimeLimits.scala:282) org.scalatest.concurrent.TimeLimits.failAfter(TimeLimits.scala:231) org.scalatest.concurrent.TimeLimits.failAfter$(TimeLimits.scala:230) org.apache.spark.SparkFunSuite.failAfter(SparkFunSuite.scala:69) org.apache.spark.sql.streaming.StreamTest.$anonfun$testStream$7(StreamTest.scala:481) == Progress == StartStream(ProcessingTimeTrigger(0),org.apache.spark.util.SystemClock@4587dfa5,Map(),/home/runner/work/spark/spark/target/tmp/spark-66af3ef5-c6f3-4baf-baad-8c55b424d347) AddData to MemoryStream[value#18410]: 1,11 => CheckLastBatch: [1,12,6.0,11] StopStream == Stream == Output Mode: Update Stream state: {} Thread state: alive Thread stack trace: [email protected]/jdk.internal.misc.Unsafe.park(Native Method) [email protected]/java.util.concurrent.locks.LockSupport.park(LockSupport.java:211) [email protected]/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:715) [email protected]/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1047) app//scala.concurrent.impl.Promise$DefaultPromise.tryAwait0(Promise.scala:243) app//scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:255) app//scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:104) app//org.apache.spark.util.ThreadUtils$.awaitReady(ThreadUtils.scala:342) app//org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:975) app//org.apache.spark.SparkContext.runJob(SparkContext.scala:2428) app//org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2(WriteToDataSourceV2Exec.scala:386) app//org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2$(WriteToDataSourceV2Exec.scala:360) app//org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.writeWithV2(WriteToDataSourceV2Exec.scala:308) app//org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.run(WriteToDataSourceV2Exec.scala:319) app//org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:43) 
app//org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:43) app//org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:49) app//org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:4402) app//org.apache.spark.sql.Dataset.$anonfun$collect$1(Dataset.scala:3634) app//org.apache.spark.sql.Dataset$$Lambda$2304/0x00007f07e92df830.apply(Unknown Source) app//org.apache.spark.sql.Dataset.$anonfun$withAction$2(Dataset.scala:4392) app//org.apache.spark.sql.Dataset$$Lambda$2316/0x00007f07e92e42d8.apply(Unknown Source) app//org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:557) app//org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:4390) app//org.apache.spark.sql.Dataset$$Lambda$2305/0x00007f07e92dd000.apply(Unknown Source) app//org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$6(SQLExecution.scala:150) app//org.apache.spark.sql.execution.SQLExecution$$$Lambda$2260/0x00007f07e92c2780.apply(Unknown Source) app//org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:241) app//org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$1(SQLExecution.scala:116) app//org.apache.spark.sql.execution.SQLExecution$$$Lambda$2246/0x00007f07e92bf578.apply(Unknown Source) app//org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:907) app//org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId0(SQLExecution.scala:72) app//org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:196) app//org.apache.spark.sql.Dataset.withAction(Dataset.scala:4390) app//org.apache.spark.sql.Dataset.collect(Dataset.scala:3634) app//org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$17(MicroBatchExecution.scala:783) app//org.apache.spark.sql.execution.streaming.MicroBatchExecution$$Lambda$2245/0x00007f07e92bf2b8.apply(Unknown Source) app//org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$6(SQLExecution.scala:150) app//org.apache.spark.sql.execution.SQLExecution$$$Lambda$2260/0x00007f07e92c2780.apply(Unknown Source) app//org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:241) app//org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$1(SQLExecution.scala:116) app//org.apache.spark.sql.execution.SQLExecution$$$Lambda$2246/0x00007f07e92bf578.apply(Unknown Source) app//org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:907) app//org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId0(SQLExecution.scala:72) app//org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:196) app//org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$16(MicroBatchExecution.scala:771) app//org.apache.spark.sql.execution.streaming.MicroBatchExecution$$Lambda$2243/0x00007f07e92bea68.apply(Unknown Source) app//org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:427) app//org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:425) app//org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:67) app//org.apache.spark.sql.execution.streaming.MicroBatchExecution.runBatch(MicroBatchExecution.scala:771) 
app//org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$2(MicroBatchExecution.scala:326) app//org.apache.spark.sql.execution.streaming.MicroBatchExecution$$Lambda$1945/0x00007f07e91e1918.apply$mcV$sp(Unknown Source) app//scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) app//org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:427) app//org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:425) app//org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:67) app//org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$1(MicroBatchExecution.scala:289) app//org.apache.spark.sql.execution.streaming.MicroBatchExecution$$Lambda$1942/0x00007f07e91e0890.apply$mcZ$sp(Unknown Source) app//org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:67) app//org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:279) app//org.apache.spark.sql.execution.streaming.StreamExecution.$anonfun$runStream$1(StreamExecution.scala:311) app//org.apache.spark.sql.execution.streaming.StreamExecution$$Lambda$1933/0x00007f07e91d8578.apply$mcV$sp(Unknown Source) app//scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) app//org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:907) app//org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:289) app//org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.$anonfun$run$1(StreamExecution.scala:211) app//org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1$$Lambda$1929/0x00007f07e91d6dd0.apply$mcV$sp(Unknown Source) app//scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) app//org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:94) app//org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:211) == Sink == == Plan == == Parsed Logical Plan == WriteToMicroBatchDataSource MemorySink, 0430d473-964d-4315-8149-df7d344ffeeb, Update, 0 +- Aggregate [id#18412], [id#18412, sum(value#18410) AS sum_value#18417L, avg(value#18410) AS avg_value#18418, max(value#18410) AS max_value#18419] +- Project [(value#18410 % 10) AS id#18412, value#18410] +- StreamingDataSourceV2Relation [value#18410], org.apache.spark.sql.execution.streaming.MemoryStreamScanBuilder@32c496bd, MemoryStream[value#18410], -1, 0 == Analyzed Logical Plan == WriteToMicroBatchDataSource MemorySink, 0430d473-964d-4315-8149-df7d344ffeeb, Update, 0 +- Aggregate [id#18412], [id#18412, sum(value#18410) AS sum_value#18417L, avg(value#18410) AS avg_value#18418, max(value#18410) AS max_value#18419] +- Project [(value#18410 % 10) AS id#18412, value#18410] +- StreamingDataSourceV2Relation [value#18410], org.apache.spark.sql.execution.streaming.MemoryStreamScanBuilder@32c496bd, MemoryStream[value#18410], -1, 0 == Optimized Logical Plan == WriteToDataSourceV2 MicroBatchWrite[epoch: 0, writer: org.apache.spark.sql.execution.streaming.sources.MemoryStreamingWrite@7e577f2d] +- Aggregate [id#18412], [id#18412, sum(value#18410) AS sum_value#18417L, avg(value#18410) AS avg_value#18418, max(value#18410) AS max_value#18419] +- Project [(value#18410 % 10) AS id#18412, value#18410] +- StreamingDataSourceV2Relation 
[value#18410], org.apache.spark.sql.execution.streaming.MemoryStreamScanBuilder@32c496bd, MemoryStream[value#18410], -1, 0 == Physical Plan == WriteToDataSourceV2 MicroBatchWrite[epoch: 0, writer: org.apache.spark.sql.execution.streaming.sources.MemoryStreamingWrite@7e577f2d], org.apache.spark.sql.execution.datasources.v2.DataSourceV2Strategy$$Lambda$2189/0x00007f07e928d350@4d9d0ee4 +- *(4) HashAggregate(keys=[id#18412], functions=[sum(value#18410), avg(value#18410), max(value#18410)], output=[id#18412, sum_value#18417L, avg_value#18418, max_value#18419]) +- StateStoreSave [id#18412], state info [ checkpoint = file:/home/runner/work/spark/spark/target/tmp/spark-66af3ef5-c6f3-4baf-baad-8c55b424d347/state, runId = 251da6d0-95d0-48cc-99e3-5c08eb05de94, opId = 0, ver = 0, numPartitions = 1], Update, 0, 0, 1 +- *(3) HashAggregate(keys=[id#18412], functions=[merge_sum(value#18410), merge_avg(value#18410), merge_max(value#18410)], output=[id#18412, sum#18449L, sum#18452, count#18453L, max#18455]) +- StateStoreRestore [id#18412], state info [ checkpoint = file:/home/runner/work/spark/spark/target/tmp/spark-66af3ef5-c6f3-4baf-baad-8c55b424d347/state, runId = 251da6d0-95d0-48cc-99e3-5c08eb05de94, opId = 0, ver = 0, numPartitions = 1], 1 +- *(2) HashAggregate(keys=[id#18412], functions=[merge_sum(value#18410), merge_avg(value#18410), merge_max(value#18410)], output=[id#18412, sum#18449L, sum#18452, count#18453L, max#18455]) +- Exchange hashpartitioning(id#18412, 1), ENSURE_REQUIREMENTS, [plan_id=85845] +- *(1) HashAggregate(keys=[id#18412], functions=[partial_sum(value#18410), partial_avg(value#18410), partial_max(value#18410)], output=[id#18412, sum#18449L, sum#18452, count#18453L, max#18455]) +- *(1) Project [(value#18410 % 10) AS id#18412, value#18410] +- MicroBatchScan[value#18410] MemoryStreamDataSource
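For reference, the query shape behind the timed-out test above (key = value % 10 with sum/avg/max per key, fed from a MemoryStream into an update-mode memory sink) can be reproduced with ordinary Spark APIs. This is a minimal sketch under those assumptions, not the suite's StreamTest harness; the sink name "agg_sketch" and the local master setting are placeholders:

```scala
import org.apache.spark.sql.{SQLContext, SparkSession}
import org.apache.spark.sql.execution.streaming.MemoryStream
import org.apache.spark.sql.functions.{avg, max, sum}

object StreamingAggSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[2]").appName("StreamingAggSketch").getOrCreate()
    import spark.implicits._
    implicit val sqlCtx: SQLContext = spark.sqlContext

    // The MemoryStream[value] source named in the plan above.
    val input = MemoryStream[Int]

    // Same aggregation shape as the failed test: key = value % 10, with sum/avg/max per key.
    val aggregated = input.toDF()
      .selectExpr("value % 10 AS id", "value")
      .groupBy("id")
      .agg(sum("value").as("sum_value"), avg("value").as("avg_value"), max("value").as("max_value"))

    // Update output mode into an in-memory table, mirroring the MemorySink in the plan.
    val query = aggregated.writeStream
      .outputMode("update")
      .format("memory")
      .queryName("agg_sketch") // hypothetical sink name
      .start()

    input.addData(1, 11)        // the AddData step from the test's progress log
    query.processAllAvailable() // wait for the micro-batch the test timed out on
    spark.table("agg_sketch").show()

    query.stop()
    spark.stop()
  }
}
```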
Run / Run Docker integration tests
No files were found with the provided path: **/target/unit-tests.log. No artifacts will be uploaded.
Run / Run Docker integration tests
No files were found with the provided path: **/target/test-reports/*.xml. No artifacts will be uploaded.
Run / Build modules: pyspark-errors
No files were found with the provided path: **/target/test-reports/*.xml. No artifacts will be uploaded.

Artifacts

Produced during runtime
Name (status): Size
site (Expired): 60.6 MB
test-results-api, catalyst, hive-thriftserver--17-hadoop3-hive2.3 (Expired): 2.81 MB
test-results-core, unsafe, kvstore, avro, utils, network-common, network-shuffle, repl, launcher, examples, sketch--17-hadoop3-hive2.3 (Expired): 2.53 MB
test-results-hive-- other tests-17-hadoop3-hive2.3 (Expired): 922 KB
test-results-hive-- slow tests-17-hadoop3-hive2.3 (Expired): 857 KB
test-results-mllib-local, mllib, graphx--17-hadoop3-hive2.3 (Expired): 1.45 MB
test-results-pyspark-connect--17-hadoop3-hive2.3 (Expired): 413 KB
test-results-pyspark-core, pyspark-streaming--17-hadoop3-hive2.3 (Expired): 80.1 KB
test-results-pyspark-mllib, pyspark-ml, pyspark-ml-connect--17-hadoop3-hive2.3 (Expired): 1.09 MB
test-results-pyspark-pandas--17-hadoop3-hive2.3 (Expired): 1.46 MB
test-results-pyspark-pandas-connect-part0--17-hadoop3-hive2.3 (Expired): 1.32 MB
test-results-pyspark-pandas-connect-part1--17-hadoop3-hive2.3 (Expired): 1.42 MB
test-results-pyspark-pandas-connect-part2--17-hadoop3-hive2.3 (Expired): 953 KB
test-results-pyspark-pandas-connect-part3--17-hadoop3-hive2.3 (Expired): 530 KB
test-results-pyspark-pandas-slow--17-hadoop3-hive2.3 (Expired): 2.86 MB
test-results-pyspark-sql, pyspark-resource, pyspark-testing--17-hadoop3-hive2.3 (Expired): 406 KB
test-results-sparkr--17-hadoop3-hive2.3 (Expired): 280 KB
test-results-sql-- extended tests-17-hadoop3-hive2.3 (Expired): 3 MB
test-results-sql-- other tests-17-hadoop3-hive2.3 (Expired): 4.31 MB
test-results-sql-- slow tests-17-hadoop3-hive2.3 (Expired): 2.83 MB
test-results-streaming, sql-kafka-0-10, streaming-kafka-0-10, yarn, kubernetes, hadoop-cloud, spark-ganglia-lgpl, connect, protobuf--17-hadoop3-hive2.3 (Expired): 1.46 MB
test-results-tpcds--17-hadoop3-hive2.3 (Expired): 21.8 KB
unit-tests-log-sql-- slow tests-17-hadoop3-hive2.3 (Expired): 377 MB