Build / Cache base image #23

Triggered via: create, August 9, 2024 08:29
Status: Skipped
Total duration: 1s

Annotations: 3 errors
ExpressionsSchemaSuite.Check schemas for expression examples: ExpressionsSchemaSuite#L164
org.scalatest.exceptions.TestFailedException: 435 did not equal 436. Expected 435 blocks in result file but got 436. Try regenerating the result files.
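
This failure means the checked-in golden result file is one block behind the current expression registry. In Spark's SQL test suites the usual convention (an assumption here; the log does not show this fork's setup) is to regenerate golden files by re-running the suite with SPARK_GENERATE_GOLDEN_FILES=1, e.g. SPARK_GENERATE_GOLDEN_FILES=1 build/sbt "sql/testOnly *ExpressionsSchemaSuite". A minimal sketch of how a suite typically gates regeneration on that variable:

// Hedged sketch: Spark test suites commonly regenerate checked-in golden
// files when SPARK_GENERATE_GOLDEN_FILES=1 is set in the environment.
// Whether this fork's ExpressionsSchemaSuite follows that exact convention
// is an assumption.
object GoldenFileFlag {
  val regenerateGoldenFiles: Boolean =
    sys.env.get("SPARK_GENERATE_GOLDEN_FILES").contains("1")

  def main(args: Array[String]): Unit = {
    if (regenerateGoldenFiles) {
      println("Would rewrite the expression-schema result file (436 blocks).")
    } else {
      println("Would compare against the checked-in result file (435 blocks).")
    }
  }
}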
SparkSqlParserSuite.Checks if SET/RESET can parse all the configurations: SparkSqlParserSuite#L58
org.apache.spark.sql.catalyst.parser.ParseException:
[INVALID_SET_SYNTAX] Expected format is 'SET', 'SET key', or 'SET key=value'. If you want to include special characters in key, or include semicolon in value, please use backquotes, e.g., SET `key`=`value`.(line 1, pos 0)

== SQL ==
SET spark.sql.view-truncate-enabled
^^^
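
The parser rejects the key because bare SET keys cannot contain a hyphen; per the message's own hint, backquoting the key makes it parse. A minimal sketch (spark.sql.view-truncate-enabled appears to be a fork-specific conf, and the value true is an assumption):

import org.apache.spark.sql.SparkSession

object SetBackquoteExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").getOrCreate()

    // Fails to parse with INVALID_SET_SYNTAX: the bare key contains '-'.
    // spark.sql("SET spark.sql.view-truncate-enabled")

    // Parses: special characters in the key are allowed inside backquotes,
    // exactly as the error message advises.
    spark.sql("SET `spark.sql.view-truncate-enabled`=true")

    spark.stop()
  }
}

Since this suite checks that every registered configuration round-trips through bare SET syntax, the underlying fix is more likely renaming the conf to Spark's dot-separated style than backquoting it in the test; that, too, is an inference from the message alone.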
TruncateTableSuite.TRUNCATE TABLE using V1 catalog V1 command: SPARK-30312: truncate table - keep acl/permission: TruncateTableSuite#L45
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 68.0 failed 1 times, most recent failure: Lost task 0.0 in stage 68.0 (TID 84) (localhost executor driver): java.lang.UnsupportedOperationException: Not implemented by the FakeLocalFsFileSystem FileSystem implementation
	at org.apache.hadoop.fs.FileSystem.getScheme(FileSystem.java:361)
	at org.apache.spark.sql.util.S3FileUtils$.tryOpenClose(S3FileUtils.scala:32)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.$anonfun$buildReaderWithPartitionValues$2(ParquetFileFormat.scala:209)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:217)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:279)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:129)
	at org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:595)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenEvaluatorFactory$WholeStageCodegenPartitionEvaluator$$anon$1.hasNext(WholeStageCodegenEvaluatorFactory.scala:43)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
	at org.apache.spark.util.Iterators$.size(Iterators.scala:29)
	at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1787)
	at org.apache.spark.rdd.RDD.$anonfun$count$1(RDD.scala:1296)
	at org.apache.spark.rdd.RDD.$anonfun$count$1$adapted(RDD.scala:1296)
	at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2433)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:93)
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:166)
	at org.apache.spark.scheduler.Task.run(Task.scala:141)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:620)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:623)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
Driver stacktrace:
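
The root cause is Hadoop's FileSystem.getScheme, whose base implementation throws UnsupportedOperationException ("Not implemented by the FakeLocalFsFileSystem FileSystem implementation") unless a subclass overrides it; the fork-specific S3FileUtils.tryOpenClose calls it on the test's fake filesystem before opening the file. A minimal sketch of the kind of override that would unblock the test; the real FakeLocalFsFileSystem is not shown in this log, so its base class and scheme here are assumptions:

import org.apache.hadoop.fs.RawLocalFileSystem

// Hedged sketch: giving the test's fake local filesystem an explicit
// scheme stops FileSystem.getScheme from throwing. Extending
// RawLocalFileSystem and returning "file" are assumptions about how the
// fake is defined; the override itself is the point.
class FakeLocalFsFileSystem extends RawLocalFileSystem {
  override def getScheme: String = "file"
}

Alternatively, tryOpenClose could skip the scheme probe for non-S3 filesystems, avoiding the call entirely; that is a design guess, not something visible in this trace.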