Why do different loggers output different logs in Spark 3.5.0? #334
-
Hey, Spark is both a library and an application, therefore it comes bundled with its own logging dependencies. According to the link in the warning, there should be one and only one such jar (an SLF4J binding) on the classpath. My suggestion would be to add the exact same logging dependencies to your application as Spark bundles them. I hope this helps.
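To make that concrete, here is a minimal sketch of what aligning the dependencies could look like in an sbt build. The artifact and version choices (log4j-slf4j2-impl 2.20.0 for Spark 3.5.0) are my assumptions, not something confirmed above; verify them against the jars your Spark distribution actually ships under $SPARK_HOME/jars:

```scala
// build.sbt sketch, assuming a Spark 3.5.0 application built with sbt.
// The log4j version (2.20.0) is an assumption -- check the
// log4j-slf4j2-impl jar actually present under $SPARK_HOME/jars.
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "3.5.0" % Provided,
  // Depend on the same SLF4J binding Spark itself bundles:
  "org.apache.logging.log4j" % "log4j-slf4j2-impl" % "2.20.0" % Provided
)

// Keep competing bindings from sneaking in transitively, so exactly
// one SLF4J binding ends up on the classpath:
excludeDependencies ++= Seq(
  ExclusionRule("org.slf4j", "slf4j-log4j12"),
  ExclusionRule("ch.qos.logback", "logback-classic")
)
```

Marking the logging artifacts Provided keeps them out of the application jar, so the binding on Spark's own classpath is the only one SLF4J can find.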
-
Hey, the Spark history server is unfortunately only a monitoring tool. It doesn't aggregate log messages from Spark applications; it just provides a UI for application life-cycle events after they are finished. For application log aggregation, the Stackable Spark Operator installs a Vector agent in the form of a sidecar container with every Spark application Pod. The user is responsible for setting up the Vector aggregator and configuring the sinks they want. We have documentation on how to do that here and also have examples in the integration tests. We are happy to take feedback if you find something wrong or unclear.
-
When I tried running Spark jobs, I got these lines in the logs:
So I tried 3 sets of jars that contain org.slf4j.impl.StaticLoggerBinder (a quick runtime check for which binding actually wins is sketched after this list):
Set number 1:
Set number 2: log4j-slf4j-impl-2.22.0.jar
Set number 3: slf4j-log4j12-1.7.30.jar
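For what it's worth, one way to see which binding SLF4J actually picked at runtime is to ask the logger factory for its implementation class. This uses only the plain SLF4J API, so it should work with any of the three sets; a minimal sketch:

```scala
import org.slf4j.LoggerFactory

object WhichBinding {
  def main(args: Array[String]): Unit = {
    // The concrete ILoggerFactory class reveals which binding won, e.g.
    // org.apache.logging.slf4j.Log4jLoggerFactory for log4j-slf4j-impl
    // (Log4j 2) vs. org.slf4j.impl.Log4jLoggerFactory for slf4j-log4j12.
    println(LoggerFactory.getILoggerFactory.getClass.getName)
  }
}
```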
With set number 1 I get non-fatal exceptions that I do not get with set number 2, for example:
I do not get any exceptions with set number 2 at all.
With set number 3 my Spark app actually crashes because executors get lost for some reason. I don't know how a logger can make a Spark job crash, but somehow it happens, so set number 3 is a no-go, and now I have to choose between set 1 and set 2.
But the problem is that either
or
Which one is it in your opinion?