spark-user mailing list archives

From Alex Landa <metalo...@gmail.com>
Subject Spark Standalone - Failing to pass extra java options to the driver in cluster mode
Date Mon, 19 Aug 2019 18:42:43 GMT
Hi,

We are using Spark Standalone 2.4.0 in production and publishing our Scala
app using cluster mode.
I noticed that extra Java options intended for the driver are not actually passed to the driver JVM.
A submit example:
spark-submit --deploy-mode cluster --master spark://<master ip>:7077 \
  --driver-memory 512mb \
  --conf "spark.driver.extraJavaOptions=-XX:+HeapDumpOnOutOfMemoryError" \
  --class App app.jar

does not pass -XX:+HeapDumpOnOutOfMemoryError as a JVM argument. Instead, the driver JVM receives it as a system property:

-Dspark.driver.extraJavaOptions=-XX:+HeapDumpOnOutOfMemoryError

I created a test app to confirm this:

import java.lang.management.ManagementFactory
import scala.collection.JavaConverters._
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local")
  .appName("testApp")
  .getOrCreate()

// get a RuntimeMXBean reference for the running JVM
val runtimeMxBean = ManagementFactory.getRuntimeMXBean

// the JVM's input arguments as a list of strings
val listOfArguments = runtimeMxBean.getInputArguments

// print each argument
listOfArguments.asScala.foreach(a => println(s"ARG: $a"))


In client mode I get:
ARG: -XX:+HeapDumpOnOutOfMemoryError
while in cluster mode I get:
ARG: -Dspark.driver.extraJavaOptions=-XX:+HeapDumpOnOutOfMemoryError
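One alternative we are considering is passing the option through the spark-submit flag --driver-java-options rather than the conf key, though we have not yet verified that it behaves any differently in standalone cluster mode:

```shell
# Untested workaround idea: use the --driver-java-options flag,
# which is the spark-submit equivalent of spark.driver.extraJavaOptions.
spark-submit --deploy-mode cluster --master spark://<master ip>:7077 \
  --driver-memory 512mb \
  --driver-java-options "-XX:+HeapDumpOnOutOfMemoryError" \
  --class App app.jar
```

Setting spark.driver.extraJavaOptions in spark-defaults.conf on the cluster might be another avenue, but again we have not confirmed it avoids this problem.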

I would appreciate your help in working around this issue.
Thanks,
Alex
