spark-dev mailing list archives

From Mark Hamstra <>
Subject Does ExecutorRunner.buildJavaOpts work the way we want it to?
Date Mon, 14 Oct 2013 20:54:07 GMT
I'm busy working on upgrading an application stack of which Spark and Shark
are components.  The 0.8.0 changes in how configuration, environment
variables, and SPARK_JAVA_OPTS are handled are giving me some trouble, but
I'm not sure whether it is just my trouble or a more general problem with
ExecutorRunner.buildJavaOpts.
The essence of the problem is that workerLocalOpts and userOpts are both
ending up with the same options set -- usually with the same value, but not
always.  Having particular options set twice with the same values is, at
best, pointless.  Having a particular option set twice with different
values is causing my shark-server to fail to start.

Now, at least in my circumstances, it would never seem to make sense for an
option to be inherited from both workerLocalOpts and userOpts; and the value
associated with any duplicate key in userOpts should override the value from
workerLocalOpts.  I can customize ExecutorRunner for my environment (or look
for some other workaround), but what I'm really wondering is whether this
userOpts-override behavior is what we actually want in Spark, instead of the
current union of workerLocalOpts and userOpts.
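For concreteness, the override semantics I have in mind could be sketched
roughly like this.  This is not Spark's actual ExecutorRunner code, just an
illustration: the object and method names are hypothetical, and I'm assuming
options are either -Dkey=value system properties (deduplicated by key) or
other flags (kept as-is):

```scala
// Hypothetical sketch of userOpts-override merging; not actual Spark code.
object JavaOptsMerge {
  // Key a -Dkey=value system property by its key; key any other flag by itself.
  private def optKey(opt: String): String =
    if (opt.startsWith("-D")) opt.drop(2).takeWhile(_ != '=') else opt

  // Union of the two option lists, with userOpts winning on duplicate keys:
  // drop any workerLocalOpts entry whose key also appears in userOpts.
  def merge(workerLocalOpts: Seq[String], userOpts: Seq[String]): Seq[String] = {
    val userKeys = userOpts.map(optKey).toSet
    workerLocalOpts.filterNot(o => userKeys.contains(optKey(o))) ++ userOpts
  }
}
```

With this, merge(Seq("-Dspark.foo=1", "-Xmx512m"), Seq("-Dspark.foo=2"))
yields Seq("-Xmx512m", "-Dspark.foo=2") -- a single -Dspark.foo, with the
user's value winning -- rather than setting the property twice with
conflicting values, which is what's tripping up my shark-server.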
