spark-issues mailing list archives

From Michael Schmeißer (JIRA) <j...@apache.org>
Subject [jira] [Commented] (SPARK-650) Add a "setup hook" API for running initialization code on each executor
Date Sun, 16 Oct 2016 20:36:20 GMT

    [ https://issues.apache.org/jira/browse/SPARK-650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15580502#comment-15580502
] 

Michael Schmeißer commented on SPARK-650:
-----------------------------------------

What if I have a Hadoop InputFormat? Then, certain things happen before the first RDD exists,
don't they?

I'll give the empty-RDD solution a shot next week. It sounds a bit better than what we have
right now, but it still relies on Spark internals that are most likely undocumented and might
change in the future. I've also had the feeling that Spark basically takes a functional approach
with RDDs, so couldn't executing anything on an empty RDD be optimized away to do nothing?
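In the absence of a setup-hook API, a common workaround (not spelled out in this thread; the names below are illustrative) is lazy per-process initialization: each executor process sets up its shared state, such as a reporting client, the first time any task touches it, guarded by a module-level flag. A minimal sketch:

```python
# Hypothetical per-executor lazy initialization. In a real job this module
# would be shipped to the executors and get_reporting_client() would be
# called from inside mapPartitions/foreachPartition.

_reporting_client = None  # one instance per executor process


def get_reporting_client():
    """Initialize on first use in this process; reuse afterwards."""
    global _reporting_client
    if _reporting_client is None:
        # Stand-in for real setup work (connecting a reporting library, etc.)
        _reporting_client = {"configured": True}
    return _reporting_client
```

Unlike running code on an empty RDD, this does not depend on Spark scheduling a task on every executor; initialization happens exactly where and when a task actually needs it.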

> Add a "setup hook" API for running initialization code on each executor
> -----------------------------------------------------------------------
>
>                 Key: SPARK-650
>                 URL: https://issues.apache.org/jira/browse/SPARK-650
>             Project: Spark
>          Issue Type: New Feature
>          Components: Spark Core
>            Reporter: Matei Zaharia
>            Priority: Minor
>
> Would be useful to configure things like reporting libraries



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

