spark-issues mailing list archives

From "Brandon (Jira)" <>
Subject [jira] [Commented] (SPARK-24918) Executor Plugin API
Date Tue, 29 Oct 2019 04:42:00 GMT


Brandon commented on SPARK-24918:

[~nsheth] placing the plugin class inside a jar and passing it with `--jars` to spark-submit
should be sufficient, right? It seems this is not enough to make the class visible to the executor.
I have had to explicitly add the jar to `spark.executor.extraClassPath` for plugins to load
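To make the workaround concrete, here is a hedged sketch of the kind of invocation being described. The jar name, plugin class, and application jar are placeholders, not paths from the original message:

```shell
# Hypothetical example: myplugin.jar, com.example.MyPlugin, and my-app.jar
# are placeholders. --jars ships the jar to executors, but the plugin class
# may still not be on the executor's system classpath when plugins are
# loaded at startup; adding the jar to spark.executor.extraClassPath makes
# it visible early enough.
spark-submit \
  --jars myplugin.jar \
  --conf spark.executor.plugins=com.example.MyPlugin \
  --conf spark.executor.extraClassPath=myplugin.jar \
  my-app.jar
```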

> Executor Plugin API
> -------------------
>                 Key: SPARK-24918
>                 URL:
>             Project: Spark
>          Issue Type: New Feature
>          Components: Spark Core
>    Affects Versions: 2.4.0
>            Reporter: Imran Rashid
>            Assignee: Nihar Sheth
>            Priority: Major
>              Labels: SPIP, memory-analysis
>             Fix For: 2.4.0
> It would be nice if we could specify an arbitrary class to run within each executor for
debugging and instrumentation.  It's hard to do this currently because:
> a) you have no idea when executors will come and go with DynamicAllocation, so you don't
have a chance to run custom code before the first task
> b) even with static allocation, you'd have to change the code of your spark app itself
to run a special task to "install" the plugin, which is often tough in production cases when
those maintaining regularly running applications might not even know how to make changes to
the application.
> For example, this could be used in a debugging context
to understand memory use, just by re-running an application with extra command line arguments
(as opposed to rebuilding Spark).
> I think one tricky part here is just deciding the api, and how it's versioned.  Does it
just get created when the executor starts, and that's it?  Or does it get more specific events,
like task start, task end, etc?  Would we ever add more events?  It should definitely be a
{{DeveloperApi}}, so breaking compatibility would be allowed ... but still should be avoided.
 We could create a base class that has no-op implementations, or explicitly version everything.
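The no-op option above can be sketched as a small standalone interface. This is an illustrative sketch, not the actual Spark API: the names (`ExecutorPluginSketch`, `MemoryLoggingPlugin`) are hypothetical. Default no-op methods mean new event hooks can be added later without breaking plugins compiled against an older version:

```java
// Sketch of a versioning-friendly plugin API: an interface whose methods
// all have no-op default bodies, so adding a new event later does not
// break existing implementations.
interface ExecutorPluginSketch {
    default void init() {}      // called once when the executor starts
    default void shutdown() {}  // called once when the executor exits
}

// A hypothetical plugin that only cares about startup; it inherits the
// no-op shutdown() for free.
class MemoryLoggingPlugin implements ExecutorPluginSketch {
    boolean initialized = false;

    @Override
    public void init() {
        initialized = true; // e.g. start a memory-polling thread here
    }
}
```

The trade-off versus explicit versioning is that default methods keep old binaries working silently, while a versioned API makes incompatibility visible at load time.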
> Note that this is not needed in the driver, as we already have SparkListeners (even if
you don't care about the SparkListenerEvents and just want to inspect objects in the JVM,
it's still good enough).

This message was sent by Atlassian Jira

