spark-issues mailing list archives

From "Twinkle Sachdeva (JIRA)" <>
Subject [jira] [Commented] (SPARK-4705) Driver retries in yarn-cluster mode always fail if event logging is enabled
Date Mon, 09 Feb 2015 15:08:34 GMT


Twinkle Sachdeva commented on SPARK-4705:

Hi [~vanzin]

Please take a look at the screenshot. I will change "NA" to be a non-anchored element.

It shows the UI for a history server, where some of the applications were run on a scheduler
that does not support multiple attempts, while other applications have multiple attempts.

Should we introduce a property that shows the multi-attempt UI by default? A sketch of what such a switch could look like follows below.
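A minimal sketch of such a flag, assuming a plain SparkConf boolean; the property name spark.history.ui.multiAttempt.enabled is hypothetical, not an existing Spark setting:

{noformat}
import org.apache.spark.SparkConf

// Sketch only: "spark.history.ui.multiAttempt.enabled" is a hypothetical
// property name, defaulting to the multi-attempt view.
val conf = new SparkConf()
val showMultiAttemptUi =
  conf.getBoolean("spark.history.ui.multiAttempt.enabled", true)

// The history server page could then branch on the flag: per-attempt rows
// when enabled, one flat row per application otherwise.
{noformat}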

> Driver retries in yarn-cluster mode always fail if event logging is enabled
> ---------------------------------------------------------------------------
>                 Key: SPARK-4705
>                 URL:
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core, YARN
>    Affects Versions: 1.2.0
>            Reporter: Marcelo Vanzin
>         Attachments: multi-attempts with no attempt based UI.png
> yarn-cluster mode will retry to run the driver in certain failure modes. If event logging is enabled, this will most probably fail, because:
> {noformat}
> Exception in thread "Driver" Log directory hdfs:// already exists!
>         at org.apache.spark.util.FileLogger.createLogDir(FileLogger.scala:129)
>         at org.apache.spark.util.FileLogger.start(FileLogger.scala:115)
>         at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:74)
>         at org.apache.spark.SparkContext.<init>(SparkContext.scala:353)
> {noformat}
> The event log path should be "more unique". Or perhaps retries of the same app should clean up the old logs first.
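A minimal sketch of both options from the description, using the Hadoop FileSystem API; the object and helper names, and the idea of threading an attempt ID through to the logger, are assumptions, not the current FileLogger behavior:

{noformat}
import org.apache.hadoop.fs.{FileSystem, Path}

object EventLogDirs {
  // "More unique" path: fold the attempt number into the directory name,
  // e.g. ".../application_1234_0001_attempt2", so a retried driver never
  // collides with a previous attempt's log directory.
  def logDirFor(baseDir: String, appId: String, attemptId: Int): Path =
    new Path(baseDir, s"${appId}_attempt$attemptId")

  // The other option: delete a stale directory left by a failed attempt
  // before the new attempt starts logging.
  def cleanStaleLogDir(fs: FileSystem, dir: Path): Unit =
    if (fs.exists(dir)) {
      fs.delete(dir, true) // true = recursive delete
    }
}
{noformat}

Either variant avoids the createLogDir failure above; the per-attempt path has the added advantage of preserving earlier attempts' logs for the history server, which is what the multi-attempt UI in the screenshot would display.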

