spark-issues mailing list archives

From "Sean Owen (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-2645) Spark driver calls System.exit(50) after calling SparkContext.stop() the second time
Date Sun, 25 Jan 2015 12:17:34 GMT

    [ https://issues.apache.org/jira/browse/SPARK-2645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14291072#comment-14291072 ]

Sean Owen commented on SPARK-2645:
----------------------------------

Just checking if you believe this is still an issue, since I don't see any code that exits
with status 50 as mentioned in the description. I have not tested this at all myself. If it's
still a problem, is the fix simply to handle multiple calls to {{stop()}} better?
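
For reference, a minimal sketch of what "handling multiple calls to {{stop()}} better" could look like: an atomic guard that turns the second and later calls into no-ops. This illustrates the pattern only; the class and field names are hypothetical, not the actual SparkContext internals.

{code}
import java.util.concurrent.atomic.AtomicBoolean;

class StoppableService {
    private final AtomicBoolean stopped = new AtomicBoolean(false);

    public void stop() {
        // compareAndSet succeeds only on the first call, so repeated
        // stop() calls return immediately instead of re-running the
        // shutdown logic (and tripping "already stopped" errors).
        if (!stopped.compareAndSet(false, true)) {
            return;
        }
        // ... release resources exactly once ...
    }
}
{code}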

> Spark driver calls System.exit(50) after calling SparkContext.stop() the second time
> -------------------------------------------------------------------------------------
>
>                 Key: SPARK-2645
>                 URL: https://issues.apache.org/jira/browse/SPARK-2645
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.0.0
>            Reporter: Vlad Komarov
>
> In some cases my application calls SparkContext.stop() after the context has already stopped, and this shuts down the JVM that runs the Spark driver.
> For example, this program should run forever:
> {code}
> import java.util.Arrays;
>
> import org.apache.spark.api.java.JavaRDD;
> import org.apache.spark.api.java.JavaSparkContext;
>
> JavaSparkContext context = new JavaSparkContext("spark://12.34.21.44:7077", "DummyApp");
> try {
>     // Run one simple job so the application registers with the master.
>     JavaRDD<Integer> rdd = context.parallelize(Arrays.asList(1, 2, 3));
>     rdd.count();
> } catch (Throwable e) {
>     e.printStackTrace();
> }
> try {
>     context.cancelAllJobs();
>     context.stop();
>     // Call stop() a second time; this should be a no-op.
>     context.stop();
> } catch (Throwable e) {
>     e.printStackTrace();
> }
> // Block forever; the driver JVM should stay alive.
> Thread.currentThread().join();
> {code}
> but instead it finishes with exit code 50 after the second call to SparkContext.stop().
> It also throws an exception like this:
> {code}
> org.apache.spark.ServerStateException: Server is already stopped
> 	at org.apache.spark.HttpServer.stop(HttpServer.scala:122) ~[spark-core_2.10-1.0.0.jar:1.0.0]
> 	at org.apache.spark.HttpFileServer.stop(HttpFileServer.scala:48) ~[spark-core_2.10-1.0.0.jar:1.0.0]
> 	at org.apache.spark.SparkEnv.stop(SparkEnv.scala:81) ~[spark-core_2.10-1.0.0.jar:1.0.0]
> 	at org.apache.spark.SparkContext.stop(SparkContext.scala:984) ~[spark-core_2.10-1.0.0.jar:1.0.0]
> 	at org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend.dead(SparkDeploySchedulerBackend.scala:92) ~[spark-core_2.10-1.0.0.jar:1.0.0]
> 	at org.apache.spark.deploy.client.AppClient$ClientActor.markDead(AppClient.scala:178) ~[spark-core_2.10-1.0.0.jar:1.0.0]
> 	at org.apache.spark.deploy.client.AppClient$ClientActor$$anonfun$registerWithMaster$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(AppClient.scala:96) ~[spark-core_2.10-1.0.0.jar:1.0.0]
> 	at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:790) ~[spark-core_2.10-1.0.0.jar:1.0.0]
> 	at org.apache.spark.deploy.client.AppClient$ClientActor$$anonfun$registerWithMaster$1.apply$mcV$sp(AppClient.scala:91) [spark-core_2.10-1.0.0.jar:1.0.0]
> 	at akka.actor.Scheduler$$anon$9.run(Scheduler.scala:80) [akka-actor_2.10-2.2.3-shaded-protobuf.jar:na]
> 	at akka.actor.LightArrayRevolverScheduler$$anon$3$$anon$2.run(Scheduler.scala:241) [akka-actor_2.10-2.2.3-shaded-protobuf.jar:na]
> 	at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:42) [akka-actor_2.10-2.2.3-shaded-protobuf.jar:na]
> 	at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386) [akka-actor_2.10-2.2.3-shaded-protobuf.jar:na]
> 	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) [scala-library-2.10.4.jar:na]
> 	at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) [scala-library-2.10.4.jar:na]
> 	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) [scala-library-2.10.4.jar:na]
> 	at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) [scala-library-2.10.4.jar:na]
> {code}
> One remark: this behavior is only reproducible when I call SparkContext.cancelAllJobs() before calling SparkContext.stop().
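
For readers wondering where exit code 50 comes from: the trace above shows the failure escaping inside {{Utils.tryOrExit}}, which hands any uncaught {{Throwable}} to a handler that terminates the JVM, and 50 is Spark's uncaught-exception exit code. Below is a rough Java sketch of that pattern under those assumptions; the class and method names are illustrative, not Spark's actual implementation.

{code}
final class TryOrExitSketch {
    // Matches the exit status reported in this issue; in Spark this is
    // the UNCAUGHT_EXCEPTION exit code.
    static final int UNCAUGHT_EXCEPTION = 50;

    // Wrap work scheduled on a background thread so that any escaping
    // Throwable kills the JVM rather than dying silently on that thread.
    static void tryOrExit(Runnable block) {
        try {
            block.run();
        } catch (Throwable t) {
            t.printStackTrace();
            System.exit(UNCAUGHT_EXCEPTION);
        }
    }

    public static void main(String[] args) {
        // A second stop() that throws (e.g. "Server is already stopped")
        // inside such a wrapper exits the driver JVM with status 50.
        tryOrExit(() -> {
            throw new IllegalStateException("Server is already stopped");
        });
    }
}
{code}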



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

