spark-user mailing list archives

From Andrew Or <and...@databricks.com>
Subject Re: SparkContext#stop
Date Thu, 22 May 2014 09:21:53 GMT
You should always call sc.stop() so that it cleans up state and does not fill
up your disk over time. The strange behavior you observe is mostly benign,
since it only occurs after you have presumably finished all of your work with
the SparkContext. I am not aware of a bug in Spark that causes this
behavior.
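
A common pattern is to wrap the job in try/finally so the context is stopped even if the job throws. A minimal sketch (the app name and job body here are just placeholders):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object Example {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("example")
    val sc = new SparkContext(conf)
    try {
      // ... your actual job ...
      sc.parallelize(1 to 100).sum()
    } finally {
      // Always stop the context so executors and temporary
      // files are cleaned up, even when the job fails.
      sc.stop()
    }
  }
}
```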

What are you doing in your application? Do you see any exceptions in the
logs? Have you looked at the worker logs? You can browse through these in
the worker web UI at http://<worker-url>:8081

Andrew
