spark-user mailing list archives

From "Nguyen, Duc" <duc.ngu...@pearson.com>
Subject Re: spark streaming: stderr does not roll
Date Wed, 12 Nov 2014 20:15:32 GMT
I've also tried setting the aforementioned properties using
System.setProperty() as well as on the command line while submitting the
job with --conf key=value, all without success. When I go to the Spark UI,
click on that particular streaming job, and open the "Environment" tab,
I can see the properties are set correctly. But regardless of what I've
tried, the "stderr" log file on the worker nodes does not roll and
continues to grow, eventually filling the disk to 100% and crashing the
cluster. Has anyone else encountered this? Anyone?
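
For reference, the command-line variant I tried looks roughly like this
(the class name, master URL, and jar path are placeholders; the three
--conf flags are the properties in question):

  # placeholders for class/master/jar; the --conf flags are the ones under test
  spark-submit \
    --class com.example.StreamingJob \
    --master spark://master:7077 \
    --conf spark.executor.logs.rolling.strategy=size \
    --conf spark.executor.logs.rolling.size.maxBytes=1024 \
    --conf spark.executor.logs.rolling.maxRetainedFiles=3 \
    streaming-job.jar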



On Fri, Nov 7, 2014 at 3:35 PM, Nguyen, Duc <duc.nguyen@pearson.com> wrote:

> We are running Spark Streaming jobs (version 1.1.0). After a sufficient
> amount of time, the stderr file grows until the disk is 100% full and
> the cluster crashes. I've read this
>
> https://github.com/apache/spark/pull/895
>
> and also read this
>
> http://spark.apache.org/docs/latest/configuration.html#spark-streaming
>
>
> So I've set the following properties in an attempt to get the stderr
> log file to roll:
>
> sparkConf.set("spark.executor.logs.rolling.strategy", "size")
>             .set("spark.executor.logs.rolling.size.maxBytes", "1024")
>             .set("spark.executor.logs.rolling.maxRetainedFiles", "3")
>
>
> Yet it does not roll and continues to grow. Am I missing something obvious?
>
>
> thanks,
> Duc
>
>
