spark-user mailing list archives

From Yana Kadiyska <>
Subject Executor size and checkpoints
Date Sun, 22 Feb 2015 03:30:31 GMT
Hi all,

I have a streaming application, and midway through I decided to increase the
executor memory. I spent a long time launching it like this:

~/spark-1.2.0-bin-cdh4/bin/spark-submit --class StreamingTest
--executor-memory 2G --master...

and observing that the executor memory was still at the old 512 MB setting.

I was about to ask if this is a bug when I decided to delete the
checkpoints. Sure enough, the setting took effect after that.

So my question is -- why is it necessary to remove checkpoints before an
increase in executor memory takes effect? This seems pretty unintuitive to me.
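My guess (an assumption on my part, not something I've confirmed) is that the
checkpoint serializes the SparkConf along with the rest of the context, so on
restart via StreamingContext.getOrCreate the old configuration is restored and
the creating function -- launched with the new --executor-memory -- is never
invoked. Roughly this pattern (the checkpoint path is illustrative):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val checkpointDir = "/tmp/streaming-checkpoint" // illustrative path

def createContext(): StreamingContext = {
  // This only runs when no checkpoint exists; otherwise the context,
  // including its SparkConf, is deserialized from the checkpoint and
  // any new command-line settings are ignored.
  val conf = new SparkConf().setAppName("StreamingTest")
  val ssc = new StreamingContext(conf, Seconds(10))
  ssc.checkpoint(checkpointDir)
  ssc
}

val ssc = StreamingContext.getOrCreate(checkpointDir, () => createContext())
```

If that's right, it would explain the behavior, but it still seems like
resource settings ought to be re-read from the new submission.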

Thanks for any insights.
