You will need to restart your Mesos workers to pick up the new limits as well.

On Tue, Oct 7, 2014 at 4:02 PM, Sunny Khatri <> wrote:
Make sure the ulimit change has taken effect, as Todd mentioned. You can verify with ulimit -a. Also make sure you have the proper kernel parameters set in /etc/sysctl.conf (on Mac OS X).
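One gotcha with ulimit -a is that it only shows the limit of the shell you run it in, not of an already-running worker process. A minimal Python sketch (using the standard resource module) to check the limit from inside a process; the 500000 figure is just the value mentioned later in this thread:

```python
import resource

# Query the max-open-files limit (RLIMIT_NOFILE) for the current process.
# A child process (e.g. a restarted Spark/Mesos worker) inherits whatever
# limit was in effect when it was launched, so check from the process
# itself, not just from an interactive shell.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft limit:", soft)
print("hard limit:", hard)

# Fail loudly if the raised limit never reached this process.
EXPECTED = 500000  # the value set in this thread; adjust to your setting
if soft < EXPECTED:
    print("warning: raised ulimit has not taken effect here")
```

Running this from a PySpark task (rather than a login shell) tells you what the executors actually see.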

On Tue, Oct 7, 2014 at 3:57 PM, Lisonbee, Todd <> wrote:

Are you sure the new ulimit has taken effect?

How many cores are you using? How many reducers?

        "In general if a node in your cluster has C assigned cores and you run
        a job with X reducers then Spark will open C*X files in parallel and
        start writing. Shuffle consolidation will help decrease the total
        number of files created but the number of file handles open at any
        time doesn't change so it won't help the ulimit problem."

Quoted from Patrick at:
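Patrick's C*X rule is easy to sanity-check with back-of-the-envelope arithmetic; the core and reducer counts below are hypothetical illustrative numbers, not values from this thread:

```python
# Back-of-the-envelope check of the C*X rule quoted above.
# These numbers are made up; substitute your own cluster's values.
cores_per_node = 16     # C: cores assigned to Spark on one node
reducers = 200          # X: number of reduce partitions in the job

open_handles = cores_per_node * reducers
print(open_handles)  # -> 3200 file handles open at once on that node

# A default ulimit of 1024 is exceeded immediately at this job size,
# while the 500000 used later in this thread leaves ample headroom.
```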



-----Original Message-----
From: SK []
Sent: Tuesday, October 7, 2014 2:12 PM
Subject: Re: Shuffle files

- We set ulimit to 500000, but I still get the same "too many open files" error.

- I tried setting consolidateFiles to True, but that did not help either.
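This matches the quote above: consolidation shrinks the total number of shuffle files written to disk, but not the number of handles open at any one time. A rough model, with hypothetical numbers:

```python
# Rough model of shuffle file counts on one node; numbers are illustrative.
map_tasks = 1000        # M: map tasks that run on this node over the job
cores = 16              # C: concurrently running tasks
reducers = 200          # X: reduce partitions

# Without consolidation, each map task writes one file per reducer:
total_files_plain = map_tasks * reducers        # 200000 files on disk

# With consolidation, concurrent tasks reuse file groups, so roughly:
total_files_consolidated = cores * reducers     # 3200 files on disk

# But only running tasks hold files open, so the number of
# simultaneously open handles is about C * X either way -- which is
# why consolidation does not fix a ulimit problem:
open_handles = cores * reducers                 # 3200 in both cases
print(total_files_plain, total_files_consolidated, open_handles)
```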

I am using a Mesos cluster. Does Mesos have any limit on the number of open files?
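Mesos itself does not impose an open-files cap beyond the OS, but an executor can inherit a stale limit from the slave daemon that launched it. A sketch for inspecting a live process on Linux via /proc (the PID in the comment is hypothetical; falls back to the resource module for the current process on other platforms):

```python
import resource

def nofile_limit(pid="self"):
    """Return the soft max-open-files limit for a process.

    Reads /proc/<pid>/limits on Linux; falls back to the resource
    module (current process only) where /proc is unavailable.
    """
    try:
        with open(f"/proc/{pid}/limits") as f:
            for line in f:
                if line.startswith("Max open files"):
                    # Columns: name..., soft limit, hard limit, units
                    return int(line.split()[3])
    except OSError:
        pass
    # Fallback: only valid for the current process.
    return resource.getrlimit(resource.RLIMIT_NOFILE)[0]

# Replace "self" with the PID of a Spark executor launched by Mesos,
# e.g. nofile_limit(12345).  If the value is still the old default
# (often 1024), restart the Mesos slave so children pick up the new limit.
print(nofile_limit())
```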


View this message in context:
Sent from the Apache Spark User List mailing list archive at

To unsubscribe, e-mail:
For additional commands, e-mail:
