spark-user mailing list archives

From Khanderao kand <khanderao.k...@gmail.com>
Subject Re: Question on Scalability
Date Thu, 30 Jan 2014 02:19:49 GMT
Yes. The data that overflows memory would be persisted to local disk. As a result,
performance will degrade, but the application will continue to run.
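For example, this spill-to-disk behavior applies when the RDD's storage level allows disk use. A minimal sketch (the app name and input path are made up for illustration) that opts into that behavior explicitly:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    object SpillExample {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("SpillExample")
        val sc = new SparkContext(conf)

        // Hypothetical large input; the path is illustrative only.
        val lines = sc.textFile("hdfs:///data/large-input")

        // MEMORY_AND_DISK keeps partitions in memory when they fit and
        // spills the ones that don't to local disk instead of dropping them.
        lines.persist(StorageLevel.MEMORY_AND_DISK)

        println(lines.count())
        sc.stop()
      }
    }

With the default MEMORY_ONLY level, partitions that don't fit are simply not cached and get recomputed when needed, so choosing a disk-backed storage level is what makes the "persist locally and keep going" behavior explicit.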


On Thu, Jan 30, 2014 at 6:20 AM, David Thomas <dt5434884@gmail.com> wrote:

> How does Spark handle the situation where the RDD does not fit into the
> memory of all the machines in the cluster together?
>
