spark-user mailing list archives

From Sai Prasanna <>
Subject Preferred RDD Size
Date Wed, 07 May 2014 10:52:32 GMT

Is there a lower bound on RDD size below which Spark's in-memory framework is
not used optimally? For example, if creating an RDD from a very small dataset
of around 64 MB is less efficient than from one of around 256 MB, the
application could be tuned accordingly.

So is there a soft lower bound related to the Hadoop block size, or to
something else?
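For context: when Spark reads a file from HDFS, it typically creates one
partition per HDFS block, so the partition count scales with file size. A
minimal sketch of that arithmetic (assuming the 64 MB default block size of
older Hadoop versions; newer versions default to 128 MB):

```python
import math

def num_partitions(file_size_mb: int, block_size_mb: int = 64) -> int:
    # One input partition per HDFS block -- the usual behavior when
    # Spark reads a file stored on HDFS. The 64 MB default here is an
    # assumption matching older Hadoop releases.
    return math.ceil(file_size_mb / block_size_mb)

print(num_partitions(64))   # a 64 MB file  -> 1 partition
print(num_partitions(256))  # a 256 MB file -> 4 partitions
```

Under this assumption, a 64 MB dataset yields a single partition and hence no
parallelism across cores, which is one reason very small RDDs can be less
efficient than larger ones.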

Thanks in advance!
