spark-user mailing list archives

From Alan Prando <>
Subject Spark saveAsText file size
Date Tue, 25 Nov 2014 02:38:14 GMT
Hi Folks!

I'm running a Spark job on a cluster with 9 slaves and 1 master (250 GB RAM,
32 cores, and 1 TB of storage each).

This job generates about 1.2 TB of data in an RDD with 1200 partitions.
When I call saveAsTextFile("hdfs://..."), Spark creates 1200 files named
"part-000*" in the HDFS folder. However, only some of the files have content
(~450 files of 2.3 GB each); all the others are empty (0 bytes).

Is there any explanation for this file size (2.3 GB)?
Shouldn't Spark save 1200 files of 1 GB each?
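(For context: saveAsTextFile writes exactly one "part-*" file per RDD partition, empty partitions included, so zero-byte files usually mean many partitions received no rows, e.g. after a hash-partitioned shuffle whose keys cluster into a subset of buckets. A minimal sketch of that effect, with purely illustrative key names and counts not taken from the job above:)

```python
# Hypothetical sketch: one part-file is written per partition, so keys
# that hash into only a subset of the 1200 partitions leave the rest
# as 0-byte files. zlib.crc32 stands in for Spark's own hash function.
import zlib

num_partitions = 1200
# Illustrative assumption: the job's data has ~450 distinct keys.
keys = [f"key-{i}" for i in range(450)]

# Mimic hash partitioning: partition = hash(key) mod num_partitions.
occupied = {zlib.crc32(k.encode()) % num_partitions for k in keys}

empty = num_partitions - len(occupied)
print(f"partitions with data: {len(occupied)}, empty part-files: {empty}")
```

If that is the cause, calling `rdd.repartition(1200)` (a real RDD method that reshuffles rows evenly across partitions) just before saveAsTextFile tends to equalize the part-file sizes.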

Thanks in advance.

Alan Vidotti Prando.
