spark-user mailing list archives

From tridib <>
Subject Control number of parquet generated from JavaSchemaRDD
Date Tue, 25 Nov 2014 04:24:25 GMT
I am reading around 1000 input files from disk into an RDD and generating
Parquet output. It always produces the same number of Parquet files as there
are input files. I tried to merge them using

rdd.coalesce(n) and/or rdd.repartition(n)

I also tried setting the HDFS and Parquet block sizes:

        // target 128 MB blocks for both HDFS and Parquet output
        int MB_128 = 128 * 1024 * 1024;
        sc.hadoopConfiguration().setInt("dfs.blocksize", MB_128);
        sc.hadoopConfiguration().setInt("parquet.block.size", MB_128);

Neither had any effect.
Is there a way to control the size or number of the Parquet files generated?
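
For reference, a minimal sketch of the write path I have in mind (Spark 1.x
Java API; the paths, partition count, class name, and JSON input format are
placeholders for illustration, and I am assuming coalesce() has to be applied
to the JavaSchemaRDD that is actually saved):

        import org.apache.spark.api.java.JavaSparkContext;
        import org.apache.spark.sql.api.java.JavaSQLContext;
        import org.apache.spark.sql.api.java.JavaSchemaRDD;

        public class ParquetCoalesce {
            public static void main(String[] args) {
                JavaSparkContext sc = new JavaSparkContext("local[*]", "parquet-coalesce");
                JavaSQLContext sqlCtx = new JavaSQLContext(sc);

                // ~1000 input files -> ~1000 partitions -> ~1000 Parquet files,
                // since each partition is written out as its own file.
                JavaSchemaRDD input = sqlCtx.jsonFile("hdfs:///path/to/input");

                // Coalescing the RDD that gets saved should control the file
                // count; coalescing an intermediate RDD and then saving the
                // original one leaves the partition count unchanged.
                // (8 partitions and shuffle=false are placeholder choices.)
                input.coalesce(8, false).saveAsParquetFile("hdfs:///path/to/output");

                sc.stop();
            }
        }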

