spark-user mailing list archives

From Naveen Kumar Pokala <npok...@spcapitaliq.com>
Subject RE: Control number of parquet generated from JavaSchemaRDD
Date Tue, 25 Nov 2014 12:42:41 GMT
Hi,

When submitting your Spark job, pass --executor-cores 2 --num-executors 24; the dataset
will then be written as 24*2 = 48 parquet files.

Alternatively, set spark.default.parallelism (e.g. to 50) on your SparkConf object; the
dataset will then be written as 50 files in HDFS.
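For example, a submit command along these lines (the application class, JAR name, and
master URL below are placeholders, not taken from this thread):

```shell
# Sketch of a spark-submit invocation; class/jar/master are hypothetical.
# 24 executors * 2 cores = 48 concurrent tasks, so the output stage writes
# 48 parquet part files.
spark-submit \
  --class com.example.MyParquetJob \
  --master yarn \
  --num-executors 24 \
  --executor-cores 2 \
  my-parquet-job.jar

# Alternatively, set the default parallelism on the SparkConf instead:
#   conf.set("spark.default.parallelism", "50")
```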


-Naveen

-----Original Message-----
From: tridib [mailto:tridib.samanta@live.com] 
Sent: Tuesday, November 25, 2014 9:54 AM
To: user@spark.incubator.apache.org
Subject: Control number of parquet generated from JavaSchemaRDD

Hello,
I am reading around 1000 input files from disk into an RDD and generating parquet. It
always produces the same number of parquet files as input files. I tried to merge them
using rdd.coalesce(n) and/or rdd.repartition(n).
also tried using:
also tried using:

        // try forcing 128 MB HDFS and parquet block sizes
        int MB_128 = 128 * 1024 * 1024;
        sc.hadoopConfiguration().setInt("dfs.blocksize", MB_128);
        sc.hadoopConfiguration().setInt("parquet.block.size", MB_128);

No luck.
Is there a way to control the size/number of parquet files generated?

Thanks
Tridib




---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org



