spark-user mailing list archives

From Al M <>
Subject Re: Spark DataFrames uses too many partition
Date Wed, 12 Aug 2015 10:25:09 GMT
DataFrames parallelism is currently controlled through the configuration option
spark.sql.shuffle.partitions.  The default value is 200.
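As a sketch (the value 100 below is only an illustration, not a recommendation), the default can be overridden globally in spark-defaults.conf:

```
# conf/spark-defaults.conf
spark.sql.shuffle.partitions  100
```

It can also be set per application with `spark-submit --conf spark.sql.shuffle.partitions=100`, or at runtime from a Spark 1.x shell with `sqlContext.setConf("spark.sql.shuffle.partitions", "100")`.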

I have raised an Improvement Jira to make it possible to specify the number
of partitions.

