[ https://issues.apache.org/jira/browse/SPARK-4019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Josh Rosen updated SPARK-4019:
------------------------------
Summary: Repartitioning with more than 2000 partitions may drop all data when partitions are mostly empty.  (was: Repartitioning with more than 2000 partitions drops all data)
> Repartitioning with more than 2000 partitions may drop all data when partitions are mostly empty.
> -------------------------------------------------------------------------------------------------
>
> Key: SPARK-4019
> URL: https://issues.apache.org/jira/browse/SPARK-4019
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 1.2.0
> Reporter: Xiangrui Meng
> Assignee: Josh Rosen
> Priority: Blocker
>
> {code}
> sc.makeRDD(0 until 10, 1000).repartition(2001).collect()
> {code}
> returns `Array()`.
> 1.1.0 doesn't have this issue. Tried both the HASH and SORT shuffle managers.
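> A minimal sketch (illustrative, not part of the original report) contrasting both sides of the 2000-partition boundary in spark-shell, assuming the default SparkContext {{sc}} on a 1.2.0-SNAPSHOT build: 2000 partitions is expected to return the data, while 2001 reproduces the empty result described above.
> {code}
> // Assumes spark-shell; `sc` is the default SparkContext.
> // At the 2000-partition boundary the repartition is expected to behave normally:
> sc.makeRDD(0 until 10, 1000).repartition(2000).collect()
> // expected: Array[Int] with 10 elements
>
> // One partition more and the collected result comes back empty on 1.2.0-SNAPSHOT:
> sc.makeRDD(0 until 10, 1000).repartition(2001).collect()
> // observed: Array()
> {code}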
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)