spark-issues mailing list archives

From "Apache Spark (Jira)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-20007) Make SparkR apply() functions robust to workers that return empty data.frame
Date Mon, 11 May 2020 23:22:00 GMT

    [ https://issues.apache.org/jira/browse/SPARK-20007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17104967#comment-17104967 ]

Apache Spark commented on SPARK-20007:
--------------------------------------

User 'liangz1' has created a pull request for this issue:
https://github.com/apache/spark/pull/28504

> Make SparkR apply() functions robust to workers that return empty data.frame
> ----------------------------------------------------------------------------
>
>                 Key: SPARK-20007
>                 URL: https://issues.apache.org/jira/browse/SPARK-20007
>             Project: Spark
>          Issue Type: Bug
>          Components: SparkR
>    Affects Versions: 2.2.0
>            Reporter: Hossein Falaki
>            Priority: Major
>              Labels: bulk-closed
>
> When using {{gapply()}} (or other members of the {{apply()}} family) with a schema, Spark
> will try to parse data returned from the R process on each worker as Spark DataFrame rows
> based on the schema. In this case, the provided schema suggests that we have six columns. When
> an R worker returns results to the JVM, Spark SQL will try to access its columns one by one and
> cast them to the proper types. If an R worker returns nothing, the JVM will throw an
> {{ArrayIndexOutOfBoundsException}}.
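
A minimal SparkR sketch of the failure mode described above. The grouping column, output schema, and the {{mpg > 30}} filter threshold are illustrative choices, not taken from the issue; the point is that some groups legitimately produce an empty data.frame, which the JVM side must parse against the declared two-column schema.

{code:r}
library(SparkR)
sparkR.session()

df <- createDataFrame(mtcars)

# Output schema declared up front; the JVM parses each returned
# data.frame against it, column by column.
schema <- structType(structField("cyl", "double"),
                     structField("avg_mpg", "double"))

result <- gapply(df, "cyl", function(key, x) {
  high <- x[x$mpg > 30, ]        # some groups match no rows at all
  if (nrow(high) == 0) {
    # Returning an empty data.frame is the case this issue is about:
    # it used to crash the JVM with ArrayIndexOutOfBoundsException.
    return(data.frame())
  }
  data.frame(cyl = key[[1]], avg_mpg = mean(high$mpg))
}, schema)

head(collect(result))
{code}

With {{mtcars}}, only the {{cyl == 4}} group contains rows with {{mpg > 30}}, so the groups for 6 and 8 cylinders return empty data.frames and exercise exactly the code path the fix needs to handle.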



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

