spark-issues mailing list archives

From "Sean Owen (JIRA)" <>
Subject [jira] [Reopened] (SPARK-19646) binaryRecords replicates records in scala API
Date Fri, 17 Feb 2017 09:38:41 GMT


Sean Owen reopened SPARK-19646:

Ah, I take it back. With that info I think this is in fact a problem. Although the cause is
indeed Hadoop reusing Writables, this is not a case where the user is touching Writables
directly. binaryRecords gets the byte[] from a BytesWritable, but that reference is the same
every time, including the internal byte array, so every record ends up pointing at the same
buffer. The bytes need to be copied out. Simple fix.
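The pitfall described above can be sketched without Spark or Hadoop at all. This is a minimal illustration, not Spark's actual code: the `Reader`, `collectBuggy`, and `collectFixed` names are hypothetical, and the reader stands in for a Hadoop record reader that reuses one BytesWritable-backed buffer.

```scala
// Minimal sketch (hypothetical names) of the Writable-reuse pitfall.
object ReuseDemo {
  // Stands in for a Hadoop record reader that reuses one buffer,
  // the way BytesWritable reuses its backing byte array.
  final class Reader(records: Seq[Array[Byte]]) {
    private val buf = new Array[Byte](4) // one shared 4-byte buffer, reused for every record
    def read: Iterator[Array[Byte]] =
      records.iterator.map { r =>
        Array.copy(r, 0, buf, 0, r.length) // overwrite the shared buffer in place
        buf                                // hand out the SAME reference every time
      }
  }

  // Buggy: stores the shared reference, so every element shows the last record read.
  def collectBuggy(reader: Reader): Seq[Array[Byte]] = reader.read.toList

  // Fixed: copy the bytes out before the buffer is overwritten by the next record.
  def collectFixed(reader: Reader): Seq[Array[Byte]] = reader.read.map(_.clone).toList
}
```

With two distinct input records, the buggy collection yields identical elements, while the copying version preserves each record, which matches the `.take(5)`/`.collect()` symptom reported below.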

> binaryRecords replicates records in scala API
> ---------------------------------------------
>                 Key: SPARK-19646
>                 URL:
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.0.0, 2.1.0
>            Reporter: BahaaEddin AlAila
>            Priority: Minor
> The Scala sc.binaryRecords replicates one record for the entire set.
> For example, I am trying to load the CIFAR binary data, where in one big binary file each
3073-byte record represents a 32x32x3-byte image plus 1 byte for the label. The file resides
on my local filesystem.
> .take(5) returns 5 records, all the same; .collect() returns 10,000 records, all the same.
> What is puzzling is that the PySpark version works perfectly, even though underneath it
calls the Scala implementation.
> I have tested this on 2.1.0 and 2.0.0.
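The record layout in the report (1 label byte followed by 32x32x3 pixel bytes, 3073 bytes total) can be parsed with a small helper. This is a hypothetical sketch, not part of Spark; `CifarRecord` and `split` are made-up names, and the only assumption taken from Spark's API is that sc.binaryRecords(path, recordLength) yields fixed-length Array[Byte] records.

```scala
// Hypothetical helper (not part of Spark) for splitting one CIFAR-10 record
// of the shape described in the report: 1 label byte + 32*32*3 pixel bytes.
object CifarRecord {
  val RecordLength: Int = 1 + 32 * 32 * 3 // 3073 bytes per record

  // Returns (label, pixels) for one record, e.g. one element of
  // sc.binaryRecords(path, RecordLength) once the copy bug is fixed.
  def split(record: Array[Byte]): (Int, Array[Byte]) = {
    require(record.length == RecordLength,
      s"expected $RecordLength bytes, got ${record.length}")
    (record.head & 0xff, record.tail) // label is an unsigned byte; the remaining 3072 bytes are the image
  }
}
```

Note that if the records all alias the same reused buffer, as in this bug, every `split` call would of course return the same label and pixels, which is exactly the symptom above.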

This message was sent by Atlassian JIRA
