spark-issues mailing list archives

From "Jim Blomo (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (SPARK-1353) IllegalArgumentException when writing to disk
Date Sun, 30 Mar 2014 06:32:14 GMT

     [ https://issues.apache.org/jira/browse/SPARK-1353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jim Blomo updated SPARK-1353:
-----------------------------

    Environment: 
AWS EMR 3.2.30-49.59.amzn1.x86_64 #1 SMP  x86_64 GNU/Linux
Spark 1.0.0-SNAPSHOT built for Hadoop 1.0.4 built 2014-03-18

  was:AWS EMR 3.2.30-49.59.amzn1.x86_64 #1 SMP  x86_64 GNU/Linux


> IllegalArgumentException when writing to disk
> ---------------------------------------------
>
>                 Key: SPARK-1353
>                 URL: https://issues.apache.org/jira/browse/SPARK-1353
>             Project: Apache Spark
>          Issue Type: Bug
>          Components: Block Manager
>         Environment: AWS EMR 3.2.30-49.59.amzn1.x86_64 #1 SMP  x86_64 GNU/Linux
> Spark 1.0.0-SNAPSHOT built for Hadoop 1.0.4 built 2014-03-18
>            Reporter: Jim Blomo
>            Priority: Minor
>
> The Executor may fail when trying to mmap a file bigger than Integer.MAX_VALUE due to the constraints of FileChannel.map (http://docs.oracle.com/javase/7/docs/api/java/nio/channels/FileChannel.html#map(java.nio.channels.FileChannel.MapMode,%20long,%20long)). The signature takes longs, but the size value must not exceed Integer.MAX_VALUE. This manifests with the following backtrace:
> java.lang.IllegalArgumentException: Size exceeds Integer.MAX_VALUE
>         at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:828)
>         at org.apache.spark.storage.DiskStore.getBytes(DiskStore.scala:98)
>         at org.apache.spark.storage.BlockManager.doGetLocal(BlockManager.scala:337)
>         at org.apache.spark.storage.BlockManager.getLocal(BlockManager.scala:281)
>         at org.apache.spark.storage.BlockManager.get(BlockManager.scala:430)
>         at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:38)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:220)
>         at org.apache.spark.api.python.PythonRDD$$anon$2.run(PythonRDD.scala:85)
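
The JDK behavior the report describes can be reproduced in isolation. A minimal sketch (illustrative only, not Spark code): FileChannel.map accepts long arguments but rejects any size above Integer.MAX_VALUE during argument validation, before the file is consulted, so even a tiny file triggers the same exception seen in the backtrace.

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;

public class MapLimitDemo {
    public static void main(String[] args) throws Exception {
        // A tiny temp file is enough: map() validates its size argument first.
        File f = File.createTempFile("map-limit", ".bin");
        f.deleteOnExit();
        try (RandomAccessFile raf = new RandomAccessFile(f, "r");
             FileChannel ch = raf.getChannel()) {
            // One byte past the limit reproduces the exception from the backtrace.
            long tooBig = (long) Integer.MAX_VALUE + 1;
            try {
                ch.map(FileChannel.MapMode.READ_ONLY, 0, tooBig);
                System.out.println("mapped (unexpected)");
            } catch (IllegalArgumentException e) {
                System.out.println("IllegalArgumentException: " + e.getMessage());
            }
        }
    }
}
```

Sizes up to and including Integer.MAX_VALUE pass this check, which is why blocks spilled to disk only hit the limit once they grow past 2 GB.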



--
This message was sent by Atlassian JIRA
(v6.2#6252)
