spark-issues mailing list archives

From "Matt Cheah (JIRA)" <j...@apache.org>
Subject [jira] [Created] (SPARK-6405) Spark Kryo buffer should be forced to be max. 2GB
Date Thu, 19 Mar 2015 00:14:39 GMT
Matt Cheah created SPARK-6405:
---------------------------------

             Summary: Spark Kryo buffer should be forced to be max. 2GB
                 Key: SPARK-6405
                 URL: https://issues.apache.org/jira/browse/SPARK-6405
             Project: Spark
          Issue Type: Improvement
          Components: Spark Core
    Affects Versions: 1.3.0
            Reporter: Matt Cheah
             Fix For: 1.4.0


Kryo buffers used in serialization are backed by Java byte arrays, which have a maximum size
of 2GB. However, we currently set the buffer size blindly, with no check for numeric overflow
and no regard for the maximum array size. We should enforce a maximum buffer size of 2GB and
warn the user when they have exceeded that amount.

I'm open to the idea of flat-out failing the initialization of the Spark Context if the buffer
size is over 2GB, but I'm afraid that could break backwards compatibility... although one
can argue that the user had an incorrect buffer size in the first place.
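For illustration, here is a rough sketch of the kind of check I have in mind. The object and
method names are hypothetical (not the actual patch), the config key follows the existing
spark.kryoserializer.buffer.max.mb setting, and whether to fail or merely warn is exactly the
open question above:

{code}
// Rough sketch only: validate the configured Kryo buffer size against the
// 2GB limit imposed by Java byte arrays before handing it to Kryo.
import org.apache.spark.SparkConf

object KryoBufferCheck {
  // Java arrays are indexed by Int, so a byte-array-backed buffer can hold
  // at most Int.MaxValue bytes (~2GB), i.e. 2047 whole megabytes.
  val MaxBufferSizeMb: Int = Int.MaxValue / (1024 * 1024) // 2047

  def validatedBufferSizeMb(conf: SparkConf): Int = {
    // "spark.kryoserializer.buffer.max.mb" is the existing setting;
    // the 64MB default here is illustrative.
    val requested = conf.getInt("spark.kryoserializer.buffer.max.mb", 64)
    if (requested > MaxBufferSizeMb) {
      // Failing fast is one option; logging a warning and clamping to the
      // maximum would preserve backwards compatibility instead.
      throw new IllegalArgumentException(
        s"spark.kryoserializer.buffer.max.mb must be at most $MaxBufferSizeMb MB " +
        s"(2GB Java array limit), but was set to $requested MB")
    }
    requested
  }
}
{code}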



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

