I think we should reopen it.
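For anyone trying to reproduce this in the meantime: the error quoted below is the JVM's 64 KB per-method bytecode limit being hit by Catalyst-generated code (here the generated SpecificOrdering). A rough, hypothetical sketch of the kind of wide-schema sort that can trigger it follows; the object name, column count, and local master are my assumptions, not something taken from this thread.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, lit}

object WideOrderingRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("wide-ordering-repro")   // hypothetical app name
      .master("local[*]")
      .getOrCreate()

    // Build a single-row DataFrame with many columns; the number of columns
    // needed to overflow a generated method varies by Spark version and types.
    val wide = (1 to 1000).foldLeft(spark.range(1).toDF("id")) {
      case (df, i) => df.withColumn(s"c$i", lit(i))
    }

    // Sorting on every column makes codegen emit a comparator over the whole
    // schema, which is where a single generated method can grow past 64 KB.
    wide.sort(wide.columns.map(col): _*).collect()

    spark.stop()
  }
}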
> On Aug 16, 2016, at 1:48 AM, Kazuaki Ishizaki <ISHIZAKI@jp.ibm.com> wrote:
>
> I just realized it because it broke a build with Scala 2.10.
> https://github.com/apache/spark/commit/fa244e5a90690d6a31be50f2aa203ae1a2e9a1cf
>
> I can reproduce the problem reported in SPARK-15285 with the master branch.
> Should we reopen SPARK-15285?
>
> Best Regards,
> Kazuaki Ishizaki,
>
>
>
> From: Ted Yu <yuzhihong@gmail.com>
> To: dhruve ashar <dhruveashar@gmail.com>
> Cc: Aris <arisofalaska@gmail.com>, "user@spark.apache.org" <user@spark.apache.org>
> Date: 2016/08/15 06:19
> Subject: Re: Spark 2.0.0 JaninoRuntimeException
>
>
>
> Looks like the proposed fix was reverted:
>
> Revert "[SPARK-15285][SQL] Generated SpecificSafeProjection.apply method grows beyond
64 KB"
>
> This reverts commit fa244e5a90690d6a31be50f2aa203ae1a2e9a1cf.
>
> Maybe this was fixed in some other JIRA?
>
> On Fri, Aug 12, 2016 at 2:30 PM, dhruve ashar <dhruveashar@gmail.com> wrote:
> I see a similar issue that was resolved recently: https://issues.apache.org/jira/browse/SPARK-15285
>
> On Fri, Aug 12, 2016 at 3:33 PM, Aris <arisofalaska@gmail.com> wrote:
> Hello folks,
>
> I'm on Spark 2.0.0 working with Datasets. Unit tests with smaller data pass on my laptop, but when I'm on a cluster, I get cryptic error messages:
>
> Caused by: org.codehaus.janino.JaninoRuntimeException: Code of method "(Lorg/apache/spark/sql/catalyst/InternalRow;Lorg/apache/spark/sql/catalyst/InternalRow;)I" of class "org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificOrdering" grows beyond 64 KB
>
> Unfortunately, I'm not clear on how to even isolate the source of this problem. I didn't have this problem in Spark 1.6.1.
>
> Any clues?
>
>
>
> --
> -Dhruve Ashar
>
>
>