spark-dev mailing list archives

From Sean Owen <so...@cloudera.com>
Subject Re: [VOTE] Release Apache Spark 1.3.0 (RC3)
Date Sun, 08 Mar 2015 19:46:39 GMT
Yeah, it's an interesting question what the better default is for the
single set of artifacts published to Maven. I think there's an
argument for Hadoop 2, and perhaps Hive, for the 2.10 build too. Pros
and cons are discussed in more detail at

https://issues.apache.org/jira/browse/SPARK-5134
https://github.com/apache/spark/pull/3917
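
(For anyone trying to reproduce the configuration being discussed, below is a
rough sketch of what a Hadoop 2 + Hive build looks like with the standard 1.x
Maven profiles. The profile names, Hadoop version, and the 2.11 switch script
are assumptions drawn from the build docs of that era, not something settled
in this thread.)

    # Hadoop 2 + Hive artifacts (profile names assumed from the 1.x build docs)
    mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -Phive -Phive-thriftserver -DskipTests clean package

    # Scala 2.11 variant: switch the build first (script name assumed), then add -Dscala-2.11.
    # Some components (e.g. the JDBC/thrift server) were reportedly not yet 2.11-ready at this point.
    dev/change-version-to-2.11.sh
    mvn -Pyarn -Phadoop-2.4 -Dscala-2.11 -DskipTests clean package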

On Sun, Mar 8, 2015 at 7:42 PM, Matei Zaharia <matei.zaharia@gmail.com> wrote:
> +1
>
> Tested it on Mac OS X.
>
> One small issue I noticed is that the Scala 2.11 build is using Hadoop 1 without Hive,
> which is kind of weird because people will more likely want Hadoop 2 with Hive. So it would
> be good to publish a build for that configuration instead. We can do it if we do a new RC,
> or it might be that binary builds may not need to be voted on (I forgot the details there).
>
> Matei


