spark-dev mailing list archives

From Nicholas Chammas <nicholas.cham...@gmail.com>
Subject Re: Build fails on master (f90ad5d)
Date Wed, 05 Nov 2014 02:00:03 GMT
Ah, found it:
https://github.com/apache/spark/blob/master/docs/building-spark.md#building-with-sbt

This version of the docs should be published once 1.2.0 is released.
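
In short, the sbt build described there boils down to something like the following (a sketch; run it from the root of a Spark checkout, and note that tests are not run unless you invoke them explicitly):

```shell
# Build Spark with sbt, using the launcher script bundled in the repo.
# Run from the root of a Spark checkout.
sbt/sbt clean       # remove previous build output
sbt/sbt assembly    # compile everything and produce the assembly jar
```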

Nick

On Tue, Nov 4, 2014 at 8:53 PM, Alessandro Baretta <alexbaretta@gmail.com>
wrote:

> Nicholas,
>
> Indeed, I was trying to use sbt to speed up the build. My initial
> experiments with the Maven build took over 50 minutes, which on a 4-core
> 2014 MacBook Pro seems obscene. Then again, after the failed attempt with
> sbt, mvn clean package took only 13 minutes, leading me to think that most
> of the time was somehow being spent in downloading and building
> dependencies.
>
> Anyway, if sbt is supported, it would be great to add docs about it somewhere,
> especially since, as you point out, most devs are using it.
>
> Thanks for your help.
>
> Alex
>
> On Tue, Nov 4, 2014 at 5:42 PM, Nicholas Chammas <
> nicholas.chammas@gmail.com> wrote:
>
>> Zinc, I believe, is something you can install and run to speed up your
>> Maven builds. It's not required.
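
For reference, getting Zinc running looks roughly like this (a sketch; the install step assumes Homebrew on OS X, as in this thread, and 3030 is Zinc's default port):

```shell
# Optional: run a Zinc compile server to speed up repeated Maven builds.
brew install zinc   # assumes Homebrew on OS X; on other platforms, download zinc
zinc -start         # starts the compile server on port 3030 (the default)

# With the server up, the Maven build finds it instead of printing
# "Zinc server is not available at port 3030" and falling back to a
# normal incremental compile.
mvn -DskipTests clean package
```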
>>
>> I get a bunch of warnings when compiling with Maven, too. Dunno if they
>> are expected or not, but things work fine from there on.
>>
>> Many people do indeed use sbt. I don't know where we have documentation
>> on how to use sbt (we recently removed it from the README), but sbt/sbt
>> clean followed by sbt/sbt assembly should work fine.
>>
>> Maven is indeed the "proper" way to build Spark, but building with sbt is
>> supported too and most Spark devs I believe use it because it's faster than
>> Maven.
>>
>> Nick
>>
>> On Tue, Nov 4, 2014 at 8:03 PM, Alessandro Baretta <alexbaretta@gmail.com
>> > wrote:
>>
>>> Nicholas,
>>>
>>> Yes, I saw them, but they refer to maven, and I'm under the impression
>>> that sbt is the preferred way of building Spark. Is maven indeed the "right
>>> way"? Anyway, as per your advice I ctrl-d'ed my sbt shell and ran `mvn
>>> -DskipTests clean package`, which completed successfully. So, indeed, in
>>> trying to use sbt I was on a wild goose chase.
>>>
>>> Here are a couple of glitches I'm seeing. First of all, many warnings
>>> such as the following:
>>>
>>> [WARNING]
>>> /home/alex/git/spark/streaming/src/test/scala/org/apache/spark/streaming/BasicOperationsSuite.scala:454:
>>> inferred existential type
>>> scala.collection.mutable.HashMap[org.apache.spark.streaming.Time,org.apache.spark.rdd.RDD[_$2]]
>>> forSome { type _$2 }, which cannot be expressed by wildcards, should be
>>> enabled by making the implicit value scala.language.existentials visible.
>>> [WARNING]     assert(windowedStream2.generatedRDDs.contains(Time(10000)))
>>> [WARNING]                            ^
>>>
>>> [WARNING]
>>> /home/alex/git/spark/sql/hive/src/main/scala/org/apache/spark/sql/hive/parquet/FakeParquetSerDe.scala:34:
>>> @deprecated now takes two arguments; see the scaladoc.
>>> [WARNING] @deprecated("No code should depend on FakeParquetHiveSerDe as
>>> it is only intended as a " +
>>> [WARNING]  ^
>>>
>>> [WARNING]
>>> /home/alex/git/spark/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala:435:
>>> trait Deserializer in package serde2 is deprecated: see corresponding
>>> Javadoc for more information.
>>> [WARNING]
>>> Utils.getContextOrSparkClassLoader).asInstanceOf[Class[Deserializer]],
>>> [WARNING]                                                        ^
>>>
>>> [WARNING]
>>> /home/alex/git/spark/examples/src/main/scala/org/apache/spark/examples/mllib/StreamingKMeans.scala:22:
>>> imported `StreamingKMeans' is permanently hidden by definition of object
>>> StreamingKMeans in package mllib
>>> [WARNING] import org.apache.spark.mllib.clustering.StreamingKMeans
>>>
>>> Are they expected?
>>>
>>> Also, mvn complains about not having zinc. Is this a problem?
>>>
>>> [WARNING] Zinc server is not available at port 3030 - reverting to
>>> normal incremental compile
>>>
>>> Alex
>>>
>>> On Tue, Nov 4, 2014 at 3:11 PM, Nicholas Chammas <
>>> nicholas.chammas@gmail.com> wrote:
>>>
>>>> FWIW, the "official" build instructions are here:
>>>> https://github.com/apache/spark#building-spark
>>>>
>>>> On Tue, Nov 4, 2014 at 5:11 PM, Ted Yu <yuzhihong@gmail.com> wrote:
>>>>
>>>>> I built based on this commit today and the build was successful.
>>>>>
>>>>> What command did you use ?
>>>>>
>>>>> Cheers
>>>>>
>>>>> On Tue, Nov 4, 2014 at 2:08 PM, Alessandro Baretta <
>>>>> alexbaretta@gmail.com>
>>>>> wrote:
>>>>>
>>>>> > Fellow Sparkers,
>>>>> >
>>>>> > I am new here and still trying to learn to crawl. Please, bear with
>>>>> me.
>>>>> >
>>>>> > I just pulled f90ad5d from https://github.com/apache/spark.git and am
>>>>> > running the compile command in the sbt shell. This is the error I'm
>>>>> > seeing:
>>>>> >
>>>>> > [error]
>>>>> > /home/alex/git/spark/mllib/src/main/scala/org/apache/spark/mllib/linalg/Vectors.scala:32:
>>>>> > object sql is not a member of package org.apache.spark
>>>>> > [error] import org.apache.spark.sql.catalyst.types._
>>>>> > [error]                         ^
>>>>> >
>>>>> > Am I doing something obscenely stupid, or is the build genuinely broken?
>>>>> >
>>>>> > Alex
>>>>> >
>>>>>
>>>>
>>>>
>>>
>>
>
