spark-user mailing list archives

From Ted Yu <yuzhih...@gmail.com>
Subject Re: Error building Spark on Windows with sbt
Date Sun, 25 Oct 2015 19:50:19 GMT
If you have a pull request, Jenkins can test your change for you. 

FYI 

> On Oct 25, 2015, at 12:43 PM, Richard Eggert <richard.eggert@gmail.com> wrote:
> 
> Also, if I run the Maven build on Windows or Linux without setting -DskipTests=true, it hangs indefinitely when it gets to org.apache.spark.JavaAPISuite.
> 
> It's hard to test patches when the build doesn't work. :-/
> 
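(A minimal sketch of the invocation being discussed, assuming the stock Apache Maven CLI; the clean/package goals are illustrative and not quoted from this thread:

    # skip the test phase so the build does not hang at org.apache.spark.JavaAPISuite
    mvn -DskipTests=true clean package
)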
>> On Sun, Oct 25, 2015 at 3:41 PM, Richard Eggert <richard.eggert@gmail.com> wrote:
>> By "it works", I mean, "It gets past that particular error". It still fails several
minutes later with a different error: 
>> 
>> java.lang.IllegalStateException: impossible to get artifacts when data has not been loaded. IvyNode = org.scala-lang#scala-library;2.10.3
>> 
>> 
>>> On Sun, Oct 25, 2015 at 3:38 PM, Richard Eggert <richard.eggert@gmail.com> wrote:
>>> When I try to start up sbt for the Spark build, or if I try to import it in IntelliJ IDEA as an sbt project, it fails with a "No such file or directory" error when it attempts to "git clone" sbt-pom-reader into .sbt/0.13/staging/some-sha1-hash.
>>> 
>>> If I manually create the expected directory before running sbt or importing into IntelliJ, then it works. Why is it necessary to do this, and what can be done to make it not necessary?
>>> 
>>> Rich
>>> 
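(A minimal sketch of the manual workaround described above, assuming the staging path lives under the home directory as in sbt's default layout; the sha1 segment is a placeholder, not a real value, and should be taken from the error message:

    # pre-create the staging path that the failed "git clone" reports, then re-run sbt
    mkdir -p ~/.sbt/0.13/staging/some-sha1-hash
    sbt
)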
>> 
>> 
>> 
>> -- 
>> Rich
> 
> 
> 
> -- 
> Rich
