spark-user mailing list archives

From Deepak Vohra <dvohr...@yahoo.com.INVALID>
Subject Re: Is it feasible to build and run Spark on Windows?
Date Thu, 05 Dec 2019 23:39:30 GMT
Multiple Guava versions could be on the classpath, inherited from Hadoop. Use the Guava version
supported by Spark and exclude the other Guava versions. Also add spark.executor.userClassPathFirst=true
and spark.driver.userClassPathFirst=true to the properties.
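
For example, something along these lines might work (just a sketch; the Guava jar path and
version below are placeholders for whatever your Hadoop version expects):

  bin\spark-shell --jars C:\libs\guava-27.0-jre.jar ^
    --conf spark.driver.userClassPathFirst=true ^
    --conf spark.executor.userClassPathFirst=true

or put the two userClassPathFirst entries in conf\spark-defaults.conf.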


    On Thursday, December 5, 2019, 11:35:27 PM UTC, Ping Liu <pingpinganan@gmail.com> wrote:
 Hi Sean,
Oh, sorry.  I just went back to the Spark home directory, but the same error came up.
D:\apache\spark\bin>cd ..

D:\apache\spark>bin\spark-shell
Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
        at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
        at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
        at org.apache.spark.deploy.SparkHadoopUtil$.org$apache$spark$deploy$SparkHadoopUtil$$appendS3AndSparkHadoopHiveConfigurations(SparkHadoopUtil.scala:456)
        at org.apache.spark.deploy.SparkHadoopUtil$.newConfiguration(SparkHadoopUtil.scala:427)
        at org.apache.spark.deploy.SparkSubmit.$anonfun$prepareSubmitEnvironment$2(SparkSubmit.scala:342)
        at org.apache.spark.deploy.SparkSubmit$$Lambda$132/817978763.apply(Unknown Source)
        at scala.Option.getOrElse(Option.scala:189)
        at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:342)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:871)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

D:\apache\spark>
The error shows that the method being called, com.google.common.base.Preconditions.checkArgument(boolean, String, Object),
cannot be found: in the descriptor, Z stands for a boolean, followed by a String and an Object.

But the Guava version 19 Preconditions Javadoc (https://guava.dev/releases/19.0/api/docs/com/google/common/base/Preconditions.html)
only lists a varargs overload of that method:

static void checkArgument(boolean expression, String errorMessageTemplate, Object... errorMessageArgs)
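
Just to illustrate what I think is happening, here is a toy example I made up (it is not code
from Spark or Hadoop):

import com.google.common.base.Preconditions;

// Toy example, not Spark/Hadoop code.  Compiled against a Guava release that declares the
// exact checkArgument(boolean, String, Object) overload, javac links this call to that
// method.  If an older Guava, such as 19.0 whose Javadoc above only lists the varargs
// overload, is on the runtime classpath instead, the JVM cannot resolve
// checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V and throws
// java.lang.NoSuchMethodError, just like in the spark-shell stack trace.
public class GuavaOverloadDemo {
  public static void main(String[] args) {
    String name = "fs.defaultFS";
    Preconditions.checkArgument(name != null, "Property name %s must not be null", name);
  }
}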


From the Hadoop Configuration source code (https://hadoop.apache.org/docs/r2.7.1/api/src-html/org/apache/hadoop/conf/Configuration.html):

  public void set(String name, String value, String source) {
    Preconditions.checkArgument(
        name != null,
        "Property name must not be null");
    Preconditions.checkArgument(
        value != null,
        "The value of property " + name + " must not be null");

My best guess was that
maybe the Hadoop version in use was compiled against a newer Guava than the one on my classpath,
so its call to Preconditions.checkArgument(boolean, String, Object) cannot be resolved.  But this
is just my guess.
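
I suppose one way I could check which Guava version actually ends up being used (not sure this
is the right invocation; the profiles are just the ones from my build) is to run, from the
Spark home directory:

  mvn -Pyarn -Phadoop-3.2 -Dhadoop.version=3.2.1 dependency:tree -Dincludes=com.google.guava:guava

and then compare that against the guava-*.jar that lands under assembly\target (I think it is
assembly\target\scala-2.12\jars) after the build.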
Thanks.
Ping



On Thu, Dec 5, 2019 at 2:38 PM Sean Owen <srowen@gmail.com> wrote:

No, the build works fine, at least certainly on test machines. As I
say, try running from the actual Spark home, not bin/. You are still
running spark-shell there.

On Thu, Dec 5, 2019 at 4:37 PM Ping Liu <pingpinganan@gmail.com> wrote:
>
> Hi Sean,
>
> Thanks for your response!
>
> Sorry, I didn't mention that "build/mvn ..." doesn't work.  So I went to the Spark home
> directory and ran mvn from there.  Below are my build and run results.  The source code
> was just updated yesterday.  I guess the POM should specify a newer Guava library somehow.
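>
> For example (purely hypothetical; I have not tried this and the Guava version is just a
> placeholder), maybe something like this in the root pom.xml dependencyManagement section:
>
>   <dependencyManagement>
>     <dependencies>
>       <dependency>
>         <groupId>com.google.guava</groupId>
>         <artifactId>guava</artifactId>
>         <version>27.0-jre</version>
>       </dependency>
>     </dependencies>
>   </dependencyManagement>
>
> or overriding a guava.version property on the mvn command line, if the build defines one.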
>
> Thanks Sean.
>
> Ping
>
> [INFO] Reactor Summary for Spark Project Parent POM 3.0.0-SNAPSHOT:
> [INFO]
> [INFO] Spark Project Parent POM ........................... SUCCESS [ 14.794 s]
> [INFO] Spark Project Tags ................................. SUCCESS [ 18.233 s]
> [INFO] Spark Project Sketch ............................... SUCCESS [ 20.077 s]
> [INFO] Spark Project Local DB ............................. SUCCESS [  7.846 s]
> [INFO] Spark Project Networking ........................... SUCCESS [ 14.906 s]
> [INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [  6.267 s]
> [INFO] Spark Project Unsafe ............................... SUCCESS [ 31.710 s]
> [INFO] Spark Project Launcher ............................. SUCCESS [ 10.227 s]
> [INFO] Spark Project Core ................................. SUCCESS [08:03 min]
> [INFO] Spark Project ML Local Library ..................... SUCCESS [01:51 min]
> [INFO] Spark Project GraphX ............................... SUCCESS [02:20 min]
> [INFO] Spark Project Streaming ............................ SUCCESS [03:16 min]
> [INFO] Spark Project Catalyst ............................. SUCCESS [08:45 min]
> [INFO] Spark Project SQL .................................. SUCCESS [12:12 min]
> [INFO] Spark Project ML Library ........................... SUCCESS [  16:28 h]
> [INFO] Spark Project Tools ................................ SUCCESS [ 23.602 s]
> [INFO] Spark Project Hive ................................. SUCCESS [07:50 min]
> [INFO] Spark Project Graph API ............................ SUCCESS [  8.734 s]
> [INFO] Spark Project Cypher ............................... SUCCESS [ 12.420 s]
> [INFO] Spark Project Graph ................................ SUCCESS [ 10.186 s]
> [INFO] Spark Project REPL ................................. SUCCESS [01:03 min]
> [INFO] Spark Project YARN Shuffle Service ................. SUCCESS [01:19 min]
> [INFO] Spark Project YARN ................................. SUCCESS [02:19 min]
> [INFO] Spark Project Assembly ............................. SUCCESS [ 18.912 s]
> [INFO] Kafka 0.10+ Token Provider for Streaming ........... SUCCESS [ 57.925 s]
> [INFO] Spark Integration for Kafka 0.10 ................... SUCCESS [01:20 min]
> [INFO] Kafka 0.10+ Source for Structured Streaming ........ SUCCESS [02:26 min]
> [INFO] Spark Project Examples ............................. SUCCESS [02:00 min]
> [INFO] Spark Integration for Kafka 0.10 Assembly .......... SUCCESS [ 28.354 s]
> [INFO] Spark Avro ......................................... SUCCESS [01:44 min]
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD SUCCESS
> [INFO] ------------------------------------------------------------------------
> [INFO] Total time:  17:30 h
> [INFO] Finished at: 2019-12-05T12:20:01-08:00
> [INFO] ------------------------------------------------------------------------
>
> D:\apache\spark>cd bin
>
> D:\apache\spark\bin>ls
> beeline               load-spark-env.cmd  run-example       spark-shell       spark-sql2.cmd     sparkR.cmd
> beeline.cmd           load-spark-env.sh   run-example.cmd   spark-shell.cmd   spark-submit       sparkR2.cmd
> docker-image-tool.sh  pyspark             spark-class       spark-shell2.cmd  spark-submit.cmd
> find-spark-home       pyspark.cmd         spark-class.cmd   spark-sql         spark-submit2.cmd
> find-spark-home.cmd   pyspark2.cmd        spark-class2.cmd  spark-sql.cmd     sparkR
>
> D:\apache\spark\bin>spark-shell
> Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
>         at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
>         at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
>         at org.apache.spark.deploy.SparkHadoopUtil$.org$apache$spark$deploy$SparkHadoopUtil$$appendS3AndSparkHadoopHiveConfigurations(SparkHadoopUtil.scala:456)
>         at org.apache.spark.deploy.SparkHadoopUtil$.newConfiguration(SparkHadoopUtil.scala:427)
>         at org.apache.spark.deploy.SparkSubmit.$anonfun$prepareSubmitEnvironment$2(SparkSubmit.scala:342)
>         at org.apache.spark.deploy.SparkSubmit$$Lambda$132/817978763.apply(Unknown Source)
>         at scala.Option.getOrElse(Option.scala:189)
>         at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:342)
>         at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:871)
>         at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
>         at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
>         at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
>         at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>
> D:\apache\spark\bin>
>
> On Thu, Dec 5, 2019 at 1:33 PM Sean Owen <srowen@gmail.com> wrote:
>>
>> What was the build error? You didn't say. Are you sure it succeeded?
>> Try running from the Spark home dir, not bin.
>> I know we do run Windows tests and it appears to pass tests, etc.
>>
>> On Thu, Dec 5, 2019 at 3:28 PM Ping Liu <pingpinganan@gmail.com> wrote:
>> >
>> > Hello,
>> >
>> > I understand Spark is preferably built on Linux, but I have a Windows machine with a
>> > slow VirtualBox VM for Linux, so I hope I can build and run Spark code in a Windows
>> > environment.
>> >
>> > Unfortunately,
>> >
>> > # Apache Hadoop 2.6.X
>> > ./build/mvn -Pyarn -DskipTests clean package
>> >
>> > # Apache Hadoop 2.7.X and later
>> > ./build/mvn -Pyarn -Phadoop-2.7 -Dhadoop.version=2.7.3 -DskipTests clean package
>> >
>> >
>> > Both are listed on http://spark.apache.org/docs/latest/building-spark.html#specifying-the-hadoop-version-and-enabling-yarn
>> >
>> > But neither works for me.  I stay directly under the Spark root directory and run
>> > "mvn -Pyarn -Phadoop-2.7 -Dhadoop.version=2.7.3 -DskipTests clean package".
>> >
>> > Then I tried "mvn -Pyarn -Phadoop-3.2 -Dhadoop.version=3.2.1 -DskipTests clean package".
>> >
>> > Now the build works.  But when I run spark-shell, I get the following error.
>> >
>> > D:\apache\spark\bin>spark-shell
>> > Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
>> >         at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
>> >         at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
>> >         at org.apache.spark.deploy.SparkHadoopUtil$.org$apache$spark$deploy$SparkHadoopUtil$$appendS3AndSparkHadoopHiveConfigurations(SparkHadoopUtil.scala:456)
>> >         at org.apache.spark.deploy.SparkHadoopUtil$.newConfiguration(SparkHadoopUtil.scala:427)
>> >         at org.apache.spark.deploy.SparkSubmit.$anonfun$prepareSubmitEnvironment$2(SparkSubmit.scala:342)
>> >         at org.apache.spark.deploy.SparkSubmit$$Lambda$132/817978763.apply(Unknown Source)
>> >         at scala.Option.getOrElse(Option.scala:189)
>> >         at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:342)
>> >         at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:871)
>> >         at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
>> >         at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
>> >         at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
>> >         at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
>> >         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
>> >         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>> >
>> >
>> > Has anyone experienced building and running Spark source code successfully on Windows?  Could you please share your experience?
>> >
>> > Thanks a lot!
>> >
>> > Ping
>> >

  