spark-user mailing list archives

From Deepak Vohra <>
Subject Re: Is it feasible to build and run Spark on Windows?
Date Thu, 05 Dec 2019 23:30:39 GMT
This type of exception can occur when a dependency version (most likely Guava) is not supported
by the Spark version in use. Which Spark and Guava versions are you on? Try declaring a more
recent Guava version dependency in the Maven pom.xml.
Regarding Docker, a cloud platform instance such as EC2 with Hyper-V support could be used.
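As a sketch, the Guava override described above might look like this in the project's pom.xml (the version number is illustrative only, not a tested recommendation; pick one known to work with your Hadoop build):

```xml
<!-- Hypothetical override: force a newer Guava than the one the build pulls in
     transitively. The version shown is illustrative, not a verified choice. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <version>28.1-jre</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Placing the override in dependencyManagement pins the version for transitive dependencies as well, which is usually what matters when Hadoop and Spark disagree about Guava.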
    On Thursday, December 5, 2019, 10:51:59 PM UTC, Ping Liu <> wrote:
 Hi Deepak,
Yes, I did use Maven. I even had the build pass successfully when setting the Hadoop version
to 3.2.  Please see my response to Sean's email.
Unfortunately, I only have Docker Toolbox, as my Windows machine doesn't have Microsoft Hyper-V.
So I want to avoid using Docker for major work if possible.

On Thu, Dec 5, 2019 at 2:24 PM Deepak Vohra <> wrote:

 Several alternatives are available:
- Use Maven to build Spark on Windows.

- Use a Docker image for CDH on Windows (available on Docker Hub).

    On Thursday, December 5, 2019, 09:33:43 p.m. UTC, Sean Owen <> wrote:
What was the build error? You didn't say. Are you sure it succeeded?
Try running from the Spark home dir, not bin.
I know we do run Windows tests, and Spark appears to pass them.

On Thu, Dec 5, 2019 at 3:28 PM Ping Liu <> wrote:
> Hello,
> I understand Spark is preferably built on Linux.  But I have a Windows machine with only
a slow VirtualBox VM for Linux.  So I wish to be able to build and run Spark code on Windows.
> Unfortunately,
> # Apache Hadoop 2.6.X
> ./build/mvn -Pyarn -DskipTests clean package
> # Apache Hadoop 2.7.X and later
> ./build/mvn -Pyarn -Phadoop-2.7 -Dhadoop.version=2.7.3 -DskipTests clean package
> Both are listed on
> But neither works for me (I stay directly under the spark root directory and run "mvn -Pyarn
-Phadoop-2.7 -Dhadoop.version=2.7.3 -DskipTests clean package")
> and
> Then I tried "mvn -Pyarn -Phadoop-3.2 -Dhadoop.version=3.2.1 -DskipTests clean package"
> Now the build works.  But when I run spark-shell, I get the following error:
> D:\apache\spark\bin>spark-shell
> Exception in thread "main" java.lang.NoSuchMethodError:;Ljava/lang/Object;)V
>        at org.apache.hadoop.conf.Configuration.set(
>        at org.apache.hadoop.conf.Configuration.set(
>        at org.apache.spark.deploy.SparkHadoopUtil$.org$apache$spark$deploy$SparkHadoopUtil$$appendS3AndSparkHadoopHiveConfigurations(SparkHadoopUtil.scala:456)
>        at org.apache.spark.deploy.SparkHadoopUtil$.newConfiguration(SparkHadoopUtil.scala:427)
>        at org.apache.spark.deploy.SparkSubmit.$anonfun$prepareSubmitEnvironment$2(SparkSubmit.scala:342)
>        at org.apache.spark.deploy.SparkSubmit$$Lambda$132/817978763.apply(Unknown Source)
>        at scala.Option.getOrElse(Option.scala:189)
>        at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:342)
>        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:871)
>        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
>        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
>        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
>        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
>        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
>        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
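> A NoSuchMethodError thrown from inside Guava's Preconditions when Configuration.set runs
usually means the Guava jar on the classpath is older than the one Hadoop 3.2 was compiled
against. A minimal, hypothetical diagnostic sketch (GuavaProbe is not part of Spark or Hadoop,
and the specific overload probed is an assumption based on the error above):

```java
// Hypothetical diagnostic (not part of Spark): probe whether the Guava
// Preconditions overload that Hadoop 3.2 appears to expect is on the classpath.
public class GuavaProbe {
    // Returns "method-present", "method-missing", or "guava-absent".
    static String probeGuava() {
        try {
            Class<?> pre = Class.forName("");
            // checkArgument(boolean, String, Object) only exists in
            // sufficiently recent Guava releases; older jars lack this overload.
            pre.getMethod("checkArgument", boolean.class, String.class, Object.class);
            return "method-present";
        } catch (ClassNotFoundException e) {
            return "guava-absent";    // no Guava on the classpath at all
        } catch (NoSuchMethodException e) {
            return "method-missing";  // Guava present, but too old
        }
    }

    public static void main(String[] args) {
        System.out.println("Guava probe: " + probeGuava());
    }
}
```

Running such a probe with the same classpath spark-shell uses would distinguish a missing
Guava jar from a stale one.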
> Has anyone experienced building and running Spark source code successfully on Windows?
Could you please share your experience?
> Thanks a lot!
> Ping

