spark-issues mailing list archives

From "Sean Owen (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-4267) Failing to launch jobs on Spark on YARN with Hadoop 2.5.0 or later
Date Sat, 24 Jan 2015 18:09:34 GMT

    [ https://issues.apache.org/jira/browse/SPARK-4267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290742#comment-14290742 ]

Sean Owen commented on SPARK-4267:
----------------------------------

The warning is from YARN, I believe, rather than Spark. Yeah, maybe it should be an error.

Your info, however, points to the problem; I'm sure it's {{-Dnumbers="one two three"}}. {{Utils.splitCommandString}}
strips quotes as it parses them, so it turns this into {{-Dnumbers=one two three}}; the command
then becomes {{java -Dnumbers=one two three ...}}, which isn't valid.

I suggest that {{Utils.splitCommandString}} not strip the quotes it parses, so that the
reconstructed command line exactly matches the original. It's just splitting the command,
not interpreting it. This also seems less surprising. PR coming to demonstrate.
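To make the failure mode concrete, here is a minimal sketch of a quote-stripping tokenizer. This is a hypothetical simplification, not the actual {{Utils.splitCommandString}} implementation, but it shows the same round-trip problem: once quotes are stripped, rejoining the tokens with spaces produces a command line where {{-Dnumbers}} spills into three separate arguments.

```java
import java.util.ArrayList;
import java.util.List;

public class SplitDemo {
    // Hypothetical simplified tokenizer: splits on unquoted whitespace and
    // strips the double quotes it consumes, like the behavior described above.
    static List<String> splitStrippingQuotes(String s) {
        List<String> tokens = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        boolean inQuote = false;
        for (char c : s.toCharArray()) {
            if (c == '"') {
                inQuote = !inQuote;          // quote is consumed, not kept
            } else if (c == ' ' && !inQuote) {
                if (cur.length() > 0) {
                    tokens.add(cur.toString());
                    cur.setLength(0);
                }
            } else {
                cur.append(c);
            }
        }
        if (cur.length() > 0) tokens.add(cur.toString());
        return tokens;
    }

    public static void main(String[] args) {
        String original = "java -Dnumbers=\"one two three\" Main";
        // Rejoining the stripped tokens loses the quoting, so the rebuilt
        // command line is no longer equivalent to the original.
        String rejoined = String.join(" ", splitStrippingQuotes(original));
        System.out.println(rejoined); // java -Dnumbers=one two three Main
    }
}
```

Preserving the quotes during splitting (or re-quoting tokens that contain whitespace when reconstructing) keeps the round trip lossless.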

> Failing to launch jobs on Spark on YARN with Hadoop 2.5.0 or later
> ------------------------------------------------------------------
>
>                 Key: SPARK-4267
>                 URL: https://issues.apache.org/jira/browse/SPARK-4267
>             Project: Spark
>          Issue Type: Bug
>            Reporter: Tsuyoshi OZAWA
>
> Currently we're trying Spark on YARN included in Hadoop 2.5.1. Hadoop 2.5 uses protobuf
2.5.0, so I compiled with protobuf 2.5.0 like this:
> {code}
>  ./make-distribution.sh --name spark-1.1.1 --tgz -Pyarn -Dhadoop.version=2.5.1 -Dprotobuf.version=2.5.0
> {code}
> Then Spark on YARN fails to launch jobs with NPE.
> {code}
> $ bin/spark-shell --master yarn-client
> scala> sc.textFile("hdfs:///user/ozawa/wordcountInput20G").flatMap(line => line.split(" ")).map(word => (word, 1)).persist().reduceByKey((a, b) => a + b, 16).saveAsTextFile("hdfs:///user/ozawa/sparkWordcountOutNew2");
> java.lang.NullPointerException
>         at org.apache.spark.SparkContext.defaultParallelism(SparkContext.scala:1284)
>         at org.apache.spark.SparkContext.defaultMinPartitions(SparkContext.scala:1291)
>         at org.apache.spark.SparkContext.textFile$default$2(SparkContext.scala:480)
>         at $iwC$$iwC$$iwC$$iwC.<init>(<console>:13)
>         at $iwC$$iwC$$iwC.<init>(<console>:18)
>         at $iwC$$iwC.<init>(<console>:20)
>         at $iwC.<init>(<console>:22)
>         at <init>(<console>:24)
>         at .<init>(<console>:28)
>         at .<clinit>(<console>)
>         at .<init>(<console>:7)
>         at .<clinit>(<console>)
>         at $print(<console>)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:789)
>         at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1062)
>         at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:615)
>         at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:646)
>         at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:610)
>         at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:823)
>         at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:868)
>         at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:780)
>         at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:625)
>         at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:633)
>         at org.apache.spark.repl.SparkILoop.loop(SparkILoop.scala:638)
>         at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:963)
>         at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:911)
>         at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:911)
>         at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
>         at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:911)
>         at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1006)
>         at org.apache.spark.repl.Main$.main(Main.scala:31)
>         at org.apache.spark.repl.Main.main(Main.scala)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:329)
>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

