Sure, in local mode it works for me as well; the issue was that I was running only the master, and I needed a worker as well.
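For reference, the missing piece is simply having at least one worker registered with the standalone master before submitting the job. A minimal sketch of doing that from Scala by shelling out to the distribution's own scripts (the SPARK_HOME path and the spark://nash:7077 URL are taken from this thread; in practice the same command is normally run straight from a terminal):

import scala.sys.process._

// Sketch only: launches a standalone worker and registers it with the master.
// Paths and the master URL below come from this thread; adjust them for your setup.
object StartWorker extends App {
  val sparkHome = "/home/moshe/store/frameworks/spark-1.1.0-bin-hadoop1"
  Seq(s"$sparkHome/bin/spark-class",
      "org.apache.spark.deploy.worker.Worker",
      "spark://nash:7077").!
}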


Many thanks,
Moshe Beeri.
054-3133943
[hidden email] | linkedin



On Mon, Sep 22, 2014 at 9:58 AM, Akhil Das-2 [via Apache Spark User List] <[hidden email]> wrote:
Hi Moshe,

I ran the same code on my machine and it works without any issues. Try running it in local mode; if that works fine, then the issue is with your configuration.
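By local mode I mean pointing the master URL at the in-process scheduler instead of the standalone cluster; a minimal sketch of that variant (Spark 1.1.0 assumed; the file path is just an example):

import org.apache.spark.{SparkConf, SparkContext}

object LocalCheck {
  def main(args: Array[String]): Unit = {
    // local[2] runs the driver and two executor threads in a single JVM,
    // which takes the standalone master/worker setup out of the picture.
    val conf = new SparkConf().setAppName("local-check").setMaster("local[2]")
    val sc = new SparkContext(conf)
    val lines = sc.textFile("README.md", 2) // any local file works here
    println("Lines with a: " + lines.filter(_.contains("a")).count())
    sc.stop()
  }
}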


Thanks
Best Regards

On Sat, Sep 20, 2014 at 3:34 PM, Moshe Beeri <[hidden email]> wrote:
Hi Sean,

Thanks a lot for the answer. I loved your excellent book Mahout in Action; I hope you'll keep writing more books in the field of Big Data.
The issue was with a redundant Hadoop library, but now I am facing another issue (see the previous post in this thread):
java.lang.ClassNotFoundException: com.example.scamel.Nizoz$$anonfun$3

But the class com.example.scamel.Nizoz (in fact a Scala object) is exactly the one I am debugging.

  def main(args: Array[String]) {
    println(scala.tools.nsc.Properties.versionString)
    try {
      //Nizoz.connect
      val logFile = "/home/moshe/store/frameworks/spark-1.1.0-bin-hadoop1/README.md" // Should be some file on your system
      val conf = new SparkConf().setAppName("spark town").setMaster("spark://nash:7077"); //spark://master:7077
      val sc = new SparkContext(conf)
      val logData = sc.textFile(logFile, 2).cache()
      val numAs = logData.filter(line => line.contains("a")).count()    // <- this is where the exception is thrown

Do you have any idea what's wrong?
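For reference, a ClassNotFoundException on an ...$$anonfun$... class when running against a standalone master usually means the executors never received the application jar, so the compiled closure classes cannot be loaded on the worker side. A minimal sketch of the kind of change that addresses this, assuming the job is packaged into a jar first (the jar path below is hypothetical):

import org.apache.spark.{SparkConf, SparkContext}

object NizozWithJar {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("spark town")
      .setMaster("spark://nash:7077")
      // Ship the application jar to the executors so classes such as
      // com.example.scamel.Nizoz$$anonfun$3 can be loaded there.
      // The path is hypothetical; use whatever your build actually produces.
      .setJars(Seq("target/scala-2.10/nizoz_2.10-0.1.jar"))
    val sc = new SparkContext(conf)
    val logFile = "/home/moshe/store/frameworks/spark-1.1.0-bin-hadoop1/README.md"
    val logData = sc.textFile(logFile, 2).cache()
    println("Lines with a: " + logData.filter(_.contains("a")).count())
    sc.stop()
  }
}

Submitting the packaged jar with bin/spark-submit --master spark://nash:7077 has the same effect without hard-coding the jar path in the code.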
Thanks,
Moshe Beeri.




Many thanks,
Moshe Beeri.
054-3133943



On Sat, Sep 20, 2014 at 12:02 PM, sowen [via Apache Spark User List] <[hidden email]> wrote:

Spark does not require Hadoop 2 or YARN. This looks like a problem with the Hadoop installation, as it is not finding the native libraries it needs to make a security-related system call. Check the installation.
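If the job is built with sbt (an assumption on my part), one common way to end up in this state is having two different Hadoop versions on the classpath, so the JNI method signatures no longer match the loaded native library. Pinning hadoop-client to the cluster's version is the usual remedy; roughly:

// build.sbt sketch: assumes an sbt build and the Hadoop 1.2.1 cluster mentioned
// in this thread. Keeping exactly one Hadoop version on the classpath avoids the
// UnsatisfiedLinkError from JniBasedUnixGroupsMapping.
libraryDependencies ++= Seq(
  "org.apache.spark"  %% "spark-core"    % "1.1.0",
  "org.apache.hadoop"  % "hadoop-client" % "1.2.1"
)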

On Sep 20, 2014 9:13 AM, "Manu Suryavansh" <[hidden email]> wrote:
Hi Moshe,

Spark needs a Hadoop 2.x/YARN cluster. Otherwise you can run it without Hadoop in standalone mode.

Manu



On Sat, Sep 20, 2014 at 12:55 AM, Moshe Beeri <[hidden email]> wrote:
import org.apache.spark.{SparkConf, SparkContext}

object Nizoz {

  def connect(): Unit = {
    val conf = new SparkConf().setAppName("nizoz").setMaster("master")
    val spark = new SparkContext(conf)
    val lines = spark.textFile("file:///home/moshe/store/frameworks/spark-1.1.0-bin-hadoop1/README.md")
    val lineLengths = lines.map(s => s.length)
    val totalLength = lineLengths.reduce((a, b) => a + b)
    println("totalLength=" + totalLength)
  }

  def main(args: Array[String]) {
    println(scala.tools.nsc.Properties.versionString)
    try {
      //Nizoz.connect
      val logFile = "/home/moshe/store/frameworks/spark-1.1.0-bin-hadoop1/README.md" // Should be some file on your system
      val conf = new SparkConf().setAppName("Simple Application").setMaster("spark://master:7077")
      val sc = new SparkContext(conf)
      val logData = sc.textFile(logFile, 2).cache()
      val numAs = logData.filter(line => line.contains("a")).count()
      val numBs = logData.filter(line => line.contains("b")).count()
      println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
    } catch {
      case e: Throwable =>
        println(e.getCause())
        println("stack:")
        e.printStackTrace()
    }
  }
}
Runs with Scala 2.10.4.
The problem is this rather vague exception:

        at com.example.scamel.Nizoz.main(Nizoz.scala)
Caused by: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
        at org.apache.hadoop.security.Groups.<init>(Groups.java:64)
        at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:240)
...
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
...
        ... 10 more
Caused by: java.lang.UnsatisfiedLinkError: org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative()V
        at org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative(Native Method)
        at org.apache.hadoop.security.JniBasedUnixGroupsMapping.<clinit>(JniBasedUnixGroupsMapping.java:49)

I have Hadoop 1.2.1 running on Ubuntu 14.04, and the Scala console runs as expected.

What am I doing wrong?
Any ideas are welcome.









--
Manu Suryavansh






