Spark does not require Hadoop 2 or YARN. This looks like a problem with the Hadoop installation: it is not finding the native libraries it needs to make a security-related system call. Check the installation.
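If fixing the native libraries is not an option right away, one possible workaround (an assumption based on the stack trace below, not a verified fix) is to force Hadoop's pure-Java, shell-based group mapping so the failing JNI class is never loaded. Spark forwards any spark.hadoop.* property into the Hadoop Configuration, so a minimal sketch would be:

import org.apache.spark.{SparkConf, SparkContext}

// Sketch: the UnsatisfiedLinkError comes from JniBasedUnixGroupsMapping,
// so switch hadoop.security.group.mapping to the pure-Java implementation.
// Spark copies every "spark.hadoop.*" key into the Hadoop Configuration.
val conf = new SparkConf()
  .setAppName("nizoz")
  .set("spark.hadoop.hadoop.security.group.mapping",
       "org.apache.hadoop.security.ShellBasedUnixGroupsMapping")
val sc = new SparkContext(conf)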

On Sep 20, 2014 9:13 AM, "Manu Suryavansh" <suryavanshi.manu@gmail.com> wrote:
Hi Moshe,

Spark needs a Hadoop 2.x/YARN cluster. Otherwise, you can run it without Hadoop in standalone mode.
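For a quick sanity check without any cluster at all, you can also run in local mode; a minimal sketch (assuming nothing beyond a plain Spark download on the classpath):

import org.apache.spark.{SparkConf, SparkContext}

object LocalSmokeTest {
  def main(args: Array[String]): Unit = {
    // "local[2]" runs Spark inside this JVM with two worker threads,
    // so no standalone master and no Hadoop installation are needed.
    val conf = new SparkConf().setAppName("local-smoke-test").setMaster("local[2]")
    val sc = new SparkContext(conf)
    val data = sc.parallelize(1 to 100)
    println("sum = " + data.reduce(_ + _))
    sc.stop()
  }
}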

Manu



On Sat, Sep 20, 2014 at 12:55 AM, Moshe Beeri <moshe.beeri@gmail.com> wrote:
import org.apache.spark.{SparkConf, SparkContext}

object Nizoz {

  def connect(): Unit = {
    // Note: the master must be a valid URL such as "spark://master:7077"
    // or "local[*]"; a bare "master" is not accepted.
    val conf = new SparkConf().setAppName("nizoz").setMaster("spark://master:7077")
    val spark = new SparkContext(conf)
    val lines =
      spark.textFile("file:///home/moshe/store/frameworks/spark-1.1.0-bin-hadoop1/README.md")
    val lineLengths = lines.map(s => s.length)
    val totalLength = lineLengths.reduce((a, b) => a + b)
    println("totalLength=" + totalLength)
  }

  def main(args: Array[String]) {
    println(scala.tools.nsc.Properties.versionString)
    try {
      //Nizoz.connect
      val logFile =
        "/home/moshe/store/frameworks/spark-1.1.0-bin-hadoop1/README.md" // Should be some file on your system
      val conf = new SparkConf().setAppName("Simple Application").setMaster("spark://master:7077")
      val sc = new SparkContext(conf)
      val logData = sc.textFile(logFile, 2).cache()
      val numAs = logData.filter(line => line.contains("a")).count()
      val numBs = logData.filter(line => line.contains("b")).count()
      println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
    } catch {
      case e: Throwable =>
        println(e.getCause())
        println("stack:")
        e.printStackTrace()
    }
  }
}
This runs with Scala 2.10.4.
The problem is this vague exception:

        at com.example.scamel.Nizoz.main(Nizoz.scala)
Caused by: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
        at org.apache.hadoop.security.Groups.<init>(Groups.java:64)
        at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:240)
...
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
...
        ... 10 more
Caused by: java.lang.UnsatisfiedLinkError: org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative()V
        at org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative(Native Method)
        at org.apache.hadoop.security.JniBasedUnixGroupsMapping.<clinit>(JniBasedUnixGroupsMapping.java:49)

I have Hadoop 1.2.1 running on Ubuntu 14.04, and the Scala console runs as
expected.
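To check whether the JVM can load Hadoop's native library at all (the UnsatisfiedLinkError above suggests it cannot), a quick probe like this sketch may help; it assumes the library, if installed, is named libhadoop.so and lives somewhere under $HADOOP_HOME/lib/native (the exact subdirectory varies by Hadoop version):

// Sketch: probe whether libhadoop is visible on java.library.path.
// Hadoop's own NativeCodeLoader does essentially System.loadLibrary("hadoop").
object NativeLibProbe {
  def main(args: Array[String]): Unit = {
    println("java.library.path = " + System.getProperty("java.library.path"))
    try {
      System.loadLibrary("hadoop")
      println("libhadoop loaded OK")
    } catch {
      case e: UnsatisfiedLinkError =>
        println("libhadoop NOT found: " + e.getMessage)
    }
  }
}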

What am I doing wrong?
Any ideas are welcome.










--
Manu Suryavansh