spark-user mailing list archives

From Vadim Chekan <kot.bege...@gmail.com>
Subject Re: Spark Bagel error
Date Fri, 06 Sep 2013 01:14:59 GMT
Could it be a discrepancy between the IP Spark announces itself on and the
IP it actually listens on?
> 13/09/04 13:00:53 WARN spark.Utils: Your hostname, vm4 resolves to a
loopback address: 127.0.1.1; using 192.168.0.50 instead (on interface eth0)
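
If that is the cause, the usual fix is to pin the driver's bind address
explicitly. A minimal sketch, assuming the standard Spark 0.7.x layout (the
second WARN line in the log below points at the same knob, and 192.168.0.50
is the address taken from the warning):

    # conf/spark-env.sh on the driver machine
    export SPARK_LOCAL_IP=192.168.0.50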

Vadim.


On Wed, Sep 4, 2013 at 5:49 AM, lorraine d almeida <
lorrainedalmeida@gmail.com> wrote:

> The following is the log entry for master
>
> 13/09/04 18:08:10 INFO slf4j.Slf4jEventHandler: Slf4jEventHandler started
> 13/09/04 18:08:10 INFO actor.ActorSystemImpl: RemoteServerStarted
> @akka://sparkMaster@vm4:7077
> 13/09/04 18:08:10 INFO master.Master: Starting Spark master at
> spark://vm4:7077
> 13/09/04 18:08:11 INFO io.IoWorker: IoWorker thread 'spray-io-worker-0'
> started
> 13/09/04 18:08:12 INFO server.HttpServer:
> akka://sparkMaster/user/HttpServer started on /0.0.0.0:8080
> 13/09/04 18:08:13 INFO actor.ActorSystemImpl: RemoteClientStarted
> @akka://sparkWorker@vm4:34930
> 13/09/04 18:08:13 INFO master.Master: Registering worker vm4:34930 with 1
> cores, 979.0 MB RAM
> 13/09/04 18:08:23 INFO actor.ActorSystemImpl: RemoteClientStarted@akka://
> spark@192.168.0.50:53259
> 13/09/04 18:08:24 INFO master.Master: Registering app
> WikipediaPageRankStandalone
> 13/09/04 18:08:24 INFO master.Master: Registered app
> WikipediaPageRankStandalone with ID app-20130904180824-0000
> 13/09/04 18:08:24 INFO master.Master: Launching executor
> app-20130904180824-0000/0 on worker worker-20130904180813-vm4-34930
> 13/09/04 18:08:27 INFO master.Master: Removing app app-20130904180824-0000
> 13/09/04 18:08:27 INFO actor.ActorSystemImpl: RemoteClientShutdown@akka://
> spark@192.168.0.50:53259
> 13/09/04 18:08:27 ERROR actor.ActorSystemImpl: RemoteClientError@akka://
> spark@192.168.0.50:53259: Error[java.net.ConnectException:Connection
> refused
>     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>     at
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:708)
>     at
> org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:404)
>     at
> org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:366)
>     at
> org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:282)
>     at
> org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:102)
>     at
> org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
>     at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:724)
> ]
> 13/09/04 18:08:27 ERROR actor.ActorSystemImpl: RemoteClientError@akka://
> spark@192.168.0.50:53259: Error[java.net.ConnectException:Connection
> refused
>     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>     at
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:708)
>     at
> org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:404)
>     at
> org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:366)
>     at
> org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:282)
>     at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:724)
> ]
> 13/09/04 18:08:27 ERROR actor.ActorSystemImpl: RemoteClientError@akka://
> spark@192.168.0.50:53259:
> Error[java.nio.channels.ClosedChannelException:null
>     at
> org.jboss.netty.channel.socket.nio.AbstractNioWorker.cleanUpWriteBuffer(AbstractNioWorker.java:698)
>     at
> org.jboss.netty.channel.socket.nio.AbstractNioWorker.writeFromUserCode(AbstractNioWorker.java:421)
>     at
> org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(NioClientSocketPipelineSink.java:116)
>     at org.jboss.netty.channel.Channels.write(Channels.java:733)
>     at
> org.jboss.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:65)
>     at org.jboss.netty.channel.Channels.write(Channels.java:733)
>     at
> org.jboss.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:65)
>     at
> org.jboss.netty.handler.execution.ExecutionHandler.handleDownstream(ExecutionHandler.java:185)
>     at org.jboss.netty.channel.Channels.write(Channels.java:712)
>     at org.jboss.netty.channel.Channels.write(Channels.java:679)
>     at
> org.jboss.netty.channel.AbstractChannel.write(AbstractChannel.java:246)
>     at akka.remote.netty.RemoteClient.send(Client.scala:76)
>     at akka.remote.netty.RemoteClient.send(Client.scala:63)
>     at
> akka.remote.netty.NettyRemoteTransport.send(NettyRemoteSupport.scala:153)
>     at akka.remote.RemoteActorRef.$bang(RemoteActorRefProvider.scala:247)
>     at spark.deploy.master.Master.removeApplication(Master.scala:278)
>     at spark.deploy.master.Master.finishApplication(Master.scala:261)
>     at
> spark.deploy.master.Master$$anonfun$receive$1.apply(Master.scala:144)
>     at spark.deploy.master.Master$$anonfun$receive$1.apply(Master.scala:67)
>     at akka.actor.Actor$class.apply(Actor.scala:318)
>     at spark.deploy.master.Master.apply(Master.scala:18)
>     at akka.actor.ActorCell.invoke(ActorCell.scala:626)
>     at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:197)
>     at akka.dispatch.Mailbox.run(Mailbox.scala:179)
>     at
> akka.dispatch.ForkJoinExecutorConfigurator$MailboxExecutionTask.exec(AbstractDispatcher.scala:516)
>     at akka.jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:259)
>     at akka.jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:975)
>     at akka.jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1479)
>     at akka.jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)
> ]
> 13/09/04 18:08:27 WARN actor.ActorSystemImpl: RemoteClientWriteFailed
> @akka://spark@192.168.0.50:53259: MessageClass[scala.Tuple3]
> Error[java.nio.channels.ClosedChannelException:null
>     at
> org.jboss.netty.channel.socket.nio.AbstractNioWorker.cleanUpWriteBuffer(AbstractNioWorker.java:698)
>     at
> org.jboss.netty.channel.socket.nio.AbstractNioWorker.writeFromUserCode(AbstractNioWorker.java:421)
>     at
> org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(NioClientSocketPipelineSink.java:116)
>     at org.jboss.netty.channel.Channels.write(Channels.java:733)
>     at
> org.jboss.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:65)
>     at org.jboss.netty.channel.Channels.write(Channels.java:733)
>     at
> org.jboss.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:65)
>     at
> org.jboss.netty.handler.execution.ExecutionHandler.handleDownstream(ExecutionHandler.java:185)
>     at org.jboss.netty.channel.Channels.write(Channels.java:712)
>     at org.jboss.netty.channel.Channels.write(Channels.java:679)
>     at
> org.jboss.netty.channel.AbstractChannel.write(AbstractChannel.java:246)
>     at akka.remote.netty.RemoteClient.send(Client.scala:76)
>     at akka.remote.netty.RemoteClient.send(Client.scala:63)
>     at
> akka.remote.netty.NettyRemoteTransport.send(NettyRemoteSupport.scala:153)
>     at akka.remote.RemoteActorRef.$bang(RemoteActorRefProvider.scala:247)
>     at spark.deploy.master.Master.removeApplication(Master.scala:278)
>     at spark.deploy.master.Master.finishApplication(Master.scala:261)
>     at
> spark.deploy.master.Master$$anonfun$receive$1.apply(Master.scala:144)
>     at spark.deploy.master.Master$$anonfun$receive$1.apply(Master.scala:67)
>     at akka.actor.Actor$class.apply(Actor.scala:318)
>     at spark.deploy.master.Master.apply(Master.scala:18)
>     at akka.actor.ActorCell.invoke(ActorCell.scala:626)
>     at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:197)
>     at akka.dispatch.Mailbox.run(Mailbox.scala:179)
>     at
> akka.dispatch.ForkJoinExecutorConfigurator$MailboxExecutionTask.exec(AbstractDispatcher.scala:516)
>     at akka.jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:259)
>     at akka.jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:975)
>     at akka.jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1479)
>     at akka.jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)
> ]
> 13/09/04 18:08:27 INFO actor.ActorSystemImpl: RemoteClientShutdown@akka://
> spark@192.168.0.50:53259
> 13/09/04 18:08:27 WARN master.Master: Got status update for unknown
> executor app-20130904180824-0000/0
>
>
> On 4 September 2013 15:26, lorraine d almeida <lorrainedalmeida@gmail.com> wrote:
>
>> Hi
>>
>> I tried to run the WikipediaPageRankStandalone program from the examples
>> directory of Spark, but I am getting the following error. Please help out.
>>
>> hduser@vm4:~/spark-test/spark-0.7.2$ ./run
>> spark.bagel.examples.WikipediaPageRankStandalone pagerank_data.txt 2 1
>> spark://vm4:7077 true
>> 13/09/04 13:00:53 WARN spark.Utils: Your hostname, vm4 resolves to a
>> loopback address: 127.0.1.1; using 192.168.0.50 instead (on interface eth0)
>> 13/09/04 13:00:53 WARN spark.Utils: Set SPARK_LOCAL_IP if you need to
>> bind to another address
>> 13/09/04 13:00:54 INFO slf4j.Slf4jEventHandler: Slf4jEventHandler started
>> 13/09/04 13:00:55 INFO spark.SparkEnv: Registering BlockManagerMaster
>> 13/09/04 13:00:55 INFO storage.MemoryStore: MemoryStore started with
>> capacity 326.7 MB.
>> 13/09/04 13:00:55 INFO storage.DiskStore: Created local directory at
>> /tmp/spark-local-20130904130055-5c43
>> 13/09/04 13:00:55 INFO network.ConnectionManager: Bound socket to port
>> 42840 with id = ConnectionManagerId(vm4,42840)
>> 13/09/04 13:00:55 INFO storage.BlockManagerMaster: Trying to register
>> BlockManager
>> 13/09/04 13:00:55 INFO storage.BlockManagerMaster: Registered BlockManager
>> 13/09/04 13:00:55 INFO server.Server: jetty-7.6.8.v20121106
>> 13/09/04 13:00:55 INFO server.AbstractConnector: Started
>> SocketConnector@0.0.0.0:44114
>> 13/09/04 13:00:55 INFO broadcast.HttpBroadcast: Broadcast server started
>> at http://192.168.0.50:44114
>> 13/09/04 13:00:55 INFO spark.SparkEnv: Registering MapOutputTracker
>> 13/09/04 13:00:55 INFO spark.HttpFileServer: HTTP File server directory
>> is /tmp/spark-980d9d9a-8451-4803-8835-04fa9d29bb65
>> 13/09/04 13:00:55 INFO server.Server: jetty-7.6.8.v20121106
>> 13/09/04 13:00:55 INFO server.AbstractConnector: Started
>> SocketConnector@0.0.0.0:47072
>> 13/09/04 13:00:55 INFO io.IoWorker: IoWorker thread 'spray-io-worker-0'
>> started
>> 13/09/04 13:00:56 INFO server.HttpServer:
>> akka://spark/user/BlockManagerHTTPServer started on /0.0.0.0:55091
>> 13/09/04 13:00:56 INFO storage.BlockManagerUI: Started BlockManager web
>> UI at http://vm4:55091
>> 13/09/04 13:00:56 INFO client.Client$ClientActor: Connecting to master
>> spark://vm4:7077
>> 13/09/04 13:00:56 INFO cluster.SparkDeploySchedulerBackend: Connected to
>> Spark cluster with app ID app-20130904130056-0012
>> 13/09/04 13:00:56 INFO client.Client$ClientActor: Executor added:
>> app-20130904130056-0012/0 on worker-20130903170743-vm4-37060 (vm4) with 1
>> cores
>> 13/09/04 13:00:56 INFO cluster.SparkDeploySchedulerBackend: Granted
>> executor ID app-20130904130056-0012/0 on host vm4 with 1 cores, 512.0 MB RAM
>> 13/09/04 13:00:57 INFO client.Client$ClientActor: Executor updated:
>> app-20130904130056-0012/0 is now RUNNING
>> 13/09/04 13:00:58 WARN util.NativeCodeLoader: Unable to load
>> native-hadoop library for your platform... using builtin-java classes where
>> applicable
>> 13/09/04 13:00:59 INFO storage.MemoryStore: ensureFreeSpace(123002)
>> called with curMem=0, maxMem=342526525
>> 13/09/04 13:00:59 INFO storage.MemoryStore: Block broadcast_0 stored as
>> values to memory (estimated size 120.1 KB, free 326.5 MB)
>> Exception in thread "main" scala.MatchError: Configuration:
>> core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml,
>> hdfs-default.xml, hdfs-site.xml (of class spark.SerializableWritable)
>>         at
>> spark.bagel.examples.WPRSerializationStream.writeObject(WikipediaPageRankStandalone.scala:146)
>>         at spark.broadcast.HttpBroadcast$.write(HttpBroadcast.scala:115)
>>         at spark.broadcast.HttpBroadcast.<init>(HttpBroadcast.scala:28)
>>         at
>> spark.broadcast.HttpBroadcastFactory.newBroadcast(HttpBroadcast.scala:54)
>>         at
>> spark.broadcast.HttpBroadcastFactory.newBroadcast(HttpBroadcast.scala:50)
>>         at
>> spark.broadcast.BroadcastManager.newBroadcast(Broadcast.scala:50)
>>         at spark.SparkContext.broadcast(SparkContext.scala:440)
>>         at spark.rdd.HadoopRDD.<init>(HadoopRDD.scala:50)
>>         at spark.SparkContext.hadoopFile(SparkContext.scala:264)
>>         at spark.SparkContext.textFile(SparkContext.scala:235)
>>         at
>> spark.bagel.examples.WikipediaPageRankStandalone$.main(WikipediaPageRankStandalone.scala:33)
>>         at
>> spark.bagel.examples.WikipediaPageRankStandalone.main(WikipediaPageRankStandalone.scala)
>> hduser@vm4:~/spark-test/spark-0.7.2$
>>
>
>
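
Reading the driver trace above: HadoopRDD broadcasts the Hadoop Configuration
wrapped in spark.SerializableWritable, HttpBroadcast writes the broadcast
value through the job's configured serializer, and
WPRSerializationStream.writeObject has no pattern-match case for that
wrapper, hence the scala.MatchError. A self-contained sketch of that failure
mode, with simplified, hypothetical stand-ins for the real types:

    // WprMatchErrorSketch.scala -- hypothetical, simplified reproduction;
    // these are stand-ins, not the real spark.bagel.examples types.
    object WprMatchErrorSketch {
      // Stand-in for spark.SerializableWritable wrapping the Hadoop Configuration.
      class SerializableWritableStub {
        override def toString = "Configuration: core-default.xml, core-site.xml, ..."
      }

      // Like WPRSerializationStream.writeObject, match only the record shapes
      // the PageRank job itself emits; anything else throws scala.MatchError.
      def writeObject(t: Any): Unit = t match {
        case links: Array[String] => println("links(" + links.length + ")")
        case rank: Double         => println("rank=" + rank)
        // no catch-all case => scala.MatchError for the broadcast Configuration
      }

      def main(args: Array[String]): Unit = {
        writeObject(0.15)                         // handled
        writeObject(new SerializableWritableStub) // throws scala.MatchError
      }
    }

A defensive trailing case that hands unrecognized values to plain Java
serialization would avoid the crash (a sketch of a workaround, not the
project's actual patch). The master-side "Connection refused" errors in the
first log (a separate run, judging by the timestamps and app IDs) look like
the aftermath of the same failure: the driver JVM at
spark@192.168.0.50:53259 dies on the MatchError, so the master's shutdown
messages to it can no longer be delivered.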


-- 
From RFC 2631: In ASN.1, EXPLICIT tagging is implicit unless IMPLICIT is
explicitly specified
