spark-dev mailing list archives

From Jeniba Johnson <Jeniba.John...@lntinfotech.com>
Subject RE: Bind exception while running FlumeEventCount
Date Tue, 11 Nov 2014 07:32:59 GMT
Hi Hari

Meanwhile, I am trying a different port. I also need to confirm the installation steps for Spark and Flume with you.
For the installation, I simply untarred spark-1.1.0-bin-hadoop1.tar.gz and apache-flume-1.4.0-bin.tar.gz and ran the Spark Streaming examples from there.
Is this the correct way, or is there some other recommended setup? Please let me know.
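
Concretely, the steps I followed were roughly the ones below (the Flume agent name and config file name are only placeholders for my local setup):

tar -xzf spark-1.1.0-bin-hadoop1.tar.gz
tar -xzf apache-flume-1.4.0-bin.tar.gz
# start a Flume agent whose avro sink points at the host:port the example will listen on
# (this runs in the foreground, so I use a separate terminal)
cd apache-flume-1.4.0-bin
bin/flume-ng agent -n agent1 -c conf -f conf/flume-spark.conf
# run the streaming example from the Spark directory
cd ../spark-1.1.0-bin-hadoop1
bin/run-example org.apache.spark.examples.streaming.FlumeEventCount 172.29.17.178 65001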

Awaiting your kind reply.

Regards,
Jeniba Johnson
From: Hari Shreedharan [mailto:hshreedharan@cloudera.com]
Sent: Tuesday, November 11, 2014 12:41 PM
To: Jeniba Johnson
Cc: dev@spark.apache.org
Subject: RE: Bind exception while running FlumeEventCount

First, can you try a different port?

TIME_WAIT is basically a grace period during which a closed socket is fully decommissioned before its port becomes available for binding again. If you wait a few minutes and still see the startup issue, can you also send the error logs? From what I can see, the port appears to be in use.
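
For example (standard Linux tools; substitute whichever port you are actually passing to the example), you can check what state the port is in:

# list any socket on the port, including ones still in TIME_WAIT
sudo netstat -anp | grep 65001
# or see which process, if any, currently holds it
sudo lsof -i :65001

If the port only shows up in TIME_WAIT, waiting a couple of minutes or switching ports should be enough; if it shows LISTEN against another PID, that process has to be stopped or a different port used.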

Thanks,
Hari


On Mon, Nov 10, 2014 at 11:07 PM, Jeniba Johnson <Jeniba.Johnson@lntinfotech.com> wrote:

Hi Hari

Just to give you some background: I have installed spark-1.1.0 and Apache Flume 1.4 with the basic configuration needed. I just wanted to know whether this is the correct way to run the Spark Streaming examples with Flume.

Also, I did not exactly understand what you mentioned about the TIME_WAIT state. I am attaching screenshots so that you can help me with it; they show the ports that are listening after the program is executed.


Regards,
Jeniba Johnson

-----Original Message-----
From: Hari Shreedharan [mailto:hshreedharan@cloudera.com]
Sent: Tuesday, November 11, 2014 11:04 AM
To: Jeniba Johnson
Cc: dev@spark.apache.org
Subject: RE: Bind exception while running FlumeEventCount

The socket may have been in TIME_WAIT. Can you try after a bit? The error message definitely
suggests that some other app is listening on that port.


Thanks,
Hari

On Mon, Nov 10, 2014 at 9:30 PM, Jeniba Johnson <Jeniba.Johnson@lntinfotech.com> wrote:

> Hi Hari
> Thanks for your kind reply.
> Even after killing the process ID listening on that specific port, I am still facing the same error.
> The commands I use are
> sudo lsof -i -P | grep -i "listen"
> kill -9 PID
> However, even if I switch to a port that is free, the error remains the same.
> Regards,
> Jeniba Johnson
> From: Hari Shreedharan [mailto:hshreedharan@cloudera.com]
> Sent: Tuesday, November 11, 2014 4:41 AM
> To: Jeniba Johnson
> Cc: dev@spark.apache.org
> Subject: Re: Bind exception while running FlumeEventCount
>
> Looks like that port is not available because another app is using it. Can you take a look at netstat -a and use a port that is free?
> Thanks,
> Hari
> On Fri, Nov 7, 2014 at 2:05 PM, Jeniba Johnson <Jeniba.Johnson@lntinfotech.com> wrote:
> Hi,
> I have installed spark-1.1.0 and Apache Flume 1.4 to run the streaming example FlumeEventCount. Previously the code was working fine, but now I am facing the issues mentioned below. My Flume agent is running properly and is able to write the file.
> The command I use is
> bin/run-example org.apache.spark.examples.streaming.FlumeEventCount 172.29.17.178 65001
> 14/11/07 23:19:23 INFO receiver.ReceiverSupervisorImpl: Stopping receiver with message: Error starting receiver 0: org.jboss.netty.channel.ChannelException: Failed to bind to: /172.29.17.178:65001
> 14/11/07 23:19:23 INFO flume.FlumeReceiver: Flume receiver stopped
> 14/11/07 23:19:23 INFO receiver.ReceiverSupervisorImpl: Called receiver onStop
> 14/11/07 23:19:23 INFO receiver.ReceiverSupervisorImpl: Deregistering receiver 0
> 14/11/07 23:19:23 ERROR scheduler.ReceiverTracker: Deregistered receiver for stream 0: Error starting receiver 0 - org.jboss.netty.channel.ChannelException: Failed to bind to: /172.29.17.178:65001
>     at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
>     at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:106)
>     at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:119)
>     at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:74)
>     at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:68)
>     at org.apache.spark.streaming.flume.FlumeReceiver.initServer(FlumeInputDStream.scala:164)
>     at org.apache.spark.streaming.flume.FlumeReceiver.onStart(FlumeInputDStream.scala:171)
>     at org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:121)
>     at org.apache.spark.streaming.receiver.ReceiverSupervisor.start(ReceiverSupervisor.scala:106)
>     at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:264)
>     at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:257)
>     at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
>     at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
>     at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
>     at org.apache.spark.scheduler.Task.run(Task.scala:54)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:722)
> Caused by: java.net.BindException: Address already in use
>     at sun.nio.ch.Net.bind0(Native Method)
>     at sun.nio.ch.Net.bind(Net.java:344)
>     at sun.nio.ch.Net.bind(Net.java:336)
>     at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:199)
>     at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>     at org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
>     at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:366)
>     at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:290)
>     at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
>     ... 3 more
> 14/11/07 23:19:23 INFO receiver.ReceiverSupervisorImpl: Stopped receiver 0
> 14/11/07 23:19:23 INFO receiver.BlockGenerator: Stopping BlockGenerator
> 14/11/07 23:19:23 INFO util.RecurringTimer: Stopped timer for BlockGenerator after time 1415382563200
> 14/11/07 23:19:23 INFO receiver.BlockGenerator: Waiting for block pushing thread
> 14/11/07 23:19:23 INFO receiver.BlockGenerator: Pushing out the last 0 blocks
> 14/11/07 23:19:23 INFO receiver.BlockGenerator: Stopped block pushing thread
> 14/11/07 23:19:23 INFO receiver.BlockGenerator: Stopped BlockGenerator
> 14/11/07 23:19:23 INFO receiver.ReceiverSupervisorImpl: Waiting for executor stop is over
> 14/11/07 23:19:23 ERROR receiver.ReceiverSupervisorImpl: Stopped executor with error: org.jboss.netty.channel.ChannelException: Failed to bind to: /172.29.17.178:65001
> 14/11/07 23:19:23 ERROR executor.Executor: Exception in task 0.0 in stage 0.0 (TID 0)
> org.jboss.netty.channel.ChannelException: Failed to bind to: /172.29.17.178:65001
>     at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
>     at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:106)
>     at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:119)
>     at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:74)
>     at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:68)
>     at org.apache.spark.streaming.flume.FlumeReceiver.initServer(FlumeInputDStream.scala:164)
>     at org.apache.spark.streaming.flume.FlumeReceiver.onStart(FlumeInputDStream.scala:171)
>     at org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:121)
>     at org.apache.spark.streaming.receiver.ReceiverSupervisor.start(ReceiverSupervisor.scala:106)
>     at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:264)
>     at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:257)
>     at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
>     at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
>     at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
>     at org.apache.spark.scheduler.Task.run(Task.scala:54)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:722)
> Caused by: java.net.BindException: Address already in use
>     at sun.nio.ch.Net.bind0(Native Method)
>     at sun.nio.ch.Net.bind(Net.java:344)
>     at sun.nio.ch.Net.bind(Net.java:336)
>     at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:199)
>     at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>     at org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
>     at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:366)
>     at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:290)
>     at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
>     ... 3 more
> 14/11/07 23:19:23 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost): org.jboss.netty.channel.ChannelException: Failed to bind to: /172.29.17.178:65001
>     org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
>     org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:106)
>     org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:119)
>     org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:74)
>     org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:68)
>     org.apache.spark.streaming.flume.FlumeReceiver.initServer(FlumeInputDStream.scala:164)
>     org.apache.spark.streaming.flume.FlumeReceiver.onStart(FlumeInputDStream.scala:171)
>     org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:121)
>     org.apache.spark.streaming.receiver.ReceiverSupervisor.start(ReceiverSupervisor.scala:106)
>     org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:264)
>     org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:257)
>     org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
>     org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
>     org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
>     org.apache.spark.scheduler.Task.run(Task.scala:54)
>     org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
>     java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     java.lang.Thread.run(Thread.java:722)
> 14/11/07 23:19:23 ERROR scheduler.TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
> 14/11/07 23:19:23 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
> 14/11/07 23:19:23 INFO scheduler.TaskSchedulerImpl: Cancelling stage 0
> 14/11/07 23:19:23 INFO scheduler.DAGScheduler: Failed to run runJob at ReceiverTracker.scala:275
> Exception in thread "Thread-28" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): org.jboss.netty.channel.ChannelException: Failed to bind to: /172.29.17.178:65001
>     org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
>     org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:106)
>     org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:119)
>     org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:74)
>     org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:68)
>     org.apache.spark.streaming.flume.FlumeReceiver.initServer(FlumeInputDStream.scala:164)
>     org.apache.spark.streaming.flume.FlumeReceiver.onStart(FlumeInputDStream.scala:171)
>     org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:121)
>     org.apache.spark.streaming.receiver.ReceiverSupervisor.start(ReceiverSupervisor.scala:106)
>     org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:264)
>     org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:257)
>     org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
>     org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
>     org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
>     org.apache.spark.scheduler.Task.run(Task.scala:54)
>     org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
>     java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     java.lang.Thread.run(Thread.java:722)
> Driver stacktrace:
>     at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173)
>     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>     at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
>     at scala.Option.foreach(Option.scala:236)
>     at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:688)
>     at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1391)
>     at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
>     at akka.actor.ActorCell.invoke(ActorCell.scala:456)
>     at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
>     at akka.dispatch.Mailbox.run(Mailbox.scala:219)
>     at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
>     at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>     at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>     at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>     at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Regards,
> Jeniba Johnson
> ________________________________
> The contents of this e-mail and any attachment(s) may contain confidential or privileged information for the intended recipient(s). Unintended recipients are prohibited from taking action on the basis of information in this e-mail and using or disseminating the information, and must notify the sender and delete it from their system. L&T Infotech will not accept responsibility or liability for the accuracy or completeness of, or the presence of any virus or disabling code in, this e-mail.
<port_status.png><Time_status.png><screenshot3.png>
