spark-user mailing list archives

From Tathagata Das <t...@databricks.com>
Subject Re: Is it this a BUG?: Why Spark Flume Streaming job is not deploying the Receiver to the specified host?
Date Tue, 18 Aug 2015 21:40:20 GMT
Are you using the Flume polling stream or the older stream?

Such problems of binding used to occur in the older push-based approach,
hence we built the polling stream (pull-based).
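For reference, a minimal sketch of the pull-based approach being recommended, assuming spark-streaming-flume is on the classpath and a Flume agent is running a SparkSink on "sink-host":9988 (hypothetical host and port). With the polling stream, Spark pulls events from the Flume sink, so the receiver does not need to bind a listening socket on a particular worker host:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils

object FlumePollingCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("FlumePollingCount")
    val ssc = new StreamingContext(conf, Seconds(10))

    // Pull-based stream: Spark connects out to the Flume SparkSink,
    // so it does not matter which worker node the receiver runs on.
    // "sink-host" and 9988 are placeholders for the Flume agent's
    // SparkSink hostname and port.
    val stream = FlumeUtils.createPollingStream(ssc, "sink-host", 9988)

    stream.count().map(c => s"Received $c flume events.").print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```

This contrasts with the older push-based `FlumeUtils.createStream`, where the receiver must bind to the exact host/port the Flume agent pushes to, which is what makes receiver placement matter in the first place.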


On Tue, Aug 18, 2015 at 4:45 AM, diplomatic Guru <diplomaticguru@gmail.com>
wrote:

> I'm testing the Flume + Spark integration example (flume count).
>
> I'm deploying the job in YARN cluster mode.
>
> I first logged into the YARN cluster, then submitted the job, passing in
> a specific worker node's IP address to deploy the receiver. But when I
> checked the Web UI, it failed to bind to the specified IP because the
> receiver was deployed to a different host, not the one I asked for. Do
> you know why?
>
> For your information, I've also tried passing the IP address used by the
> resource manager to find resources, but no joy. However, when I set the
> host to 'localhost' and deploy to the cluster, it binds to a worker node
> selected by the resource manager.
>
