spark-user mailing list archives

From akshay naidu <akshaynaid...@gmail.com>
Subject Re: sqoop import job not working when spark thrift server is running.
Date Tue, 20 Feb 2018 10:09:46 GMT
Hello Vijay,
I appreciate your reply.

> What was the error when you tried to run the mapreduce import job while
> the thrift server was running?


It didn't throw any error; it just gets stuck at:
INFO mapreduce.Job: Running job: job_151911053

and resumes the moment I kill the Thrift server.

thanks

On Tue, Feb 20, 2018 at 1:48 PM, vijay.bvp <bvpsarma@gmail.com> wrote:

> What was the error when you tried to run the mapreduce import job while the
> thrift server was running?
> Is this the only config that changed? What was the config before?
> Also share the Spark thrift server job config, such as the number of
> executors, cores, memory, etc.
>
> My guess is that your mapreduce job is unable to get sufficient resources:
> a container couldn't be launched and so the job fails to start. This could
> be because of non-availability of sufficient cores or RAM.
>
> You have 9 worker nodes with 12 GB RAM and 6 cores each (max allowed 4
> cores per container), and you have to keep some room for the operating
> system and other daemons.
>
> If the thrift server is set up to have 11 executors with 3 cores each,
> that is 33 cores for the executors plus 1 for the driver, so 34 cores are
> required for the Spark job, leaving the rest for any other jobs.
>
> The Spark driver and executor memory is ~9 GB; with nine worker nodes of
> 12 GB RAM each, I'm not sure how much you can allocate.
>
> thanks
> Vijay
>
>
>
> --
> Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
>
> ---------------------------------------------------------------------
> To unsubscribe e-mail: user-unsubscribe@spark.apache.org
>
>
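
Vijay's resource arithmetic above can be sketched as a quick back-of-the-envelope check. The cluster figures (9 nodes, 6 cores, 12 GB each; 11 executors × 3 cores; ~9 GB per executor) come from this thread; the per-node OS/daemon headroom and the reading of "~9 GB" as per-executor are assumptions:

```python
# Back-of-the-envelope check of the resource math discussed in the thread.
# Cluster figures are from the thread; the 1-core / 2 GB per-node headroom
# for the OS and daemons is an illustrative assumption.

NODES = 9
CORES_PER_NODE = 6
RAM_PER_NODE_GB = 12

OS_CORES_PER_NODE = 1    # assumed headroom for OS/daemons
OS_RAM_PER_NODE_GB = 2   # assumed headroom for OS/daemons

usable_cores = NODES * (CORES_PER_NODE - OS_CORES_PER_NODE)      # 45
usable_ram_gb = NODES * (RAM_PER_NODE_GB - OS_RAM_PER_NODE_GB)   # 90

# Thrift server as described: 11 executors x 3 cores, plus 1 driver core.
thrift_cores = 11 * 3 + 1            # 34
# Assuming the ~9 GB mentioned applies per executor and to the driver.
thrift_ram_gb = (11 + 1) * 9         # 108

spare_cores = usable_cores - thrift_cores     # 11 cores left over
spare_ram_gb = usable_ram_gb - thrift_ram_gb  # negative: over-committed

print(f"spare cores: {spare_cores}, spare RAM: {spare_ram_gb} GB")
```

Under these assumptions the cores work out (11 to spare), but memory is over-committed by 18 GB, which would explain why YARN cannot launch the MapReduce containers until the thrift server is killed.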

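One way to act on Vijay's advice would be to restart the thrift server with a smaller YARN footprint so room remains for MapReduce containers. The flags below are standard spark-submit options that `start-thriftserver.sh` passes through; the specific executor counts and sizes are illustrative assumptions, not values from the thread:

```shell
# Illustrative only: restart the Spark thrift server with a smaller YARN
# footprint so MapReduce containers can still be scheduled alongside it.
# Executor counts and memory sizes are assumptions, not thread values.
$SPARK_HOME/sbin/stop-thriftserver.sh

$SPARK_HOME/sbin/start-thriftserver.sh \
  --master yarn \
  --num-executors 6 \
  --executor-cores 2 \
  --executor-memory 4g \
  --driver-memory 2g
```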