spark-user mailing list archives

From Ji Yan <ji...@drive.ai>
Subject Spark job only starts tasks on a single node
Date Wed, 06 Dec 2017 06:45:24 GMT
Hi all,

I am running Spark 2.0 on Mesos 1.1 and trying to split my job across
several nodes. I set the number of executors via the formula
(spark.cores.max / spark.executor.cores). The behavior I see is that Spark
packs as many executors as it can onto a single Mesos node, then stops
scheduling on the other Mesos nodes even though it has not yet launched all
the executors I asked for! This is super weird!
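
For reference, this is roughly the kind of setup I mean (a minimal sketch;
the master URL, app name, and values are illustrative, not my exact config):

  import org.apache.spark.sql.SparkSession

  // With these illustrative values, Spark should request
  // spark.cores.max / spark.executor.cores = 16 / 4 = 4
  // coarse-grained executors. The Mesos master URL below is a placeholder.
  val spark = SparkSession.builder()
    .appName("split-across-nodes")
    .master("mesos://zk://mesos-master:2181/mesos")
    .config("spark.cores.max", "16")
    .config("spark.executor.cores", "4")
    .config("spark.executor.memory", "4g")
    .getOrCreate()

Since coarse-grained mode is the default on Mesos, I expected those 4
executors to spread across nodes as resource offers come in, not pile up
on one node.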

Did anyone notice this behavior before? Any help appreciated!

Ji

