spark-user mailing list archives

From zhan8610189 <375901...@qq.com>
Subject spark streaming executor number still increasing
Date Wed, 13 Sep 2017 05:30:37 GMT
I use a CDH Spark (1.5.0-hadoop2.6.0) cluster and have written a Spark Streaming
application, which I start with the following command:

spark-submit --master spark://xxxx:7077 \
  --conf spark.cores.max=4 \
  --num-executors 4 --total-executor-cores 4 --executor-cores 4 \
  --executor-memory 2g \
  --class com.xxxx.KafkaActive \
  streaming-assembly-0.0.1-SNAPSHOT.jar
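
For reference, here is the same submit written with --conf properties only. The
flag-to-property mapping below is my own understanding of Spark 1.5 (in
particular I am not sure --num-executors is honoured outside of YARN), so
please correct me if it is wrong:

# Assumed mapping (unverified):
#   --executor-cores       -> spark.executor.cores
#   --executor-memory      -> spark.executor.memory
#   --total-executor-cores -> spark.cores.max (standalone/Mesos)
#   --num-executors        -> spark.executor.instances (YARN only, I believe)
spark-submit --master spark://xxxx:7077 \
  --conf spark.cores.max=4 \
  --conf spark.executor.cores=4 \
  --conf spark.executor.memory=2g \
  --class com.xxxx.KafkaActive \
  streaming-assembly-0.0.1-SNAPSHOT.jar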

However, I found that the memory on the Spark nodes (servers) is completely
used up, and the number of Spark Streaming executors keeps increasing: new
executors are started, but the removed executors (CoarseGrainedExecutorBackend
instances) do not exit.
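
A quick way to confirm this on a single worker node (assuming the JDK's jps
tool is available there) is to count the executor JVMs:

# Count running executor JVMs on this worker
# (each one shows up as a CoarseGrainedExecutorBackend process)
jps | grep CoarseGrainedExecutorBackend | wc -l

# Or with ps, if jps is not installed
ps -ef | grep [C]oarseGrainedExecutorBackend | wc -l

On my workers this count just keeps going up.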

What should I do?

See the following screenshot:

<http://apache-spark-user-list.1001560.n3.nabble.com/file/t8431/0FBB2E50-FBCA-4AB7-A9B4-9A248082EFD6.png>

--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/

