spark-user mailing list archives

From zhan8610189 <>
Subject spark streaming executor number still increase
Date Wed, 13 Sep 2017 05:30:37 GMT
I use a CDH Spark (1.5.0-hadoop2.6.0) cluster and wrote a Spark Streaming
application, which I start with the following command:

spark-submit --master spark://xxxx:7077 --conf spark.cores.max=4
--num-executors 4 --total-executor-cores 4 --executor-cores 4
--executor-memory 2g --class com.xxxx.KafkaActive
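
A note on these flags (my reading of Spark 1.5 standalone-mode behavior, not something stated in the post): `--num-executors` is a YARN-only flag and is ignored by a standalone master, while `spark.cores.max=4` caps the total cores for the whole application, so combined with `--executor-cores 4` it should yield at most a single 4-core executor. A complete invocation also needs the application jar, which the command above omits; the jar path below is a hypothetical placeholder:

```shell
# Sketch only: the jar path is a hypothetical placeholder (the original
# command omitted it). --num-executors is dropped because the standalone
# master ignores it; spark.cores.max=4 with --executor-cores 4 should
# give at most one 4-core executor.
spark-submit \
  --master spark://xxxx:7077 \
  --conf spark.cores.max=4 \
  --executor-cores 4 \
  --executor-memory 2g \
  --class com.xxxx.KafkaActive \
  /path/to/kafka-active-assembly.jar
```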

But I found that the Spark nodes' (servers') memory is completely used up:
the number of Spark Streaming executors keeps increasing, and new executors
are started, but the removed executors (CoarseGrainedExecutorBackend
instances) do not exit.

What should I do?
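
One way to check whether the old backends are really still alive, rather than just listed in the UI, is a quick diagnostic on the nodes (a sketch; the master log path is a hypothetical example and will differ per installation):

```shell
# On a worker node: count executor JVMs that are still running.
jps | grep -c CoarseGrainedExecutorBackend

# On the master node: see why executors are being removed and relaunched.
# The log path is hypothetical; adjust it to your CDH installation.
grep -E "Removing executor|Launching executor" \
  /var/log/spark/spark-master.log | tail -n 20
```

If the `jps` count keeps growing while the master log shows executors being removed, the backend processes are not shutting down cleanly, which would explain the memory exhaustion.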

See the attached screenshots: