spark-dev mailing list archives

From Matthias Boehm <>
Subject Fair scheduler pool leak
Date Fri, 06 Apr 2018 02:46:43 GMT
Hi all,

for concurrent Spark jobs spawned from the driver, we use Spark's fair
scheduler pools, which are set and unset in a thread-local manner by
each worker thread. Typically (for rather long jobs), this works very
well. Unfortunately, in an application with many very short parallel
sections, we see thousands of these pools remaining in the Spark UI,
which indicates some kind of leak. Each worker cleans up its local
property by setting it to null, but the pools themselves are never
removed. I've checked and reproduced this behavior on Spark 2.1-2.3.
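To make the suspected mechanism concrete, here is a minimal Python analogy of the pattern (not Spark's actual code; the registry, property, and function names below are all hypothetical): each worker thread sets a thread-local pool name, the scheduler registers that pool in a global registry on first use, and clearing the thread-local property afterwards does not remove the registry entry.

```python
import threading

# Hypothetical stand-ins for Spark's bookkeeping (NOT Spark's real API):
# a global pool registry, plus a thread-local scheduler-pool property.
pool_registry = {}                 # analogous to the scheduler's global pool list
local_props = threading.local()    # analogous to a thread-local "spark.scheduler.pool"

def set_pool(name):
    """Worker thread sets its pool in a thread-local manner."""
    local_props.pool = name

def submit_job():
    """On first use, the scheduler registers the pool globally."""
    name = getattr(local_props, "pool", None)
    if name is not None and name not in pool_registry:
        pool_registry[name] = object()  # stand-in for a Pool instance

def clear_pool():
    """Worker cleans up only its own thread-local property..."""
    local_props.pool = None

def worker(i):
    set_pool(f"pool-{i}")
    submit_job()
    clear_pool()  # ...but the global registry entry is never removed.

threads = [threading.Thread(target=worker, args=(i,)) for i in range(1000)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All 1000 pools remain registered even though every thread cleared its property.
print(len(pool_registry))
```

Under this (assumed) model, per-thread cleanup can never shrink the global registry, which would explain why the pools accumulate in the UI regardless of how diligently each worker unsets its property.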

Now my question: Is there a way to explicitly remove these pools,
either globally, or locally while the thread is still alive?

