spark-user mailing list archives

From Luca Canali <>
Subject RE: Understanding Executors UI
Date Fri, 08 Jan 2021 20:59:41 GMT
You report 'Storage Memory': 3.3 TB / 598.5 GB -> The first number is the memory used for
storage, the second is the memory available for storage in the unified memory pool.
The used memory shown in your web UI snippet is indeed quite high (higher than the available
memory!?), so you can probably profit from drilling down on that to understand better what is
going on. For example, look at the details per executor (the numbers you reported are aggregated
values), and also look at the "Storage" tab for a list of cached RDDs with details.
Incidentally, Spark 3.0 has improved memory instrumentation and improved instrumentation for
streaming, so you could profit from testing there too.
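[Editor's sketch, not part of the original message: the 'available' figure in 'used / available' can be approximated from the executor settings. The 300 MB reserved heap and the 0.6 default for spark.memory.fraction are from the Spark memory-management documentation; the heap size and executor count below are illustrative assumptions, not values from this thread.]

```python
# Sketch: how the "available for storage" figure in the Executors tab is
# derived. Per the Spark docs, each executor reserves 300 MB of heap, and
# spark.memory.fraction (default 0.6) of the remainder forms the unified
# pool shared by execution and storage.

RESERVED_MB = 300          # fixed reserved memory per executor JVM
MEMORY_FRACTION = 0.6      # spark.memory.fraction default

def unified_pool_mb(executor_heap_mb: float) -> float:
    """Size in MB of the unified execution+storage pool for one executor."""
    return (executor_heap_mb - RESERVED_MB) * MEMORY_FRACTION

# Hypothetical cluster: 100 executors, 10 GB heap each.
per_executor = unified_pool_mb(10 * 1024)      # 5964.0 MB per executor
cluster_gb = 100 * per_executor / 1024         # ~582 GB cluster-wide
print(f"per executor: {per_executor:.1f} MB, cluster-wide: {cluster_gb:.1f} GB")
```

Storage and execution borrow from each other inside this pool (the boundary is tuned with spark.memory.storageFraction), which is one reason the 'used' figure can drift relative to the nominal storage share.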

From: Eric Beabes <>
Sent: Friday, January 8, 2021 04:23
To: Luca Canali <>
Cc: spark-user <>
Subject: Re: Understanding Executors UI

So when I see 'Storage Memory': 3.3 TB / 598.5 GB, it's telling me that Spark is using
3.3 TB of memory and 598.5 GB is used for caching data, correct? What surprises me is
that these numbers don't change at all throughout the day, even though the load on the system
is low after 5 pm PST.

I would expect the "Memory used" to be lower than 3.3 TB after 5 pm PST.

Does Spark 3.0 do a better job of memory management? I'm wondering if upgrading to Spark 3.0
would improve performance.

On Wed, Jan 6, 2021 at 2:29 PM Luca Canali <> wrote:
Hi Eric,

A few links, in case they can be useful for your troubleshooting:

The Spark Web UI is documented in the Spark 3.x documentation, although most of it applies
to Spark 2.4 too:

Spark memory management is documented in the Spark configuration/tuning docs.
Additional resource: see also this diagram.


From: Eric Beabes <>
Sent: Wednesday, January 6, 2021 00:20
To: spark-user <>
Subject: Understanding Executors UI


Not sure if this image will go through. (I've never sent an email to this mailing list with an
attachment before.)

I am trying to understand the 'Executors' UI in Spark 2.4. I have a Stateful Structured Streaming
job with 'State timeout' set to 10 minutes. When the load on the system is low, a message gets
written to Kafka immediately after the state times out, BUT under heavy load it takes over
40 minutes for a message to appear on the output topic. I'm trying to debug this issue and see
if performance can be improved.


1) I am requesting 3.2 TB of memory, but the job seems to keep using only 598.5 GB, as per
the values in 'Storage Memory' as well as 'On Heap Storage Memory'. Is this a cluster
issue, or am I not setting the values correctly?
2) Where can I find documentation explaining the different tabs in the Spark UI? (Sorry,
Googling didn't help. I will keep searching.)

Any pointers would be appreciated. Thanks.
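[Editor's sketch, not part of the original message: the per-executor numbers behind the Executors tab are also exposed by the Spark monitoring REST API at /api/v1/applications/{app-id}/executors. The field names memoryMetrics.usedOnHeapStorageMemory and totalOnHeapStorageMemory are from that API; the JSON below is fabricated sample data, not output from this job.]

```python
# Sketch: flag executors whose used on-heap storage memory exceeds the
# storage pool reported for them. In a real session, fetch the JSON from
# the driver's /api/v1/applications/{app-id}/executors endpoint instead
# of using this embedded sample.
import json

sample = json.loads("""
[
  {"id": "1",
   "memoryMetrics": {"usedOnHeapStorageMemory": 700000000,
                     "totalOnHeapStorageMemory": 626471321}},
  {"id": "2",
   "memoryMetrics": {"usedOnHeapStorageMemory": 12345678,
                     "totalOnHeapStorageMemory": 626471321}}
]
""")

def over_budget(executors):
    """Return ids of executors using more storage memory than their pool."""
    return [e["id"] for e in executors
            if e["memoryMetrics"]["usedOnHeapStorageMemory"]
               > e["memoryMetrics"]["totalOnHeapStorageMemory"]]

print(over_budget(sample))  # -> ['1']
```

Checking per executor rather than the aggregated totals shown in the UI makes it easier to spot a few skewed executors hiding behind a cluster-wide figure.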
