Hi,

I recently bumped up the resources for a Spark Streaming job, and its performance started to degrade over time.
It was running fine on 7 nodes with 14 executor cores each (via YARN) until I bumped spark.executor.cores to 22 cores/node (out of 32 vCPUs on AWS c3.8xlarge instances, 24 of which are allocated to YARN).
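In case it's useful, the submit command looks roughly like this (these are the standard spark-submit flags; the executor memory value, class name, and jar are placeholders, not my real ones):

    # 7 executors; --executor-cores was 14 before the bump to 22
    spark-submit \
      --master yarn \
      --num-executors 7 \
      --executor-cores 22 \
      --executor-memory 8g \
      --class com.example.MyStreamingApp \
      my-streaming-app.jar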

The driver has 2 cores and 2 GB of RAM (its resource usage is essentially zero).

At really low data volume, it goes from 1-2 seconds per batch to 4-5 s/batch after about 6 hours, while doing almost nothing. I've noticed that the scheduler delay is 3-4 s, even 5-6 s for some tasks, when it should be in the low tens of milliseconds. What's weirder is that under moderate load (thousands of events per second) the delay is not as pronounced anymore.

After this I reduced spark.executor.cores to 20 and bumped spark.driver.cores to 4, and it seems to be OK now.
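For reference, this is roughly the shape of the submission that behaves well now (same placeholders as above; the 2g driver memory is the real value). I'm assuming cluster mode in this sketch because, as far as I can tell from the Spark docs, spark.driver.cores / --driver-cores only takes effect in cluster mode, where the driver runs inside the YARN application master:

    # reduced executor cores to 20, gave the driver 4 cores
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --num-executors 7 \
      --executor-cores 20 \
      --driver-cores 4 \
      --driver-memory 2g \
      --executor-memory 8g \
      --class com.example.MyStreamingApp \
      my-streaming-app.jar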
However, this is totally empirical; I haven't found any documentation, code samples, or mailing list discussion on how to properly set spark.driver.cores.

Does anyone know how spark.driver.cores should be sized, or why increasing the executor cores would inflate scheduler delay like this?
Thanks in advance,
-adrian