spark-user mailing list archives

From Bruno Faria <>
Subject Shuffle service with more than one executor
Date Mon, 04 Mar 2019 02:51:41 GMT

I have a Spark standalone cluster running on Kubernetes, with anti-affinity configured for network performance.

I’d like to enable Spark dynamic allocation, and for this I need to enable the external shuffle service,
but it looks like I can’t do that when running more than one worker instance on the same node.
Is there a way to accomplish that, or should I create one worker per pod?
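For reference, these are the standard configuration keys involved (a minimal sketch of `spark-defaults.conf` entries; the port shown is the Spark default, included only for illustration):

```properties
# Enable dynamic executor allocation; in standalone mode this
# requires the external shuffle service to be enabled as well
spark.dynamicAllocation.enabled   true
spark.shuffle.service.enabled     true

# In standalone mode each worker process starts the shuffle service
# on this port, so multiple workers on the same host collide on it
# (7337 is the default value)
spark.shuffle.service.port        7337
```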

