I have been looking into Spark pools and have two questions I would really like to get answered.
1. Are pools available when YARN is used as the resource manager?
2. Do pools define a static partitioning of the cluster? That is, if I define two pools (via the XML file) with equal weights and only submit jobs to one of them, will only half of the resources be utilized? In other words, will Spark reserve the resources of the second pool?
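For context, this is roughly the kind of pool definition I mean — a sketch of a `fairscheduler.xml` with two equally weighted pools (the pool names are placeholders):

```xml
<?xml version="1.0"?>
<allocations>
  <pool name="pool1">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>0</minShare>
  </pool>
  <pool name="pool2">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>0</minShare>
  </pool>
</allocations>
```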
Regarding the second question: is the behavior different with on-the-fly configuration of pools (when a pool is created from code without appearing in the XML file)? And if the resources are not reserved, how exactly does the cluster re-balance itself? That is, what is the process by which resources are allocated to the second pool once jobs are submitted to it?
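By "on the fly" I mean something like the following sketch (the app and pool names are placeholders I made up; as I understand it, setting the `spark.scheduler.pool` local property to a name not declared in the XML creates that pool with default settings):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Enable the fair scheduler.
val conf = new SparkConf()
  .setAppName("pool-question")
  .set("spark.scheduler.mode", "FAIR")
val sc = new SparkContext(conf)

// Jobs submitted from this thread now go to "adhocPool",
// even though that pool was never declared in fairscheduler.xml.
sc.setLocalProperty("spark.scheduler.pool", "adhocPool")
```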