Take a look at https://github.com/spark-jobserver/spark-jobserver or https://github.com/cloudera/livy

You can launch a persistent Spark context and then submit your jobs to that already-running context, so each request skips the context-startup cost.
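For example, Livy exposes a REST API where POST /batches submits a pre-built job to a running Spark deployment. A minimal sketch, assuming a Livy server at localhost:8998 and a hypothetical jar path and main class (both placeholders, not from a real deployment):

```python
import json
from urllib import request

LIVY_URL = "http://localhost:8998"  # assumed Livy endpoint

# Payload for Livy's POST /batches endpoint: runs a pre-built Spark job
# without paying the per-request driver/context startup inside your HTTP server.
payload = {
    "file": "hdfs:///jobs/my-job.jar",   # hypothetical jar location
    "className": "com.example.MyJob",    # hypothetical main class
    "args": ["--input", "hdfs:///data"], # hypothetical job arguments
}

body = json.dumps(payload).encode("utf-8")
req = request.Request(
    LIVY_URL + "/batches",
    data=body,
    headers={"Content-Type": "application/json"},
)
# resp = request.urlopen(req)  # uncomment once a Livy server is actually running

print(json.dumps(payload))
```

Your HTTP server then only issues lightweight REST calls, while the Spark context lives on in Livy (or spark-jobserver, which offers a similar persistent-context API).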

On Wed, Nov 2, 2016 at 3:34 AM, Fanjin Zeng <fj.zeng@yahoo.com.invalid> wrote:

 I am working on a project that takes requests from an HTTP server and computes on Spark accordingly. The problem is that when I receive many requests at the same time, users waste a lot of time on the unnecessary startup that occurs for each request. Does Spark have a built-in job scheduler to solve this problem, or is there a trick that can be used to avoid these unnecessary startups?
