spark-user mailing list archives

From "Qiao, Richard" <Richard.Q...@capitalone.com>
Subject Re: Programmatically get status of job (WAITING/RUNNING)
Date Fri, 08 Dec 2017 02:05:53 GMT
For #2, do you mean the “RUNNING” shown in the “Driver” table? If so, that is not a problem: the driver really is running even though no executor is available yet, and that combination (driver running while no executors are registered) is itself a condition you can catch.
Comparing #1 and #3, my understanding of “submitted” is “the jar has been submitted to the executors”. With that concept you can define your own status, for example along the lines of the sketch below.
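
A rough sketch of that idea: once the launcher reports RUNNING and an application id is known, poll the driver's monitoring REST endpoint /api/v1/applications/<appId>/executors and treat "driver entry only" as your own WAITING-style status. The class name, the host parameter and the default UI port 4040 below are assumptions for a default setup, not anything the launcher gives you.

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

import org.apache.spark.launcher.SparkAppHandle;

// Hypothetical helper: combines the launcher state with the executor list
// reported by the driver's monitoring REST API to tell "really running"
// apart from "still waiting for executors".
public class ExecutorAwareStatus {

    public enum CustomState { WAITING_FOR_EXECUTORS, RUNNING, OTHER }

    public static CustomState of(SparkAppHandle handle, String driverUiHost) throws Exception {
        if (handle.getState() != SparkAppHandle.State.RUNNING || handle.getAppId() == null) {
            return CustomState.OTHER;
        }
        // Default driver UI port; adjust if spark.ui.port is changed.
        URL url = new URL("http://" + driverUiHost + ":4040/api/v1/applications/"
                + handle.getAppId() + "/executors");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (InputStream in = conn.getInputStream();
             Scanner s = new Scanner(in, "UTF-8").useDelimiter("\\A")) {
            String json = s.hasNext() ? s.next() : "";
            // Crude check (use a real JSON parser in practice): the endpoint
            // always lists a "driver" entry; any other id means at least one
            // executor has registered.
            String compact = json.replaceAll("\\s", "");
            boolean hasExecutor = compact.replace("\"id\":\"driver\"", "").contains("\"id\":");
            return hasExecutor ? CustomState.RUNNING : CustomState.WAITING_FOR_EXECUTORS;
        } finally {
            conn.disconnect();
        }
    }
}

Calling something like this from the stateChanged() callback gives you the (driver RUNNING, no executors) case as a distinct status.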

Best Regards
Richard


On 12/4/17, 4:06 AM, "bsikander" <behroz89@gmail.com> wrote:

    So, I tried to use SparkAppHandle.Listener with SparkLauncher as you
    suggested (see the sketch at the end of this message for the wiring). The
    behavior of the launcher is not what I expected.
    
    1- If I start the job (using SparkLauncher) and my Spark cluster has enough
    cores available, I receive events in my class extending
    SparkAppHandle.Listener and I see the status changing from
    UNKNOWN -> CONNECTED -> SUBMITTED -> RUNNING. All good here.
    
    2- If my Spark cluster has cores only for my driver process (running in
    cluster mode) but none for my executors, then I still receive the RUNNING
    event. I was expecting something else: since my executors have no cores and
    the Master UI shows the WAITING state, the listener should report
    SUBMITTED instead of RUNNING.
    
    3- If my Spark cluster has no cores even for the driver process, then
    SparkLauncher fires no events at all and the state stays UNKNOWN. I would
    have expected it to be at least SUBMITTED.
    
    *Is there any way to reliably get the WAITING state of a job?*
    Driver=RUNNING, executor=RUNNING -> overall state should be RUNNING
    Driver=RUNNING, executor=WAITING -> overall state should be SUBMITTED/WAITING
    Driver=WAITING, executor=WAITING -> overall state should be
    CONNECTED/SUBMITTED/WAITING
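    
    For reference, the launcher/listener wiring described in #1 above looks
    roughly like this; the jar path, main class and master URL are placeholders:
    
    import org.apache.spark.launcher.SparkAppHandle;
    import org.apache.spark.launcher.SparkLauncher;
    
    public class LaunchExample {
        public static void main(String[] args) throws Exception {
            SparkAppHandle handle = new SparkLauncher()
                .setAppResource("/path/to/my-app.jar")   // placeholder jar
                .setMainClass("com.example.MyApp")       // placeholder main class
                .setMaster("spark://master:7077")        // placeholder master URL
                .setDeployMode("cluster")
                .startApplication(new SparkAppHandle.Listener() {
                    @Override
                    public void stateChanged(SparkAppHandle h) {
                        // Fired on UNKNOWN -> CONNECTED -> SUBMITTED -> RUNNING -> ...
                        System.out.println("State changed: " + h.getState());
                    }
                    @Override
                    public void infoChanged(SparkAppHandle h) {
                        System.out.println("App id: " + h.getAppId());
                    }
                });
            // Keep the JVM alive long enough to observe the events.
            Thread.sleep(60000);
        }
    }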
    
    
    
    
    
    
    
    --
    Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
    
    
    
