spark-user mailing list archives

From "Qiao, Richard" <Richard.Q...@capitalone.com>
Subject Re: Programmatically get status of job (WAITING/RUNNING)
Date Fri, 08 Dec 2017 02:27:25 GMT
For the question in your example, the answer is yes.
“    For example, suppose an application wants 4 executors
    (spark.executor.instances=4) but the Spark cluster can only provide 1.
    In that case I will only receive 1 onExecutorAdded event. Will the
    application state change to RUNNING even though only 1 executor was allocated?
“
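
A minimal sketch of how the two pieces discussed below (SparkLauncher for the
driver state, a SparkListener for executor allocation) could be combined,
assuming the Spark 2.x launcher API. The class names, jar path, main class, and
master URL are illustrative only, and in a real setup the executor count
tracked inside the driver would have to be reported back to the launching
process, since the two run in different JVMs:

import java.util.concurrent.atomic.AtomicInteger

import org.apache.spark.launcher.{SparkAppHandle, SparkLauncher}
import org.apache.spark.scheduler.{SparkListener, SparkListenerExecutorAdded, SparkListenerExecutorRemoved}

// Runs inside the driver JVM (register via spark.extraListeners or
// sparkContext.addSparkListener): counts currently allocated executors.
class ExecutorCountListener extends SparkListener {
  val executorCount = new AtomicInteger(0)

  override def onExecutorAdded(event: SparkListenerExecutorAdded): Unit =
    println(s"executors allocated: ${executorCount.incrementAndGet()}")

  override def onExecutorRemoved(event: SparkListenerExecutorRemoved): Unit =
    println(s"executors allocated: ${executorCount.decrementAndGet()}")
}

// Runs in the launching process: watches the driver-side state.
object LaunchAndWatch {
  def main(args: Array[String]): Unit = {
    val handle: SparkAppHandle = new SparkLauncher()
      .setAppResource("/path/to/my-app.jar")      // illustrative
      .setMainClass("com.example.MyApp")          // illustrative
      .setMaster("spark://master:7077")           // illustrative
      .setConf("spark.executor.instances", "4")
      .startApplication(new SparkAppHandle.Listener {
        override def stateChanged(h: SparkAppHandle): Unit =
          println(s"driver state: ${h.getState}") // e.g. SUBMITTED, RUNNING
        override def infoChanged(h: SparkAppHandle): Unit = ()
      })

    // The combined notion from this thread would then be:
    //   "RUNNING" = handle.getState == RUNNING && executorCount >= desired
    //   "WAITING" = anything less than that
    // Getting executorCount out of the driver JVM (REST API, shared store,
    // etc.) is left out of this sketch.
  }
}

Since, per the answer above, the application can show RUNNING even with fewer
executors than requested, the executor count is the part you would have to
track yourself.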

Best Regards
Richard


On 12/7/17, 2:40 PM, "bsikander" <behroz89@gmail.com> wrote:

    Marcelo Vanzin wrote
    > I'm not sure I follow you here. This is something that you are
    > defining, not Spark.
    
    Yes, you are right. In my code,
    1) my notion of RUNNING is that both the driver and the executors are in
    the RUNNING state.
    2) my notion of WAITING is that either the driver or an executor is in the
    WAITING state.
    
    So,
    - SparkLauncher gives me the state of the "driver"
    (RUNNING/SUBMITTED/WAITING).
    - SparkListener gives me the state of the "executors" via
    onExecutorAdded/onExecutorRemoved.
    
    I want to combine both SparkLauncher + SparkListener to achieve my view of
    RUNNING/WAITING.
    
    The only thing still confusing me is how Spark internally transitions an
    application from WAITING to RUNNING.
    For example, suppose an application wants 4 executors
    (spark.executor.instances=4) but the Spark cluster can only provide 1.
    In that case I will only receive 1 onExecutorAdded event. Will the
    application state change to RUNNING even though only 1 executor was
    allocated?
    
    Once I am clear on this logic, I can implement my feature.
    
    
    
    --
    Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
    
    ---------------------------------------------------------------------
    To unsubscribe e-mail: user-unsubscribe@spark.apache.org
    
    
