spark-user mailing list archives

From: satish saley <satishsale...@gmail.com>
Subject: Re: killing spark job which is submitted using SparkSubmit
Date: Fri, 06 May 2016 20:27:18 GMT
Thank you Anthony. I am clearer on yarn-cluster and yarn-client now.

On Fri, May 6, 2016 at 1:05 PM, Anthony May <anthonymay@gmail.com> wrote:

> Making the master yarn-cluster means that the driver itself runs on YARN,
> not just on the executor nodes. It is then independent of your application
> and can only be killed via YARN commands, or by running to completion if
> it is a batch job. The simplest way to tie the driver to your app is to
> pass yarn-client as the master instead.
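>
> A minimal sketch of that change, reusing the SparkSubmit snippet and the
> flags from elsewhere in this thread (the values are illustrative):
>
>     import org.apache.spark.deploy.SparkSubmit;
>
>     // Sketch: with yarn-client as master, the driver runs inside this JVM,
>     // so the Spark job dies when this application is killed.
>     class MyClass {
>         public static void main(String[] ignored) {
>             String[] args = {
>                 "--master", "yarn-client",
>                 "--name", "pysparkexample",
>                 "--executor-memory", "1G",
>                 "--driver-memory", "1G",
>                 "pi.py"
>             };
>             SparkSubmit.main(args);
>         }
>     }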
>
> On Fri, May 6, 2016 at 2:00 PM satish saley <satishsaleyos@gmail.com>
> wrote:
>
>> Hi Anthony,
>>
>> I am passing:
>>
>>     --master yarn-cluster
>>     --name pysparkexample
>>     --executor-memory 1G
>>     --driver-memory 1G
>>     --conf spark.yarn.historyServer.address=http://localhost:18080
>>     --conf spark.eventLog.enabled=true
>>     --verbose
>>     pi.py
>>
>>
>> I am able to run the job successfully. I just want to get it killed
>> automatically whenever I kill my application.
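>>
>> (A related sketch for reference: Spark's launcher API,
>> org.apache.spark.launcher.SparkLauncher, available since Spark 1.4, starts
>> spark-submit as a child process whose handle can be destroyed together with
>> the parent app. The class name and shutdown-hook wiring below are
>> illustrative, not an established recipe from this thread:)
>>
>>     import org.apache.spark.launcher.SparkLauncher;
>>
>>     // Illustrative: spark-submit runs as a child process; destroying it
>>     // when this app exits kills the driver (and the job, in client mode).
>>     public class LaunchPi {
>>         public static void main(String[] args) throws Exception {
>>             Process spark = new SparkLauncher()
>>                     .setMaster("yarn-client")
>>                     .setAppName("pysparkexample")
>>                     .setConf(SparkLauncher.EXECUTOR_MEMORY, "1G")
>>                     .setConf(SparkLauncher.DRIVER_MEMORY, "1G")
>>                     .setAppResource("pi.py")
>>                     .launch();
>>             Runtime.getRuntime().addShutdownHook(new Thread(spark::destroy));
>>             spark.waitFor();
>>         }
>>     }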
>>
>>
>> On Fri, May 6, 2016 at 11:58 AM, Anthony May <anthonymay@gmail.com>
>> wrote:
>>
>>> Greetings Satish,
>>>
>>> What are the arguments you're passing in?
>>>
>>> On Fri, 6 May 2016 at 12:50 satish saley <satishsaleyos@gmail.com>
>>> wrote:
>>>
>>>> Hello,
>>>>
>>>> I am submitting a Spark job using SparkSubmit. When I kill my
>>>> application, it does not kill the corresponding Spark job. How would I
>>>> kill the corresponding Spark job? I know one way is to run SparkSubmit
>>>> again with the appropriate options. Is there any way through which I can
>>>> tell SparkSubmit this at the time of job submission itself? Here is my code:
>>>>
>>>>
>>>>     import org.apache.spark.deploy.SparkSubmit;
>>>>
>>>>     class MyClass {
>>>>         public static void main(String[] args) {
>>>>             // preparing args
>>>>             SparkSubmit.main(args);
>>>>         }
>>>>     }
>>>>
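>>>> For the kill-it-afterwards route via YARN, a minimal sketch using the
>>>> Hadoop YARN client API (this assumes the yarn-client jars are on the
>>>> classpath; the class name and argument handling are illustrative; the
>>>> CLI equivalent is yarn application -kill <applicationId>):
>>>>
>>>>     import org.apache.hadoop.yarn.api.records.ApplicationId;
>>>>     import org.apache.hadoop.yarn.client.api.YarnClient;
>>>>     import org.apache.hadoop.yarn.conf.YarnConfiguration;
>>>>     import org.apache.hadoop.yarn.util.ConverterUtils;
>>>>
>>>>     // Illustrative helper: kills a running YARN application by ID.
>>>>     public class KillYarnApp {
>>>>         public static void main(String[] args) throws Exception {
>>>>             YarnClient yarnClient = YarnClient.createYarnClient();
>>>>             yarnClient.init(new YarnConfiguration());
>>>>             yarnClient.start();
>>>>             // application ID as printed at submission time
>>>>             ApplicationId appId = ConverterUtils.toApplicationId(args[0]);
>>>>             yarnClient.killApplication(appId);
>>>>             yarnClient.stop();
>>>>         }
>>>>     }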
>>>>
>>
