spark-user mailing list archives

From Soumya Simanta <soumya.sima...@gmail.com>
Subject Re: rsync problem
Date Fri, 19 Sep 2014 11:16:44 GMT
One possible reason may be that the worker directory,
$SPARK_HOME/work, is being rsynced as well.
Try emptying the contents of the work folder on each node and then try again.
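
For example, a minimal sketch of clearing the work directory on every
node, assuming passwordless SSH and a hypothetical slaves.txt file that
lists one worker hostname per line:

    # slaves.txt and the remote SPARK_HOME path are assumptions; adjust as needed.
    # Assumes SPARK_HOME is set in the remote shell's environment.
    while read host; do
      ssh "$host" 'rm -rf "$SPARK_HOME"/work/*'
    done < slaves.txt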



On Fri, Sep 19, 2014 at 4:53 AM, rapelly kartheek <kartheek.mbms@gmail.com>
wrote:

> I followed this command: rsync -avL --progress path/to/spark-1.0.0
> username@destinationhostname:path/to/destdirectory. Anyway, for now, I
> did it individually for each node.
>
> I copied to each node individually using the above command, so I guess
> the copy should not contain any mixture of files. Also, as of now, I am
> not seeing any MethodNotFound exceptions, but no job execution is
> taking place.
>
> After some time, the nodes go down one by one and the cluster shuts down.
>
> On Fri, Sep 19, 2014 at 2:15 PM, Tobias Pfeiffer <tgp@preferred.jp> wrote:
>
>> Hi,
>>
>> On Fri, Sep 19, 2014 at 5:17 PM, rapelly kartheek <
>> kartheek.mbms@gmail.com> wrote:
>>
>>> > you have copied a lot of files from various hosts to
>>> > username@slave3:path
>>>
>>> only from one node to all the other nodes...
>>>
>>
>> I don't think rsync can do that in one command as you described. My guess
>> is that you now have a wild mixture of jar files across your cluster,
>> which will lead to fancy exceptions like MethodNotFound etc.; that may be
>> why your cluster is not working correctly.
>>
>> Tobias
>>
>>
>>
>
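
For reference, a minimal sketch of pushing one consistent build from the
master to every node, assuming passwordless SSH and the same hypothetical
slaves.txt; rsync's --delete flag removes stale files on the destination,
so the nodes do not end up with a mixture of old and new jars:

    # Paths are the placeholders from the thread; adjust for your layout.
    # The trailing slash on the source copies its contents into destdirectory.
    while read host; do
      rsync -avL --delete path/to/spark-1.0.0/ \
        username@"$host":path/to/destdirectory/
    done < slaves.txt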
