spark-user mailing list archives

From qingyang li <liqingyang1...@gmail.com>
Subject Re: how to config worker HA
Date Wed, 12 Mar 2014 13:26:42 GMT
In addition, on this page:
https://spark.apache.org/docs/0.9.0/scala-programming-guide.html#hadoop-datasets
I see that an RDD can be stored using different storage levels, and that StorageLevel has an attribute MEMORY_ONLY_2:

MEMORY_ONLY_2: same as the levels above, but replicate each partition on two cluster nodes.

1. Is this a fault-tolerance feature?
2. Does replicating each partition on two cluster nodes help with worker-node HA?
3. Is there a MEMORY_ONLY_3 that replicates each partition on three cluster nodes?
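For reference, a minimal sketch of what I mean, assuming the Spark 0.9 Scala API (the SparkContext `sc` and the HDFS path are illustrative placeholders; I have not verified the StorageLevel constructor arguments beyond the docs):

```scala
import org.apache.spark.storage.StorageLevel

// Replicate each cached partition on two nodes, so queries can be
// served from the surviving replica if one worker dies.
val table01 = sc.textFile("hdfs://namenode/path/table01")  // illustrative path
table01.persist(StorageLevel.MEMORY_ONLY_2)

// There is no predefined MEMORY_ONLY_3, but StorageLevel appears to take a
// replication count, so a three-way level could presumably be constructed:
// (useDisk = false, useMemory = true, deserialized = true, replication = 3)
val MEMORY_ONLY_3 = StorageLevel(false, true, true, 3)
```

Is this the intended way to get more than two replicas, or is replication capped at two?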




2014-03-12 12:11 GMT+08:00 qingyang li <liqingyang1985@gmail.com>:

> I have one table in memory; when one worker dies, I can no longer query
> data from that table. Here is its storage status:
>
>
> RDD Name   Storage Level                       Cached Partitions   Fraction Cached   Size in Memory   Size on Disk
> table01    Memory Deserialized 1x Replicated   119                 88%               697.0 MB         0.0 B
>
> So, my questions are:
> 1. What does "Memory Deserialized 1x Replicated" mean?
> 2. How do I configure worker HA so that I can still query data even when one worker is dead?
>
