spark-user mailing list archives

From Steve Loughran <>
Subject Re: Spark Job always cause a node to reboot
Date Fri, 05 Jun 2015 10:51:40 GMT

> On 4 Jun 2015, at 15:59, Chao Chen <> wrote:
> But when I try to run the Pagerank from HiBench, it always cause a node to reboot during
> the middle of the work for all scala, java, and python versions. But works fine
> with the MapReduce version from the same benchmark.

do you mean a real server reboot? Without warning?

That's a serious problem. If it were just one server, I'd look at hardware problems: especially
memory, whether you have mixed CPUs in a dual-socket server, or even potentially an HDD issue.

If it's all servers, then it's an OS or filesystem problem.

As well as lowering vm.swappiness, turn off transparent huge pages in the kernel.
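A minimal sketch of both tweaks, run as root (the sysfs path for transparent huge pages is the one used by mainline kernels; some distros, e.g. RHEL 6, expose it under a different path, so check your system first):

```shell
# Discourage the kernel from swapping out JVM heap pages
sysctl -w vm.swappiness=0
# Persist the setting across reboots
echo "vm.swappiness=0" >> /etc/sysctl.conf

# Disable transparent huge pages (and the THP defrag daemon) for this boot
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
```

To make the THP change survive a reboot, distros usually want it in rc.local or a boot-time unit, since it isn't a sysctl.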

See also some Hadoop/HDFS notes on filesystems, now about five years old.

Everyone generally still recommends ext3, or maybe ext4, with noatime.

