spark-user mailing list archives

From Han JU <ju.han.fe...@gmail.com>
Subject Re: No space left on device error when pulling data from s3
Date Tue, 06 May 2014 17:24:49 GMT
After some investigation, I found that there are lots of temp files under

/tmp/hadoop-root/s3/

But this is strange, since in both conf files,
~/ephemeral-hdfs/conf/core-site.xml and ~/spark/conf/core-site.xml, the
setting `hadoop.tmp.dir` is set to `/mnt/ephemeral-hdfs/`. Why do Spark jobs
still write temp files to /tmp/hadoop-root ?
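[Editor's note] One possible lead, offered as a guess rather than a confirmed diagnosis: the path /tmp/hadoop-root/s3 matches the local buffer directory of Hadoop's block-based s3:// filesystem, which is controlled by the `fs.s3.buffer.dir` property and defaults to `${hadoop.tmp.dir}/s3`. If the job's classpath does not actually include the edited core-site.xml, both properties fall back to their defaults under /tmp. A sketch of pinning the buffer directory explicitly (the property name is real Hadoop configuration; the target path is only an assumption based on this cluster's /mnt layout):

```xml
<!-- core-site.xml: put the S3 filesystem's local buffer on the large
     /mnt volume instead of the small root filesystem. fs.s3.buffer.dir
     defaults to ${hadoop.tmp.dir}/s3, so it only follows hadoop.tmp.dir
     when this file is actually picked up by the job. -->
<property>
  <name>fs.s3.buffer.dir</name>
  <value>/mnt/ephemeral-hdfs/s3</value>
</property>
```

Whether the workers were launched with this file on their HADOOP_CONF_DIR would need to be verified on the cluster itself.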


2014-05-06 18:05 GMT+02:00 Han JU <ju.han.felix@gmail.com>:

> Hi,
>
> I get a `no space left on device` exception when pulling some 22GB of data
> from s3 block storage to the ephemeral HDFS. The cluster is on EC2, launched
> with the spark-ec2 script with 4 m1.large instances.
>
> The code is basically:
>   val in = sc.textFile("s3://...")
>   in.saveAsTextFile("hdfs://...")
>
> Spark creates 750 input partitions based on the input splits. When it
> begins throwing this exception, there's no space left on the root file
> system of some worker machines:
>
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/xvda1             8256952   8256952         0 100% /
> tmpfs                  3816808         0   3816808   0% /dev/shm
> /dev/xvdb            433455904  29840684 381596916   8% /mnt
> /dev/xvdf            433455904  29437000 382000600   8% /mnt2
>
> Before the job begins, only 35% of the root filesystem is used:
>
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/xvda1             8256952   2832256   5340840  35% /
> tmpfs                  3816808         0   3816808   0% /dev/shm
> /dev/xvdb            433455904  29857768 381579832   8% /mnt
> /dev/xvdf            433455904  29470104 381967496   8% /mnt2
>
>
> Any suggestions on this problem? Does Spark cache or store some data
> before writing to HDFS?
>
>
> Full stacktrace:
> ---------------------
> java.io.IOException: No space left on device
>   at java.io.FileOutputStream.writeBytes(Native Method)
>   at java.io.FileOutputStream.write(FileOutputStream.java:345)
>   at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
>   at org.apache.hadoop.fs.s3.Jets3tFileSystemStore.retrieveBlock(Jets3tFileSystemStore.java:210)
>   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>   at com.sun.proxy.$Proxy8.retrieveBlock(Unknown Source)
>   at org.apache.hadoop.fs.s3.S3InputStream.blockSeekTo(S3InputStream.java:160)
>   at org.apache.hadoop.fs.s3.S3InputStream.read(S3InputStream.java:119)
>   at java.io.DataInputStream.read(DataInputStream.java:100)
>   at org.apache.hadoop.util.LineReader.readLine(LineReader.java:134)
>   at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:92)
>   at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:51)
>   at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:156)
>   at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:149)
>   at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:64)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
>   at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
>   at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
>   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:109)
>   at org.apache.spark.scheduler.Task.run(Task.scala:53)
>   at org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:213)
>   at org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:49)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:744)
>
>
> --
> *JU Han*
>
> Data Engineer @ Botify.com
>
> +33 0619608888
>



