spark-user mailing list archives

From Steve Loughran <ste...@hortonworks.com>
Subject Re: Missing output partition file in S3
Date Fri, 16 Sep 2016 10:46:12 GMT

On 15 Sep 2016, at 19:37, Chen, Kevin <Kevin.Chen@neustar.biz> wrote:

Hi,

Has anyone encountered an issue with missing output partition files in S3? My Spark job writes
its output to an S3 location. Occasionally I notice that one partition file is missing, and as a
result one chunk of data is lost. If I rerun the same job, the problem usually goes away. It
happens fairly randomly; I have observed it once or twice a week on a daily job. I am using
Spark 1.2.1.

Any input or suggested fix/workaround would be very much appreciated.




This doesn't sound good.

Without making any promises about being able to fix this, I would like to understand the
setup to see if there is something that can be done to address it:

  1.  Which S3 installation? US East or elsewhere?
  2.  Which S3 client: s3n or s3a? If you are on Hadoop 2.7+, can you switch to s3a if you haven't
already? (Exception: if you are using AWS EMR you have to stick with their s3:// client.)
  3.  Are you running in-EC2 or remotely?
  4.  How big are the datasets being generated?
  5.  Do you have speculative execution turned on?
  6.  Which committer: the external "DirectCommitter", or the classic Hadoop FileOutputCommitter?
If the latter, and you are using Hadoop 2.7.x, can you try the v2 algorithm
(mapreduce.fileoutputcommitter.algorithm.version = 2)? See the sketch after this list.
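
For illustration only (this is not from the original thread; the application name and the
credential lookups are placeholders): a minimal sketch of how points 2, 5 and 6 could be applied
when building the SparkContext, assuming Hadoop 2.7+ with the s3a client on the classpath.

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch only: property names assume Hadoop 2.7+ and the s3a client.
    val conf = new SparkConf()
      .setAppName("s3-output-job")                     // placeholder name
      // point 5: rule out speculative execution re-running tasks against S3
      .set("spark.speculation", "false")
      // point 6: use the v2 FileOutputCommitter algorithm (Hadoop 2.7+)
      .set("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", "2")
      // point 2: s3a credentials; s3a itself is selected by writing to an s3a:// URI
      .set("spark.hadoop.fs.s3a.access.key", sys.env("AWS_ACCESS_KEY_ID"))
      .set("spark.hadoop.fs.s3a.secret.key", sys.env("AWS_SECRET_ACCESS_KEY"))

    val sc = new SparkContext(conf)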

I should warn that the stance of myself and my colleagues is "don't commit direct to S3": write
to HDFS and do a distcp when you finally copy out the data. S3 itself doesn't have enough
consistency for committing output to work in the presence of all the race conditions and failure
modes. At least here you've noticed the problem; the thing people fear is not noticing that
a problem has arisen at all.
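
Again purely as a sketch (the paths and bucket names are made up, not from this thread): write
the job output to HDFS so the commit runs against a consistent filesystem, then copy it out to
S3 with distcp in a separate step once the job has succeeded.

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch of the "write to HDFS, then copy out" pattern; paths are placeholders.
    val sc = new SparkContext(new SparkConf().setAppName("hdfs-then-distcp"))
    val hdfsStaging = "hdfs:///user/etl/staging/daily-output"

    sc.parallelize(1 to 100)
      .map(i => s"record-$i")
      .saveAsTextFile(hdfsStaging)   // the commit happens against consistent HDFS

    // After the job succeeds, push the results to S3 from the command line, e.g.
    //   hadoop distcp hdfs:///user/etl/staging/daily-output s3a://my-bucket/daily-output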

-Steve
