Interesting and possibly related blog post from Netflix earlier this year:

On Fri, Aug 1, 2014 at 8:09 AM, nit <> wrote:
@sean - I am using the latest code from the master branch, up to commit
a7d145e98c55fa66a541293930f25d9cdc25f3b4.

In my case I have multiple directories, each with 1024 files (the files may
differ in size). For some directories I always get a consistent
result, and for others I can reproduce the inconsistent behavior.

I am not very familiar with the S3 protocol or the S3 driver in Spark. I am
wondering: how does the S3 driver verify that all files (and their contents)
under a directory were read correctly?
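One way to probe the listing question above from the client side is to list the same S3 prefix twice (e.g. via boto or the AWS CLI) and diff the two results; with eventual consistency, a fresh listing can lag behind recent writes. A minimal sketch of the comparison step, where `diff_listings` is a hypothetical helper and the listings are plain key-to-size dicts:

```python
def diff_listings(first, second):
    """Compare two listings of the same S3 prefix, each a dict mapping
    object key -> size in bytes. Returns (missing, changed): keys present
    in `first` but absent from `second`, and keys whose reported size
    differs between the two listings."""
    missing = sorted(set(first) - set(second))
    changed = sorted(k for k in first
                     if k in second and first[k] != second[k])
    return missing, changed

# Example: the second listing is missing part-00002, as can happen
# when a listing has not yet caught up with the writes.
a = {"part-00001": 1024, "part-00002": 2048}
b = {"part-00001": 1024}
print(diff_listings(a, b))  # → (['part-00002'], [])
```

If either returned list is non-empty across repeated listings of a directory you are about to read, that would be consistent with the flaky behavior described above.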

Sent from the Apache Spark User List mailing list archive at