hadoop-mapreduce-dev mailing list archives

From "Allen Wittenauer (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (MAPREDUCE-2378) Reduce fails when running on 1 small file.
Date Thu, 17 Jul 2014 15:23:06 GMT

     [ https://issues.apache.org/jira/browse/MAPREDUCE-2378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer resolved MAPREDUCE-2378.
-----------------------------------------

    Resolution: Cannot Reproduce

Closing this as 'Cannot Reproduce', since log4j has been upgraded in the interim. A few times, actually.

> Reduce fails when running on 1 small file. 
> -------------------------------------------
>
>                 Key: MAPREDUCE-2378
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2378
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>    Affects Versions: 0.21.0
>         Environment: java version "1.6.0_07"
> Diablo Java(TM) SE Runtime Environment (build 1.6.0_07-b02)
> Diablo Java HotSpot(TM) 64-Bit Server VM (build 10.0-b23, mixed mode)
>            Reporter: Simon Dircks
>              Labels: 1, failed, file, log4j, reduce, single, small, tiny
>         Attachments: failed reduce task log.html
>
>
> If I run the wordcount example on a single small (less than 2 MB) file, I get the following error:
> log4j:ERROR Failed to flush writer,
> java.io.InterruptedIOException
>         at java.io.FileOutputStream.writeBytes(Native Method)
>         at java.io.FileOutputStream.write(FileOutputStream.java:260)
>         at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
>         at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:272)
>         at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:276)
>         at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
>         at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
>         at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:58)
>         at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:316)
>         at org.apache.log4j.WriterAppender.append(WriterAppender.java:160)
>         at org.apache.hadoop.mapred.TaskLogAppender.append(TaskLogAppender.java:58)
>         at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
>         at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
>         at org.apache.log4j.Category.callAppenders(Category.java:206)
>         at org.apache.log4j.Category.forcedLog(Category.java:391)
>         at org.apache.log4j.Category.log(Category.java:856)
>         at org.apache.commons.logging.impl.Log4JLogger.info(Log4JLogger.java:199)
>         at org.apache.hadoop.mapreduce.task.reduce.ShuffleScheduler.freeHost(ShuffleScheduler.java:345)
>         at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:152)
> If I run the wordcount test with 2 files, it works fine.
> I have reproduced this with my own code as well. I am working on a job that needs to map/reduce a small file, and I had to work around the problem by splitting the input into two 1 MB pieces before the job would run.
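>
> As an illustration only (the class and file names below are hypothetical, not from the original job), a minimal Java sketch of that splitting workaround:
>
>     import java.io.IOException;
>     import java.nio.file.Files;
>     import java.nio.file.Path;
>     import java.nio.file.Paths;
>     import java.util.Arrays;
>
>     /** Splits one local input file into two halves so each half becomes its own map input. */
>     public class SplitInput {
>         public static void main(String[] args) throws IOException {
>             Path in = Paths.get(args[0]);   // e.g. input.txt (hypothetical)
>             byte[] data = Files.readAllBytes(in);
>             int mid = data.length / 2;
>             // NOTE: a byte-midpoint split can cut a line or word in two; a real
>             // workaround should back up to the nearest newline before splitting.
>             Files.write(Paths.get(args[0] + ".part0"), Arrays.copyOfRange(data, 0, mid));
>             Files.write(Paths.get(args[0] + ".part1"), Arrays.copyOfRange(data, mid, data.length));
>         }
>     }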
> All of our jobs that run on a single larger file (over 1 GB) work flawlessly. I am not sure of the exact threshold: from the testing I have done, it seems to affect any file smaller than the default HDFS block size (64 MB). In the 5-64 MB range the failures sometimes seem random, but they are 100% reproducible for files of 5 MB and smaller.
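>
> For what it is worth, a file's size relative to its block size can be checked with the standard org.apache.hadoop.fs API; a small sketch (the path argument is hypothetical):
>
>     import org.apache.hadoop.conf.Configuration;
>     import org.apache.hadoop.fs.FileStatus;
>     import org.apache.hadoop.fs.FileSystem;
>     import org.apache.hadoop.fs.Path;
>
>     public class BlockSizeCheck {
>         public static void main(String[] args) throws Exception {
>             FileSystem fs = FileSystem.get(new Configuration());
>             FileStatus st = fs.getFileStatus(new Path(args[0])); // e.g. /user/me/input.txt
>             System.out.printf("len=%d blockSize=%d smallerThanOneBlock=%b%n",
>                     st.getLen(), st.getBlockSize(), st.getLen() < st.getBlockSize());
>         }
>     }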



--
This message was sent by Atlassian JIRA
(v6.2#6252)
