hadoop-mapreduce-dev mailing list archives

From "Aleksandr Balitsky (JIRA)" <j...@apache.org>
Subject [jira] [Created] (MAPREDUCE-6778) Provide way to limit MRJob's stdout/stderr size
Date Wed, 14 Sep 2016 08:00:36 GMT
Aleksandr Balitsky created MAPREDUCE-6778:

             Summary: Provide way to limit MRJob's stdout/stderr size
                 Key: MAPREDUCE-6778
                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6778
             Project: Hadoop Map/Reduce
          Issue Type: Improvement
          Components: nodemanager
    Affects Versions: 2.7.0
            Reporter: Aleksandr Balitsky
            Priority: Minor

A job can produce a huge amount of stdout/stderr output, causing undesired consequences (for example, filling a node's disk). There is already a JIRA about this that has been open for a while now:

A possible solution is to redirect stdout and stderr to log4j in the main method of YarnChild.java.
In this case the System.out and System.err streams would be redirected to a log4j logger with an appender
that directs output into the stdout or stderr files with the required size limitation. This way
we can limit log size on the fly, keeping one backup rolling file (thanks to ContainerRollingLogAppender).
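The redirection itself can be sketched with plain java.io, leaving the log4j wiring aside. The SizeCappedStream class and the limits below are illustrative only, not part of any existing Hadoop code:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.PrintStream;

// Illustrative only: a stream that stops accepting bytes past a size limit.
class SizeCappedStream extends OutputStream {
    private final OutputStream delegate;
    private final long limit;
    private long written;

    SizeCappedStream(OutputStream delegate, long limit) {
        this.delegate = delegate;
        this.limit = limit;
    }

    @Override
    public void write(int b) throws IOException {
        if (written < limit) {       // bytes beyond the limit are dropped
            delegate.write(b);
            written++;
        }
    }
}

public class RedirectDemo {
    // Redirects System.out through the capped stream, prints msg,
    // restores stdout, and returns what actually reached the sink.
    static String captureCapped(String msg, long limit) {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        PrintStream original = System.out;
        System.setOut(new PrintStream(new SizeCappedStream(sink, limit), true));
        try {
            System.out.print(msg);
        } finally {
            System.setOut(original);
        }
        return sink.toString();
    }

    public static void main(String[] args) {
        // Only the first 10 bytes survive the cap.
        System.out.println(captureCapped("0123456789overflow", 10));
    }
}
```

In the real proposal the capped stream would be a log4j-backed stream installed via System.setOut/System.setErr, with ContainerRollingLogAppender providing the rollover instead of dropping bytes.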

One of the existing syslog size-limitation approaches works the same way.

So, we could set the limits via new properties in mapred-site.xml:
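A sketch of what such properties could look like. The property names below are hypothetical (nothing here is a committed API); they simply mirror the existing per-task userlog limit style:

```xml
<!-- Hypothetical property names, for illustration only -->
<property>
  <name>mapreduce.task.userlog.stdout.limit.kb</name>
  <value>1024</value>
  <description>Maximum size, in KB, of a task's stdout file; 0 means no limit.</description>
</property>
<property>
  <name>mapreduce.task.userlog.stderr.limit.kb</name>
  <value>1024</value>
  <description>Maximum size, in KB, of a task's stderr file; 0 means no limit.</description>
</property>
```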

Advantages of this solution:
- it restricts file sizes during job execution.
- logs remain visible while the job is running.

Disadvantage:
- it works only for MR jobs.

Is this an appropriate solution to the problem, or is there something better?

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: mapreduce-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: mapreduce-dev-help@hadoop.apache.org
