hadoop-common-dev mailing list archives

From "Owen O'Malley" <owen.omal...@gmail.com>
Subject Re: Developing cross-component patches post-split
Date Thu, 02 Jul 2009 06:43:39 GMT
On Wed, Jul 1, 2009 at 6:45 PM, Todd Lipcon<tlipcon@gmail.com> wrote:
> Agree with Phillip here. Requiring a new jar to be checked in anywhere after
> every common commit seems unscalable and nonperformant. For git users this
> will make the repository size balloon like crazy (the jar is 400KB and we
> have around 5300 commits so far = 2GB!).
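
Todd's "2GB!" figure is back-of-envelope arithmetic, sketched below as a
naive worst case that assumes each jar is stored whole with no sharing
across versions (the figures are the ones quoted in the thread):

```python
# Worst-case repository growth from the thread's numbers: one ~400 KB
# jar stored whole per commit, across ~5300 commits, with no
# cross-version compression or delta sharing.
jar_size_kb = 400
commits = 5300
total_gb = jar_size_kb * commits / (1024 * 1024)  # KB -> GB
print(f"~{total_gb:.2f} GB")  # ~2.02 GB, matching the "2GB!" estimate
```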

This is silly. Obviously, just like the source, the jars compress very
well across versions.
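
The intuition that near-identical blobs cost little extra to store
together can be sketched with plain zlib. This is an illustration of the
principle only: git's packfiles use true delta encoding between object
versions (with no 32 KB window limit), and the byte strings below are
stand-ins for jars, not real ones:

```python
import os
import zlib

# Two "versions" of a binary blob: v2 is v1 with one small edit. The
# pair is kept under zlib's 32 KB window so the second copy can
# back-reference the first, mimicking what delta compression does.
v1 = os.urandom(15_000)                       # incompressible on its own
v2 = v1[:7_000] + b"one small patch" + v1[7_000:]

separately = len(zlib.compress(v1)) + len(zlib.compress(v2))
together = len(zlib.compress(v1 + v2))

# Storing both versions together costs little more than storing one,
# because v2 is emitted almost entirely as back-references into v1.
print(separately, together)
assert together < 0.75 * separately
```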

> I think it would be reasonable to require that developers check out a
> structure like:
> working-dir/
>  hadoop-common/
>  hadoop-mapred/
>  hadoop-hdfs/

-1 They are separate subprojects. In the medium term, mapreduce and
hdfs should compile and run against the released version of common.
Checking in the jars is a temporary step while the interfaces in
common stabilize. Furthermore, I expect the commit volume in common to
be much lower than in mapreduce or hdfs.

-- Owen
