spark-dev mailing list archives

From Koert Kuipers <ko...@tresata.com>
Subject Re: Use Hadoop-3.2 as a default Hadoop profile in 3.0.0?
Date Mon, 04 Nov 2019 19:24:03 GMT
I get that CDH and HDP backport a lot and in that way left 2.7 behind, but
they kept the public APIs stable at the 2.7 level, because that's kind of
the point. Aren't those the Hadoop APIs Spark uses?
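
To illustrate what I mean by the public APIs: most of what Spark (and user
code) calls goes through interfaces like FileSystem and Path, which look the
same whether the jars underneath are 2.7, 2.8, or 2.9. A minimal sketch, not
a claim about Spark internals; the path is just a placeholder:

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}

    // In a real Spark job this Configuration would come from
    // spark.sparkContext.hadoopConfiguration; a bare one is used here
    // so the snippet stands alone.
    val conf = new Configuration()
    val fs = FileSystem.get(conf)

    // FileSystem/Path/FileStatus are the 2.7-era public surface the vendor
    // branches keep stable. "/tmp" is only an illustrative path.
    fs.listStatus(new Path("/tmp")).foreach(status => println(status.getPath))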

On Mon, Nov 4, 2019 at 10:07 AM Steve Loughran <stevel@cloudera.com.invalid>
wrote:

>
>
> On Mon, Nov 4, 2019 at 12:39 AM Nicholas Chammas <
> nicholas.chammas@gmail.com> wrote:
>
>> On Fri, Nov 1, 2019 at 8:41 AM Steve Loughran <stevel@cloudera.com.invalid>
>> wrote:
>>
>>> It would be really good if the Spark distributions shipped with later
>>> versions of the Hadoop artifacts.
>>>
>>
>> I second this. If we need to keep a Hadoop 2.x profile around, why not
>> make it Hadoop 2.8 or something newer?
>>
>
> go for 2.9
>
>>
>> Koert Kuipers <koert@tresata.com> wrote:
>>
>>> given that the latest HDP 2.x is still on Hadoop 2.7, bumping the Hadoop 2
>>> profile to the latest would probably be an issue for us.
>>
>>
>> When was the last time HDP 2.x bumped their minor version of Hadoop? Do
>> we want to wait for them to bump to Hadoop 2.8 before we do the same?
>>
>
> The internal builds of CDH and HDP are not those of ASF 2.7.x. A really
> large proportion of the later branch-2 patches are backported. 2.7 was left
> behind a long time ago.
>
>
>
>
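
Re Steve's point about the internal builds: a quick way to see what a given
classpath or cluster actually ships is Hadoop's own VersionInfo utility. A
small sketch; the exact strings it prints depend entirely on the distribution,
and vendor builds typically carry their own suffixes:

    import org.apache.hadoop.util.VersionInfo

    // Prints whatever the Hadoop jars on the classpath were built as.
    // An ASF release reports a plain version like "2.7.4"; vendor builds
    // report their own build strings, which makes the backporting visible.
    println(s"Hadoop version:       ${VersionInfo.getVersion}")
    println(s"Hadoop build version: ${VersionInfo.getBuildVersion}")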
