hadoop-yarn-dev mailing list archives

From Todd Lipcon <t...@cloudera.com>
Subject Re: Release numbering for branch-2 releases
Date Mon, 04 Feb 2013 22:36:12 GMT
On Mon, Feb 4, 2013 at 2:14 PM, Suresh Srinivas <suresh@hortonworks.com> wrote:

> Why? Can you please share some reasons?
> I actually think alpha, beta, and stable/GA are a much better way to set
> the expectation of the quality of a release. This has been practiced in
> software release cycles for a long time. Having an option to release an
> alpha is good for releasing early and getting feedback from people who
> can try it out, while at the same time warning other, less adventurous
> users about quality expectations.
My issue with the current scheme is that there is little definition of what
alpha/beta/stable means. We're trying to boil a complex issue down into a
simple tag that doesn't capture the various subtleties well. For example,
different people may variously use the terms to describe:

- Quality/completeness: for example, missing docs, buggy UIs, difficult
setup/install, etc.
- Safety: for example, potential bugs which may risk data loss
- Stability: for example, potential bugs which may risk uptime
- End-user API compatibility: will user-facing APIs change in this version?
(affecting those who write MR jobs)
- Framework-developer API compatibility: will YARN-internal APIs change in
this version? (affecting those who write non-MR YARN frameworks)
- Binary compatibility: can I continue to use my application (or YARN
framework) compiled against an old version with this version, without a
recompile?
- Intra-cluster wire compatibility: can I rolling-upgrade from A to B?
- Client-server wire compatibility: can I use old clients to talk to an
upgraded cluster?
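To make the point concrete, the per-dimension idea could be sketched as a structured release descriptor rather than a single blanket tag. This is purely a hypothetical illustration of the argument, not an actual Hadoop mechanism; all names here (ReleaseDescriptor, Dimension, Level) are invented for the example:

```java
import java.util.EnumMap;
import java.util.Map;

public class ReleaseDescriptor {
    // Hypothetical per-dimension stability levels.
    enum Level { UNSTABLE, EVOLVING, STABLE }

    // The compatibility dimensions from the list above.
    enum Dimension {
        QUALITY, SAFETY, STABILITY,
        END_USER_API, FRAMEWORK_API, BINARY,
        INTRA_CLUSTER_WIRE, CLIENT_SERVER_WIRE
    }

    public static void main(String[] args) {
        // A release like 2.0.3-alpha might advertise different levels per
        // dimension instead of one "alpha" label: e.g. low data-loss risk
        // (safe) but no rolling-upgrade guarantee (wire-unstable).
        Map<Dimension, Level> desc = new EnumMap<>(Dimension.class);
        desc.put(Dimension.SAFETY, Level.STABLE);
        desc.put(Dimension.END_USER_API, Level.EVOLVING);
        desc.put(Dimension.INTRA_CLUSTER_WIRE, Level.UNSTABLE);
        desc.forEach((d, l) -> System.out.println(d + " = " + l));
    }
}
```

A matrix like this makes "alpha" a statement per axis instead of one warning sticker covering all eight.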

Depending on the user's expectations and needs, different factors above may
be significantly more or less important. And different portions of the
software may have different levels of stability in each of the areas. As
I've mentioned in previous threads, my experience supporting production
Hadoop 1.x and Hadoop 2.x HDFS clusters has led me to believe that 2.x,
while labeled "alpha," is significantly less prone to data-loss bugs than
1.x. But with some of the changes in the proposed 2.0.3-alpha, it wouldn't
be wire-protocol-stable.

How can we best devise a scheme that explains the various factors above in
a more detailed way than one big red warning sticker? Which of the above
factors does the community think would be implied by "GA"?

Todd Lipcon
Software Engineer, Cloudera
