hadoop-common-dev mailing list archives

From "Yang Zhou (JIRA)" <j...@apache.org>
Subject [jira] Created: (HADOOP-6483) Provide Hadoop as a Service based on standards
Date Fri, 08 Jan 2010 02:54:16 GMT
Provide Hadoop as a Service based on standards

                 Key: HADOOP-6483
                 URL: https://issues.apache.org/jira/browse/HADOOP-6483
             Project: Hadoop Common
          Issue Type: New Feature
            Reporter: Yang Zhou

Hadoop as a Service provides a standards-based web services interface that layers on top of
Hadoop on Demand and allows Hadoop jobs to be submitted via popular schedulers, such as Sun
Grid Engine (SGE), Platform LSF, Microsoft HPC Server 2008 etc., to local or remote Hadoop
clusters.  This allows multiple Hadoop clusters within an organization to be efficiently shared
and provides flexibility, allowing remote Hadoop clusters, offered as Cloud services, to be
used for experimentation and burst capacity. HaaS hides complexity, allowing users to submit
many types of compute- or data-intensive work via a single scheduler without needing to know
where the work will actually run. Additionally, providing a standards-based front-end to Hadoop
means that users can easily switch HaaS providers without being locked in by proprietary
interfaces such as Amazon's map/reduce service.

Our HaaS implementation uses the OGF High Performance Computing Basic Profile standard to
define interoperable job submission descriptions and management interfaces to Hadoop. It uses
Hadoop on Demand to provision capacity. Our HaaS implementation also supports file stage-in
and stage-out over protocols such as FTP, SCP and GridFTP.
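For illustration, a minimal JSDL-style job description of the kind the HPC Basic Profile
consumes might look like the sketch below. The element names follow the OGF JSDL schema, but
the Hadoop command line, file names and URI are hypothetical, not part of this proposal:

```xml
<jsdl:JobDefinition xmlns:jsdl="http://schemas.ggf.org/jsdl/2005/11/jsdl"
                    xmlns:jsdl-posix="http://schemas.ggf.org/jsdl/2005/11/jsdl-posix">
  <jsdl:JobDescription>
    <jsdl:JobIdentification>
      <jsdl:JobName>wordcount-example</jsdl:JobName>
    </jsdl:JobIdentification>
    <jsdl:Application>
      <jsdl-posix:POSIXApplication>
        <jsdl-posix:Executable>/usr/bin/hadoop</jsdl-posix:Executable>
        <jsdl-posix:Argument>jar</jsdl-posix:Argument>
        <jsdl-posix:Argument>wordcount.jar</jsdl-posix:Argument>
        <jsdl-posix:Argument>input</jsdl-posix:Argument>
        <jsdl-posix:Argument>output</jsdl-posix:Argument>
      </jsdl-posix:POSIXApplication>
    </jsdl:Application>
    <jsdl:DataStaging>
      <jsdl:FileName>wordcount.jar</jsdl:FileName>
      <jsdl:Source>
        <jsdl:URI>ftp://example.org/jobs/wordcount.jar</jsdl:URI>
      </jsdl:Source>
    </jsdl:DataStaging>
  </jsdl:JobDescription>
</jsdl:JobDefinition>
```

The DataStaging element is where stage-in sources such as FTP, SCP or GridFTP URIs would be
declared, so the HaaS layer can fetch inputs before provisioning capacity via Hadoop on Demand.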

Our HaaS implementation also provides a suite of RESTful interfaces compliant with HPC-BP.
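As a rough sketch of how a client might talk to such a RESTful front-end: the snippet below
builds a job-submission payload and shows where it would be POSTed. The endpoint URL, resource
paths and payload field names are purely illustrative assumptions, not the actual HaaS API:

```python
import json

# Hypothetical endpoint for the HaaS REST front-end (assumption, not the real API).
HAAS_ENDPOINT = "https://haas.example.org/jobs"

def build_job_request(jar, args, stage_in=None):
    """Build an illustrative JSON job-submission payload.

    Mirrors the JSDL structure: an application section with the Hadoop
    command line, plus optional stage-in sources (FTP/SCP/GridFTP URIs).
    """
    return {
        "application": {
            "executable": "/usr/bin/hadoop",
            "arguments": ["jar", jar] + list(args),
        },
        # Files the service would fetch before the job runs.
        "dataStaging": [{"source": uri} for uri in (stage_in or [])],
    }

payload = build_job_request("wordcount.jar", ["input", "output"],
                            stage_in=["ftp://example.org/jobs/wordcount.jar"])
print(json.dumps(payload, indent=2))
# A real client would POST this payload to HAAS_ENDPOINT and then poll the
# returned job resource (e.g. GET /jobs/<id>) for status.
```

The point of the REST layer is exactly this kind of scheduler-agnostic submission: SGE, LSF or
HPC Server 2008 can all drive the same interface without proprietary client code.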

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
