hadoop-common-issues mailing list archives

From "Sangjin Lee (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-9639) truly shared cache for jars (jobjar/libjar)
Date Tue, 10 Dec 2013 23:16:10 GMT

https://issues.apache.org/jira/browse/HADOOP-9639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13844802#comment-13844802

Sangjin Lee commented on HADOOP-9639:

> the upload mechanism assumes that rename() is atomic. This should be spelled out, to avoid
> people trying to use blobstores as their cache infrastructure
It’s a great point. I’ll spell out this requirement in the design doc so there is no confusion.
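As a rough illustration of the write-to-temp-then-rename pattern the upload relies on, here is a minimal sketch using java.nio.file as a stand-in for the HDFS FileSystem API (class and path names are illustrative, not from the design doc):

```java
import java.nio.file.*;
import java.security.MessageDigest;

public class SharedCacheUpload {
    // Upload sketch: write the resource to a temp file, then atomically
    // rename it into a checksum-derived cache location. Because the rename
    // is atomic, readers never observe a partially written file.
    static Path upload(Path cacheRoot, byte[] jarBytes) throws Exception {
        // Content-address the resource by its SHA-1 checksum.
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(jarBytes)) sb.append(String.format("%02x", b));

        Path dest = cacheRoot.resolve(sb.toString()).resolve("resource.jar");
        Files.createDirectories(dest.getParent());
        Path tmp = Files.createTempFile(dest.getParent(), "upload-", ".tmp");
        Files.write(tmp, jarBytes);
        try {
            // ATOMIC_MOVE is the crux: filesystems without atomic rename
            // (e.g. blobstores) cannot provide this guarantee, which is why
            // they are unsuitable as the cache backend.
            Files.move(tmp, dest, StandardCopyOption.ATOMIC_MOVE);
        } catch (FileAlreadyExistsException e) {
            // Another client won the race; content is identical (same
            // checksum), so the temp copy can simply be discarded.
            Files.delete(tmp);
        }
        return dest;
    }

    public static void main(String[] args) throws Exception {
        Path root = Files.createTempDirectory("shared-cache");
        Path p = upload(root, "example jar bytes".getBytes());
        System.out.println(Files.exists(p));
    }
}
```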

> obviously: add a specific exception to indicate some kind of race condition
I’m a little unsure as to which specific race you’re speaking of, or whether you’re
talking about a generic exception that can indicate any type of race condition. Could you
kindly clarify?

> The shared cache enabled flags are obviously things that admins would have the right to set
> and make final in yarn-site.xml files; clients should handle this without problems.
That’s a good point. Whether the cluster has the shared cache enabled should be a final
config, and clients should not be able to override it, regardless of whether they choose to
use it or not. I’ll add that clarification.
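For illustration, a final property in yarn-site.xml would look like the fragment below (the property name here is hypothetical; the actual name would come from the design doc):

```xml
<!-- Hypothetical property name, for illustration only.
     final=true prevents job/client configs from overriding the value. -->
<property>
  <name>yarn.sharedcache.enabled</name>
  <value>true</value>
  <final>true</final>
</property>
```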

Regarding your comment on security (or the use case of mixing cached/shared and uncached/private
resources), I do think that use case is supported. It is true that using the shared cache
for some resource means that resource is public (available for others to see and use by identifying
checksum). A close analogy would be a jar that’s published to a maven repo via maven deploy.

However, it does not prevent a client from using the shared cache for some resources (e.g.
libjars) and the normal app-private or user-private distributed cache for others (e.g. job-specific
and sensitive configuration files), all within the same job. The shared cache would merely
enable you to take advantage of public sharing of certain resources you’re comfortable sharing.
I’ll add that note to the design doc.

Does that answer your question?

> I (personally) think we should all just embrace the presence of 1+ ZK quorum on the cluster...
The main reason that we went without ZK was exactly what you mentioned; we did not want to
introduce the ZK requirement with a side feature such as this. Having said that, I agree that
using ZK for coordination would be more natural than leveraging atomicity from the filesystem
semantics. I also suspect that we could arrive at a ZK-based solution with fairly straightforward
changes to the core idea. In the interest of time, however, I would propose proceeding with
the current design. We could certainly consider adding/replacing the implementation with a
ZK-based one in the next revision.

> HADOOP-9361 is attempting to formally define the semantics of a Hadoop-compatible filesystem.
> If you could use that as the foundation (assumptions, and perhaps even notation) for defining
> your own behavior, the analysis on P7 could be proved more rigorously
I took a quick look at the formal definition you’re working on, and it looks quite interesting.
I will look at your definitions and notations to see whether we can describe a more formal proof
at some point.

> The semantics of happens-before come from [Lamport78], Time, Clocks, and the Ordering of Events
> in a Distributed System, so that should be used as the citation; it is more appropriate than
> the memory models of Java or out-of-order CPUs.
Agreed on using that as a reference/citation. Will add that to the document.
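For context, a minimal sketch of the Lamport logical clock from that paper, which is what underlies the happens-before ordering (class and method names are my own, for illustration):

```java
// Minimal Lamport logical clock (after Lamport, 1978). If event a
// happens-before event b, then clock(a) < clock(b).
public class LamportClock {
    private long time = 0;

    // A local event (including a send) ticks the clock.
    public synchronized long tick() {
        return ++time;
    }

    // On receive, merge the sender's timestamp, then tick, so the
    // receive event is ordered after the corresponding send.
    public synchronized long receive(long senderTime) {
        time = Math.max(time, senderTime);
        return ++time;
    }

    public static void main(String[] args) {
        LamportClock a = new LamportClock();
        LamportClock b = new LamportClock();
        long send = a.tick();        // event on process A, then A sends
        long recv = b.receive(send); // B receives; recv is ordered after send
        System.out.println(send < recv); // the send happens-before the receive
    }
}
```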

> Script-wise, I've been evolving a generic YARN service launcher, which is nearly ready to
> submit as YARN-679: if the cleaner service were implemented as a YARN service it could be
> invoked as a run-once command line, or deployed in a YARN container service which provided
> cron-like services
That looks interesting too. I’ll look at that when it gets merged, and reconcile it with
what we come up with here.

> truly shared cache for jars (jobjar/libjar)
> -------------------------------------------
>                 Key: HADOOP-9639
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9639
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: filecache
>    Affects Versions: 2.0.4-alpha
>            Reporter: Sangjin Lee
>            Assignee: Sangjin Lee
>         Attachments: shared_cache_design.pdf, shared_cache_design_v2.pdf, shared_cache_design_v3.pdf,
> Currently there is the distributed cache that enables you to cache jars and files so
> that attempts from the same job can reuse them. However, sharing is limited with the distributed
> cache because it is normally on a per-job basis. On a large cluster, sometimes copying of
> jobjars and libjars becomes so prevalent that it consumes a large portion of the network bandwidth,
> not to speak of defeating the purpose of "bringing compute to where data is". This is wasteful
> because in most cases code doesn't change much across many jobs.
> I'd like to propose and discuss feasibility of introducing a truly shared cache so that
> multiple jobs from multiple users can share and cache jars. This JIRA is to open the discussion.

This message was sent by Atlassian JIRA