giraph-dev mailing list archives

From Maja Kabiljo <>
Subject Re: [jira] [Commented] (GIRAPH-273) Aggregators shouldn't use Zookeeper
Date Sat, 01 Sep 2012 07:22:06 GMT
In the case you mentioned, you already have a million connections, which is
why I don't see how 2,000 more make a difference. Maybe I'm missing
something here.

The reason this can be done without an additional barrier is that
aggregated values received from other workers can be treated the same way
as values given in vertex.compute - we can just aggregate them right away.
It should be doable with the tree approach as well: we can send our values
as soon as we are done with the computation and have received the values
from our children in the tree, if any.
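The "aggregate right away" idea can be sketched roughly as follows. This is a hypothetical illustration, not Giraph's actual aggregator API; the class and method names are made up:

```java
// Hypothetical sketch: treat partial aggregates arriving from other
// workers exactly like locally produced values, so no extra barrier
// is needed before combining them.
import java.util.concurrent.atomic.AtomicLong;

public class IncrementalSumAggregator {
    private final AtomicLong sum = new AtomicLong(0);

    // Called for values produced locally in vertex.compute().
    public void aggregate(long value) {
        sum.addAndGet(value);
    }

    // Called as soon as a partial aggregate arrives from another worker;
    // it is combined immediately rather than held until a barrier.
    public void aggregatePartial(long partial) {
        sum.addAndGet(partial);
    }

    public long getAggregatedValue() {
        return sum.get();
    }
}
```

In the tree variant, a worker would call aggregatePartial() on values received from its children and forward its own getAggregatedValue() to its parent once its local computation is finished.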

I guess we can also keep the current implementation as one of the options;
that hadn't occurred to me, thanks. Since aggregators are written to the
same znode as some other data, that should have the least possible overhead
for cases with just a few simple value aggregators.

I'm not sure whether performance is affected as more aggregators are added
(another member of the team is working on the application), but I don't
think we can get far enough to notice it because of the ZooKeeper memory
limit. Avery, can you take the question about our application? (I'm not
sure what we are allowed to share publicly and what not :-))
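For context on the memory limit mentioned above: ZooKeeper's default znode payload limit (jute.maxbuffer) is 0xfffff bytes, just under 1 MiB, which is what caps the size of aggregator values stored in znodes. A hypothetical size check might look like this (not actual Giraph code):

```java
// Hypothetical sketch: check whether a serialized aggregator value
// would fit in a single znode under ZooKeeper's default payload limit.
public class ZnodeSizeCheck {
    // ZooKeeper's default jute.maxbuffer is 0xfffff bytes (~1 MiB).
    static final int JUTE_MAX_BUFFER = 0xfffff;

    static boolean fitsInZnode(byte[] serialized) {
        return serialized.length <= JUTE_MAX_BUFFER;
    }
}
```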

On 9/1/12 12:25 AM, "Eli Reisman" <> wrote:

>With 2000 workers, that's 2000 extra connections in the system. We run
>Giraph/Netty on the same cluster as existing jobs that use Hadoop RPC, so
>network resources are sometimes at a premium. These jobs are often running
>on the same boxes as our worker mappers, and the scheduling is not under
>our control or particularly suited to Giraph. I'm not too familiar with
>the aggregator code, but if you have an idea for an implementation that
>doesn't use a barrier, I agree with Avery that this doesn't preclude the
>tree option in that scenario either.
>On the other hand, if you have a specialized use case, maybe the easiest
>thing would be to do what it takes to make your map aggregator work the
>way you like, have it be command-line optional, and just leave the
>existing ZK implementation in place for the rest of the use cases. Have
>you had problems with standard aggregators needing more data than ZK
>nodes can hold, or is this map aggregator driving your need for this
>feature?
>Can I ask what algorithm you're implementing that requires a globally
>aggregated map at every superstep? Have you guys noticed performance or
>speed issues with the existing ZK implementation as you add aggregators to
>an application?
>Anyway, I'm not firmly for or against any of this stuff, just curious. If
>you find an implementation that works for you, that sounds great. If it
>were optional alongside the existing version or the tree, that would
>probably save us some headache here when we share a cluster (which is
>almost all the time).
>On Fri, Aug 31, 2012 at 12:55 PM, Maja Kabiljo (JIRA)
>>     [
>> Maja Kabiljo commented on GIRAPH-273:
>> -------------------------------------
>> That's true, we can implement several different approaches and decide
>> which one to use based on the current application needs.
>> > Aggregators shouldn't use Zookeeper
>> > -----------------------------------
>> >
>> >                 Key: GIRAPH-273
>> >                 URL:
>> >             Project: Giraph
>> >          Issue Type: Improvement
>> >            Reporter: Maja Kabiljo
>> >            Assignee: Maja Kabiljo
>> >
>> > We use Zookeeper znodes to transfer aggregated values from workers to
>> master and back. Zookeeper is supposed to be used for coordination, and
>> it also has a memory limit which prevents users from having aggregators
>> with large value objects. These are the reasons why we should implement
>> aggregator gathering and distribution in a different way.
>> --
>> This message is automatically generated by JIRA.
>> If you think it was sent incorrectly, please contact your JIRA
>> administrators
>> For more information on JIRA, see:
