Added: helix/site-content/0.7.1-docs/project-info.html
URL: http://svn.apache.org/viewvc/helix/site-content/0.7.1-docs/project-info.html?rev=1624796&view=auto
==============================================================================
--- helix/site-content/0.7.1-docs/project-info.html (added)
+++ helix/site-content/0.7.1-docs/project-info.html Sun Sep 14 01:47:34 2014
@@ -0,0 +1,306 @@

Apache Helix - Project Information
This document provides an overview of the various documents and links that are part of this project's general information. All of this content is automatically generated by Maven on behalf of the project.

+
+

Overview

Document | Description
About | Helix is a generic cluster management framework used for the automatic management of partitioned, replicated and distributed resources hosted on a cluster of nodes.
Plugin Management | This document lists the plugins that are defined through pluginManagement.
Distribution Management | This document provides information on the distribution management of this project.
Dependency Information | This document describes how to include this project as a dependency using various dependency management tools.
Dependency Convergence | This document presents the convergence of dependency versions across the entire project and its sub-modules.
Source Repository | This is a link to the online source repository that can be viewed via a web browser.
Mailing Lists | This document provides subscription and archive information for this project's mailing lists.
Issue Tracking | This is a link to the issue management system for this project. Issues (bugs, features, change requests) can be created and queried using this link.
Continuous Integration | This is a link to the definitions of all continuous integration processes that build and test code on a frequent, regular basis.
Project Plugins | This document lists the build plugins and the report plugins used by this project.
Project License | This is a link to the definitions of project licenses.
Dependency Management | This document lists the dependencies that are defined through dependencyManagement.
Project Team | This document provides information on the members of this project. These are the individuals who have contributed to the project in one form or another.
Project Summary | This document lists other related information about this project.
Dependencies | This document lists the project's dependencies and provides information on each dependency.
\ No newline at end of file

Added: helix/site-content/0.7.1-docs/project-reports.html
URL: http://svn.apache.org/viewvc/helix/site-content/0.7.1-docs/project-reports.html?rev=1624796&view=auto
==============================================================================
--- helix/site-content/0.7.1-docs/project-reports.html (added)
+++ helix/site-content/0.7.1-docs/project-reports.html Sun Sep 14 01:47:34 2014
@@ -0,0 +1,250 @@

Apache Helix - Generated Reports
This document provides an overview of the various reports that are automatically generated by Maven. Each report is briefly described below.

+
+

Overview

Document | Description
Sonar | Quality analysis dashboard.
\ No newline at end of file

Added: helix/site-content/0.7.1-docs/project-summary.html
URL: http://svn.apache.org/viewvc/helix/site-content/0.7.1-docs/project-summary.html?rev=1624796&view=auto
==============================================================================
--- helix/site-content/0.7.1-docs/project-summary.html (added)
+++ helix/site-content/0.7.1-docs/project-summary.html Sun Sep 14 01:47:34 2014
@@ -0,0 +1,315 @@

Apache Helix - Project Summary
Project Information

Field | Value
Name | Apache Helix :: Website :: 0.7.1
Description | Helix is a generic cluster management framework used for the automatic management of partitioned, replicated and distributed resources hosted on a cluster of nodes.
Homepage | http://helix.apache.org/0.7.1-docs
+
+
+

Project Organization

Field | Value
Name | The Apache Software Foundation
URL | http://www.apache.org/
+
+
+

Build Information

Field | Value
GroupId | org.apache.helix
ArtifactId | 0.7.1-docs
Version | 0.7.2-SNAPSHOT
Type | bundle
JDK Rev | 1.6
\ No newline at end of file

Added: helix/site-content/0.7.1-docs/recipes/lock_manager.html
URL: http://svn.apache.org/viewvc/helix/site-content/0.7.1-docs/recipes/lock_manager.html?rev=1624796&view=auto
==============================================================================
--- helix/site-content/0.7.1-docs/recipes/lock_manager.html (added)
+++ helix/site-content/0.7.1-docs/recipes/lock_manager.html Sun Sep 14 01:47:34 2014
@@ -0,0 +1,466 @@

Apache Helix - Distributed Lock Manager
Distributed locks are used to synchronize access to shared resources. Most applications today use ZooKeeper to model distributed locks.

+

The simplest way to model a lock using ZooKeeper is as follows (see the ZooKeeper leader recipe for an exact and more advanced solution); a minimal sketch of this approach follows the list below:

+
  • Each process tries to create an ephemeral ZNode
  • If the node is successfully created, the process acquires the lock
  • Otherwise, it will watch the ZNode and try to acquire the lock again if the current lock holder disappears
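For illustration only, here is a minimal sketch of that ZNode-based approach using the ZooKeeper Java client. The lock path and the retry handling are assumptions made for this example; they are not part of the Helix recipe.

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class NaiveZkLock {
  private static final String LOCK_PATH = "/mylock"; // hypothetical lock ZNode

  // Tries to acquire the lock once; the caller retries when the current holder's node disappears.
  public static boolean tryLock(ZooKeeper zk, byte[] ownerInfo) throws Exception {
    try {
      // Ephemeral: the node (and hence the lock) disappears if this session dies
      zk.create(LOCK_PATH, ownerInfo, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
      return true; // lock acquired
    } catch (KeeperException.NodeExistsException e) {
      // Someone else holds the lock: watch the node and try again once it is deleted
      zk.exists(LOCK_PATH, new Watcher() {
        public void process(WatchedEvent event) {
          if (event.getType() == Event.EventType.NodeDeleted) {
            // the current holder disappeared; the caller should invoke tryLock() again
          }
        }
      });
      return false;
    }
  }
}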

This is good enough if there is only one lock. But in practice, an application will need many such locks, and distributing and managing the locks among different processes becomes challenging. Extending such a solution to many locks will result in:

+
  • Uneven distribution of locks among nodes; the node that starts first will acquire all the locks, while nodes that start later will be idle.
  • When a node fails, how the locks will be distributed among the remaining nodes is not predictable.
  • When new nodes are added, the current nodes do not relinquish any locks, so the new nodes cannot acquire locks.

In other words, we want a system that satisfies the following requirements:

+
  • Distribute locks evenly among all nodes to get better hardware utilization
  • If a node fails, the locks that were acquired by that node should be evenly distributed among the other nodes
  • If nodes are added, locks must be evenly re-distributed among all nodes

Helix provides a simple and elegant solution to this problem: simply specify the number of locks, and Helix will ensure that the above constraints are satisfied.

+

To quickly see this in action, run the lock-manager-demo script, in which 12 locks are evenly distributed among three nodes; when a node fails, the locks are re-distributed among the remaining two nodes. Note that Helix does not completely re-shuffle the locks; it simply distributes the locks relinquished by the dead node evenly among the two remaining nodes.

+
+
+

Short Version

+

This version starts multiple threads within the same process to simulate a multi-node deployment. Try the long version to get a better idea of how it works.

+
+
git clone https://git-wip-us.apache.org/repos/asf/helix.git
+cd helix
+git checkout tags/helix-0.7.1
+mvn clean install package -DskipTests
+cd recipes/distributed-lock-manager/target/distributed-lock-manager-pkg/bin
+chmod +x *
+./lock-manager-demo
+
+
+
+

Output

+
+
./lock-manager-demo
+STARTING localhost_12000
+STARTING localhost_12002
+STARTING localhost_12001
+STARTED localhost_12000
+STARTED localhost_12002
+STARTED localhost_12001
+localhost_12001 acquired lock:lock-group_3
+localhost_12000 acquired lock:lock-group_8
+localhost_12001 acquired lock:lock-group_2
+localhost_12001 acquired lock:lock-group_4
+localhost_12002 acquired lock:lock-group_1
+localhost_12002 acquired lock:lock-group_10
+localhost_12000 acquired lock:lock-group_7
+localhost_12001 acquired lock:lock-group_5
+localhost_12002 acquired lock:lock-group_11
+localhost_12000 acquired lock:lock-group_6
+localhost_12002 acquired lock:lock-group_0
+localhost_12000 acquired lock:lock-group_9
+lockName    acquired By
+======================================
+lock-group_0    localhost_12002
+lock-group_1    localhost_12002
+lock-group_10    localhost_12002
+lock-group_11    localhost_12002
+lock-group_2    localhost_12001
+lock-group_3    localhost_12001
+lock-group_4    localhost_12001
+lock-group_5    localhost_12001
+lock-group_6    localhost_12000
+lock-group_7    localhost_12000
+lock-group_8    localhost_12000
+lock-group_9    localhost_12000
+Stopping localhost_12000
+localhost_12000 Interrupted
+localhost_12001 acquired lock:lock-group_9
+localhost_12001 acquired lock:lock-group_8
+localhost_12002 acquired lock:lock-group_6
+localhost_12002 acquired lock:lock-group_7
+lockName    acquired By
+======================================
+lock-group_0    localhost_12002
+lock-group_1    localhost_12002
+lock-group_10    localhost_12002
+lock-group_11    localhost_12002
+lock-group_2    localhost_12001
+lock-group_3    localhost_12001
+lock-group_4    localhost_12001
+lock-group_5    localhost_12001
+lock-group_6    localhost_12002
+lock-group_7    localhost_12002
+lock-group_8    localhost_12001
+lock-group_9    localhost_12001
+
+
+
+
+
+
+
+

Long Version

+

This provides more details on how to set up the cluster and where to plug in application code.

+
+

Start ZooKeeper

+
+
./start-standalone-zookeeper 2199
+
+
+
+
+

Create a Cluster

+
+
./helix-admin --zkSvr localhost:2199 --addCluster lock-manager-demo
+
+
+
+
+

Create a Lock Group

+

Create a lock group and specify the number of locks in the lock group.

+
+
./helix-admin --zkSvr localhost:2199  --addResource lock-manager-demo lock-group 6 OnlineOffline AUTO_REBALANCE
+
+
+
+
+

Start the Nodes

+

Create a Lock class that handles the callbacks.

+
+
public class Lock extends StateModel {
+  private String lockName;
+
+  public Lock(String lockName) {
+    this.lockName = lockName;
+  }
+
+  public void lock(Message m, NotificationContext context) {
+    System.out.println(" acquired lock:"+ lockName );
+  }
+
+  public void release(Message m, NotificationContext context) {
+    System.out.println(" releasing lock:"+ lockName );
+  }
+
+}
+
+
+

and a LockFactory that creates Locks

+
+
public class LockFactory extends StateModelFactory<Lock> {
+    /* Instantiates the lock handler, one per lockName */
+    public Lock create(String lockName) {
+        return new Lock(lockName);
+    }
+}
+
+
+
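The Lock class in this recipe only prints the transitions. In a real application, the callbacks are where you would decide whether it is safe to do work guarded by the lock. A hedged sketch of that pattern, extending the Lock class shown above with a flag (the held field and isHeld() accessor are illustrative, not part of the recipe), might look like:

public class Lock extends StateModel {
  private final String lockName;
  private volatile boolean held = false; // illustrative flag, not in the recipe

  public Lock(String lockName) {
    this.lockName = lockName;
  }

  public void lock(Message m, NotificationContext context) {
    held = true;   // safe to start work guarded by this lock
    System.out.println(" acquired lock:" + lockName);
  }

  public void release(Message m, NotificationContext context) {
    held = false;  // stop guarded work as soon as possible
    System.out.println(" releasing lock:" + lockName);
  }

  public boolean isHeld() {
    return held;
  }
}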

At node start-up, simply join the cluster and Helix will invoke the appropriate callbacks on the appropriate Lock instance. One can start any number of nodes; Helix detects that a new node has joined the cluster and re-distributes the locks automatically.

+
+
public class LockProcess {
+  public static void main(String[] args) throws Exception {
+    String zkAddress = "localhost:2199";
+    String clusterName = "lock-manager-demo";
+    // give a unique id to each process; the most commonly used format is hostname_port
+    String instanceName = "localhost_12000";
+    ZKHelixAdmin helixAdmin = new ZKHelixAdmin(zkAddress);
+    // configure the instance and provide some metadata
+    InstanceConfig config = new InstanceConfig(instanceName);
+    config.setHostName("localhost");
+    config.setPort("12000");
+    helixAdmin.addInstance(clusterName, config);
+    // join the cluster
+    HelixManager manager;
+    manager = HelixManagerFactory.getZKHelixManager(clusterName,
+                                                    instanceName,
+                                                    InstanceType.PARTICIPANT,
+                                                    zkAddress);
+    manager.getStateMachineEngine().registerStateModelFactory("OnlineOffline", new LockFactory());
+    manager.connect();
+    Thread.currentThread().join();
+  }
+}
+
+
+
+
+

Start the Controller

+

The controller can be started either as a separate process or embedded within each node process.

+
+
Separate Process
+

This is recommended when the number of nodes in the cluster exceeds 100. For fault tolerance, you can run multiple controllers on different boxes.

+
+
./run-helix-controller --zkSvr localhost:2199 --cluster lock-manager-demo > /tmp/controller.log 2>&1 &
+
+
+
+
+
Embedded Within the Node Process
+

This is recommended when the number of nodes in the cluster is less than 100. To start a controller from each process, simply add the following lines to LockProcess:

+
+
public class LockProcess {
+  public static void main(String[] args) throws Exception {
+    String zkAddress = "localhost:2199";
+    String clusterName = "lock-manager-demo";
+    // .
+    // .
+    manager.connect();
+    HelixManager controller;
+    controller = HelixControllerMain.startHelixController(zkAddress,
+                                                          clusterName,
+                                                          "controller",
+                                                          HelixControllerMain.STANDALONE);
+    Thread.currentThread().join();
+  }
+}
\ No newline at end of file

Added: helix/site-content/0.7.1-docs/recipes/rabbitmq_consumer_group.html
URL: http://svn.apache.org/viewvc/helix/site-content/0.7.1-docs/recipes/rabbitmq_consumer_group.html?rev=1624796&view=auto
==============================================================================
--- helix/site-content/0.7.1-docs/recipes/rabbitmq_consumer_group.html (added)
+++ helix/site-content/0.7.1-docs/recipes/rabbitmq_consumer_group.html Sun Sep 14 01:47:34 2014
@@ -0,0 +1,421 @@

Apache Helix - RabbitMQ Consumer Group
RabbitMQ is well-known open source software that provides robust messaging for applications.

+

One of the commonly implemented recipes using this software is a work queue. http://www.rabbitmq.com/tutorials/tutorial-four-java.html describes the use case where:

+
  • A producer sends a message with a routing key
  • The message is routed to the queue whose binding key exactly matches the routing key of the message
  • There are multiple consumers, and each consumer is interested in processing only a subset of the messages, by binding to the keys it is interested in

The example provided here describes how multiple consumers can be started to process all the messages.

+

While this works, in production systems one needs the following:

+
  • Ability to handle failures: when a consumer fails, another consumer must be started, or the remaining consumers must start processing the messages that would have been processed by the failed consumer
  • When the existing consumers cannot keep up with the task generation rate, new consumers will be added; the tasks must then be redistributed among all the consumers

In this recipe, we demonstrate handling of consumer failures and new consumer additions using Helix.

+

Mapping this use case to Helix is pretty easy, as the binding key/routing key is equivalent to a partition.

+

Let's take an example. Say the queue has 6 partitions, and we have 2 consumers to process all the queues. What we want is for all 6 queues to be evenly divided between the 2 consumers. Eventually, when the system scales, we add more consumers to keep up; with a third consumer, each consumer processes tasks from only 2 queues. Now let's say that a consumer fails, reducing the number of active consumers to 2. This means each remaining consumer must process 3 queues (see the short sketch below).

+
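To make that arithmetic concrete, here is a tiny, illustrative calculation (not part of the recipe) of how many queues each consumer ends up with as consumers come and go, assuming queues are spread as evenly as possible:

public class QueueDistribution {
  // Returns how many queues each consumer handles under an even, round-robin spread
  static int[] distribute(int queues, int consumers) {
    int[] counts = new int[consumers];
    for (int q = 0; q < queues; q++) {
      counts[q % consumers]++;
    }
    return counts;
  }

  public static void main(String[] args) {
    System.out.println(java.util.Arrays.toString(distribute(6, 2))); // [3, 3]
    System.out.println(java.util.Arrays.toString(distribute(6, 3))); // [2, 2, 2]
    // one consumer fails: back to 2 consumers, 3 queues each
    System.out.println(java.util.Arrays.toString(distribute(6, 2))); // [3, 3]
  }
}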

We showcase how such a dynamic application can be developed using Helix. Even though we use RabbitMQ as the pub/sub system, one can extend this solution to other pub/sub systems.

+
+

Try It

+
+
git clone https://git-wip-us.apache.org/repos/asf/helix.git
+cd helix
+git checkout tags/helix-0.7.1
+mvn clean install package -DskipTests
+# set these from the repository root so that both paths resolve correctly
+export HELIX_PKG_ROOT=`pwd`/helix-core/target/helix-core-pkg
+export HELIX_RABBITMQ_ROOT=`pwd`/recipes/rabbitmq-consumer-group
+chmod +x $HELIX_PKG_ROOT/bin/*
+chmod +x $HELIX_RABBITMQ_ROOT/bin/*
+
+
+
+

Install RabbitMQ

+

Setting up RabbitMQ on a local box is straightforward. You can find the instructions here: http://www.rabbitmq.com/download.html

+
+
+

Start ZK

+

Start ZooKeeper at port 2199

+
+
$HELIX_PKG_ROOT/bin/start-standalone-zookeeper 2199
+
+
+
+
+

Setup the Consumer Group Cluster

+

This will set up the cluster by creating a "rabbitmq-consumer-group" cluster and adding a "topic" with 6 queues.

+
+
$HELIX_RABBITMQ_ROOT/bin/setup-cluster.sh localhost:2199
+
+
+
+
+

Add Consumers

+

Start 2 consumers in 2 different terminals. Each consumer is given a unique ID.

+
+
# start-consumer.sh zookeeperAddress (e.g. localhost:2199) consumerId rabbitmqServer (e.g. localhost)
+$HELIX_RABBITMQ_ROOT/bin/start-consumer.sh localhost:2199 0 localhost
+$HELIX_RABBITMQ_ROOT/bin/start-consumer.sh localhost:2199 1 localhost
+
+
+
+
+
+

Start the Helix Controller

+

Now start a Helix controller that starts managing the “rabbitmq-consumer-group” cluster.

+
+
$HELIX_RABBITMQ_ROOT/bin/start-cluster-manager.sh localhost:2199
+
+
+
+
+

Send Messages to the Topic

+

Start sending messages to the topic. This script randomly selects a routing key (1-6) and sends the message to the topic. Based on the key, messages get routed to the appropriate queue. (A rough sketch of such a publisher appears after the command below.)

+
+
$HELIX_RABBITMQ_ROOT/bin/send-message.sh localhost 20
+
+
+
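For reference, here is a rough sketch of what a publisher like send-message.sh might do, written against the RabbitMQ Java client. The exchange name and the key scheme are assumptions based on this recipe's description, not taken from the actual script.

import java.util.Random;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class Emitter {
  public static void main(String[] args) throws Exception {
    String mqServer = args.length > 0 ? args[0] : "localhost";
    int count = args.length > 1 ? Integer.parseInt(args[1]) : 20;

    ConnectionFactory factory = new ConnectionFactory();
    factory.setHost(mqServer);
    Connection connection = factory.newConnection();
    Channel channel = connection.createChannel();
    // assumed exchange name; the recipe binds one queue per partition to a topic exchange
    channel.exchangeDeclare("rabbitmq-consumer-group", "topic");

    Random random = new Random();
    for (int i = 0; i < count; i++) {
      // one key per queue; the exact key range used by the real script may differ
      String routingKey = String.valueOf(random.nextInt(6));
      String message = "message-" + i;
      channel.basicPublish("rabbitmq-consumer-group", routingKey, null, message.getBytes());
      System.out.println("Sent '" + message + "' with routing key " + routingKey);
    }

    channel.close();
    connection.close();
  }
}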

After running this, you should see all 20 messages being processed by 2 consumers.

+
+
+

Add Another Consumer

+

Once a new consumer is started, Helix detects it. In order to balance the load among 3 consumers, it deallocates one partition from each of the existing consumers and allocates them to the new consumer. Each consumer is now processing only 2 queues. Helix makes sure that old nodes are asked to stop consuming before the new consumer is asked to start consuming for a given partition, but the transitions for different partitions can happen in parallel.

+
+
$HELIX_RABBITMQ_ROOT/bin/start-consumer.sh localhost:2199 2 localhost
+
+
+

Send messages again to the topic

+
+
$HELIX_RABBITMQ_ROOT/bin/send-message.sh localhost 100
+
+
+

You should see that messages are now received by all 3 consumers.

+
+
+

Stop a Consumer

+

In any consumer terminal, press CTRL-C and notice that Helix detects the consumer failure and distributes the 2 partitions that were being processed by the failed consumer to the remaining 2 active consumers.

+
+
+
+

How does this work?

+

Find the entire code here.

+
+

Cluster Setup

+

This step creates the ZNodes on ZooKeeper for the cluster and adds the state model. We use the OnlineOffline state model since there is no need for other states: the consumer is either processing a queue or it is not.

+

It creates a resource called "rabbitmq-consumer-group" with 6 partitions. The execution mode is set to AUTO_REBALANCE. This means that Helix controls the assignment of partitions to consumers and automatically distributes the partitions evenly among the active consumers. When a consumer is added or removed, it ensures that a minimal number of partitions are shuffled.

+
+
ZkClient zkclient = new ZkClient(zkAddr, ZkClient.DEFAULT_SESSION_TIMEOUT,
+    ZkClient.DEFAULT_CONNECTION_TIMEOUT, new ZNRecordSerializer());
+ZKHelixAdmin admin = new ZKHelixAdmin(zkclient);
+
+// add cluster
+admin.addCluster(clusterName, true);
+
+// add state model definition
+StateModelConfigGenerator generator = new StateModelConfigGenerator();
+admin.addStateModelDef(clusterName, "OnlineOffline",
+    new StateModelDefinition(generator.generateConfigForOnlineOffline()));
+
+// add resource "topic" which has 6 partitions
+String resourceName = "rabbitmq-consumer-group";
+admin.addResource(clusterName, resourceName, 6, "OnlineOffline", "AUTO_REBALANCE");
+
+
+
+
+
+

Starting the Consumers

+

The only things a consumer needs to know are the ZooKeeper address, the cluster name, and its consumer ID; it does not need to know anything else.

+
+
_manager = HelixManagerFactory.getZKHelixManager(_clusterName,
+                                                 _consumerId,
+                                                 InstanceType.PARTICIPANT,
+                                                 _zkAddr);
+
+StateMachineEngine stateMach = _manager.getStateMachineEngine();
+ConsumerStateModelFactory modelFactory =
+    new ConsumerStateModelFactory(_consumerId, _mqServer);
+stateMach.registerStateModelFactory("OnlineOffline", modelFactory);
+
+_manager.connect();
+
+
+

Once the consumer has registered the state model and the controller is started, the consumer starts getting callbacks (onBecomeOnlineFromOffline) for the partitions it needs to host. All it needs to do as part of the callback is start consuming messages from the appropriate queue. Similarly, when the controller deallocates a partition from a consumer, it fires onBecomeOfflineFromOnline for that partition. As part of this transition, the consumer will stop consuming from that queue.

+
+
@Transition(to = "ONLINE", from = "OFFLINE")
+public void onBecomeOnlineFromOffline(Message message, NotificationContext context) {
+  LOG.debug(_consumerId + " becomes ONLINE from OFFLINE for " + _partition);
+  if (_thread == null) {
+    LOG.debug("Starting ConsumerThread for " + _partition + "...");
+    _thread = new ConsumerThread(_partition, _mqServer, _consumerId);
+    _thread.start();
+    LOG.debug("Starting ConsumerThread for " + _partition + " done");
+
+  }
+}
+
+@Transition(to = "OFFLINE", from = "ONLINE")
+public void onBecomeOfflineFromOnline(Message message, NotificationContext context)
+    throws InterruptedException {
+  LOG.debug(_consumerId + " becomes OFFLINE from ONLINE for " + _partition);
+  if (_thread != null) {
+    LOG.debug("Stopping " + _consumerId + " for " + _partition + "...");
+    _thread.interrupt();
+    _thread.join(2000);
+    _thread = null;
+    LOG.debug("Stopping " +  _consumerId + " for " + _partition + " done");
+  }
+}
+
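The ConsumerThread used above is part of the recipe's full source. As a rough, hedged sketch of what such a thread could look like with the RabbitMQ Java client, assuming a topic exchange named "rabbitmq-consumer-group" and a binding key derived from the partition's numeric suffix (the real recipe defines its own scheme):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.QueueingConsumer;

public class ConsumerThread extends Thread {
  private final String partition;   // e.g. "rabbitmq-consumer-group_3"
  private final String mqServer;
  private final String consumerId;

  public ConsumerThread(String partition, String mqServer, String consumerId) {
    this.partition = partition;
    this.mqServer = mqServer;
    this.consumerId = consumerId;
  }

  @Override
  public void run() {
    try {
      ConnectionFactory factory = new ConnectionFactory();
      factory.setHost(mqServer);
      Connection connection = factory.newConnection();
      Channel channel = connection.createChannel();
      channel.exchangeDeclare("rabbitmq-consumer-group", "topic");

      // one anonymous queue per partition, bound with the partition's numeric suffix (assumed)
      String queueName = channel.queueDeclare().getQueue();
      String bindingKey = partition.substring(partition.lastIndexOf('_') + 1);
      channel.queueBind(queueName, "rabbitmq-consumer-group", bindingKey);

      QueueingConsumer consumer = new QueueingConsumer(channel);
      channel.basicConsume(queueName, true, consumer);

      while (!Thread.currentThread().isInterrupted()) {
        QueueingConsumer.Delivery delivery = consumer.nextDelivery();
        String message = new String(delivery.getBody());
        System.out.println(consumerId + ": received '" + message + "' on " + partition);
      }
    } catch (InterruptedException e) {
      // interrupted by onBecomeOfflineFromOnline(); fall through and exit
    } catch (Exception e) {
      e.printStackTrace();
    }
  }
}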
\ No newline at end of file