helix-commits mailing list archives

From ka...@apache.org
Subject [07/31] Redesign documentation for 0.6.2, 0.7.0, and trunk
Date Thu, 02 Jan 2014 00:14:07 GMT
http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/0.6.2-incubating/src/site/markdown/recipes/rabbitmq_consumer_group.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/recipes/rabbitmq_consumer_group.md b/site-releases/0.6.2-incubating/src/site/markdown/recipes/rabbitmq_consumer_group.md
index 9edc2cb..16aad98 100644
--- a/site-releases/0.6.2-incubating/src/site/markdown/recipes/rabbitmq_consumer_group.md
+++ b/site-releases/0.6.2-incubating/src/site/markdown/recipes/rabbitmq_consumer_group.md
@@ -19,40 +19,40 @@ under the License.
 
 
 RabbitMQ Consumer Group
-=======================
+-----------------------
 
-[RabbitMQ](http://www.rabbitmq.com/) is a well known Open source software the provides robust messaging for applications.
+[RabbitMQ](http://www.rabbitmq.com/) is well-known open source software that provides robust messaging for applications.
 
-One of the commonly implemented recipes using this software is a work queue.  http://www.rabbitmq.com/tutorials/tutorial-four-java.html describes the use case where
+One of the commonly implemented recipes using this software is a work queue.  [http://www.rabbitmq.com/tutorials/tutorial-four-java.html](http://www.rabbitmq.com/tutorials/tutorial-four-java.html) describes the use case where
 
-* A producer sends a message with a routing key. 
-* The message is routed to the queue whose binding key exactly matches the routing key of the message.	
+* A producer sends a message with a routing key
+* The message is routed to the queue whose binding key exactly matches the routing key of the message
 * There are multiple consumers and each consumer is interested in processing only a subset of the messages by binding to the interested keys
 
 The example provided [here](http://www.rabbitmq.com/tutorials/tutorial-four-java.html) describes how multiple consumers can be started to process all the messages.
 
-While this works, in production systems one needs the following 
+While this works, in production systems one needs the following:
 
-* Ability to handle failures: when a consumers fails another consumer must be started or the other consumers must start processing these messages that should have been processed by the failed consumer.
-* When the existing consumers cannot keep up with the task generation rate, new consumers will be added. The tasks must be redistributed among all the consumers. 
+* Ability to handle failures: when a consumer fails, another consumer must be started, or the other consumers must start processing the messages that should have been processed by the failed consumer
+* When the existing consumers cannot keep up with the task generation rate, new consumers will be added. The tasks must be redistributed among all the consumers
 
 In this recipe, we demonstrate handling of consumer failures and new consumer additions using Helix.
 
-Mapping this usecase to Helix is pretty easy as the binding key/routing key is equivalent to a partition. 
+Mapping this use case to Helix is straightforward, as the binding key/routing key is equivalent to a partition.
 
-Let's take an example. Lets say the queue has 6 partitions, and we have 2 consumers to process all the queues. 
-What we want is all 6 queues to be evenly divided among 2 consumers. 
+Let's take an example. Say the queue has 6 partitions, and we have 2 consumers to process all the queues.
+What we want is all 6 queues to be evenly divided among 2 consumers.
 Eventually when the system scales, we add more consumers to keep up. This will make each consumer process tasks from 2 queues.
-Now let's say that a consumer failed which reduces the number of active consumers to 2. This means each consumer must process 3 queues.
+Now let's say that a consumer failed, reducing the number of active consumers to 2. This means each consumer must process 3 queues.
 
-We showcase how such a dynamic App can be developed using Helix. Even though we use rabbitmq as the pub/sub system one can extend this solution to other pub/sub systems.
+We showcase how such a dynamic application can be developed using Helix. Even though we use RabbitMQ as the pub/sub system, one can extend this solution to other pub/sub systems.
 
-Try it
-======
+### Try It
 
 ```
 git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
 cd incubator-helix
+git checkout tags/helix-0.6.2-incubating
 mvn clean install package -DskipTests
 cd recipes/rabbitmq-consumer-group/bin
 chmod +x *
@@ -62,63 +62,60 @@ chmod +x $HELIX_PKG_ROOT/bin/*
 chmod +x $HELIX_RABBITMQ_ROOT/bin/*
 ```
 
-
-Install Rabbit MQ
-----------------
+#### Install RabbitMQ
 
 Setting up RabbitMQ on a local box is straightforward. You can find the instructions here
 http://www.rabbitmq.com/download.html
 
-Start ZK
---------
-Start zookeeper at port 2199
+#### Start ZK
+
+Start ZooKeeper at port 2199
 
 ```
 $HELIX_PKG_ROOT/bin/start-standalone-zookeeper 2199
 ```
 
-Setup the consumer group cluster
---------------------------------
-This will setup the cluster by creating a "rabbitmq-consumer-group" cluster and adds a "topic" with "6" queues. 
+#### Setup the Consumer Group Cluster
+
+This will set up the cluster by creating a "rabbitmq-consumer-group" cluster and adding a "topic" with "6" queues.
 
 ```
-$HELIX_RABBITMQ_ROOT/bin/setup-cluster.sh localhost:2199 
+$HELIX_RABBITMQ_ROOT/bin/setup-cluster.sh localhost:2199
 ```
 
-Add consumers
--------------
-Start 2 consumers in 2 different terminals. Each consumer is given a unique id.
+#### Add Consumers
+
+Start 2 consumers in 2 different terminals. Each consumer is given a unique ID.
 
 ```
 //start-consumer.sh zookeeperAddress (e.g. localhost:2181) consumerId , rabbitmqServer (e.g. localhost)
-$HELIX_RABBITMQ_ROOT/bin/start-consumer.sh localhost:2199 0 localhost 
-$HELIX_RABBITMQ_ROOT/bin/start-consumer.sh localhost:2199 1 localhost 
+$HELIX_RABBITMQ_ROOT/bin/start-consumer.sh localhost:2199 0 localhost
+$HELIX_RABBITMQ_ROOT/bin/start-consumer.sh localhost:2199 1 localhost
 
 ```
 
-Start HelixController
---------------------
+#### Start the Helix Controller
+
 Now start a Helix controller that starts managing the "rabbitmq-consumer-group" cluster.
 
 ```
 $HELIX_RABBITMQ_ROOT/bin/start-cluster-manager.sh localhost:2199
 ```
 
-Send messages to the Topic
---------------------------
+#### Send Messages to the Topic
 
-Start sending messages to the topic. This script randomly selects a routing key (1-6) and sends the message to topic. 
+Start sending messages to the topic. This script randomly selects a routing key (1-6) and sends the message to the topic.
 Based on the key, messages get routed to the appropriate queue.
 
 ```
 $HELIX_RABBITMQ_ROOT/bin/send-message.sh localhost 20
 ```
 
-After running this, you should see all 20 messages being processed by 2 consumers. 
+After running this, you should see all 20 messages being processed by 2 consumers.
 
-Add another consumer
---------------------
-Once a new consumer is started, helix detects it. In order to balance the load between 3 consumers, it deallocates 1 partition from the existing consumers and allocates it to the new consumer. We see that
+#### Add Another Consumer
+
+Once a new consumer is started, Helix detects it. In order to balance the load between 3 consumers, it deallocates 1 partition from the existing consumers and allocates it to the new consumer. We see that
 each consumer is now processing only 2 queues.
 Helix makes sure that old nodes are asked to stop consuming before the new consumer is asked to start consuming for a given partition. But the transitions for each partition can happen in parallel.
 
@@ -126,7 +123,7 @@ Helix makes sure that old nodes are asked to stop consuming before the new consu
 $HELIX_RABBITMQ_ROOT/bin/start-consumer.sh localhost:2199 2 localhost
 ```
 
-Send messages again to the topic.
+Send messages again to the topic.
 
 ```
 $HELIX_RABBITMQ_ROOT/bin/send-message.sh localhost 100
@@ -134,94 +131,83 @@ $HELIX_RABBITMQ_ROOT/bin/send-message.sh localhost 100
 
 You should see that messages are now received by all 3 consumers.
 
-Stop a consumer
----------------
+#### Stop a Consumer
+
 In any terminal, press Ctrl+C and notice that Helix detects the consumer failure and distributes the 2 partitions that were processed by the failed consumer to the remaining 2 active consumers.
 
 
-How does it work
-================
+### How Does This Work?
 
-Find the entire code [here](https://git-wip-us.apache.org/repos/asf?p=incubator-helix.git;a=tree;f=recipes/rabbitmq-consumer-group/src/main/java/org/apache/helix/recipes/rabbitmq). 
- 
-Cluster setup
--------------
-This step creates znode on zookeeper for the cluster and adds the state model. We use online offline state model since there is no need for other states. The consumer is either processing a queue or it is not.
+Find the entire code [here](https://git-wip-us.apache.org/repos/asf?p=incubator-helix.git;a=tree;f=recipes/rabbitmq-consumer-group/src/main/java/org/apache/helix/recipes/rabbitmq).
 
-It creates a resource called "rabbitmq-consumer-group" with 6 partitions. The execution mode is set to FULL_AUTO. This means that the Helix controls the assignment of partition to consumers and automatically distributes the partitions evenly among the active consumers. When a consumer is added or removed, it ensures that a minimum number of partitions are shuffled.
+#### Cluster Setup
 
-```
-      zkclient = new ZkClient(zkAddr, ZkClient.DEFAULT_SESSION_TIMEOUT,
-          ZkClient.DEFAULT_CONNECTION_TIMEOUT, new ZNRecordSerializer());
-      ZKHelixAdmin admin = new ZKHelixAdmin(zkclient);
-      
-      // add cluster
-      admin.addCluster(clusterName, true);
+This step creates a ZNode on ZooKeeper for the cluster and adds the state model. We use the OnlineOffline state model since there is no need for other states: the consumer is either processing a queue or it is not.
 
-      // add state model definition
-      StateModelConfigGenerator generator = new StateModelConfigGenerator();
-      admin.addStateModelDef(clusterName, "OnlineOffline",
-          new StateModelDefinition(generator.generateConfigForOnlineOffline()));
+It creates a resource called "rabbitmq-consumer-group" with 6 partitions. The execution mode is set to AUTO_REBALANCE. This means that Helix controls the assignment of partitions to consumers and automatically distributes the partitions evenly among the active consumers. When a consumer is added or removed, Helix ensures that a minimal number of partitions are shuffled.
 
-      // add resource "topic" which has 6 partitions
-      String resourceName = "rabbitmq-consumer-group";
-      admin.addResource(clusterName, resourceName, 6, "OnlineOffline", "FULL_AUTO");
 ```
+zkclient = new ZkClient(zkAddr, ZkClient.DEFAULT_SESSION_TIMEOUT,
+    ZkClient.DEFAULT_CONNECTION_TIMEOUT, new ZNRecordSerializer());
+ZKHelixAdmin admin = new ZKHelixAdmin(zkclient);
+
+// add cluster
+admin.addCluster(clusterName, true);
 
-Starting the consumers
-----------------------
-The only thing consumers need to know is the zkaddress, cluster name and consumer id. It does not need to know anything else.
+// add state model definition
+StateModelConfigGenerator generator = new StateModelConfigGenerator();
+admin.addStateModelDef(clusterName, "OnlineOffline",
+    new StateModelDefinition(generator.generateConfigForOnlineOffline()));
 
+// add resource "topic" which has 6 partitions
+String resourceName = "rabbitmq-consumer-group";
+admin.addResource(clusterName, resourceName, 6, "OnlineOffline", "AUTO_REBALANCE");
 ```
-   _manager =
-          HelixManagerFactory.getZKHelixManager(_clusterName,
-                                                _consumerId,
-                                                InstanceType.PARTICIPANT,
-                                                _zkAddr);
 
-      StateMachineEngine stateMach = _manager.getStateMachineEngine();
-      ConsumerStateModelFactory modelFactory =
-          new ConsumerStateModelFactory(_consumerId, _mqServer);
-      stateMach.registerStateModelFactory("OnlineOffline", modelFactory);
+#### Starting the Consumers
 
-      _manager.connect();
+The only thing a consumer needs to know is the ZooKeeper address, the cluster name, and its consumer ID. It does not need to know anything else.
 
 ```
+_manager = HelixManagerFactory.getZKHelixManager(_clusterName,
+                                                 _consumerId,
+                                                 InstanceType.PARTICIPANT,
+                                                 _zkAddr);
 
-Once the consumer has registered the statemodel and the controller is started, the consumer starts getting callbacks (onBecomeOnlineFromOffline) for the partition it needs to host. All it needs to do as part of the callback is to start consuming messages from the appropriate queue. Similarly, when the controller deallocates a partitions from a consumer, it fires onBecomeOfflineFromOnline for the same partition. 
-As a part of this transition, the consumer will stop consuming from a that queue.
+StateMachineEngine stateMach = _manager.getStateMachineEngine();
+ConsumerStateModelFactory modelFactory =
+    new ConsumerStateModelFactory(_consumerId, _mqServer);
+stateMach.registerStateModelFactory("OnlineOffline", modelFactory);
 
+_manager.connect();
 ```
- @Transition(to = "ONLINE", from = "OFFLINE")
-  public void onBecomeOnlineFromOffline(Message message, NotificationContext context)
-  {
-    LOG.debug(_consumerId + " becomes ONLINE from OFFLINE for " + _partition);
-
-    if (_thread == null)
-    {
-      LOG.debug("Starting ConsumerThread for " + _partition + "...");
-      _thread = new ConsumerThread(_partition, _mqServer, _consumerId);
-      _thread.start();
-      LOG.debug("Starting ConsumerThread for " + _partition + " done");
-
-    }
-  }
-
-  @Transition(to = "OFFLINE", from = "ONLINE")
-  public void onBecomeOfflineFromOnline(Message message, NotificationContext context)
-      throws InterruptedException
-  {
-    LOG.debug(_consumerId + " becomes OFFLINE from ONLINE for " + _partition);
 
-    if (_thread != null)
-    {
-      LOG.debug("Stopping " + _consumerId + " for " + _partition + "...");
+Once the consumer has registered the state model and the controller is started, the consumer starts getting callbacks (onBecomeOnlineFromOffline) for the partition it needs to host. All it needs to do as part of the callback is to start consuming messages from the appropriate queue. Similarly, when the controller deallocates a partition from a consumer, it fires onBecomeOfflineFromOnline for the same partition.
+As a part of this transition, the consumer will stop consuming from that queue.
 
-      _thread.interrupt();
-      _thread.join(2000);
-      _thread = null;
-      LOG.debug("Stopping " +  _consumerId + " for " + _partition + " done");
+```
+@Transition(to = "ONLINE", from = "OFFLINE")
+public void onBecomeOnlineFromOffline(Message message, NotificationContext context) {
+  LOG.debug(_consumerId + " becomes ONLINE from OFFLINE for " + _partition);
+  if (_thread == null) {
+    LOG.debug("Starting ConsumerThread for " + _partition + "...");
+    _thread = new ConsumerThread(_partition, _mqServer, _consumerId);
+    _thread.start();
+    LOG.debug("Starting ConsumerThread for " + _partition + " done");
 
-    }
   }
-```
\ No newline at end of file
+}
+
+@Transition(to = "OFFLINE", from = "ONLINE")
+public void onBecomeOfflineFromOnline(Message message, NotificationContext context)
+    throws InterruptedException {
+  LOG.debug(_consumerId + " becomes OFFLINE from ONLINE for " + _partition);
+  if (_thread != null) {
+    LOG.debug("Stopping " + _consumerId + " for " + _partition + "...");
+    _thread.interrupt();
+    _thread.join(2000);
+    _thread = null;
+    LOG.debug("Stopping " +  _consumerId + " for " + _partition + " done");
+  }
+}
+```
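+
+The ConsumerThread started above is plain RabbitMQ consumer code; the recipe's actual implementation is in the repository linked earlier. The sketch below only illustrates the core idea, assuming a direct exchange named "topic" and using the partition name as the binding key (the exchange name and message handling here are assumptions, not taken from the recipe).
+
+```
+import com.rabbitmq.client.Channel;
+import com.rabbitmq.client.Connection;
+import com.rabbitmq.client.ConnectionFactory;
+import com.rabbitmq.client.QueueingConsumer;
+
+public class ConsumerThread extends Thread {
+  private final String _partition; // used as the binding key
+  private final String _mqServer;
+  private final String _consumerId;
+
+  public ConsumerThread(String partition, String mqServer, String consumerId) {
+    _partition = partition;
+    _mqServer = mqServer;
+    _consumerId = consumerId;
+  }
+
+  @Override
+  public void run() {
+    try {
+      ConnectionFactory factory = new ConnectionFactory();
+      factory.setHost(_mqServer);
+      Connection connection = factory.newConnection();
+      Channel channel = connection.createChannel();
+
+      // bind a private queue to the routing key that matches this partition
+      channel.exchangeDeclare("topic", "direct", true);
+      String queueName = channel.queueDeclare().getQueue();
+      channel.queueBind(queueName, "topic", _partition);
+
+      QueueingConsumer consumer = new QueueingConsumer(channel);
+      channel.basicConsume(queueName, true, consumer);
+
+      // keep consuming until the OFFLINE transition interrupts this thread
+      while (!Thread.currentThread().isInterrupted()) {
+        QueueingConsumer.Delivery delivery = consumer.nextDelivery();
+        String message = new String(delivery.getBody());
+        System.out.println(_consumerId + " processed " + message + " from " + _partition);
+      }
+    } catch (InterruptedException e) {
+      // expected when the partition is deallocated from this consumer
+    } catch (Exception e) {
+      e.printStackTrace();
+    }
+  }
+}
+```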

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/0.6.2-incubating/src/site/markdown/recipes/rsync_replicated_file_store.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/recipes/rsync_replicated_file_store.md b/site-releases/0.6.2-incubating/src/site/markdown/recipes/rsync_replicated_file_store.md
index f8a74a0..5b3d6db 100644
--- a/site-releases/0.6.2-incubating/src/site/markdown/recipes/rsync_replicated_file_store.md
+++ b/site-releases/0.6.2-incubating/src/site/markdown/recipes/rsync_replicated_file_store.md
@@ -17,25 +17,26 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Near real time rsync replicated file system
-===========================================
+Near-Realtime Rsync Replicated File System
+------------------------------------------
 
-Quickdemo
----------
+### Quick Demo
 
 * This demo starts 3 instances with IDs ```localhost_12001, localhost_12002, localhost_12003```
 * Each instance stores its files under ```/tmp/<id>/filestore```
-* ``` localhost_12001 ``` is designated as the master and ``` localhost_12002 and localhost_12003``` are the slaves.
-* Files written to master are replicated to the slaves automatically. In this demo, a.txt and b.txt are written to ```/tmp/localhost_12001/filestore``` and it gets replicated to other folders.
-* When the master is stopped, ```localhost_12002``` is promoted to master. 
+* ```localhost_12001``` is designated as the master, and ```localhost_12002``` and ```localhost_12003``` are the slaves
+* Files written to the master are replicated to the slaves automatically. In this demo, a.txt and b.txt are written to ```/tmp/localhost_12001/filestore``` and they get replicated to other folders.
+* When the master is stopped, ```localhost_12002``` is promoted to master.
 * The other slave ```localhost_12003``` stops replicating from ```localhost_12001``` and starts replicating from new master ```localhost_12002```
 * Files written to new master ```localhost_12002``` are replicated to ```localhost_12003```
-* In the end state of this quick demo, ```localhost_12002``` is the master and ```localhost_12003``` is the slave. Manually create files under ```/tmp/localhost_12002/filestore``` and see that appears in ```/tmp/localhost_12003/filestore```
-* Ignore the interrupted exceptions on the console :-).
+* In the end state of this quick demo, ```localhost_12002``` is the master and ```localhost_12003``` is the slave. Manually create files under ```/tmp/localhost_12002/filestore``` and see them appear in ```/tmp/localhost_12003/filestore```
+* Ignore the interrupted exceptions on the console :-)
 
 
 ```
 git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix
+git checkout tags/helix-0.6.2-incubating
 cd recipes/rsync-replicated-file-system/
 mvn clean install package -DskipTests
 cd target/rsync-replicated-file-system-pkg/bin
@@ -44,103 +45,99 @@ chmod +x *
 
 ```
 
-Overview
---------
+### Overview
 
-There are many applications that require storage for storing large number of relatively small data files. Examples include media stores to store small videos, images, mail attachments etc. Each of these objects is typically kilobytes, often no larger than a few megabytes. An additional distinguishing feature of these usecases is also that files are typically only added or deleted, rarely updated. When there are updates, they are rare and do not have any concurrency requirements.
+There are many applications that require storage for a large number of relatively small data files. Examples include media stores for small videos, images, mail attachments, etc. Each of these objects is typically kilobytes, often no larger than a few megabytes. An additional distinguishing feature of these use cases is that files are typically only added or deleted, rarely updated. When there are updates, they do not have any concurrency requirements.
+
+These are much simpler requirements than what general-purpose distributed file systems have to satisfy; those include concurrent access to files, random access for reads and updates, POSIX compliance, and others. To satisfy those requirements, general DFSs are also quite complex and expensive to build and maintain.
 
-These are much simpler requirements than what general purpose distributed file system have to satisfy including concurrent access to files, random access for reads and updates, posix compliance etc. To satisfy those requirements, general DFSs are also pretty complex that are expensive to build and maintain.
- 
 A different implementation of a distributed file system is HDFS, which is inspired by Google's GFS. It is one of the most widely used distributed file systems and forms the main data storage platform for Hadoop. HDFS is primarily aimed at processing very large data sets and distributes files across a cluster of commodity servers by splitting up files in fixed size chunks. HDFS is not particularly well suited for storing a very large number of relatively tiny files.
 
 ### File Store
 
 It's possible to build a vastly simpler system for the class of applications that have simpler requirements as we have pointed out.
 
-* Large number of files but each file is relatively small.
-* Access is limited to create, delete and get entire files.
-* No updates to files that are already created (or it's feasible to delete the old file and create a new one).
- 
+* Large number of files but each file is relatively small
+* Access is limited to create, delete and get entire files
+* No updates to files that are already created (or it's feasible to delete the old file and create a new one)
+
 
 We call this system a Partitioned File Store (PFS) to distinguish it from other distributed file systems. This system needs to provide the following features:
 
 * CRD access to large number of small files
-* Scalability: Files should be distributed across a large number of commodity servers based on the storage requirement.
-* Fault-tolerance: Each file should be replicated on multiple servers so that individual server failures do not reduce availability.
-* Elasticity: It should be possible to add capacity to the cluster easily.
- 
+* Scalability: Files should be distributed across a large number of commodity servers based on the storage requirement
+* Fault-tolerance: Each file should be replicated on multiple servers so that individual server failures do not reduce availability
+* Elasticity: It should be possible to add capacity to the cluster easily
 
-Apache Helix is a generic cluster management framework that makes it very easy to provide the scalability, fault-tolerance and elasticity features. 
-Rsync can be easily used as a replication channel between servers so that each file gets replicated on multiple servers.
 
-Design
-------
+Apache Helix is a generic cluster management framework that makes it very easy to provide scalability, fault-tolerance and elasticity features.
+rsync can be easily used as a replication channel between servers so that each file gets replicated on multiple servers.
 
-High level 
+### Design
 
-* Partition the file system based on the file name. 
-* At any time a single writer can write, we call this a master.
-* For redundancy, we need to have additional replicas called slave. Slaves can optionally serve reads.
-* Slave replicates data from the master.
-* When a master fails, slave gets promoted to master.
+#### High Level
 
-### Transaction log
+* Partition the file system based on the file name
+* At any time a single writer can write, we call this a master
+* For redundancy, we need to have additional replicas called slaves. Slaves can optionally serve reads
+* Slave replicates data from the master
+* When a master fails, a slave gets promoted to master
 
-Every write on the master will result in creation/deletion of one or more files. In order to maintain timeline consistency slaves need to apply the changes in the same order. 
-To facilitate this, the master logs each transaction in a file and each transaction is associated with an 64 bit id in which the 32 LSB represents a sequence number and MSB represents the generation number.
-Sequence gets incremented on every transaction and and generation is increment when a new master is elected. 
+#### Transaction Log
 
-### Replication
+Every write on the master will result in creation/deletion of one or more files. In order to maintain timeline consistency, slaves need to apply the changes in the same order.
+To facilitate this, the master logs each transaction in a file, and each transaction is associated with a 64-bit ID in which the 32 LSBs represent a sequence number and the 32 MSBs represent the generation number.
+The sequence number gets incremented on every transaction and the generation is incremented when a new master is elected.
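+
+As an illustration of that ID layout (not code from the recipe), assuming int counters named generation and sequence, the 64-bit ID can be packed and unpacked with simple bit operations:
+
+```
+// generation in the upper 32 bits, sequence in the lower 32 bits
+long txnId = ((long) generation << 32) | (sequence & 0xFFFFFFFFL);
+
+// recover the two counters from a transaction ID
+int generationOfTxn = (int) (txnId >>> 32);
+int sequenceOfTxn = (int) (txnId & 0xFFFFFFFFL);
+```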
 
-Replication is required to slave to keep up with the changes on the master. Every time the slave applies a change it checkpoints the last applied transaction id. 
-During restarts, this allows the slave to pull changes from the last checkpointed id. Similar to master, the slave logs each transaction to the transaction logs but instead of generating new transaction id, it uses the same id generated by the master.
+#### Replication
 
+Replication is required for slaves to keep up with changes on the master. Every time the slave applies a change, it checkpoints the last applied transaction ID.
+During restarts, this allows the slave to pull changes from the last checkpointed ID. Similar to the master, the slave logs each transaction to its transaction log, but instead of generating a new transaction ID, it uses the same ID generated by the master.
 
-### Fail over
 
-When a master fails, a new slave will be promoted to master. If the prev master node is reachable, then the new master will flush all the 
-changes from previous master before taking up mastership. The new master will record the end transaction id of the current generation and then starts new generation 
-with sequence starting from 1. After this the master will begin accepting writes. 
+#### Failover
 
+When a master fails, a slave will be promoted to master. If the previous master node is reachable, then the new master will flush all the
+changes from the previous master before taking up mastership. The new master will record the end transaction ID of the current generation and then start a new generation
+with the sequence starting from 1. After this, the master will begin accepting writes.
 
 ![Partitioned File Store](../images/PFS-Generic.png)
 
 
 
-Rsync based solution
--------------------
+### Rsync-based Solution
 
 ![Rsync based File Store](../images/RSYNC_BASED_PFS.png)
 
 
-This application demonstrate a file store that uses rsync as the replication mechanism. One can envision a similar system where instead of using rsync, 
+This application demonstrates a file store that uses rsync as the replication mechanism. One can envision a similar system where instead of using rsync, one
 can implement a custom solution to notify the slave of the changes and also provide an API to pull the change files.
-#### Concept
-* file_store_dir: Root directory for the actual data files 
-* change_log_dir: The transaction logs are generated under this folder.
-* check_point_dir: The slave stores the check points ( last processed transaction) here.
+
+#### Concepts
+* file_store_dir: Root directory for the actual data files
+* change_log_dir: The transaction logs are generated under this folder
+* check_point_dir: The slave stores the checkpoints (last processed transaction) here
 
 #### Master
-* File server: This component support file uploads and downloads and writes the files to ```file_store_dir```. This is not included in this application. Idea is that most applications have different ways of implementing this component and has some business logic associated with it. It is not hard to come up with such a component if needed.
-* File store watcher: This component watches the ```file_store_dir``` directory on the local file system for any changes and notifies the registered listeners of the changes.
-* Change Log Generator: This registers as a listener of File System Watcher and on each notification logs the changes into a file under ```change_log_dir```. 
+* File server: This component supports file uploads and downloads and writes the files to ```file_store_dir```. This is not included in this application. The idea is that most applications have different ways of implementing this component and have some associated business logic. It is not hard to come up with such a component if needed.
+* File store watcher: This component watches the ```file_store_dir``` directory on the local file system for any changes and notifies the registered listeners of the changes
+* Change log generator: This registers as a listener of the file store watcher and on each notification logs the changes into a file under ```change_log_dir```
 
-####Slave
-* File server: This component on the slave will only support reads.
-* Cluster state observer: Slave observes the cluster state and is able to know who is the current master. 
+#### Slave
+* File server: This component on the slave will only support reads
+* Cluster state observer: The slave observes the cluster state and is able to know who the current master is
 * Replicator: This has three subcomponents
     - Periodic rsync of change log: This is a background process that periodically rsyncs the ```change_log_dir``` of the master to its local directory
     - Change Log Watcher: This watches the ```change_log_dir``` for changes and notifies the registered listeners of the change
-    - On demand rsync invoker: This is registered as a listener to change log watcher and on every change invokes rsync to sync only the changed file.
-
+    - On demand rsync invoker: This is registered as a listener to the change log watcher and on every change invokes rsync to sync only the changed file
 
 #### Coordination
 
 The coordination between nodes is done by Helix. Helix does the partition management and assigns the partition to multiple nodes based on the replication factor. It elects one the nodes as master and designates others as slaves.
-It provides notifications to each node in the form of state transitions ( Offline to Slave, Slave to Master). It also provides notification when there is change is cluster state. 
-This allows the slave to stop replicating from current master and start replicating from new master. 
+It provides notifications to each node in the form of state transitions (Offline to Slave, Slave to Master). It also provides notifications when there is a change in cluster state.
+This allows the slave to stop replicating from the current master and start replicating from the new master.
 
-In this application, we have only one partition but its very easy to extend it to support multiple partitions. By partitioning the file store, one can add new nodes and Helix will automatically 
+In this application, we have only one partition, but it is very easy to extend it to support multiple partitions. By partitioning the file store, one can add new nodes and Helix will automatically
 re-distribute partitions among the nodes. To summarize, Helix provides partition management, fault tolerance and facilitates automated cluster expansion.
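+
+For reference, a minimal sketch of how such a cluster could be configured with the Helix admin API, using a MasterSlave resource with a single partition and a replication factor of 3 (the cluster and resource names here are illustrative, not taken from the recipe code):
+
+```
+ZkClient zkclient = new ZkClient(zkAddr, ZkClient.DEFAULT_SESSION_TIMEOUT,
+    ZkClient.DEFAULT_CONNECTION_TIMEOUT, new ZNRecordSerializer());
+ZKHelixAdmin admin = new ZKHelixAdmin(zkclient);
+
+admin.addCluster("file-store-cluster", true);
+
+// MasterSlave: one master per partition, the remaining replicas are slaves
+StateModelConfigGenerator generator = new StateModelConfigGenerator();
+admin.addStateModelDef("file-store-cluster", "MasterSlave",
+    new StateModelDefinition(generator.generateConfigForMasterSlave()));
+
+// a single partition, rebalanced automatically across the registered nodes
+admin.addResource("file-store-cluster", "repository", 1, "MasterSlave", "AUTO_REBALANCE");
+admin.rebalance("file-store-cluster", "repository", 3); // replication factor 3
+```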
 
 

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/0.6.2-incubating/src/site/markdown/recipes/service_discovery.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/recipes/service_discovery.md b/site-releases/0.6.2-incubating/src/site/markdown/recipes/service_discovery.md
index 8e06ead..d8f0132 100644
--- a/site-releases/0.6.2-incubating/src/site/markdown/recipes/service_discovery.md
+++ b/site-releases/0.6.2-incubating/src/site/markdown/recipes/service_discovery.md
@@ -19,95 +19,90 @@ under the License.
 Service Discovery
 -----------------
 
-One of the common usage of zookeeper is enable service discovery. 
-The basic idea is that when a server starts up it advertises its configuration/metadata such as host name port etc on zookeeper. 
-This allows clients to dynamically discover the servers that are currently active. One can think of this like a service registry to which a server registers when it starts and 
-is automatically deregistered when it shutdowns or crashes. In many cases it serves as an alternative to vips.
+One of the common uses of ZooKeeper is to enable service discovery.
+The basic idea is that when a server starts up, it advertises its configuration/metadata, such as its hostname and port, on ZooKeeper.
+This allows clients to dynamically discover the servers that are currently active. One can think of this like a service registry to which a server registers when it starts and
+is automatically deregistered when it shuts down or crashes. In many cases it serves as an alternative to VIPs.
 
-The core idea behind this is to use zookeeper ephemeral nodes. The ephemeral nodes are created when the server registers and all its metadata is put into a znode. 
-When the server shutdowns, zookeeper automatically removes this znode. 
+The core idea behind this is to use ZooKeeper ephemeral nodes. The ephemeral nodes are created when the server registers, and all its metadata is put into a ZNode.
+When the server shuts down, ZooKeeper automatically removes this ZNode.
 
-There are two ways the clients can dynamically discover the active servers
+There are two ways the clients can dynamically discover the active servers:
 
-#### ZOOKEEPER WATCH
+### ZooKeeper Watch
 
-Clients can set a child watch under specific path on zookeeper. 
-When a new service is registered/deregistered, zookeeper notifies the client via watchevent and the client can read the list of services. Even though this looks trivial, 
-there are lot of things one needs to keep in mind like ensuring that you first set the watch back on zookeeper before reading data from zookeeper.
+Clients can set a child watch under a specific path on ZooKeeper.
+When a new service is registered/deregistered, ZooKeeper notifies the client via a watch event and the client can read the list of services. Even though this looks trivial,
+there are a lot of things one needs to keep in mind, like ensuring that you set the watch back on ZooKeeper before reading the data.
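+
+A minimal sketch of the watch-based approach using the raw ZooKeeper client is shown below (the /services path and the class itself are illustrative assumptions). Note that passing the watcher to getChildren() re-arms the watch in the same call that reads the data, which is the safe ordering described above.
+
+```
+import java.util.List;
+
+import org.apache.zookeeper.WatchedEvent;
+import org.apache.zookeeper.Watcher;
+import org.apache.zookeeper.ZooKeeper;
+
+public class ServiceWatcher implements Watcher {
+  private final ZooKeeper zk;
+
+  public ServiceWatcher(String zkAddress) throws Exception {
+    zk = new ZooKeeper(zkAddress, 30000, this);
+  }
+
+  // reading with 'this' as the watcher re-registers the watch atomically with the read
+  public List<String> readServices() throws Exception {
+    return zk.getChildren("/services", this);
+  }
+
+  @Override
+  public void process(WatchedEvent event) {
+    if (event.getType() == Watcher.Event.EventType.NodeChildrenChanged) {
+      try {
+        System.out.println("Active services: " + readServices());
+      } catch (Exception e) {
+        e.printStackTrace();
+      }
+    }
+  }
+}
+```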
 
 
-#### POLL
+### Poll
 
-Another approach is for the client to periodically read the zookeeper path and get the list of services.
+Another approach is for the client to periodically read the ZooKeeper path and get the list of services.
 
+Both approaches have pros and cons; for example, setting a watch might trigger a herd effect if there are a large number of clients. This is problematic, especially when servers are starting up.
+But the advantage of setting watches is that clients are immediately notified of a change, which is not true in the case of polling.
+In some cases, having both watches and polls makes sense; a watch allows one to get notifications as soon as possible, while polling provides a safety net if a watch event is missed because of a code bug or because ZooKeeper fails to notify.
 
-Both approaches have pros and cons, for example setting a watch might trigger herd effect if there are large number of clients. This is worst especially when servers are starting up. 
-But good thing about setting watch is that clients are immediately notified of a change which is not true in case of polling. 
-In some cases, having both WATCH and POLL makes sense, WATCH allows one to get notifications as soon as possible while POLL provides a safety net if a watch event is missed because of code bug or zookeeper fails to notify.
+### Other Developer Considerations
+* What happens when the ZooKeeper session expires? All the watches and ephemeral nodes previously added or created by this server are lost. One needs to add the watches again, recreate the ephemeral nodes, and so on.
+* Due to network issues or Java GC pauses, session expiry might happen again and again; this phenomenon is known as flapping. It's important for the server to detect this and deregister itself.
 
-##### Other important scenarios to take care of
-* What happens when zookeeper session expires. All the watches/ephemeral nodes previously added/created by this server are lost. 
-One needs to add the watches again , recreate the ephemeral nodes etc.
-* Due to network issues or java GC pauses session expiry might happen again and again also known as flapping. Its important for the server to detect this and deregister itself.
+### Other Operational Considerations
+* What if the node is behaving badly? One might kill the server, but then one loses the ability to debug it. It would be nice to have the ability to mark a server as disabled, so that clients know the node is disabled and will not contact it.
 
-##### Other operational things to consider
-* What if the node is behaving badly, one might kill the server but will lose the ability to debug. 
-It would be nice to have the ability to mark a server as disabled and clients know that a node is disabled and will not contact that node.
- 
-#### Configuration ownership
+### Configuration Ownership
 
-This is an important aspect that is often ignored in the initial stages of your development. In common, service discovery pattern means that servers start up with some configuration and then simply puts its configuration/metadata in zookeeper. While this works well in the beginning, 
-configuration management becomes very difficult since the servers themselves are statically configured. Any change in server configuration implies restarting of the server. Ideally, it will be nice to have the ability to change configuration dynamically without having to restart a server. 
+This is an important aspect that is often ignored in the initial stages of development. Typically, the service discovery pattern means that servers start up with some configuration, which they simply put into ZooKeeper. While this works well in the beginning, configuration management becomes very difficult since the servers themselves are statically configured. Any change in server configuration implies restarting the server. Ideally, it would be nice to have the ability to change configuration dynamically without having to restart a server.
 
-Ideally you want a hybrid solution, a node starts with minimal configuration and gets the rest of configuration from zookeeper.
+Ideally you want a hybrid solution: a node starts with minimal configuration and gets the rest of its configuration from ZooKeeper.
 
-h3. How to use Helix to achieve this
+### Using Helix for Service Discovery
 
-Even though Helix has higher level abstraction in terms of statemachine, constraints and objectives, 
-service discovery is one of things that existed since we started. 
-The controller uses the exact mechanism we described above to discover when new servers join the cluster.
-We create these znodes under /CLUSTERNAME/LIVEINSTANCES. 
-Since at any time there is only one controller, we use ZK watch to track the liveness of a server.
+Even though Helix has a higher-level abstraction in terms of state machines, constraints and objectives, service discovery has been a prevalent use case from the start.
+The controller uses the exact mechanism we described above to discover when new servers join the cluster. We create these ZNodes under /CLUSTERNAME/LIVEINSTANCES.
+Since at any time there is only one controller, we use a ZK watch to track the liveness of a server.
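+
+For a client built on Helix itself, a spectator can subscribe to exactly those ZNodes. A brief sketch (the cluster name, client name, and ZooKeeper address are placeholders):
+
+```
+HelixManager spectator = HelixManagerFactory.getZKHelixManager(
+    "MYCLUSTER", "discoveryClient", InstanceType.SPECTATOR, "localhost:2199");
+spectator.connect();
+
+// invoked whenever a server's ephemeral node appears or disappears under LIVEINSTANCES
+spectator.addLiveInstanceChangeListener(new LiveInstanceChangeListener() {
+  @Override
+  public void onLiveInstanceChange(List<LiveInstance> liveInstances,
+      NotificationContext changeContext) {
+    for (LiveInstance instance : liveInstances) {
+      System.out.println("Live: " + instance.getInstanceName());
+    }
+  }
+});
+```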
 
-This recipe, simply demonstrate how one can re-use that part for implementing service discovery. This demonstrates multiple MODE's of service discovery
+This recipe simply demonstrates how one can re-use that part for implementing service discovery. This demonstrates multiple modes of service discovery:
 
 * POLL: The client reads from ZooKeeper at regular intervals (30 seconds). Use this if you have hundreds of clients
-* WATCH: The client sets up watcher and gets notified of the changes. Use this if you have 10's of clients.
-* NONE: This does neither of the above, but reads directly from zookeeper when ever needed.
+* WATCH: The client sets up a watcher and gets notified of changes. Use this if you have tens of clients
+* NONE: This does neither of the above, but reads directly from ZooKeeper whenever needed
 
-Helix provides these additional features compared to other implementations available else where
+Helix provides these additional features compared to other implementations available elsewhere:
 
-* It has the concept of disabling a node which means that a badly behaving node, can be disabled using helix admin api.
-* It automatically detects if a node connects/disconnects from zookeeper repeatedly and disables the node.
-* Configuration management  
-    * Allows one to set configuration via admin api at various granulaties like cluster, instance, resource, partition 
-    * Configuration can be dynamically changed.
-    * Notifies the server when configuration changes.
+* It has the concept of disabling a node which means that a badly behaving node can be disabled using the Helix admin API
+* It automatically detects if a node connects/disconnects from ZooKeeper repeatedly and disables the node
+* Configuration management
+    * Allows one to set configuration via the admin API at various granularities like cluster, instance, resource, partition
+    * Configurations can be dynamically changed
+    * The server is notified when configurations change
 
 
-##### checkout and build
+### Checkout and Build
 
 ```
 git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
 cd incubator-helix
+git checkout tags/helix-0.6.2-incubating
 mvn clean install package -DskipTests
 cd recipes/service-discovery/target/service-discovery-pkg/bin
 chmod +x *
 ```
 
-##### start zookeeper
+### Start ZooKeeper
 
 ```
 ./start-standalone-zookeeper 2199
 ```
 
-#### Run the demo
+### Run the Demo
 
 ```
 ./service-discovery-demo.sh
 ```
 
-#### Output
+### Output
 
 ```
 START:Service discovery demo mode:WATCH
@@ -186,6 +181,4 @@ START:Service discovery demo mode:NONE
 	Registering service:host.x.y.z_12000
 END:Service discovery demo mode:NONE
 =============================================
-
 ```
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/0.6.2-incubating/src/site/markdown/recipes/task_dag_execution.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/recipes/task_dag_execution.md b/site-releases/0.6.2-incubating/src/site/markdown/recipes/task_dag_execution.md
index f0474e4..f6bfde5 100644
--- a/site-releases/0.6.2-incubating/src/site/markdown/recipes/task_dag_execution.md
+++ b/site-releases/0.6.2-incubating/src/site/markdown/recipes/task_dag_execution.md
@@ -17,20 +17,18 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# Distributed task execution
+Distributed Task Execution
+--------------------------
 
-
-This recipe is intended to demonstrate how task dependencies can be modeled using primitives provided by Helix. A given task can be run with desired parallelism and will start only when up-stream dependencies are met. The demo executes the task DAG described below using 10 workers. Although the demo starts the workers as threads, there is no requirement that all the workers need to run in the same process. In reality, these workers run on many different boxes on a cluster.  When worker fails, Helix takes care of 
-re-assigning a failed task partition to a new worker. 
+This recipe is intended to demonstrate how task dependencies can be modeled using primitives provided by Helix. A given task can be run with the desired amount of parallelism and will start only when upstream dependencies are met. The demo executes the task DAG described below using 10 workers. Although the demo starts the workers as threads, there is no requirement that all the workers need to run in the same process. In reality, these workers run on many different boxes in a cluster. When a worker fails, Helix takes care of re-assigning a failed task partition to a new worker.
 
 Redis is used as a result store. Any other suitable implementation for TaskResultStore can be plugged in.
 
-### Workflow 
-
+### Workflow
 
-#### Input 
+#### Input
 
-10000 impression events and around 100 click events are pre-populated in task result store (redis). 
+10000 impression events and around 100 click events are pre-populated in the task result store (Redis).
 
 * **ImpEvent**: format: id,isFraudulent,country,gender
 
@@ -55,45 +53,45 @@ Redis is used as a result store. Any other suitable implementation for TaskResul
 + **report**: Reads from all aggregates generated by previous stages and prints them. Depends on: **impCountsByGender, impCountsByCountry, clickCountsByGender, clickCountsByCountry**
 
 
-### Creating DAG
+### Creating a DAG
 
-Each stage is represented as a Node along with the upstream dependency and desired parallelism.  Each stage is modelled as a resource in Helix using OnlineOffline state model. As part of Offline to Online transition, we watch the external view of upstream resources and wait for them to transition to online state. See Task.java for additional info.
+Each stage is represented as a Node along with the upstream dependency and desired parallelism. Each stage is modeled as a resource in Helix using the OnlineOffline state model. As part of an Offline to Online transition, we watch the external view of upstream resources and wait for them to transition to the online state (a rough sketch of this check follows the DAG definition below). See Task.java for additional info.
 
 ```
-
-  Dag dag = new Dag();
-  dag.addNode(new Node("filterImps", 10, ""));
-  dag.addNode(new Node("filterClicks", 5, ""));
-  dag.addNode(new Node("impClickJoin", 10, "filterImps,filterClicks"));
-  dag.addNode(new Node("impCountsByGender", 10, "filterImps"));
-  dag.addNode(new Node("impCountsByCountry", 10, "filterImps"));
-  dag.addNode(new Node("clickCountsByGender", 5, "impClickJoin"));
-  dag.addNode(new Node("clickCountsByCountry", 5, "impClickJoin"));		
-  dag.addNode(new Node("report",1,"impCountsByGender,impCountsByCountry,clickCountsByGender,clickCountsByCountry"));
-
-
+Dag dag = new Dag();
+dag.addNode(new Node("filterImps", 10, ""));
+dag.addNode(new Node("filterClicks", 5, ""));
+dag.addNode(new Node("impClickJoin", 10, "filterImps,filterClicks"));
+dag.addNode(new Node("impCountsByGender", 10, "filterImps"));
+dag.addNode(new Node("impCountsByCountry", 10, "filterImps"));
+dag.addNode(new Node("clickCountsByGender", 5, "impClickJoin"));
+dag.addNode(new Node("clickCountsByCountry", 5, "impClickJoin"));
+dag.addNode(new Node("report",1,"impCountsByGender,impCountsByCountry,clickCountsByGender,clickCountsByCountry"));
 ```
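+
+The waiting logic in the Offline to Online transition boils down to polling the external view of each upstream resource until all of its partitions are online. A rough sketch of that check (the method and variable names are illustrative, not the recipe's actual Task.java code):
+
+```
+boolean isUpstreamOnline(HelixAdmin admin, String clusterName, String upstreamResource) {
+  ExternalView view = admin.getResourceExternalView(clusterName, upstreamResource);
+  if (view == null) {
+    return false; // the upstream stage has not started yet
+  }
+  for (String partition : view.getPartitionSet()) {
+    Map<String, String> stateMap = view.getStateMap(partition);
+    if (stateMap == null || !stateMap.containsValue("ONLINE")) {
+      return false; // at least one upstream partition is not online yet
+    }
+  }
+  return true;
+}
+```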
 
-### DEMO
+### Demo
 
 In order to run the demo, use the following steps
 
 See http://redis.io/topics/quickstart on how to install redis server
 
 ```
-
 Start redis e.g:
 ./redis-server --port 6379
 
 git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix
+git checkout helix-0.6.2-incubating
 cd recipes/task-execution
 mvn clean install package -DskipTests
 cd target/task-execution-pkg/bin
 chmod +x task-execution-demo.sh
-./task-execution-demo.sh 2181 localhost 6379 
+./task-execution-demo.sh 2181 localhost 6379
 
 ```
 
+Here's a visual representation of the DAG.
+
 ```
 
 
@@ -130,7 +128,7 @@ chmod +x task-execution-demo.sh
 
 (credit for above ascii art: http://www.asciiflow.com)
 
-### OUTPUT
+#### Output
 
 ```
 Done populating dummy data
@@ -198,7 +196,4 @@ Impression counts per gender
 {F=3325, UNKNOWN=3259, M=3296}
 Click counts per gender
 {F=33, UNKNOWN=32, M=35}
-
-
 ```
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/0.6.2-incubating/src/site/markdown/tutorial_admin.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/tutorial_admin.md b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_admin.md
index 9c24b43..01ddb85 100644
--- a/site-releases/0.6.2-incubating/src/site/markdown/tutorial_admin.md
+++ b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_admin.md
@@ -21,45 +21,46 @@ under the License.
   <title>Tutorial - Admin Operations</title>
 </head>
 
-# [Helix Tutorial](./Tutorial.html): Admin Operations
+## [Helix Tutorial](./Tutorial.html): Admin Operations
 
-Helix provides a set of admin api for cluster management operations. They are supported via:
+Helix provides a set of admin APIs for cluster management operations. They are supported via:
 
-* _Java API_
-* _Commandline interface_
-* _REST interface via helix-admin-webapp_
+* Java API
+* Command Line Interface
+* REST Interface via helix-admin-webapp
 
 ### Java API
 See interface [_org.apache.helix.HelixAdmin_](http://helix.incubator.apache.org/javadocs/0.6.2-incubating/reference/org/apache/helix/HelixAdmin.html)
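+
+For example, the operations exposed by the command line and REST interfaces below can also be performed directly in Java. A brief sketch (the cluster, instance, and resource names are placeholders):
+
+```
+ZkClient zkclient = new ZkClient("localhost:2199", ZkClient.DEFAULT_SESSION_TIMEOUT,
+    ZkClient.DEFAULT_CONNECTION_TIMEOUT, new ZNRecordSerializer());
+HelixAdmin admin = new ZKHelixAdmin(zkclient);
+
+admin.addCluster("MyCluster", true);                                   // add a cluster
+admin.addInstance("MyCluster", new InstanceConfig("localhost_1001"));  // add a node
+admin.addResource("MyCluster", "MyDB", 8, "MasterSlave");              // add a resource with 8 partitions
+admin.rebalance("MyCluster", "MyDB", 3);                               // rebalance with 3 replicas
+```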
 
-### Command-line interface
-The command-line tool comes with helix-core package:
+### Command Line Interface
+The command line tool comes with the helix-core package:
 
-Get the command-line tool:
+Get the command line tool:
 
-``` 
-  - git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
-  - cd incubator-helix
-  - ./build
-  - cd helix-core/target/helix-core-pkg/bin
-  - chmod +x *.sh
+```
+git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix
+git checkout tags/helix-0.6.2-incubating
+./build
+cd helix-core/target/helix-core-pkg/bin
+chmod +x *.sh
 ```
 
 Get help:
 
 ```
-  - ./helix-admin.sh --help
+./helix-admin.sh --help
 ```
 
 All other commands have this form:
 
 ```
-  ./helix-admin.sh --zkSvr <ZookeeperServerAddress> <command> <parameters>
+./helix-admin.sh --zkSvr <ZookeeperServerAddress> <command> <parameters>
 ```
 
-Admin commands and brief description:
+#### Supported Commands
 
-| Command syntax | Description |
+| Command Syntax | Description |
 | -------------- | ----------- |
 | _\-\-activateCluster \<clusterName controllerCluster true/false\>_ | Enable/disable a cluster in distributed controller mode |
 | _\-\-addCluster \<clusterName\>_ | Add a new cluster |
@@ -102,17 +103,18 @@ Admin commands and brief description:
 | _\-\-swapInstance \<clusterName oldInstance newInstance\>_ | Swap an old instance with a new instance |
 | _\-\-zkSvr \<ZookeeperServerAddress\>_ | Provide zookeeper address |
 
-### REST interface
+### REST Interface
 
 The REST interface comes with the helix-admin-webapp package:
 
-``` 
-  - git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
-  - cd incubator-helix 
-  - ./build
-  - cd helix-admin-webapp/target/helix-admin-webapp-pkg/bin
-  - chmod +x *.sh
-  - ./run-rest-admin.sh --zkSvr <zookeeperAddress> --port <port> // make sure zookeeper is running
+```
+git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix
+git checkout tags/helix-0.6.2-incubating
+./build
+cd helix-admin-webapp/target/helix-admin-webapp-pkg/bin
+chmod +x *.sh
+./run-rest-admin.sh --zkSvr <zookeeperAddress> --port <port> # make sure ZooKeeper is running
 ```
 
 #### URL and supported methods
@@ -121,75 +123,75 @@ The REST interface comes wit helix-admin-webapp package:
     * List all clusters
 
     ```
-      curl http://localhost:8100/clusters
+    curl http://localhost:8100/clusters
     ```
 
     * Add a cluster
-    
+
     ```
-      curl -d 'jsonParameters={"command":"addCluster","clusterName":"MyCluster"}' -H "Content-Type: application/json" http://localhost:8100/clusters
+    curl -d 'jsonParameters={"command":"addCluster","clusterName":"MyCluster"}' -H "Content-Type: application/json" http://localhost:8100/clusters
     ```
 
 * _/clusters/{clusterName}_
     * List cluster information
-    
+
     ```
-      curl http://localhost:8100/clusters/MyCluster
+    curl http://localhost:8100/clusters/MyCluster
     ```
 
     * Enable/disable a cluster in distributed controller mode
-    
+
     ```
-      curl -d 'jsonParameters={"command":"activateCluster","grandCluster":"MyControllerCluster","enabled":"true"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster
+    curl -d 'jsonParameters={"command":"activateCluster","grandCluster":"MyControllerCluster","enabled":"true"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster
     ```
 
     * Remove a cluster
-    
+
     ```
-      curl -X DELETE http://localhost:8100/clusters/MyCluster
+    curl -X DELETE http://localhost:8100/clusters/MyCluster
     ```
-    
+
 * _/clusters/{clusterName}/resourceGroups_
     * List all resources in a cluster
-    
+
     ```
-      curl http://localhost:8100/clusters/MyCluster/resourceGroups
+    curl http://localhost:8100/clusters/MyCluster/resourceGroups
     ```
-    
+
     * Add a resource to cluster
-    
+
     ```
-      curl -d 'jsonParameters={"command":"addResource","resourceGroupName":"MyDB","partitions":"8","stateModelDefRef":"MasterSlave" }' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups
+    curl -d 'jsonParameters={"command":"addResource","resourceGroupName":"MyDB","partitions":"8","stateModelDefRef":"MasterSlave" }' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups
     ```
 
 * _/clusters/{clusterName}/resourceGroups/{resourceName}_
     * List resource information
-    
+
     ```
-      curl http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB
+    curl http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB
     ```
-    
+
     * Drop a resource
-    
+
     ```
-      curl -X DELETE http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB
+    curl -X DELETE http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB
     ```
 
     * Reset all erroneous partitions of a resource
-    
+
     ```
-      curl -d 'jsonParameters={"command":"resetResource"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB
+    curl -d 'jsonParameters={"command":"resetResource"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB
     ```
 
 * _/clusters/{clusterName}/resourceGroups/{resourceName}/idealState_
     * Rebalance a resource
-    
+
     ```
-      curl -d 'jsonParameters={"command":"rebalance","replicas":"3"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/idealState
+    curl -d 'jsonParameters={"command":"rebalance","replicas":"3"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/idealState
     ```
 
     * Add an ideal state
-    
+
     ```
     echo jsonParameters={
     "command":"addIdealState"
@@ -215,193 +217,192 @@ The REST interface comes wit helix-admin-webapp package:
     > newIdealState.json
     curl -d @'./newIdealState.json' -H 'Content-Type: application/json' http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/idealState
     ```
-    
+
     * Add resource property
-    
+
     ```
-      curl -d 'jsonParameters={"command":"addResourceProperty","REBALANCE_TIMER_PERIOD":"500"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/idealState
+    curl -d 'jsonParameters={"command":"addResourceProperty","REBALANCE_TIMER_PERIOD":"500"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/idealState
     ```
-    
+
 * _/clusters/{clusterName}/resourceGroups/{resourceName}/externalView_
     * Show resource external view
-    
+
     ```
-      curl http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/externalView
+    curl http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/externalView
     ```
 * _/clusters/{clusterName}/instances_
     * List all instances
-    
+
     ```
-      curl http://localhost:8100/clusters/MyCluster/instances
+    curl http://localhost:8100/clusters/MyCluster/instances
     ```
 
     * Add an instance
-    
+
     ```
     curl -d 'jsonParameters={"command":"addInstance","instanceNames":"localhost_1001"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances
     ```
-    
+
     * Swap an instance
-    
+
     ```
-      curl -d 'jsonParameters={"command":"swapInstance","oldInstance":"localhost_1001", "newInstance":"localhost_1002"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances
+    curl -d 'jsonParameters={"command":"swapInstance","oldInstance":"localhost_1001", "newInstance":"localhost_1002"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances
     ```
 * _/clusters/{clusterName}/instances/{instanceName}_
     * Show instance information
-    
+
     ```
-      curl http://localhost:8100/clusters/MyCluster/instances/localhost_1001
+    curl http://localhost:8100/clusters/MyCluster/instances/localhost_1001
     ```
-    
+
     * Enable/disable an instance
-    
+
     ```
-      curl -d 'jsonParameters={"command":"enableInstance","enabled":"false"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
+    curl -d 'jsonParameters={"command":"enableInstance","enabled":"false"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
     ```
 
     * Drop an instance
-    
+
     ```
-      curl -X DELETE http://localhost:8100/clusters/MyCluster/instances/localhost_1001
+    curl -X DELETE http://localhost:8100/clusters/MyCluster/instances/localhost_1001
     ```
-    
+
     * Disable/enable partitions on an instance
-    
+
     ```
-      curl -d 'jsonParameters={"command":"enablePartition","resource": "MyDB","partition":"MyDB_0",  "enabled" : "false"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
+    curl -d 'jsonParameters={"command":"enablePartition","resource": "MyDB","partition":"MyDB_0",  "enabled" : "false"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
     ```
-    
+
     * Reset an erroneous partition on an instance
-    
+
     ```
-      curl -d 'jsonParameters={"command":"resetPartition","resource": "MyDB","partition":"MyDB_0"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
+    curl -d 'jsonParameters={"command":"resetPartition","resource": "MyDB","partition":"MyDB_0"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
     ```
 
     * Reset all erroneous partitions on an instance
-    
+
     ```
-      curl -d 'jsonParameters={"command":"resetInstance"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
+    curl -d 'jsonParameters={"command":"resetInstance"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
     ```
 
 * _/clusters/{clusterName}/configs_
     * Get user cluster level config
-    
+
     ```
-      curl http://localhost:8100/clusters/MyCluster/configs/cluster
+    curl http://localhost:8100/clusters/MyCluster/configs/cluster
     ```
-    
+
     * Set user cluster level config
-    
+
     ```
-      curl -d 'jsonParameters={"command":"setConfig","configs":"key1=value1,key2=value2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/cluster
+    curl -d 'jsonParameters={"command":"setConfig","configs":"key1=value1,key2=value2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/cluster
     ```
 
     * Remove user cluster level config
-    
+
     ```
     curl -d 'jsonParameters={"command":"removeConfig","configs":"key1,key2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/cluster
     ```
-    
+
     * Get/set/remove user participant level config
-    
+
     ```
-      curl -d 'jsonParameters={"command":"setConfig","configs":"key1=value1,key2=value2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/participant/localhost_1001
+    curl -d 'jsonParameters={"command":"setConfig","configs":"key1=value1,key2=value2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/participant/localhost_1001
     ```
-    
+
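+    The corresponding read and remove calls presumably follow the pattern of the cluster-level examples above; the exact paths here are an assumption, so verify them against your helix-admin-webapp deployment:
+
+    ```
+    curl http://localhost:8100/clusters/MyCluster/configs/participant/localhost_1001
+    curl -d 'jsonParameters={"command":"removeConfig","configs":"key1,key2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/participant/localhost_1001
+    ```
+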
     * Get/set/remove resource level config
-    
+
     ```
     curl -d 'jsonParameters={"command":"setConfig","configs":"key1=value1,key2=value2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/resource/MyDB
     ```
 
 * _/clusters/{clusterName}/controller_
     * Show controller information
-    
+
     ```
-      curl http://localhost:8100/clusters/MyCluster/Controller
+    curl http://localhost:8100/clusters/MyCluster/Controller
     ```
-    
+
     * Enable/disable cluster
-    
+
     ```
-      curl -d 'jsonParameters={"command":"enableCluster","enabled":"false"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/Controller
+    curl -d 'jsonParameters={"command":"enableCluster","enabled":"false"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/Controller
     ```
 
 * _/zkPath/{path}_
     * Get information for zookeeper path
-    
+
     ```
-      curl http://localhost:8100/zkPath/MyCluster
+    curl http://localhost:8100/zkPath/MyCluster
     ```
 
 * _/clusters/{clusterName}/StateModelDefs_
     * Show all state model definitions
-    
+
     ```
-      curl http://localhost:8100/clusters/MyCluster/StateModelDefs
+    curl http://localhost:8100/clusters/MyCluster/StateModelDefs
     ```
 
     * Add a state model definition
-    
-    ```
-      echo jsonParameters={
-        "command":"addStateModelDef"
-       }&newStateModelDef={
-          "id" : "OnlineOffline",
-          "simpleFields" : {
-            "INITIAL_STATE" : "OFFLINE"
-          },
-          "listFields" : {
-            "STATE_PRIORITY_LIST" : [ "ONLINE", "OFFLINE", "DROPPED" ],
-            "STATE_TRANSITION_PRIORITYLIST" : [ "OFFLINE-ONLINE", "ONLINE-OFFLINE", "OFFLINE-DROPPED" ]
-          },
-          "mapFields" : {
-            "DROPPED.meta" : {
-              "count" : "-1"
-            },
-            "OFFLINE.meta" : {
-              "count" : "-1"
-            },
-            "OFFLINE.next" : {
-              "DROPPED" : "DROPPED",
-              "ONLINE" : "ONLINE"
-            },
-            "ONLINE.meta" : {
-              "count" : "R"
-            },
-            "ONLINE.next" : {
-              "DROPPED" : "OFFLINE",
-              "OFFLINE" : "OFFLINE"
-            }
-          }
+
+    ```
+    echo jsonParameters={
+      "command":"addStateModelDef"
+    }&newStateModelDef={
+      "id" : "OnlineOffline",
+      "simpleFields" : {
+        "INITIAL_STATE" : "OFFLINE"
+      },
+      "listFields" : {
+        "STATE_PRIORITY_LIST" : [ "ONLINE", "OFFLINE", "DROPPED" ],
+        "STATE_TRANSITION_PRIORITYLIST" : [ "OFFLINE-ONLINE", "ONLINE-OFFLINE", "OFFLINE-DROPPED" ]
+      },
+      "mapFields" : {
+        "DROPPED.meta" : {
+          "count" : "-1"
+        },
+        "OFFLINE.meta" : {
+          "count" : "-1"
+        },
+        "OFFLINE.next" : {
+          "DROPPED" : "DROPPED",
+          "ONLINE" : "ONLINE"
+        },
+        "ONLINE.meta" : {
+          "count" : "R"
+        },
+        "ONLINE.next" : {
+          "DROPPED" : "OFFLINE",
+          "OFFLINE" : "OFFLINE"
         }
-        > newStateModelDef.json
-        curl -d @'./untitled.txt' -H 'Content-Type: application/json' http://localhost:8100/clusters/MyCluster/StateModelDefs
+      }
+    }
+    > newStateModelDef.json
+    curl -d @'./newStateModelDef.json' -H 'Content-Type: application/json' http://localhost:8100/clusters/MyCluster/StateModelDefs
     ```
 
 * _/clusters/{clusterName}/StateModelDefs/{stateModelDefName}_
     * Show a state model definition
-    
+
     ```
-      curl http://localhost:8100/clusters/MyCluster/StateModelDefs/OnlineOffline
+    curl http://localhost:8100/clusters/MyCluster/StateModelDefs/OnlineOffline
     ```
 
 * _/clusters/{clusterName}/constraints/{constraintType}_
     * Show all constraints
-    
+
     ```
-      curl http://localhost:8100/clusters/MyCluster/constraints/MESSAGE_CONSTRAINT
+    curl http://localhost:8100/clusters/MyCluster/constraints/MESSAGE_CONSTRAINT
     ```
 
     * Set a constraint
-    
+
     ```
-       curl -d 'jsonParameters={"constraintAttributes":"RESOURCE=MyDB,CONSTRAINT_VALUE=1"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/constraints/MESSAGE_CONSTRAINT/MyConstraint
+    curl -d 'jsonParameters={"constraintAttributes":"RESOURCE=MyDB,CONSTRAINT_VALUE=1"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/constraints/MESSAGE_CONSTRAINT/MyConstraint
     ```
-    
+
     * Remove a constraint
-    
+
     ```
-      curl -X DELETE http://localhost:8100/clusters/MyCluster/constraints/MESSAGE_CONSTRAINT/MyConstraint
+    curl -X DELETE http://localhost:8100/clusters/MyCluster/constraints/MESSAGE_CONSTRAINT/MyConstraint
     ```
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/0.6.2-incubating/src/site/markdown/tutorial_controller.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/tutorial_controller.md b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_controller.md
index 8e7e7ad..ad782e5 100644
--- a/site-releases/0.6.2-incubating/src/site/markdown/tutorial_controller.md
+++ b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_controller.md
@@ -21,7 +21,74 @@ under the License.
   <title>Tutorial - Controller</title>
 </head>
 
-# [Helix Tutorial](./Tutorial.html): Controller
+## [Helix Tutorial](./Tutorial.html): Controller
+
+Next, let\'s implement the controller.  This is the brain of the cluster.  Helix makes sure there is exactly one active controller running the cluster.
+
+### Start a Connection
+
+The Helix manager requires the following parameters:
+
+* clusterName: A logical name to represent the group of nodes
+* instanceName: A logical name of the process creating the manager instance. Generally this is host:port
+* instanceType: Type of the process. This can be one of the following types; in this case, use CONTROLLER:
+    * CONTROLLER: Process that controls the cluster; any number of controllers can be started, but only one will be active at any given time
+    * PARTICIPANT: Process that performs the actual task in the distributed system
+    * SPECTATOR: Process that observes the changes in the cluster
+    * ADMIN: To carry out system admin actions
+* zkConnectString: Connection string to ZooKeeper. This is of the form host1:port1,host2:port2,host3:port3
+
+```
+manager = HelixManagerFactory.getZKHelixManager(clusterName,
+                                                instanceName,
+                                                instanceType,
+                                                zkConnectString);
+```
+
+### Controller Code
+
+The Controller needs to know about all changes in the cluster. Helix takes care of this with the default implementation.
+If you need additional functionality, see GenericHelixController on how to configure the pipeline.
+
+```
+manager = HelixManagerFactory.getZKHelixManager(clusterName,
+                                                instanceName,
+                                                InstanceType.CONTROLLER,
+                                                zkConnectString);
+manager.connect();
+GenericHelixController controller = new GenericHelixController();
+manager.addConfigChangeListener(controller);
+manager.addLiveInstanceChangeListener(controller);
+manager.addIdealStateChangeListener(controller);
+manager.addExternalViewChangeListener(controller);
+manager.addControllerListener(controller);
+```
+The snippet above shows how the controller is started. You can also start the controller using the command-line interface.
+
+```
+cd helix/helix-core/target/helix-core-pkg/bin
+./run-helix-controller.sh --zkSvr <Zookeeper ServerAddress (Required)>  --cluster <Cluster name (Required)>
+```
+
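+For example, to run the controller against the cluster used elsewhere in this guide (the ZooKeeper address below is illustrative):
+
+```
+cd helix/helix-core/target/helix-core-pkg/bin
+./run-helix-controller.sh --zkSvr localhost:2181 --cluster MyCluster
+```
+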
+### Controller Deployment Modes
+
+Helix provides multiple options to deploy the controller.
+
+#### STANDALONE
+
+The Controller can be started as a separate process to manage a cluster. This is the recommended approach. However, since a single controller is a single point of failure, multiple controller processes are required for reliability. Even if multiple controllers are running, only one will be actively managing the cluster at any time, as decided by a leader-election process. If the leader fails, another controller takes over managing the cluster.
+
+Even though we recommend this method of deployment, it has the drawback of having to manage an additional service for each cluster. See the Controller as a Service option.
+
+#### EMBEDDED
+
+If setting up a separate controller process is not viable, then it is possible to embed the controller as a library in each of the participants.
+
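+A minimal sketch of this approach, reusing the manager and GenericHelixController APIs from the controller snippet above (this is not a separate Helix API, just the same calls made from inside the participant process):
+
+```
+// Inside the participant process, open a second Helix connection of type CONTROLLER.
+// Helix leader election ensures that only one embedded controller is active at any time.
+HelixManager controllerManager =
+    HelixManagerFactory.getZKHelixManager(clusterName,
+                                          instanceName,
+                                          InstanceType.CONTROLLER,
+                                          zkConnectString);
+controllerManager.connect();
+GenericHelixController controller = new GenericHelixController();
+controllerManager.addConfigChangeListener(controller);
+controllerManager.addLiveInstanceChangeListener(controller);
+controllerManager.addIdealStateChangeListener(controller);
+controllerManager.addExternalViewChangeListener(controller);
+controllerManager.addControllerListener(controller);
+```
+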
+#### CONTROLLER AS A SERVICE
+
+One of the cool features we added in Helix is the ability to use a set of controllers to manage a large number of clusters.
+
+For example, if you have X clusters to be managed, instead of deploying X*3 controllers (3 controllers per cluster for fault tolerance), you can deploy just 3 controllers.  Each controller can manage X/3 clusters.  If any controller fails, the remaining two will each manage X/2 clusters.
 
 Next, let\'s implement the controller.  This is the brain of the cluster.  Helix makes sure there is exactly one active controller running the cluster.
 
@@ -29,15 +96,15 @@ Next, let\'s implement the controller.  This is the brain of the cluster.  Helix
 
 
 It requires the following parameters:
- 
+
 * clusterName: A logical name to represent the group of nodes
 * instanceName: A logical name of the process creating the manager instance. Generally this is host:port.
 * instanceType: Type of the process. This can be one of the following types, in this case use CONTROLLER:
     * CONTROLLER: Process that controls the cluster, any number of controllers can be started but only one will be active at any given time.
-    * PARTICIPANT: Process that performs the actual task in the distributed system. 
+    * PARTICIPANT: Process that performs the actual task in the distributed system.
     * SPECTATOR: Process that observes the changes in the cluster.
     * ADMIN: To carry out system admin actions.
-* zkConnectString: Connection string to Zookeeper. This is of the form host1:port1,host2:port2,host3:port3. 
+* zkConnectString: Connection string to Zookeeper. This is of the form host1:port1,host2:port2,host3:port3.
 
 ```
       manager = HelixManagerFactory.getZKHelixManager(clusterName,
@@ -65,13 +132,13 @@ If you need additional functionality, see GenericHelixController on how to confi
      manager.addControllerListener(controller);
 ```
 The snippet above shows how the controller is started. You can also start the controller using command line interface.
-  
+
 ```
 cd helix/helix-core/target/helix-core-pkg/bin
 ./run-helix-controller.sh --zkSvr <Zookeeper ServerAddress (Required)>  --cluster <Cluster name (Required)>
 ```
 
-### Controller deployment modes
+### Controller Deployment Modes
 
 Helix provides multiple options to deploy the controller.
 
@@ -87,8 +154,6 @@ If setting up a separate controller process is not viable, then it is possible t
 
 #### CONTROLLER AS A SERVICE
 
-One of the cool features we added in Helix is to use a set of controllers to manage a large number of clusters. 
+One of the cool features we added in Helix is to use a set of controllers to manage a large number of clusters.
 
 For example if you have X clusters to be managed, instead of deploying X*3 (3 controllers for fault tolerance) controllers for each cluster, one can deploy just 3 controllers.  Each controller can manage X/3 clusters.  If any controller fails, the remaining two will manage X/2 clusters.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/0.6.2-incubating/src/site/markdown/tutorial_health.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/tutorial_health.md b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_health.md
index e1a7f3c..03b1dcc 100644
--- a/site-releases/0.6.2-incubating/src/site/markdown/tutorial_health.md
+++ b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_health.md
@@ -21,15 +21,15 @@ under the License.
   <title>Tutorial - Customizing Heath Checks</title>
 </head>
 
-# [Helix Tutorial](./Tutorial.html): Customizing Health Checks
+## [Helix Tutorial](./Tutorial.html): Customizing Health Checks
 
-In this chapter, we\'ll learn how to customize the health check, based on metrics of your distributed system.  
+In this chapter, we\'ll learn how to customize health checks based on metrics of your distributed system.
 
 ### Health Checks
 
 Note: _this is currently in development mode, not yet ready for production._
 
-Helix provides the ability for each node in the system to report health metrics on a periodic basis. 
+Helix provides the ability for each node in the system to report health metrics on a periodic basis.
 
 Helix supports multiple ways to aggregate these metrics:
 
@@ -40,7 +40,7 @@ Helix supports multiple ways to aggregate these metrics:
 
 Helix persists the aggregated value only.
 
-Applications can define a threshold on the aggregate values according to the SLAs, and when the SLA is violated Helix will fire an alert. 
+Applications can define a threshold on the aggregate values according to the SLAs, and when the SLA is violated Helix will fire an alert.
 Currently Helix only fires an alert, but in a future release we plan to use these metrics to either mark the node dead or load balance the partitions.
 This feature will be valuable for distributed systems that support multi-tenancy and have a large variation in work load patterns.  In addition, this can be used to detect skewed partitions (hotspots) and rebalance the cluster.
 

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/0.6.2-incubating/src/site/markdown/tutorial_messaging.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/tutorial_messaging.md b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_messaging.md
index e1f0385..15eb99f 100644
--- a/site-releases/0.6.2-incubating/src/site/markdown/tutorial_messaging.md
+++ b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_messaging.md
@@ -21,51 +21,50 @@ under the License.
   <title>Tutorial - Messaging</title>
 </head>
 
-# [Helix Tutorial](./Tutorial.html): Messaging
+## [Helix Tutorial](./Tutorial.html): Messaging
 
-In this chapter, we\'ll learn about messaging, a convenient feature in Helix for sending messages between nodes of a cluster.  This is an interesting feature which is quite useful in practice. It is common that nodes in a distributed system require a mechanism to interact with each other.  
+In this chapter, we\'ll learn about messaging, a convenient feature in Helix for sending messages between nodes of a cluster.  This is an interesting feature that is quite useful in practice. It is common that nodes in a distributed system require a mechanism to interact with each other.
 
 ### Example: Bootstrapping a Replica
 
 Consider a search system  where the index replica starts up and it does not have an index. A typical solution is to get the index from a common location, or to copy the index from another replica.
 
-Helix provides a messaging API for intra-cluster communication between nodes in the system.  Helix provides a mechanism to specify the message recipient in terms of resource, partition, and state rather than specifying hostnames.  Helix ensures that the message is delivered to all of the required recipients. In this particular use case, the instance can specify the recipient criteria as all replicas of the desired partition to bootstrap.
-Since Helix is aware of the global state of the system, it can send the message to appropriate nodes. Once the nodes respond, Helix provides the bootstrapping replica with all the responses.
+Helix provides a messaging API for intra-cluster communication between nodes in the system.  This API provides a mechanism to specify the message recipient in terms of resource, partition, and state rather than specifying hostnames.  Helix ensures that the message is delivered to all of the required recipients. In this particular use case, the instance can specify the recipient criteria as all replicas of the desired partition to bootstrap.
+Since Helix is aware of the global state of the system, it can send the message to the appropriate nodes. Once the nodes respond, Helix provides the bootstrapping replica with all the responses.
 
 This is a very generic API and can also be used to schedule various periodic tasks in the cluster, such as data backups, log cleanup, etc.
 System Admins can also perform ad-hoc tasks, such as on-demand backups or a system command (such as rm -rf ;) across all nodes of the cluster
 
 ```
-      ClusterMessagingService messagingService = manager.getMessagingService();
-
-      // Construct the Message
-      Message requestBackupUriRequest = new Message(
-          MessageType.USER_DEFINE_MSG, UUID.randomUUID().toString());
-      requestBackupUriRequest
-          .setMsgSubType(BootstrapProcess.REQUEST_BOOTSTRAP_URL);
-      requestBackupUriRequest.setMsgState(MessageState.NEW);
-
-      // Set the Recipient criteria: all nodes that satisfy the criteria will receive the message
-      Criteria recipientCriteria = new Criteria();
-      recipientCriteria.setInstanceName("%");
-      recipientCriteria.setRecipientInstanceType(InstanceType.PARTICIPANT);
-      recipientCriteria.setResource("MyDB");
-      recipientCriteria.setPartition("");
-
-      // Should be processed only by process(es) that are active at the time of sending the message
-      //   This means if the recipient is restarted after message is sent, it will not be processe.
-      recipientCriteria.setSessionSpecific(true);
-
-      // wait for 30 seconds
-      int timeout = 30000;
-
-      // the handler that will be invoked when any recipient responds to the message.
-      BootstrapReplyHandler responseHandler = new BootstrapReplyHandler();
-
-      // this will return only after all recipients respond or after timeout
-      int sentMessageCount = messagingService.sendAndWait(recipientCriteria,
-          requestBackupUriRequest, responseHandler, timeout);
+ClusterMessagingService messagingService = manager.getMessagingService();
+
+// Construct the Message
+Message requestBackupUriRequest = new Message(
+    MessageType.USER_DEFINE_MSG, UUID.randomUUID().toString());
+requestBackupUriRequest
+    .setMsgSubType(BootstrapProcess.REQUEST_BOOTSTRAP_URL);
+requestBackupUriRequest.setMsgState(MessageState.NEW);
+
+// Set the Recipient criteria: all nodes that satisfy the criteria will receive the message
+Criteria recipientCriteria = new Criteria();
+recipientCriteria.setInstanceName("%");
+recipientCriteria.setRecipientInstanceType(InstanceType.PARTICIPANT);
+recipientCriteria.setResource("MyDB");
+recipientCriteria.setPartition("");
+
+// Should be processed only by process(es) that are active at the time of sending the message
+// This means that if the recipient is restarted after the message is sent, it will not be processed.
+recipientCriteria.setSessionSpecific(true);
+
+// wait for 30 seconds
+int timeout = 30000;
+
+// the handler that will be invoked when any recipient responds to the message.
+BootstrapReplyHandler responseHandler = new BootstrapReplyHandler();
+
+// this will return only after all recipients respond or after timeout
+int sentMessageCount = messagingService.sendAndWait(recipientCriteria,
+    requestBackupUriRequest, responseHandler, timeout);
 ```
 
-See HelixManager.DefaultMessagingService in [Javadocs](http://helix.incubator.apache.org/javadocs/0.6.2-incubating/reference/org/apache/helix/messaging/DefaultMessagingService.html) for more info.
-
+See HelixManager.DefaultMessagingService in the [Javadocs](http://helix.incubator.apache.org/javadocs/0.6.2-incubating/reference/org/apache/helix/messaging/DefaultMessagingService.html) for more information.

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/0.6.2-incubating/src/site/markdown/tutorial_participant.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/tutorial_participant.md b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_participant.md
index d2812da..cb38e45 100644
--- a/site-releases/0.6.2-incubating/src/site/markdown/tutorial_participant.md
+++ b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_participant.md
@@ -21,85 +21,82 @@ under the License.
   <title>Tutorial - Participant</title>
 </head>
 
-# [Helix Tutorial](./Tutorial.html): Participant
+## [Helix Tutorial](./Tutorial.html): Participant
 
-In this chapter, we\'ll learn how to implement a Participant, which is a primary functional component of a distributed system.
+In this chapter, we\'ll learn how to implement a __Participant__, which is a primary functional component of a distributed system.
 
 
-### Start the Helix agent
+### Start a Connection
 
-The Helix agent is a common component that connects each system component with the controller.
+The Helix manager is a common component that connects each system component with the controller.
 
 It requires the following parameters:
- 
+
 * clusterName: A logical name to represent the group of nodes
-* instanceName: A logical name of the process creating the manager instance. Generally this is host:port.
+* instanceName: A logical name of the process creating the manager instance. Generally this is host:port
 * instanceType: Type of the process. This can be one of the following types, in this case, use PARTICIPANT
-    * CONTROLLER: Process that controls the cluster, any number of controllers can be started but only one will be active at any given time.
-    * PARTICIPANT: Process that performs the actual task in the distributed system. 
-    * SPECTATOR: Process that observes the changes in the cluster.
-    * ADMIN: To carry out system admin actions.
-* zkConnectString: Connection string to Zookeeper. This is of the form host1:port1,host2:port2,host3:port3. 
+    * CONTROLLER: Process that controls the cluster; any number of controllers can be started, but only one will be active at any given time
+    * PARTICIPANT: Process that performs the actual task in the distributed system
+    * SPECTATOR: Process that observes the changes in the cluster
+    * ADMIN: To carry out system admin actions
+* zkConnectString: Connection string to ZooKeeper. This is of the form host1:port1,host2:port2,host3:port3
 
-After the Helix manager instance is created, only thing that needs to be registered is the state model factory. 
-The methods of the State Model will be called when controller sends transitions to the Participant.  In this example, we'll use the OnlineOffline factory.  Other options include:
+After the Helix manager instance is created, the only thing that needs to be registered is the state model factory.
+The methods of the state model will be called when the controller sends transitions to the participant.  In this example, we'll use the OnlineOffline factory.  Other options include:
 
 * MasterSlaveStateModelFactory
 * LeaderStandbyStateModelFactory
 * BootstrapHandler
-* _An application defined state model factory_
 
 
 ```
-      manager = HelixManagerFactory.getZKHelixManager(clusterName,
-                                                          instanceName,
-                                                          InstanceType.PARTICIPANT,
-                                                          zkConnectString);
-     StateMachineEngine stateMach = manager.getStateMachineEngine();
-
-     //create a stateModelFactory that returns a statemodel object for each partition. 
-     stateModelFactory = new OnlineOfflineStateModelFactory();     
-     stateMach.registerStateModelFactory(stateModelType, stateModelFactory);
-     manager.connect();
+manager = HelixManagerFactory.getZKHelixManager(clusterName,
+                                                instanceName,
+                                                InstanceType.PARTICIPANT,
+                                                zkConnectString);
+StateMachineEngine stateMach = manager.getStateMachineEngine();
+
+// Create a state model factory that returns a state model object for each partition.
+stateModelFactory = new OnlineOfflineStateModelFactory();
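+// The stateModelType argument is the name of the state model definition this factory handles;
+// for this example it would presumably be "OnlineOffline" (name assumed, matching the factory above).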
+stateMach.registerStateModelFactory(stateModelType, stateModelFactory);
+manager.connect();
 ```
 
+### Example State Model Factory
+
 Helix doesn\'t know what it means to change from OFFLINE\-\-\>ONLINE or ONLINE\-\-\>OFFLINE.  The following code snippet shows where you insert your system logic for these two state transitions.
 
 ```
 public class OnlineOfflineStateModelFactory extends
-        StateModelFactory<StateModel> {
-    @Override
-    public StateModel createNewStateModel(String stateUnitKey) {
-        OnlineOfflineStateModel stateModel = new OnlineOfflineStateModel();
-        return stateModel;
+    StateModelFactory<StateModel> {
+  @Override
+  public StateModel createNewStateModel(String stateUnitKey) {
+    OnlineOfflineStateModel stateModel = new OnlineOfflineStateModel();
+    return stateModel;
+  }
+  @StateModelInfo(states = "{'OFFLINE','ONLINE'}", initialState = "OFFLINE")
+  public static class OnlineOfflineStateModel extends StateModel {
+    @Transition(from = "OFFLINE", to = "ONLINE")
+    public void onBecomeOnlineFromOffline(Message message,
+        NotificationContext context) {
+      System.out.println("OnlineOfflineStateModel.onBecomeOnlineFromOffline()");
+
+      ////////////////////////////////////////////////////////////////////////////////////////////////
+      // Application logic to handle transition                                                     //
+      // For example, you might start a service, run initialization, etc                            //
+      ////////////////////////////////////////////////////////////////////////////////////////////////
     }
-    @StateModelInfo(states = "{'OFFLINE','ONLINE'}", initialState = "OFFINE")
-    public static class OnlineOfflineStateModel extends StateModel {
-
-        @Transition(from = "OFFLINE", to = "ONLINE")
-        public void onBecomeOnlineFromOffline(Message message,
-                NotificationContext context) {
-
-            System.out.println("OnlineOfflineStateModel.onBecomeOnlineFromOffline()");
 
-            ////////////////////////////////////////////////////////////////////////////////////////////////
-            // Application logic to handle transition                                                     //
-            // For example, you might start a service, run initialization, etc                            //
-            ////////////////////////////////////////////////////////////////////////////////////////////////
-        }
+    @Transition(from = "ONLINE", to = "OFFLINE")
+    public void onBecomeOfflineFromOnline(Message message,
+        NotificationContext context) {
+      System.out.println("OnlineOfflineStateModel.onBecomeOfflineFromOnline()");
 
-        @Transition(from = "ONLINE", to = "OFFLINE")
-        public void onBecomeOfflineFromOnline(Message message,
-                NotificationContext context) {
-
-            System.out.println("OnlineOfflineStateModel.onBecomeOfflineFromOnline()");
-
-            ////////////////////////////////////////////////////////////////////////////////////////////////
-            // Application logic to handle transition                                                     //
-            // For example, you might shutdown a service, log this event, or change monitoring settings   //
-            ////////////////////////////////////////////////////////////////////////////////////////////////
-        }
+      ////////////////////////////////////////////////////////////////////////////////////////////////
+      // Application logic to handle transition                                                     //
+      // For example, you might shutdown a service, log this event, or change monitoring settings   //
+      ////////////////////////////////////////////////////////////////////////////////////////////////
     }
+  }
 }
 ```
-

