helix-commits mailing list archives

From kisho...@apache.org
Subject git commit: Adding documentation explaining idealstate in detail, fix from Alexandre
Date Tue, 07 May 2013 03:01:22 GMT
Updated Branches:
  refs/heads/master 2450601a1 -> 3e801a25b


Adding documentation explaining idealstate in detail, fix from Alexandre


Project: http://git-wip-us.apache.org/repos/asf/incubator-helix/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-helix/commit/3e801a25
Tree: http://git-wip-us.apache.org/repos/asf/incubator-helix/tree/3e801a25
Diff: http://git-wip-us.apache.org/repos/asf/incubator-helix/diff/3e801a25

Branch: refs/heads/master
Commit: 3e801a25bb4c0d9693052ea7e3ab64fdd02a6569
Parents: 2450601
Author: Kishore Gopalakrishna <g.kishore@gmail.com>
Authored: Mon May 6 20:01:08 2013 -0700
Committer: Kishore Gopalakrishna <g.kishore@gmail.com>
Committed: Mon May 6 20:01:08 2013 -0700

----------------------------------------------------------------------
 src/site/markdown/Concepts.md |  269 ++++++++++++++++++++++++++++++++++++
 src/site/markdown/Features.md |  179 +++++++++++++++++++-----
 src/site/markdown/index.md    |    2 +
 src/site/site.xml             |    1 +
 4 files changed, 413 insertions(+), 38 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/3e801a25/src/site/markdown/Concepts.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/Concepts.md b/src/site/markdown/Concepts.md
new file mode 100644
index 0000000..9fb8eb7
--- /dev/null
+++ b/src/site/markdown/Concepts.md
@@ -0,0 +1,269 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Helix is based on the simple fact that a given task has the following attributes associated with it:
+
+* Location of the task, for example it runs on Node N1
+* State, for example it is running, stopped, etc.
+
+A task is referred to as a 'resource'. 
+
+### IDEALSTATE
+
+Ideal state simply allows one to map tasks to location and state. A standard way of expressing this in Helix is:
+
+```
+  "TASK_NAME" : {
+    "LOCATION" : "STATE"
+  }
+
+```
+Consider a simple case where you want to launch a task 'myTask' on node 'N1'. The idealstate for this can be expressed as follows:
+
+```
+{
+  "id" : "MyTask",
+  "mapFields" : {
+    "myTask" : {
+      "N1" : "ONLINE",
+    }
+  }
+}
+```
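For illustration, the same record can also be built programmatically. The following is a rough sketch assuming the `ZNRecord` class in helix-core; the resource and node names are simply the ones from the example above.

```
import java.util.HashMap;
import java.util.Map;

import org.apache.helix.ZNRecord;

public class IdealStateRecordSketch {
  public static void main(String[] args) {
    // Corresponds to the "id" field of the JSON above
    ZNRecord idealState = new ZNRecord("MyTask");

    // One "mapFields" entry: task name -> (location -> state)
    Map<String, String> placement = new HashMap<String, String>();
    placement.put("N1", "ONLINE");
    idealState.setMapField("myTask", placement);
  }
}
```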
+#### PARTITION
+
+If this task gets too big to fit on one box, you might want to divide it into subTasks. Each subTask is referred to as a 'partition' in Helix. Let's say you want to divide the task into 3 subTasks/partitions; the idealstate can then be changed as shown below.
+
+'myTask_0', 'myTask_1' and 'myTask_2' are logical names representing the partitions of myTask. Each partition runs on N1, N2 and N3 respectively.
+
+```
+{
+  "id" : "myTask",
+  "simpleFields" : {
+    "NUM_PARTITIONS" : "3"
+  },
+  "mapFields" : {
+    "myTask_0" : {
+      "N1" : "ONLINE"
+    },
+    "myTask_1" : {
+      "N2" : "ONLINE"
+    },
+    "myTask_2" : {
+      "N3" : "ONLINE"
+    }
+  }
+}
+```
+
+#### REPLICA
+
+Partitioning allows one to split the data/task into multiple subparts. But let's say the request rate on each partition increases. The common solution is to have multiple copies of each partition. Helix refers to a copy of a partition as a 'replica'. Adding replicas also increases the availability of the system during failures. One can see this methodology employed often in search systems: the index is divided into shards, and each shard has multiple copies.
+
+Let's say you want to add one additional replica for each task. The idealstate can simply be changed as shown below.
+
+To increase the availability of the system, it is better to place the replicas of a given partition on different nodes.
+
+```
+{
+  "id" : "myIndex",
+  "simpleFields" : {
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+  },
+ "mapFields" : {
+    "myIndex_0" : {
+      "N1" : "ONLINE",
+      "N2" : "ONLINE"
+    },
+    "myIndex_1" : {
+      "N2" : "ONLINE",
+      "N3" : "ONLINE"
+    },
+    "myIndex_2" : {
+      "N3" : "ONLINE",
+      "N1" : "ONLINE"
+    }
+  }
+}
+```
+
+#### STATE 
+
+Now let's take a slightly more complicated scenario where a task represents a database. Unlike an index, which is in general read-only, a database supports both reads and writes. Keeping the data consistent among the replicas is crucial in distributed data stores. One commonly applied technique is to assign one replica as MASTER and the remaining ones as SLAVEs. All writes go to the MASTER and are then replicated to the SLAVEs.
+
+Helix allows one to assign different states to each replica. Let's say you have two MySQL instances N1 and N2, where one will serve as MASTER and the other as SLAVE. The idealstate can be changed to:
+
+```
+{
+  "id" : "myDB",
+  "simpleFields" : {
+    "NUM_PARTITIONS" : "1",
+    "REPLICAS" : "2",
+  },
+  "mapFields" : {
+    "myDB" : {
+      "N1" : "MASTER",
+      "N2" : "SLAVE",
+    }
+  }
+}
+
+```
+
+
+### STATE MACHINE and TRANSITIONS
+
+Idealstate allows one to exactly specify the desired state of the cluster. Given an idealstate, Helix takes up the responsibility of ensuring that the cluster reaches the idealstate. The Helix CONTROLLER reads the idealstate and then commands each PARTICIPANT to take appropriate actions to move from one state to another until the cluster matches the idealstate. These actions are referred to as 'transitions' in Helix.
+
+The next logical question is: how does the CONTROLLER compute the transitions required to get to the idealstate? This is where the finite state machine concept comes in. Helix allows applications to plug in their own FSM. A state machine consists of the following:
+
+* STATE: Describes the role of a replica
+* TRANSITION: An action that allows a replica to move from one STATE to another, thus changing its role.
+
+Here is an example of a MASTERSLAVE state machine:
+
+
+```
+          OFFLINE  | SLAVE  |  MASTER  
+         _____________________________
+        |          |        |         |
+OFFLINE |   N/A    | SLAVE  | SLAVE   |
+        |__________|________|_________|
+        |          |        |         |
+SLAVE   |  OFFLINE |   N/A  | MASTER  |
+        |__________|________|_________|
+        |          |        |         |
+MASTER  | SLAVE    | SLAVE  |   N/A   |
+        |__________|________|_________|
+
+```
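On the PARTICIPANT side, these transitions typically show up as callbacks. The following is an illustrative sketch assuming the `StateModel` and `@Transition` classes in helix-core; the class name and callback bodies are hypothetical.

```
import org.apache.helix.NotificationContext;
import org.apache.helix.model.Message;
import org.apache.helix.participant.statemachine.StateModel;
import org.apache.helix.participant.statemachine.StateModelInfo;
import org.apache.helix.participant.statemachine.Transition;

@StateModelInfo(initialState = "OFFLINE", states = { "OFFLINE", "SLAVE", "MASTER" })
public class MyStateModel extends StateModel {

  @Transition(from = "OFFLINE", to = "SLAVE")
  public void onBecomeSlaveFromOffline(Message message, NotificationContext context) {
    // e.g. open the replica and start catching up from the current MASTER
  }

  @Transition(from = "SLAVE", to = "MASTER")
  public void onBecomeMasterFromSlave(Message message, NotificationContext context) {
    // e.g. start accepting writes
  }

  @Transition(from = "MASTER", to = "SLAVE")
  public void onBecomeSlaveFromMaster(Message message, NotificationContext context) {
    // e.g. stop accepting writes, keep serving reads
  }

  @Transition(from = "SLAVE", to = "OFFLINE")
  public void onBecomeOfflineFromSlave(Message message, NotificationContext context) {
    // e.g. close the replica
  }
}
```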
+
+Helix allows each resource to be associated with one state machine. This means you can have one resource as an index and another as a database in the same cluster. One can associate each resource with a state machine as follows:
+
+```
+{
+  "id" : "myDB",
+  "simpleFields" : {
+    "NUM_PARTITIONS" : "1",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  },
+  "mapFields" : {
+    "myDB" : {
+      "N1" : "MASTER",
+      "N2" : "SLAVE",
+    }
+  }
+}
+
+```
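A PARTICIPANT ties the state model definition named in STATE_MODEL_DEF_REF to its callbacks by registering a factory. The sketch below is illustrative, assuming the `StateMachineEngine` API in helix-core; `MyStateModel` is the class sketched earlier, the factory name is hypothetical, and the exact `createNewStateModel` signature varies across Helix versions.

```
import org.apache.helix.HelixManager;
import org.apache.helix.HelixManagerFactory;
import org.apache.helix.InstanceType;
import org.apache.helix.participant.StateMachineEngine;
import org.apache.helix.participant.statemachine.StateModelFactory;

public class ParticipantLauncherSketch {

  // Hypothetical factory: one state model instance per assigned partition
  // (the createNewStateModel signature differs between Helix versions).
  static class MyStateModelFactory extends StateModelFactory<MyStateModel> {
    @Override
    public MyStateModel createNewStateModel(String partitionName) {
      return new MyStateModel();
    }
  }

  public static void main(String[] args) throws Exception {
    HelixManager manager = HelixManagerFactory.getZKHelixManager(
        "MyCluster", "N1", InstanceType.PARTICIPANT, "localhost:2181");

    // Associate the "MasterSlave" state model definition with our callbacks
    StateMachineEngine engine = manager.getStateMachineEngine();
    engine.registerStateModelFactory("MasterSlave", new MyStateModelFactory());

    manager.connect();
  }
}
```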
+
+### CURRENT STATE
+
+The currentstate of a resource simply represents its actual state at a PARTICIPANT. In the example below:
+
+* 'INSTANCE_NAME': Unique name representing the process.
+* 'SESSION_ID': An id that is automatically assigned every time a process joins the cluster.

+
+```
+{
+  "id":"MyResource"
+  ,"simpleFields":{
+    ,"SESSION_ID":"13d0e34675e0002"
+    ,"INSTANCE_NAME":"node1"
+    ,"STATE_MODEL_DEF":"MasterSlave"
+  }
+  ,"mapFields":{
+    "MyResource_0":{
+      "CURRENT_STATE":"SLAVE"
+    }
+    ,"MyResource_1":{
+      "CURRENT_STATE":"MASTER"
+    }
+    ,"MyResource_2":{
+      "CURRENT_STATE":"MASTER"
+    }
+  }
+}
+```
+Each node in the cluster has its own currentstate.
+
+### EXTERNAL VIEW
+
+In order to communicate with the PARTICIPANTs, external clients need to know the current state of each PARTICIPANT. These external clients are referred to as SPECTATORs. To make the life of a SPECTATOR simple, Helix provides the EXTERNALVIEW, an aggregated view of the current state across all nodes. The EXTERNALVIEW has a similar format to the IDEALSTATE.
+
+```
+{
+  "id":"MyResource",
+  "mapFields":{
+    "MyResource_0":{
+      "N1":"SLAVE",
+      "N2":"MASTER",
+      "N3":"OFFLINE"
+    },
+    "MyResource_1":{
+      "N1":"MASTER",
+      "N2":"SLAVE",
+      "N3":"ERROR"
+    },
+    "MyResource_2":{
+      "N1":"MASTER",
+      "N2":"SLAVE",
+      "N3":"SLAVE"
+    }
+  }
+}
+```
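For illustration, a SPECTATOR usually consumes the EXTERNALVIEW through a routing table rather than parsing it directly. The sketch below assumes the `RoutingTableProvider` helper in helix-core; the cluster, instance and ZooKeeper addresses are placeholders.

```
import java.util.List;

import org.apache.helix.HelixManager;
import org.apache.helix.HelixManagerFactory;
import org.apache.helix.InstanceType;
import org.apache.helix.model.InstanceConfig;
import org.apache.helix.spectator.RoutingTableProvider;

public class SpectatorSketch {
  public static void main(String[] args) throws Exception {
    HelixManager manager = HelixManagerFactory.getZKHelixManager(
        "MyCluster", "spectator1", InstanceType.SPECTATOR, "localhost:2181");
    manager.connect();

    // RoutingTableProvider caches the EXTERNALVIEW and keeps it up to date
    RoutingTableProvider routingTable = new RoutingTableProvider();
    manager.addExternalViewChangeListener(routingTable);

    // Which node currently serves MyResource_1 as MASTER?
    List<InstanceConfig> masters =
        routingTable.getInstances("MyResource", "MyResource_1", "MASTER");
    for (InstanceConfig instance : masters) {
      System.out.println(instance.getHostName() + ":" + instance.getPort());
    }
  }
}
```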
+
+### REBALANCER
+
+The core component of Helix is the CONTROLLER, which runs the REBALANCER algorithm on every cluster event. A cluster event can be one of the following:
+
+* Nodes start/stop and soft/hard failures
+* New nodes are added/removed
+* Ideal state changes
+
+There are a few more, such as config changes, but the key point to take away is that there are many ways to trigger the rebalancer.
+
+When the rebalancer runs, it simply does the following:
+
+* Compares the idealstate and current state
+* Computes the transitions required to reach the idealstate.
+* Issues the transitions to the PARTICIPANTs
+
+The above steps happen for every change in the system. Once the current state matches the idealstate, the system is considered stable, which implies 'IDEALSTATE = CURRENTSTATE = EXTERNALVIEW'.
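To make the first two steps concrete, here is a toy, self-contained illustration (not the actual controller code, which also honors the state machine and system constraints): it diffs an idealstate mapping against the aggregated current state and lists the transitions that would be issued.

```
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RebalanceDiffSketch {
  public static void main(String[] args) {
    // Idealstate mapFields: partition -> (node -> desired state)
    Map<String, Map<String, String>> ideal = new HashMap<String, Map<String, String>>();
    ideal.put("MyResource_0", assignment("N1", "MASTER", "N2", "SLAVE"));

    // Aggregated current state reported by the PARTICIPANTs
    Map<String, Map<String, String>> current = new HashMap<String, Map<String, String>>();
    current.put("MyResource_0", assignment("N1", "SLAVE", "N2", "SLAVE"));

    // Compare idealstate and current state, compute the required transitions
    List<String> transitions = new ArrayList<String>();
    for (Map.Entry<String, Map<String, String>> partition : ideal.entrySet()) {
      Map<String, String> observed = current.get(partition.getKey());
      for (Map.Entry<String, String> desired : partition.getValue().entrySet()) {
        String node = desired.getKey();
        String actual = (observed == null || observed.get(node) == null)
            ? "OFFLINE" : observed.get(node);
        if (!desired.getValue().equals(actual)) {
          transitions.add(partition.getKey() + "@" + node + ": "
              + actual + " -> " + desired.getValue());
        }
      }
    }

    // The CONTROLLER would now issue these transitions to the PARTICIPANTs
    System.out.println(transitions); // [MyResource_0@N1: SLAVE -> MASTER]
  }

  private static Map<String, String> assignment(String n1, String s1, String n2, String s2) {
    Map<String, String> m = new HashMap<String, String>();
    m.put(n1, s1);
    m.put(n2, s2);
    return m;
  }
}
```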
+
+### DYNAMIC IDEALSTATE
+
+One of the things that makes Helix powerful is that the idealstate can be changed dynamically. This means one can listen to cluster events like node failures and dynamically change the idealstate. Helix will then take care of triggering the respective transitions in the system.
+
+Helix comes with a few algorithms to automatically compute the idealstate based on the constraints. For example, if you have a resource with 3 partitions and 2 replicas, Helix can automatically compute the idealstate based on the nodes that are currently active. See the Features page to find out more about the various execution modes of Helix, such as AUTO_REBALANCE, AUTO and CUSTOM.
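For instance, an application could react to node arrivals and departures and rewrite the idealstate itself. The sketch below is illustrative only, assuming the `LiveInstanceChangeListener` and `HelixAdmin` APIs of this Helix generation; the cluster name, resource name and ZooKeeper address are placeholders, and the actual placement logic is elided.

```
import java.util.List;
import java.util.Map;

import org.apache.helix.HelixAdmin;
import org.apache.helix.LiveInstanceChangeListener;
import org.apache.helix.NotificationContext;
import org.apache.helix.manager.zk.ZKHelixAdmin;
import org.apache.helix.model.IdealState;
import org.apache.helix.model.LiveInstance;

// Register with: manager.addLiveInstanceChangeListener(new DynamicIdealStateSketch());
public class DynamicIdealStateSketch implements LiveInstanceChangeListener {

  private final HelixAdmin admin = new ZKHelixAdmin("localhost:2181");

  @Override
  public void onLiveInstanceChange(List<LiveInstance> liveInstances,
      NotificationContext changeContext) {
    IdealState idealState = admin.getResourceIdealState("MyCluster", "myDB");

    // Recompute the placement based on the nodes that are currently alive
    Map<String, String> assignment = idealState.getRecord().getMapField("myDB");
    // ... move MASTER to a surviving node, drop entries for dead nodes, etc. ...

    admin.setResourceIdealState("MyCluster", "myDB", idealState);
  }
}
```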
+
+
+
+
+
+
+
+
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/3e801a25/src/site/markdown/Features.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/Features.md b/src/site/markdown/Features.md
index 3ed36b8..4f78812 100644
--- a/src/site/markdown/Features.md
+++ b/src/site/markdown/Features.md
@@ -18,53 +18,156 @@ under the License.
 -->
 
 
-Partition Placement
--------------------
+### CONFIGURING IDEALSTATE
+
+
+Read the Concepts page for the definition of Idealstate.
+
 The placement of partitions in a DDS is very critical for reliability and scalability of
the system. 
 For example, when a node fails, it is important that the partitions hosted on that node are
reallocated evenly among the remaining nodes. Consistent hashing is one such algorithm that
can guarantee this.
-Helix by default comes with a variant of consistent hashing based of the RUSH algorithm.
This means given a number of partitions, replicas and number of nodes Helix does the automatic
assignment of partition to nodes such that
+Helix by default comes with a variant of consistent hashing based on the RUSH algorithm.

+
+This means that given a number of partitions, replicas and nodes, Helix automatically assigns partitions to nodes such that:
 
 * Each node has the same number of partitions and replicas of the same partition do not stay
on the same node.
 * When a node fails, the partitions will be equally distributed among the remaining nodes
 * When new nodes are added, the number of partitions moved will be minimized along with satisfying
the above two criteria.
 
-In simple terms, partition assignment can be defined as the mapping of Replica,State to a
Node in the cluster. For example, lets say the system as 2 partitions(P1,P2) and each partition
has 2 replicas and there are 2 nodes(N1,N2) in the system and two possible states Master and
Slave 
 
-The partition assignment table can look like 
+Helix provides multiple ways to control the placement and state of a replica. 
 
-    P1 -> {N1:M, N2:S}
-    P2 -> {N1:S, N2:M}
-    
-This means Partition P1 must be a Master at N1 and Slave at N2 and vice versa for P2
+```
+
+            |AUTO REBALANCE|   AUTO     |   CUSTOM  |       
+            -----------------------------------------
+   LOCATION | HELIX        |  APP       |  APP      |
+            -----------------------------------------
+      STATE | HELIX        |  HELIX     |  APP      |
+            -----------------------------------------
+```
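As a rough illustration of how these modes are selected, a resource is typically created through `HelixAdmin` with the desired idealstate mode, and in the AUTO_* modes Helix is then asked to compute the placement. The sketch assumes the `addResource`/`rebalance` calls of this Helix generation; names and the ZooKeeper address are placeholders.

```
import org.apache.helix.HelixAdmin;
import org.apache.helix.manager.zk.ZKHelixAdmin;

public class AddResourceSketch {
  public static void main(String[] args) {
    HelixAdmin admin = new ZKHelixAdmin("localhost:2181");

    // 3 partitions, MasterSlave state model, placement and state left to Helix
    admin.addResource("MyCluster", "MyResource", 3, "MasterSlave", "AUTO_REBALANCE");

    // Ask Helix to compute an assignment with 2 replicas per partition
    admin.rebalance("MyCluster", "MyResource", 2);
  }
}
```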
+
+#### HELIX EXECUTION MODE 
 
-Helix provides multiple ways to control the partition placement. See Execution modes section
for more info on this.
 
-IdealState execution modes 
---------------------------
 Idealstate is defined as the state of the DDS when all nodes are up and running and healthy.

 Helix uses this as the target state of the system and computes the appropriate transitions
needed in the system to bring it to a stable state. 
 
 Helix supports 3 different execution modes which allows application to explicitly control
the placement and state of the replica.
 
 ##### AUTO_REBALANCE
-When the idealstate mode is set to AUTO_REBALANCE, Helix controls both the location of the
replica along with the state. This option is useful for applications where creation of a replica
is not expensive. 
-A typical example is evenly distributing a group of tasks among the currently alive processes.
For example, if there are 60 tasks and 4 nodes, Helix assigns 15 tasks to each node. 
+
+When the idealstate mode is set to AUTO_REBALANCE, Helix controls both the location of the replica and its state. This option is useful for applications where creation of a replica is not expensive. Example:
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+    "IDEAL_STATE_MODE" : "AUTO_REBALANCE",
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  }
+  "listFields" : {
+    "MyResource_0" : [],
+    "MyResource_1" : [],
+    "MyResource_2" : []
+  },
+  "mapFields" : {
+  }
+}
+```
+
+If there are 3 nodes in the cluster, then Helix will internally compute the idealstate as:

+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  },
+  "mapFields" : {
+    "MyResource_0" : {
+      "N1" : "MASTER",
+      "N2" : "SLAVE",
+    },
+    "MyResource_1" : {
+      "N2" : "MASTER",
+      "N3" : "SLAVE",
+    },
+    "MyResource_2" : {
+      "N3" : "MASTER",
+      "N1" : "SLAVE",
+    }
+  }
+}
+```
+
+Another typical example is evenly distributing a group of tasks among the currently alive
processes. For example, if there are 60 tasks and 4 nodes, Helix assigns 15 tasks to each
node. 
 When one node fails Helix redistributes its 15 tasks to the remaining 3 nodes. Similarly,
if a node is added, Helix re-allocates 3 tasks from each of the 4 nodes to the 5th node. 
 
 #### AUTO
-When the idealstate mode is set to AUTO, Helix only controls STATE of the replicas where
as the location of the partition is controlled by application. 
-For example the application can say P1->{N1,N2,N3} which means P1 should only exist N1,N2,N3.
In this mode when N1 fails, unlike in AUTO-REBALANCE mode the partition is not moved from
N1 to others nodes in the cluster. 
-But Helix might decide to change the state of P1 in N2 and N3 based on the system constraints.
For example, if a system constraint specified that there should be 1 Master and if the Master
failed, then N2 will be made the master.
+
+When the idealstate mode is set to AUTO, Helix only controls the STATE of the replicas, whereas the location of the partition is controlled by the application. Example: the idealstate below indicates that 'MyResource_0' must only be on node1 and node2, but gives Helix control of assigning the STATE.
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+    "IDEAL_STATE_MODE" : "AUTO",
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  }
+  "listFields" : {
+    "MyResource_0" : [node1, node2],
+    "MyResource_1" : [node2, node3],
+    "MyResource_2" : [node3, node1]
+  },
+  "mapFields" : {
+  }
+}
+```
+In this mode, when node1 fails, unlike in AUTO_REBALANCE mode the partition is not moved from node1 to other nodes in the cluster. Instead, Helix will decide to change the state of MyResource_0 on node2 based on the system constraints. For example, if a system constraint specifies that there should be 1 MASTER, and the MASTER fails, then node2 will be made the new MASTER.

 
 #### CUSTOM
+
 Helix offers a third mode called CUSTOM, in which application can completely control the
placement and state of each replica. Applications will have to implement an interface that
Helix will invoke when the cluster state changes. 
-Within this callback, the application can recompute the partition assignment mapping. Helix
will then issue transitions to get the system to the final state. Note that Helix will ensure
that system constraints are not violated at any time.
-For example, the current state of the system might be P1 -> {N1:M,N2:S} and the application
changes the ideal state to P2 -> {N1:S,N2:M}. Helix will not blindly issue M-S to N1 and
S-M to N2 in parallel since it might result in a transient state where both N1 and N2 are
masters.
-Helix will issue S-M to N2 only when N1 has changed its state to S.
+Within this callback, the application can recompute the idealstate. Helix will then issue the appropriate transitions so that the idealstate and currentstate converge.
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+      "IDEAL_STATE_MODE" : "CUSTOM",
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  },
+  "mapFields" : {
+    "MyResource_0" : {
+      "N1" : "MASTER",
+      "N2" : "SLAVE",
+    },
+    "MyResource_1" : {
+      "N2" : "MASTER",
+      "N3" : "SLAVE",
+    },
+    "MyResource_2" : {
+      "N3" : "MASTER",
+      "N1" : "SLAVE",
+    }
+  }
+}
+```
+
+For example, the current state of the system might be 'MyResource_0' -> {N1:MASTER, N2:SLAVE} and the application changes the idealstate to 'MyResource_0' -> {N1:SLAVE, N2:MASTER}. Helix will not blindly issue MASTER-->SLAVE to N1 and SLAVE-->MASTER to N2 in parallel, since that might result in a transient state where both N1 and N2 are MASTERs.
+Helix will first issue MASTER-->SLAVE to N1, and only after it has completed will it issue SLAVE-->MASTER to N2.
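From the application side, that flow can be sketched roughly as follows, assuming the `HelixAdmin`/`IdealState` classes of this Helix generation: the application rewrites the map fields, and Helix sequences the resulting transitions as described above. Names and the ZooKeeper address are placeholders.

```
import java.util.Map;

import org.apache.helix.HelixAdmin;
import org.apache.helix.manager.zk.ZKHelixAdmin;
import org.apache.helix.model.IdealState;

public class CustomModeSketch {
  public static void main(String[] args) {
    HelixAdmin admin = new ZKHelixAdmin("localhost:2181");
    IdealState idealState = admin.getResourceIdealState("MyCluster", "MyResource");

    // Swap MASTER and SLAVE for MyResource_0; Helix will order the transitions
    // so that N1 and N2 are never both MASTER at the same time.
    Map<String, String> assignment = idealState.getRecord().getMapField("MyResource_0");
    assignment.put("N1", "SLAVE");
    assignment.put("N2", "MASTER");
    idealState.getRecord().setMapField("MyResource_0", assignment);

    admin.setResourceIdealState("MyCluster", "MyResource", idealState);
  }
}
```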
  
 
-State Machine Configuration
----------------------------
+### State Machine Configuration
+
 Helix comes with 3 default state models that are most commonly used. Its possible to have
multiple state models in a cluster. 
 Every resource that is added should have a reference to the state model. 
 
@@ -85,8 +188,8 @@ STATE TRANSITION PRIORITY
 Helix tries to fire as many transitions as possible in parallel to reach the stable state
without violating constraints. By default Helix simply sorts the transitions alphabetically
and fires as many as it can without violating the constraints. 
 One can control this by overriding the priority order.
  
-Config management
------------------
+### Config management
+
 Helix allows applications to store application specific properties. The configuration can
have different scopes.
 
 * Cluster
@@ -98,8 +201,8 @@ Helix also provides notifications when any configs are changed. This allows
appl
 
 See HelixManager.getConfigAccessor for more info
 
-Intra cluster messaging api
----------------------------
+### Intra cluster messaging api
+
 This is an interesting feature which is quite useful in practice. Often times, nodes in DDS
requires a mechanism to interact with each other. One such requirement is a process of bootstrapping
a replica.
 
 Consider a search system use case where the index replica starts up and it does not have
an index. One of the commonly used solutions is to get the index from a common location or
to copy the index from another replica.
@@ -120,7 +223,7 @@ System Admins can also perform adhoc tasks like on demand backup or execute
a sy
       requestBackupUriRequest.setMsgState(MessageState.NEW);
       //SET THE RECIPIENT CRITERIA, All nodes that satisfy the criteria will receive the
message
       Criteria recipientCriteria = new Criteria();
-      recipientCriteria.setInstanceName("*");
+      recipientCriteria.setInstanceName("%");
       recipientCriteria.setRecipientInstanceType(InstanceType.PARTICIPANT);
       recipientCriteria.setResource("MyDB");
       recipientCriteria.setPartition("");
@@ -139,15 +242,15 @@ System Admins can also perform adhoc tasks like on demand backup or
execute a sy
 See HelixManager.getMessagingService for more info.
 
 
-Application specific property storage
--------------------------------------
+### Application specific property storage
+
 There are several usecases where applications needs support for distributed data structures.
Helix uses Zookeeper to store the application data and hence provides notifications when the
data changes. 
 One value add Helix provides is the ability to specify cache the data and also write through
cache. This is more efficient than reading from ZK every time.
 
 See HelixManager.getHelixPropertyStore
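A rough sketch of property store usage, assuming `HelixManager.getHelixPropertyStore` returns a `ZkHelixPropertyStore<ZNRecord>` as in this Helix generation; the path and field names are made up for illustration.

```
import org.apache.helix.AccessOption;
import org.apache.helix.HelixManager;
import org.apache.helix.ZNRecord;
import org.apache.helix.store.zk.ZkHelixPropertyStore;

public class PropertyStoreSketch {
  // 'manager' is assumed to be an already-connected HelixManager
  public static void storeBootstrapInfo(HelixManager manager) {
    ZkHelixPropertyStore<ZNRecord> store = manager.getHelixPropertyStore();

    ZNRecord bootstrapInfo = new ZNRecord("BOOTSTRAP");
    bootstrapInfo.setSimpleField("indexLocation", "hdfs://backups/myIndex");
    store.set("/BOOTSTRAP/myIndex", bootstrapInfo, AccessOption.PERSISTENT);

    // Reads can be served from the write-through cache instead of hitting ZK
    ZNRecord readBack = store.get("/BOOTSTRAP/myIndex", null, AccessOption.PERSISTENT);
    System.out.println(readBack.getSimpleField("indexLocation"));
  }
}
```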
 
-Throttling
-----------
+### Throttling
+
 Since all state changes in the system are triggered through transitions, Helix can control
the number of transitions that can happen in parallel. Some of the transitions may be light
weight but some might involve moving data around which is quite expensive.
 Helix allows applications to set threshold on transitions. The threshold can be set at the
multiple scopes.
 
@@ -158,8 +261,8 @@ Helix allows applications to set threshold on transitions. The threshold
can be
 
 See HelixManager.getHelixAdmin.addMessageConstraint() 
 
-Health monitoring and alerting
-------------------------------
+### Health monitoring and alerting
+
 This in currently in development mode, not yet productionized.
 
 Helix provides ability for each node in the system to report health metrics on a periodic
basis. 
@@ -171,24 +274,24 @@ This feature will be valuable in for distributed systems that support
multi-tena
 This feature is not yet stable and do not recommend to be used in production.
 
 
-Controller deployment modes
----------------------------
+### Controller deployment modes
+
 Read Architecture wiki for more details on the Role of a controller. In simple words, it
basically controls the participants in the cluster by issuing transitions.
 
 Helix provides multiple options to deploy the controller.
 
-STANDALONE
+#### STANDALONE
 
 Controller can be started as a separate process to manage a cluster. This is the recommended
approach. How ever since one controller can be a single point of failure, multiple controller
processes are required for reliability.
 Even if multiple controllers are running only one will be actively managing the cluster at
any time and is decided by a leader election process. If the leader fails, another leader
will resume managing the cluster.
 
 Even though we recommend this method of deployment, it has the drawback of having to manage
an additional service for each cluster. See Controller As a Service option.
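A standalone controller can also be started programmatically; the sketch below assumes the `HelixControllerMain` helper in helix-core, with the cluster name, controller name and ZooKeeper address as placeholders. Starting several of these gives the leader-elected setup described above.

```
import org.apache.helix.controller.HelixControllerMain;

public class ControllerLauncher {
  public static void main(String[] args) {
    // If several standalone controllers are started for the same cluster,
    // leader election ensures only one actively manages it at a time.
    HelixControllerMain.startHelixController(
        "localhost:2181", "MyCluster", "controller_1", HelixControllerMain.STANDALONE);
  }
}
```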
 
-EMBEDDED
+#### EMBEDDED
 
 If setting up a separate controller process is not viable, then it is possible to embed the
controller as a library in each of the participant. 
 
-CONTROLLER AS A SERVICE
+#### CONTROLLER AS A SERVICE
 
 One of the cool feature we added in helix was use a set of controllers to manage a large
number of clusters. 
 For example if you have X clusters to be managed, instead of deploying X*3(3 controllers
for fault tolerance) controllers for each cluster, one can deploy only 3 controllers. Each
controller can manage X/3 clusters. 

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/3e801a25/src/site/markdown/index.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/index.md b/src/site/markdown/index.md
index f012281..273a9cf 100644
--- a/src/site/markdown/index.md
+++ b/src/site/markdown/index.md
@@ -21,7 +21,9 @@ under the License.
 Pages
 ---------------
 * [Quickstart](./Quickstart.html)
+* [Core concepts](./Concepts.html)
 * [Architecture](./Architecture.html)
+* [Tutorial](./Tutorial.html)
 * [Features](./Features.html)
 * [ApiUsage](./ApiUsage.html)
 * [Javadocs](./apidocs/index.html)

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/3e801a25/src/site/site.xml
----------------------------------------------------------------------
diff --git a/src/site/site.xml b/src/site/site.xml
index 2a3c0fa..9b15248 100644
--- a/src/site/site.xml
+++ b/src/site/site.xml
@@ -59,6 +59,7 @@
     <menu name="Helix">
       <item name="Introduction" href="./index.html"/>
       <item name="Quick Start" href="./Quickstart.html"/>
+      <item name="Core concept" href="./Concepts.html"/>
       <item name="Tutorial" href="./Tutorial.html"/>
       <item name="Architecture" href="./Architecture.html"/>
       <item name="Features" href="./Features.html"/>

