activemq-commits mailing list archives

From clebertsuco...@apache.org
Subject [10/16] activemq-artemis git commit: ARTEMIS-1912 big doc refactor
Date Thu, 07 Jun 2018 15:26:52 GMT
http://git-wip-us.apache.org/repos/asf/activemq-artemis/blob/2b5d8f3b/docs/user-manual/en/ha.md
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/ha.md b/docs/user-manual/en/ha.md
index 2fc0585..e78b1e4 100644
--- a/docs/user-manual/en/ha.md
+++ b/docs/user-manual/en/ha.md
@@ -52,14 +52,14 @@ This of course means there will be no Backup Strategy and is the default
 if none is provided, however this is used to configure `scale-down`
 which we will cover in a later chapter.
 
-> **Note**
+> **Note:**
 >
 > The `ha-policy` configuration replaces any current HA configuration
 > in the root of the `broker.xml` configuration. All old
 > configuration is now deprecated although best efforts will be made to
 > honour it if configured this way.
 
-> **Note**
+> **Note:**
 >
 > Only persistent message data will survive failover. Any non-persistent
 > message data will not be available after failover.
@@ -115,7 +115,7 @@ synchronizing the data with its live server. The time it will take for
 this to happen will depend on the amount of data to be synchronized and
 the connection speed.
 
-> **Note**
+> **Note:**
 >
 > In general, synchronization occurs in parallel with current network traffic so
 > this won't cause any blocking on current clients. However, there is a critical
@@ -137,37 +137,37 @@ Cluster Connection also defines how backup servers will find the remote
 live servers to pair with. Refer to [Clusters](clusters.md) for details on how this is done,
 and how to configure a cluster connection. Notice that:
 
--   Both live and backup servers must be part of the same cluster.
-    Notice that even a simple live/backup replicating pair will require
-    a cluster configuration.
+- Both live and backup servers must be part of the same cluster.
+  Notice that even a simple live/backup replicating pair will require
+  a cluster configuration.
 
--   Their cluster user and password must match.
+- Their cluster user and password must match.
 
 Within a cluster, there are two ways that a backup server will locate a
 live server to replicate from; these are:
 
--   `specifying a node group`. You can specify a group of live servers
-    that a backup server can connect to. This is done by configuring
-    `group-name` in either the `master` or the `slave` element of the
-    `broker.xml`. A Backup server will only connect to a
-    live server that shares the same node group name
+- `specifying a node group`. You can specify a group of live servers
+  that a backup server can connect to. This is done by configuring
+  `group-name` in either the `master` or the `slave` element of the
+  `broker.xml`. A backup server will only connect to a
+  live server that shares the same node group name.
 
--   `connecting to any live`. This will be the behaviour if `group-name`
-    is not configured allowing a backup server to connect to any live
-    server
+- `connecting to any live`. This will be the behaviour if `group-name`
+  is not configured, allowing a backup server to connect to any live
+  server.
 
-> **Note**
+> **Note:**
 >
 > A `group-name` example: suppose you have 5 live servers and 6 backup
 > servers:
 >
-> -   `live1`, `live2`, `live3`: with `group-name=fish`
+> - `live1`, `live2`, `live3`: with `group-name=fish`
 >
-> -   `live4`, `live5`: with `group-name=bird`
+> - `live4`, `live5`: with `group-name=bird`
 >
-> -   `backup1`, `backup2`, `backup3`, `backup4`: with `group-name=fish`
+> - `backup1`, `backup2`, `backup3`, `backup4`: with `group-name=fish`
 >
-> -   `backup5`, `backup6`: with `group-name=bird`
+> - `backup5`, `backup6`: with `group-name=bird`
 >
 > After joining the cluster the backups with `group-name=fish` will
 > search for live servers with `group-name=fish` to pair with. Since
@@ -183,7 +183,7 @@ until it finds a live server that has no current backup configured. If
 no live server is available it will wait until the cluster topology
 changes and repeats the process.
 
-> **Note**
+> **Note:**
 >
 > This is an important distinction from a shared-store backup: if a
 > backup starts and does not find a live server, the server will just
@@ -240,101 +240,44 @@ The backup server must be similarly configured but as a `slave`
 The following table lists all the `ha-policy` configuration elements for
 HA strategy Replication for `master`:
 
-<table summary="HA Replication Master Policy" border="1">
-    <colgroup>
-        <col/>
-        <col/>
-    </colgroup>
-    <thead>
-    <tr>
-        <th>Name</th>
-        <th>Description</th>
-    </tr>
-    </thead>
-    <tbody>
-    <tr>
-        <td>`check-for-live-server`</td>
-        <td>Whether to check the cluster for a (live) server using our own server ID
-        when starting up. This option is only necessary for performing 'fail-back'
-        on replicating servers.</td>
-    </tr>
-    <tr>
-        <td>`cluster-name`</td>
-        <td>Name of the cluster configuration to use for replication. This setting is
-        only necessary if you configure multiple cluster connections. If configured then
-        the connector configuration of the cluster configuration with this name will be
-        used when connecting to the cluster to discover if a live server is already running,
-        see `check-for-live-server`. If unset then the default cluster connections configuration
-        is used (the first one configured).</td>
-    </tr>
-    <tr>
-        <td>`group-name`</td>
-        <td>If set, backup servers will only pair with live servers with matching group-name.</td>
-    </tr>
-    <tr>
-        <td>`initial-replication-sync-timeout`</td>
-        <td>The amount of time the replicating server will wait at the completion of the initial
-        replication process for the replica to acknowledge it has received all the necessary
-        data. The default is 30,000 milliseconds. <strong>Note</strong>: during this interval any
-        journal related operations will be blocked.</td>
-    </tr>
-    </tbody>
-</table>
+- `check-for-live-server`
+
+  Whether to check the cluster for a (live) server using our own server ID when starting up. This option is only necessary for performing 'fail-back' on replicating servers.
+
+- `cluster-name`
+
+  Name of the cluster configuration to use for replication. This setting is only necessary if you configure multiple cluster connections. If configured then the connector configuration of the cluster configuration with this name will be used when connecting to the cluster to discover if a live server is already running, see `check-for-live-server`. If unset then the default cluster connections configuration is used (the first one configured).
+
+- `group-name`
+
+  If set, backup servers will only pair with live servers with matching group-name.
+
+- `initial-replication-sync-timeout`
+
+  The amount of time the replicating server will wait at the completion of the initial replication process for the replica to acknowledge it has received all the necessary data. The default is 30,000 milliseconds. **Note:** during this interval any journal related operations will be blocked.
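+
+Putting these elements together, a minimal `broker.xml` sketch of a
+replication `master` policy might look like the following (the values shown
+are illustrative choices, not recommendations):
+
+```xml
+<ha-policy>
+   <replication>
+      <master>
+         <check-for-live-server>true</check-for-live-server>
+         <group-name>fish</group-name>
+         <initial-replication-sync-timeout>30000</initial-replication-sync-timeout>
+      </master>
+   </replication>
+</ha-policy>
+```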
 
 The following table lists all the `ha-policy` configuration elements for
 HA strategy Replication for `slave`:
 
-<table summary="HA Replication Slave Policy" border="1">
-    <colgroup>
-        <col/>
-        <col/>
-    </colgroup>
-    <thead>
-    <tr>
-        <th>Name</th>
-        <th>Description</th>
-    </tr>
-    </thead>
-    <tbody>
-    <tr>
-        <td>`cluster-name`</td>
-        <td>Name of the cluster configuration to use for replication.
-        This setting is only necessary if you configure multiple cluster
-        connections. If configured then the connector configuration of
-        the cluster configuration with this name will be used when
-        connecting to the cluster to discover if a live server is already
-        running, see `check-for-live-server`. If unset then the default
-        cluster connections configuration is used (the first one configured)</td>
-    </tr>
-    <tr>
-        <td>`group-name`</td>
-        <td>If set, backup servers will only pair with live servers with matching group-name</td>
-    </tr>
-    <tr>
-        <td>`max-saved-replicated-journals-size`</td>
-        <td>This specifies how many times a replicated backup server
-        can restart after moving its files on start. Once there are
-        this number of backup journal files the server will stop permanently
-        after if fails back.</td>
-    </tr>
-    <tr>
-        <td>`allow-failback`</td>
-        <td>Whether a server will automatically stop when a another places a
-        request to take over its place. The use case is when the backup has
-        failed over</td>
-    </tr>
-    <tr>
-        <td>`initial-replication-sync-timeout`</td>
-        <td>After failover and the slave has become live, this is
-        set on the new live server. It represents the amount of time
-        the replicating server will wait at the completion of the
-        initial replication process for the replica to acknowledge
-        it has received all the necessary data. The default is
-        30,000 milliseconds. <strong>Note</strong>: during this interval any
-        journal related operations will be blocked.</td>
-    </tr>
-    </tbody>
-</table>
+- `cluster-name`
+
+  Name of the cluster configuration to use for replication. This setting is only necessary if you configure multiple cluster connections. If configured then the connector configuration of the cluster configuration with this name will be used when connecting to the cluster to discover if a live server is already running, see `check-for-live-server`. If unset then the default cluster connections configuration is used (the first one configured).
+
+- `group-name`
+
+  If set, backup servers will only pair with live servers with matching group-name.
+
+- `max-saved-replicated-journals-size`
+
+  This specifies how many times a replicated backup server can restart after moving its files on start. Once there are this many backup journal files the server will stop permanently after it fails back.
+
+- `allow-failback`
+
+  Whether a server will automatically stop when another server places a request to take over its place. The use case is when the backup has failed over.
+
+- `initial-replication-sync-timeout`
+
+  After failover and the slave has become live, this is set on the new live server. It represents the amount of time the replicating server will wait at the completion of the initial replication process for the replica to acknowledge it has received all the necessary data. The default is 30,000 milliseconds. **Note:** during this interval any journal related operations will be blocked.
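+
+As a sketch, the corresponding replication `slave` policy in `broker.xml`
+could use the elements above like this (values are illustrative only):
+
+```xml
+<ha-policy>
+   <replication>
+      <slave>
+         <group-name>fish</group-name>
+         <max-saved-replicated-journals-size>2</max-saved-replicated-journals-size>
+         <allow-failback>true</allow-failback>
+      </slave>
+   </replication>
+</ha-policy>
+```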
 
 ### Shared Store
 
@@ -402,7 +345,7 @@ In order for live - backup groups to operate properly with a shared
 store, both servers must have configured the location of journal
 directory to point to the *same shared location* (as explained in [Configuring the message journal](persistence.md))
 
-> **Note**
+> **Note:**
 >
 > todo write something about GFS
 
@@ -504,67 +447,24 @@ automatically by setting the following property in the
 The following table lists all the `ha-policy` configuration elements for
 HA strategy shared store for `master`:
 
-<table summary="HA Shared Store Master Policy" border="1">
-    <colgroup>
-        <col/>
-        <col/>
-    </colgroup>
-    <thead>
-    <tr>
-        <th>Name</th>
-        <th>Description</th>
-    </tr>
-    </thead>
-    <tbody>
-    <tr>
-        <td>`failover-on-server-shutdown`</td>
-        <td>If set to true then when this server is stopped
-        normally the backup will become live assuming failover.
-        If false then the backup server will remain passive.
-        Note that if false you want failover to occur the you
-        can use the the management API as explained at [Management](management.md)</td>
-    </tr>
-    <tr>
-        <td>`wait-for-activation`</td>
-        <td>If set to true then server startup will wait until it is activated.
-        If set to false then server startup will be done in the background.
-        Default is true.</td>
-    </tr>
-    </tbody>
-</table>
+- `failover-on-server-shutdown`
+
+  If set to true then when this server is stopped normally the backup will become live, assuming failover. If false then the backup server will remain passive. Note that if this is false and you want failover to occur then you can use the management API as explained at [Management](management.md).
+
+- `wait-for-activation`
+
+  If set to true then server startup will wait until it is activated. If set to false then server startup will be done in the background. Default is true.
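+
+For illustration, a minimal `broker.xml` sketch of a shared-store `master`
+policy using these elements (values are example choices):
+
+```xml
+<ha-policy>
+   <shared-store>
+      <master>
+         <failover-on-server-shutdown>true</failover-on-server-shutdown>
+         <wait-for-activation>true</wait-for-activation>
+      </master>
+   </shared-store>
+</ha-policy>
+```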
 
 The following table lists all the `ha-policy` configuration elements for
 HA strategy Shared Store for `slave`:
 
-<table summary="HA Shared Store Slave Policy" border="1">
-    <colgroup>
-        <col/>
-        <col/>
-    </colgroup>
-    <thead>
-    <tr>
-        <th>Name</th>
-        <th>Description</th>
-    </tr>
-    </thead>
-    <tbody>
-    <tr>
-        <td>`failover-on-server-shutdown`</td>
-        <td>In the case of a backup that has become live. then
-        when set to true then when this server is stopped normally
-        the backup will become liveassuming failover. If false then
-        the backup server will remain passive. Note that if false
-        you want failover to occur the you can use the the management
-        API as explained at [Management](management.md)</td>
-    </tr>
-    <tr>
-        <td>`allow-failback`</td>
-        <td>Whether a server will automatically stop when a another
-        places a request to take over its place. The use case is
-        when the backup has failed over.</td>
-    </tr>
-    </tbody>
-</table>
+- `failover-on-server-shutdown`
+
+  In the case of a backup that has become live, if set to true then when this server is stopped normally the backup will become live, assuming failover. If false then the backup server will remain passive. Note that if this is false and you want failover to occur then you can use the management API as explained at [Management](management.md).
+
+- `allow-failback`
+
+  Whether a server will automatically stop when another server places a request to take over its place. The use case is when the backup has failed over.
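+
+A minimal sketch of the matching shared-store `slave` policy in `broker.xml`
+(values are illustrative only):
+
+```xml
+<ha-policy>
+   <shared-store>
+      <slave>
+         <failover-on-server-shutdown>true</failover-on-server-shutdown>
+         <allow-failback>true</allow-failback>
+      </slave>
+   </shared-store>
+</ha-policy>
+```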
 
 #### Colocated Backup Servers
 
@@ -613,7 +513,7 @@ say 100 (which is the default) and a connector is using port 61616 then
 this will be set to 5545 for the first server created, 5645 for the
 second and so on.
 
-> **Note**
+> **Note:**
 >
 > For INVM connectors and Acceptors the id will have
 > `colocated_backup_n` appended, where n is the backup server number.
@@ -648,40 +548,25 @@ creating server but have the new backups name appended.
 
 The following table lists all the `ha-policy` configuration elements for colocated policy:
 
-<table summary="HA Replication Colocation Policy" border="1">
-    <colgroup>
-        <col/>
-        <col/>
-    </colgroup>
-    <thead>
-    <tr>
-        <th>Name</th>
-        <th>Description</th>
-    </tr>
-    </thead>
-    <tbody>
-    <tr>
-        <td>`request-backup`</td>
-        <td>If true then the server will request a backup on another node</td>
-    </tr>
-    <tr>
-        <td>`backup-request-retries`</td>
-        <td>How many times the live server will try to request a backup, -1 means for ever.</td>
-    </tr>
-    <tr>
-        <td>`backup-request-retry-interval`</td>
-        <td>How long to wait for retries between attempts to request a backup server.</td>
-    </tr>
-    <tr>
-        <td>`max-backups`</td>
-        <td>How many backups a live server can create</td>
-    </tr>
-    <tr>
-        <td>`backup-port-offset`</td>
-        <td>The offset to use for the Connectors and Acceptors when creating a new backup server.</td>
-    </tr>
-    </tbody>
-</table>
+- `request-backup`
+
+  If true then the server will request a backup on another node.
+
+- `backup-request-retries`
+
+  How many times the live server will try to request a backup; -1 means forever.
+
+- `backup-request-retry-interval`
+
+  How long to wait for retries between attempts to request a backup server.
+
+- `max-backups`
+
+  How many backups a live server can create.
+
+- `backup-port-offset`
+
+  The offset to use for the Connectors and Acceptors when creating a new backup server.
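+
+As a sketch, a colocated policy in `broker.xml` combining these elements
+might look like this (values are illustrative; the nested `master` and
+`slave` policies are elided here and follow the replication configuration
+described earlier):
+
+```xml
+<ha-policy>
+   <replication>
+      <colocated>
+         <request-backup>true</request-backup>
+         <backup-request-retries>-1</backup-request-retries>
+         <backup-request-retry-interval>5000</backup-request-retry-interval>
+         <max-backups>1</max-backups>
+         <backup-port-offset>100</backup-port-offset>
+         <master/>
+         <slave/>
+      </colocated>
+   </replication>
+</ha-policy>
+```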
 
 ### Scaling Down
 
@@ -814,9 +699,9 @@ be high enough to deal with the time needed to scale down.
 
 Apache ActiveMQ Artemis defines two types of client failover:
 
--   Automatic client failover
+- Automatic client failover
 
--   Application-level client failover
+- Application-level client failover
 
 Apache ActiveMQ Artemis also provides 100% transparent automatic reattachment of
 connections to the same server (e.g. in case of transient network
@@ -970,7 +855,7 @@ response will come back. In this case it is not easy for the client to
 determine whether the transaction commit was actually processed on the
 live server before failure occurred.
 
-> **Note**
+> **Note:**
 >
 > If XA is being used either via JMS or through the core API then an
 > `XAException.XA_RETRY` is thrown. This is to inform Transaction
@@ -988,7 +873,7 @@ retried, duplicate detection will ensure that any durable messages
 resent in the transaction will be ignored on the server to prevent them
 getting sent more than once.
 
-> **Note**
+> **Note:**
 >
 > By catching the rollback exceptions and retrying, catching unblocked
 > calls and enabling duplicate detection, once and only once delivery
@@ -1025,28 +910,13 @@ following:
 
 JMSException error codes
 
-<table summary="HA Replication Colocation Policy" border="1">
-    <colgroup>
-        <col/>
-        <col/>
-    </colgroup>
-    <thead>
-    <tr>
-        <th>Error code</th>
-        <th>Description</th>
-    </tr>
-    </thead>
-    <tbody>
-    <tr>
-        <td>FAILOVER</td>
-        <td>Failover has occurred and we have successfully reattached or reconnected.</td>
-    </tr>
-    <tr>
-        <td>DISCONNECT</td>
-        <td>No failover has occurred and we are disconnected.</td>
-    </tr>
-    </tbody>
-</table>
+- `FAILOVER`
+
+  Failover has occurred and we have successfully reattached or reconnected.
+
+- `DISCONNECT`
+
+  No failover has occurred and we are disconnected.
 
 ### Application-Level Failover
 
@@ -1063,7 +933,7 @@ connection failure is detected. In your `ExceptionListener`, you would
 close your old JMS connections, potentially look up new connection
 factory instances from JNDI and creating new connections.
 
-For a working example of application-level failover, please see [the examples](examples.md) chapter.
+For a working example of application-level failover, please see [the Application-Layer Failover Example](examples.md#application-layer-failover).
 
 If you are using the core API, then the procedure is very similar: you
 would set a `FailureListener` on the core `ClientSession` instances.

http://git-wip-us.apache.org/repos/asf/activemq-artemis/blob/2b5d8f3b/docs/user-manual/en/images/architecture1.jpg
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/images/architecture1.jpg b/docs/user-manual/en/images/architecture1.jpg
index d2b9de4..170dd5c 100644
Binary files a/docs/user-manual/en/images/architecture1.jpg and b/docs/user-manual/en/images/architecture1.jpg differ

http://git-wip-us.apache.org/repos/asf/activemq-artemis/blob/2b5d8f3b/docs/user-manual/en/images/architecture2.jpg
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/images/architecture2.jpg b/docs/user-manual/en/images/architecture2.jpg
index 391c1c0..cf30eeb 100644
Binary files a/docs/user-manual/en/images/architecture2.jpg and b/docs/user-manual/en/images/architecture2.jpg differ

http://git-wip-us.apache.org/repos/asf/activemq-artemis/blob/2b5d8f3b/docs/user-manual/en/images/architecture3.jpg
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/images/architecture3.jpg b/docs/user-manual/en/images/architecture3.jpg
index 7dccab7..8a45d0b 100644
Binary files a/docs/user-manual/en/images/architecture3.jpg and b/docs/user-manual/en/images/architecture3.jpg differ

http://git-wip-us.apache.org/repos/asf/activemq-artemis/blob/2b5d8f3b/docs/user-manual/en/intercepting-operations.md
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/intercepting-operations.md b/docs/user-manual/en/intercepting-operations.md
index b518ff3..43e186d 100644
--- a/docs/user-manual/en/intercepting-operations.md
+++ b/docs/user-manual/en/intercepting-operations.md
@@ -1,17 +1,18 @@
 # Intercepting Operations
 
-Apache ActiveMQ Artemis supports *interceptors* to intercept packets entering and
-exiting the server. Incoming and outgoing interceptors are be called for
-any packet entering or exiting the server respectively. This allows
-custom code to be executed, e.g. for auditing packets, filtering or
-other reasons. Interceptors can change the packets they intercept. This
-makes interceptors powerful, but also potentially dangerous.
+Apache ActiveMQ Artemis supports *interceptors* to intercept packets entering
+and exiting the server. Incoming and outgoing interceptors are called for
+any packet entering or exiting the server respectively. This allows custom code
+to be executed, e.g. for auditing packets, filtering or other reasons.
+Interceptors can change the packets they intercept. This makes interceptors
+powerful, but also potentially dangerous.
 
 ## Implementing The Interceptors
 
 All interceptors are protocol specific.
 
-An interceptor for the core protocol must implement the interface `Interceptor`:
+An interceptor for the core protocol must implement the interface
+`Interceptor`:
 
 ```java
 package org.apache.activemq.artemis.api.core.interceptor;
@@ -33,10 +34,10 @@ public interface StompFrameInterceptor extends BaseInterceptor<StompFrame>
 }
 ```
 
-Likewise for MQTT protocol, an interceptor must implement the interface `MQTTInterceptor`:
+Likewise for MQTT protocol, an interceptor must implement the interface
+`MQTTInterceptor`:
  
-```java
-package org.apache.activemq.artemis.core.protocol.mqtt;
+```java
+package org.apache.activemq.artemis.core.protocol.mqtt;
 
 public interface MQTTInterceptor extends BaseInterceptor<MqttMessage>
 {
@@ -46,16 +47,14 @@ public interface MQTTInterceptor extends BaseInterceptor<MqttMessage>
 
 The returned boolean value is important:
 
--   if `true` is returned, the process continues normally
+- if `true` is returned, the process continues normally
 
--   if `false` is returned, the process is aborted, no other
-    interceptors will be called and the packet will not be processed
-    further by the server.
+- if `false` is returned, the process is aborted, no other interceptors will be
+  called and the packet will not be processed further by the server.
 
 ## Configuring The Interceptors
 
-Both incoming and outgoing interceptors are configured in
-`broker.xml`:
+Both incoming and outgoing interceptors are configured in `broker.xml`:
 
 ```xml
 <remoting-incoming-interceptors>
@@ -69,39 +68,41 @@ Both incoming and outgoing interceptors are configured in
 </remoting-outgoing-interceptors>
 ```
 
-See the documentation on [adding runtime dependencies](using-server.md) to 
+See the documentation on [adding runtime dependencies](using-server.md) to
 understand how to make your interceptor available to the broker.
 
 ## Interceptors on the Client Side
 
-The interceptors can also be run on the Apache ActiveMQ Artemit client side to intercept packets
-either sent by the client to the server or by the server to the client.
-This is done by adding the interceptor to the `ServerLocator` with the
-`addIncomingInterceptor(Interceptor)` or
+The interceptors can also be run on the Apache ActiveMQ Artemis client side to
+intercept packets either sent by the client to the server or by the server to
+the client.  This is done by adding the interceptor to the `ServerLocator` with
+the `addIncomingInterceptor(Interceptor)` or
 `addOutgoingInterceptor(Interceptor)` methods.
 
-As noted above, if an interceptor returns `false` then the sending of
-the packet is aborted which means that no other interceptors are be
-called and the packet is not be processed further by the client.
-Typically this process happens transparently to the client (i.e. it has
-no idea if a packet was aborted or not). However, in the case of an
-outgoing packet that is sent in a `blocking` fashion a
-`ActiveMQException` will be thrown to the caller. The exception is
-thrown because blocking sends provide reliability and it is considered
-an error for them not to succeed. `Blocking` sends occurs when, for
+As noted above, if an interceptor returns `false` then the sending of the
+packet is aborted which means that no other interceptors are be called and the
+packet is not be processed further by the client.  Typically this process
+happens transparently to the client (i.e. it has no idea if a packet was
+aborted or not). However, in the case of an outgoing packet that is sent in a
+`blocking` fashion an `ActiveMQException` will be thrown to the caller. The
+exception is thrown because blocking sends provide reliability and it is
+considered an error for them not to succeed. `Blocking` sends occur when, for
 example, an application invokes `setBlockOnNonDurableSend(true)` or
-`setBlockOnDurableSend(true)` on its `ServerLocator` or if an
-application is using a JMS connection factory retrieved from JNDI that
-has either `block-on-durable-send` or `block-on-non-durable-send` set to
-`true`. Blocking is also used for packets dealing with transactions
-(e.g. commit, roll-back, etc.). The `ActiveMQException` thrown will
-contain the name of the interceptor that returned false.
+`setBlockOnDurableSend(true)` on its `ServerLocator` or if an application is
+using a JMS connection factory retrieved from JNDI that has either
+`block-on-durable-send` or `block-on-non-durable-send` set to `true`. Blocking
+is also used for packets dealing with transactions (e.g. commit, roll-back,
+etc.). The `ActiveMQException` thrown will contain the name of the interceptor
+that returned false.
+
+As on the server, the client interceptor classes (and their dependencies) must
+be added to the classpath to be properly instantiated and invoked.
 
-As on the server, the client interceptor classes (and their
-dependencies) must be added to the classpath to be properly instantiated
-and invoked.
+## Examples
 
-## Example
+See the following examples which show how to use interceptors:
 
-See [the examples chapter](examples.md) for an example which shows how to use interceptors to add
-properties to a message on the server.
+- [Interceptor](examples.md#interceptor)
+- [Interceptor AMQP](examples.md#interceptor-amqp)
+- [Interceptor Client](examples.md#interceptor-client)
+- [Interceptor MQTT](examples.md#interceptor-mqtt)

http://git-wip-us.apache.org/repos/asf/activemq-artemis/blob/2b5d8f3b/docs/user-manual/en/jms-bridge.md
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/jms-bridge.md b/docs/user-manual/en/jms-bridge.md
index f859e7b..327d37f 100644
--- a/docs/user-manual/en/jms-bridge.md
+++ b/docs/user-manual/en/jms-bridge.md
@@ -2,237 +2,222 @@
 
 Apache ActiveMQ Artemis includes a fully functional JMS message bridge.
 
-The function of the bridge is to consume messages from a source queue or
-topic, and send them to a target queue or topic, typically on a
-different server.
+The function of the bridge is to consume messages from a source queue or topic,
+and send them to a target queue or topic, typically on a different server.
 
-> *Notice:*
-> The JMS Bridge is not intended as a replacement for transformation and more expert systems such as Camel.
-> The JMS Bridge may be useful for fast transfers as this chapter covers, but keep in mind that more complex scenarios requiring transformations will require you to use a more advanced transformation system that will play on use cases that will go beyond Apache ActiveMQ Artemis.
-
-The source and target servers do not have to be in the same cluster
-which makes bridging suitable for reliably sending messages from one
-cluster to another, for instance across a WAN, and where the connection
-may be unreliable.
-
-A bridge can be deployed as a standalone application, with Apache ActiveMQ Artemis
-standalone server or inside a JBoss AS instance. The source and the
-target can be located in the same virtual machine or another one.
-
-The bridge can also be used to bridge messages from other non Apache ActiveMQ Artemis
-JMS servers, as long as they are JMS 1.1 compliant.
-
-> **Note**
+> **Note:**
 >
-> Do not confuse a JMS bridge with a core bridge. A JMS bridge can be
-> used to bridge any two JMS 1.1 compliant JMS providers and uses the
-> JMS API. A core bridge (described in [Core Bridges](core-bridges.md)) is used to bridge any two
-> Apache ActiveMQ Artemis instances and uses the core API. Always use a core bridge if
-> you can in preference to a JMS bridge. The core bridge will typically
-> provide better performance than a JMS bridge. Also the core bridge can
-> provide *once and only once* delivery guarantees without using XA.
-
-The bridge has built-in resilience to failure so if the source or target
-server connection is lost, e.g. due to network failure, the bridge will
-retry connecting to the source and/or target until they come back
-online. When it comes back online it will resume operation as normal.
-
-The bridge can be configured with an optional JMS selector, so it will
-only consume messages matching that JMS selector
-
-It can be configured to consume from a queue or a topic. When it
-consumes from a topic it can be configured to consume using a non
-durable or durable subscription
-
-Typically, the bridge is deployed by the JBoss Micro Container via a
-beans configuration file. This would typically be deployed inside the
-JBoss Application Server and the following example shows an example of a
-beans file that bridges 2 destinations which are actually on the same
-server.
-
-The JMS Bridge is a simple POJO so can be deployed with most frameworks,
-simply instantiate the `org.apache.activemq.artemis.api.jms.bridge.impl.JMSBridgeImpl`
+> The JMS Bridge is not intended as a replacement for transformation and more
+> expert systems such as Camel. The JMS Bridge may be useful for the fast
+> transfers this chapter covers, but keep in mind that more complex scenarios
+> requiring transformations will require a more advanced transformation
+> system, with use cases that go beyond Apache ActiveMQ Artemis.
+
+The source and target servers do not have to be in the same cluster which makes
+bridging suitable for reliably sending messages from one cluster to another,
+for instance across a WAN, and where the connection may be unreliable.
+
+A bridge can be deployed as a standalone application or as a web application
+managed by the embedded Jetty instance bootstrapped with Apache ActiveMQ
+Artemis. The source and the target can be located in the same virtual machine
+or another one.
+
+The bridge can also be used to bridge messages from other non Apache ActiveMQ
+Artemis JMS servers, as long as they are JMS 1.1 compliant.
+
+> **Note:**
+>
+> Do not confuse a JMS bridge with a core bridge. A JMS bridge can be used to
+> bridge any two JMS 1.1 compliant JMS providers and uses the JMS API. A [core
+> bridge](core-bridges.md) is used to bridge any two Apache ActiveMQ Artemis
+> instances and uses the core API. Always use a core bridge if you can in
+> preference to a JMS bridge. The core bridge will typically provide better
+> performance than a JMS bridge. Also the core bridge can provide *once and
+> only once* delivery guarantees without using XA.
+
+The bridge has built-in resilience to failure so if the source or target server
+connection is lost, e.g. due to network failure, the bridge will retry
+connecting to the source and/or target until they come back online. When they
+come back online the bridge will resume operation as normal.
+
+The bridge can be configured with an optional JMS selector, so it will only
+consume messages matching that JMS selector.
+
+It can be configured to consume from a queue or a topic. When it consumes from
+a topic it can be configured to consume using a non-durable or durable
+subscription.
+
+The JMS Bridge is a simple POJO so can be deployed with most frameworks, simply
+instantiate the `org.apache.activemq.artemis.api.jms.bridge.impl.JMSBridgeImpl`
 class and set the appropriate parameters.
 
 ## JMS Bridge Parameters
 
-The main bean deployed is the `JMSBridge` bean. The bean is configurable
-by the parameters passed to its constructor.
+The main POJO is the `JMSBridge`. It is configurable by the parameters
+passed to its constructor.
 
-> **Note**
->
-> To let a parameter be unspecified (for example, if the authentication
-> is anonymous or no message selector is provided), use `<null
->                         />` for the unspecified parameter value.
-
--   Source Connection Factory Factory
+- Source Connection Factory Factory
 
-    This injects the `SourceCFF` bean (also defined in the beans file).
-    This bean is used to create the *source* `ConnectionFactory`
+  This injects the `SourceCFF` bean (also defined in the beans file).  This
+  bean is used to create the *source* `ConnectionFactory`
 
--   Target Connection Factory Factory
+- Target Connection Factory Factory
 
-    This injects the `TargetCFF` bean (also defined in the beans file).
-    This bean is used to create the *target* `ConnectionFactory`
+  This injects the `TargetCFF` bean (also defined in the beans file).  This
+  bean is used to create the *target* `ConnectionFactory`
 
--   Source Destination Factory Factory
+- Source Destination Factory Factory
 
-    This injects the `SourceDestinationFactory` bean (also defined in
-    the beans file). This bean is used to create the *source*
-    `Destination`
+  This injects the `SourceDestinationFactory` bean (also defined in the beans
+  file). This bean is used to create the *source* `Destination`
 
--   Target Destination Factory Factory
+- Target Destination Factory Factory
 
-    This injects the `TargetDestinationFactory` bean (also defined in
-    the beans file). This bean is used to create the *target*
-    `Destination`
+  This injects the `TargetDestinationFactory` bean (also defined in the beans
+  file). This bean is used to create the *target* `Destination`
 
--   Source User Name
+- Source User Name
 
-    this parameter is the username for creating the *source* connection
+  This parameter is the username for creating the *source* connection.
 
--   Source Password
+- Source Password
 
-    this parameter is the parameter for creating the *source* connection
+  This parameter is the password for creating the *source* connection.
 
--   Target User Name
+- Target User Name
 
-    this parameter is the username for creating the *target* connection
+  This parameter is the username for creating the *target* connection.
 
--   Target Password
+- Target Password
 
-    this parameter is the password for creating the *target* connection
+  This parameter is the password for creating the *target* connection.
 
--   Selector
+- Selector
 
-    This represents a JMS selector expression used for consuming
-    messages from the source destination. Only messages that match the
-    selector expression will be bridged from the source to the target
-    destination
+  This represents a JMS selector expression used for consuming
+  messages from the source destination. Only messages that match the
+  selector expression will be bridged from the source to the target
+  destination.
 
-    The selector expression must follow the [JMS selector
-    syntax](https://docs.oracle.com/javaee/7/api/javax/jms/Message.html)
+  The selector expression must follow the [JMS selector
+  syntax](https://docs.oracle.com/javaee/7/api/javax/jms/Message.html).
 
--   Failure Retry Interval
+- Failure Retry Interval
 
-    This represents the amount of time in ms to wait between trying to
-    recreate connections to the source or target servers when the bridge
-    has detected they have failed
+  This represents the amount of time in ms to wait between trying to recreate
+  connections to the source or target servers when the bridge has detected they
+  have failed.
 
--   Max Retries
+- Max Retries
 
-    This represents the number of times to attempt to recreate
-    connections to the source or target servers when the bridge has
-    detected they have failed. The bridge will give up after trying this
-    number of times. `-1` represents 'try forever'
+  This represents the number of times to attempt to recreate connections to the
+  source or target servers when the bridge has detected they have failed. The
+  bridge will give up after trying this number of times. `-1` represents 'try
+  forever'.
 
--   Quality Of Service
+- Quality Of Service
 
-    This parameter represents the desired quality of service mode
+  This parameter represents the desired quality of service mode.
 
-    Possible values are:
+  Possible values are:
 
-    -   `AT_MOST_ONCE`
+  - `AT_MOST_ONCE`
 
-    -   `DUPLICATES_OK`
+  - `DUPLICATES_OK`
 
-    -   `ONCE_AND_ONLY_ONCE`
+  - `ONCE_AND_ONLY_ONCE`
 
-    See Quality Of Service section for a explanation of these modes.
+  See the Quality Of Service section below for an explanation of these modes.
 
--   Max Batch Size
+- Max Batch Size
 
-    This represents the maximum number of messages to consume from the
-    source destination before sending them in a batch to the target
-    destination. Its value must `>= 1`
+  This represents the maximum number of messages to consume from the source
+  destination before sending them in a batch to the target destination. Its value
+  must be `>= 1`.
 
--   Max Batch Time
+- Max Batch Time
 
-    This represents the maximum number of milliseconds to wait before
-    sending a batch to target, even if the number of messages consumed
-    has not reached `MaxBatchSize`. Its value must be `-1` to represent
-    'wait forever', or `>= 1` to specify an actual time
+  This represents the maximum number of milliseconds to wait before sending a
+  batch to target, even if the number of messages consumed has not reached
+  `MaxBatchSize`. Its value must be `-1` to represent 'wait forever', or `>= 1`
+  to specify an actual time.
 
--   Subscription Name
+- Subscription Name
 
-    If the source destination represents a topic, and you want to
-    consume from the topic using a durable subscription then this
-    parameter represents the durable subscription name
+  If the source destination represents a topic, and you want to consume from
+  the topic using a durable subscription, then this parameter represents the
+  durable subscription name.
 
--   Client ID
+- Client ID
 
-    If the source destination represents a topic, and you want to
-    consume from the topic using a durable subscription then this
-    attribute represents the the JMS client ID to use when
-    creating/looking up the durable subscription
+  If the source destination represents a topic, and you want to consume from
+  the topic using a durable subscription, then this attribute represents the
+  JMS client ID to use when creating/looking up the durable subscription.
 
--   Add MessageID In Header
+- Add MessageID In Header
 
-    If `true`, then the original message's message ID will be appended
-    in the message sent to the destination in the header
-    `ACTIVEMQ_BRIDGE_MSG_ID_LIST`. If the message is bridged more than
-    once, each message ID will be appended. This enables a distributed
-    request-response pattern to be used
+  If `true`, then the original message's message ID will be appended in the
+  message sent to the destination in the header `ACTIVEMQ_BRIDGE_MSG_ID_LIST`. If
+  the message is bridged more than once, each message ID will be appended. This
+  enables a distributed request-response pattern to be used.
 
-    > **Note**
-    >
-    > when you receive the message you can send back a response using
-    > the correlation id of the first message id, so when the original
-    > sender gets it back it will be able to correlate it.
+  > **Note:**
+  >
+  > When you receive the message you can send back a response using the
+  > correlation id of the first message id, so when the original sender gets it
+  > back it will be able to correlate it.
 
--   MBean Server
+- MBean Server
 
-    To manage the JMS Bridge using JMX, set the MBeanServer where the
-    JMS Bridge MBean must be registered (e.g. the JVM Platform
-    MBeanServer or JBoss AS MBeanServer)
+  To manage the JMS Bridge using JMX, set the MBeanServer where the JMS Bridge
+  MBean must be registered (e.g. the JVM Platform MBeanServer).
 
--   ObjectName
+- ObjectName
 
-    If you set the MBeanServer, you also need to set the ObjectName used
-    to register the JMS Bridge MBean (must be unique)
+  If you set the MBeanServer, you also need to set the ObjectName used to
+  register the JMS Bridge MBean (must be unique).
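Two of the parameters above interact: the bridge forwards a batch as soon as *either* `Max Batch Size` messages have accumulated *or* `Max Batch Time` milliseconds have passed since the first message of the batch. The sketch below illustrates only that trigger rule; the `BatchTrigger` class and its method names are invented for illustration and are not part of the bridge's actual implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of the MaxBatchSize / MaxBatchTime interaction: a batch is
// forwarded when either the size threshold or the time threshold is reached.
public class BatchTrigger {
    private final int maxBatchSize;
    private final long maxBatchTimeMillis; // -1 means "wait forever"
    private final List<String> batch = new ArrayList<>();
    private long firstMessageAt = -1;

    public BatchTrigger(int maxBatchSize, long maxBatchTimeMillis) {
        if (maxBatchSize < 1) throw new IllegalArgumentException("MaxBatchSize must be >= 1");
        this.maxBatchSize = maxBatchSize;
        this.maxBatchTimeMillis = maxBatchTimeMillis;
    }

    /** Adds a consumed message; returns the batch to forward, or null if not ready. */
    public List<String> onMessage(String msg, long nowMillis) {
        if (batch.isEmpty()) firstMessageAt = nowMillis;
        batch.add(msg);
        return ready(nowMillis) ? drain() : null;
    }

    /** Called periodically; returns the batch if the time limit has expired. */
    public List<String> onTick(long nowMillis) {
        return !batch.isEmpty() && ready(nowMillis) ? drain() : null;
    }

    private boolean ready(long nowMillis) {
        if (batch.size() >= maxBatchSize) return true; // size trigger
        // time trigger, unless MaxBatchTime is -1 ("wait forever")
        return maxBatchTimeMillis != -1 && nowMillis - firstMessageAt >= maxBatchTimeMillis;
    }

    private List<String> drain() {
        List<String> out = new ArrayList<>(batch);
        batch.clear();
        return out;
    }
}
```

With `new BatchTrigger(3, 100)`, the third `onMessage` call returns the batch regardless of timing, while with a larger size limit a later `onTick` call returns it once 100 ms have elapsed.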
 
 The "transactionManager" property points to a JTA transaction manager
 implementation and should be set if you need to use the 'ONCE_AND_ONLY_ONCE'
-Quality of Service. Apache ActiveMQ Artemis doesn't ship with such an implementation, but
-if you are running within an Application Server you can inject the Transaction
-Manager that is shipped.
+Quality of Service. Apache ActiveMQ Artemis doesn't ship with such an
+implementation, but if you are running within an Application Server you can
+inject the Transaction Manager that is shipped.
 
 ## Source and Target Connection Factories
 
-The source and target connection factory factories are used to create
-the connection factory used to create the connection for the source or
-target server.
+The source and target connection factory factories are used to create the
+connection factory used to create the connection for the source or target
+server.
 
-The configuration example above uses the default implementation provided
-by Apache ActiveMQ Artemis that looks up the connection factory using JNDI. For other
-Application Servers or JMS providers a new implementation may have to be
+The configuration example above uses the default implementation provided by
+Apache ActiveMQ Artemis that looks up the connection factory using JNDI. For
+other Application Servers or JMS providers a new implementation may have to be
 provided. This can easily be done by implementing the interface
 `org.apache.activemq.artemis.jms.bridge.ConnectionFactoryFactory`.
 
 ## Source and Target Destination Factories
 
-Again, similarly, these are used to create or lookup up the
-destinations.
+Again, similarly, these are used to create or look up the destinations.
 
-In the configuration example above, we have used the default provided by
-Apache ActiveMQ Artemis that looks up the destination using JNDI.
+In the configuration example above, we have used the default provided by Apache
+ActiveMQ Artemis that looks up the destination using JNDI.
 
 A new implementation can be provided by implementing
 `org.apache.activemq.artemis.jms.bridge.DestinationFactory` interface.
 
 ## Quality Of Service
 
-The quality of service modes used by the bridge are described here in
-more detail.
+The quality of service modes used by the bridge are described here in more
+detail.
 
 ### AT_MOST_ONCE
 
-With this QoS mode messages will reach the destination from the source
-at most once. The messages are consumed from the source and acknowledged
-before sending to the destination. Therefore there is a possibility that
-if failure occurs between removing them from the source and them
-arriving at the destination they could be lost. Hence delivery will
-occur at most once.
+With this QoS mode messages will reach the destination from the source at most
+once. The messages are consumed from the source and acknowledged before sending
+to the destination. Therefore there is a possibility that if failure occurs
+between removing them from the source and them arriving at the destination they
+could be lost. Hence delivery will occur at most once.
 
 This mode is available for both durable and non-durable messages.
 
@@ -240,71 +225,51 @@ This mode is available for both durable and non-durable messages.
 
 With this QoS mode, the messages are consumed from the source and then
 acknowledged after they have been successfully sent to the destination.
-Therefore there is a possibility that if failure occurs after sending to
-the destination but before acknowledging them, they could be sent again
-when the system recovers. I.e. the destination might receive duplicates
-after a failure.
+Therefore there is a possibility that if failure occurs after sending to the
+destination but before acknowledging them, they could be sent again when the
+system recovers. I.e. the destination might receive duplicates after a failure.
 
 This mode is available for both durable and non-durable messages.
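The difference between `AT_MOST_ONCE` and `DUPLICATES_OK` comes down to when the source acknowledgement happens relative to the send. The following toy model (plain in-memory collections standing in for the JMS destinations, with an invented `QosDemo` class; this is not the bridge's real code) shows why one ordering can lose a message on failure while the other can duplicate it:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Toy model contrasting AT_MOST_ONCE and DUPLICATES_OK acknowledgement order.
public class QosDemo {
    // AT_MOST_ONCE: acknowledge (remove from source) before sending;
    // a crash between the two steps loses the message.
    public static void atMostOnce(Deque<String> source, List<String> target, boolean crashBetween) {
        String msg = source.poll();   // consume + ack first
        if (crashBetween) return;     // simulated failure: message is gone
        target.add(msg);              // send to target
    }

    // DUPLICATES_OK: send first, acknowledge after; a crash between the two
    // steps means redelivery, so the target may see the message twice.
    public static void duplicatesOk(Deque<String> source, List<String> target, boolean crashBetween) {
        String msg = source.peek();   // consume without ack
        target.add(msg);              // send to target
        if (crashBetween) return;     // simulated failure: no ack, will be redelivered
        source.poll();                // ack only after a successful send
    }

    public static void main(String[] args) {
        Deque<String> src1 = new ArrayDeque<>(List.of("m1"));
        List<String> tgt1 = new ArrayList<>();
        atMostOnce(src1, tgt1, true); // crash: message lost from both sides
        System.out.println("AT_MOST_ONCE after crash: source=" + src1 + " target=" + tgt1);

        Deque<String> src2 = new ArrayDeque<>(List.of("m1"));
        List<String> tgt2 = new ArrayList<>();
        duplicatesOk(src2, tgt2, true);  // crash before ack
        duplicatesOk(src2, tgt2, false); // redelivery succeeds
        System.out.println("DUPLICATES_OK after crash: target=" + tgt2); // duplicate at target
    }
}
```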
 
 ### ONCE_AND_ONLY_ONCE
 
-This QoS mode ensures messages will reach the destination from the
-source once and only once. (Sometimes this mode is known as "exactly
-once"). If both the source and the destination are on the same Apache ActiveMQ Artemis
-server instance then this can be achieved by sending and acknowledging
-the messages in the same local transaction. If the source and
-destination are on different servers this is achieved by enlisting the
-sending and consuming sessions in a JTA transaction. The JTA transaction
-is controlled by a JTA Transaction Manager which will need to be set
-via the settransactionManager method on the Bridge.
+This QoS mode ensures messages will reach the destination from the source once
+and only once. (Sometimes this mode is known as "exactly once"). If both the
+source and the destination are on the same Apache ActiveMQ Artemis server
+instance then this can be achieved by sending and acknowledging the messages in
+the same local transaction. If the source and destination are on different
+servers this is achieved by enlisting the sending and consuming sessions in a
+JTA transaction. The JTA transaction is controlled by a JTA Transaction Manager
+which will need to be set via the `setTransactionManager` method on the Bridge.
 
 This mode is only available for durable messages.
 
-> **Note**
+> **Note:**
 >
-> For a specific application it may possible to provide once and only
-> once semantics without using the ONCE\_AND\_ONLY\_ONCE QoS level. This
-> can be done by using the DUPLICATES\_OK mode and then checking for
-> duplicates at the destination and discarding them. Some JMS servers
-> provide automatic duplicate message detection functionality, or this
-> may be possible to implement on the application level by maintaining a
-> cache of received message ids on disk and comparing received messages
-> to them. The cache would only be valid for a certain period of time so
-> this approach is not as watertight as using ONCE\_AND\_ONLY\_ONCE but
-> may be a good choice depending on your specific application.
+> For a specific application it may be possible to provide once and only once
+> semantics without using the ONCE\_AND\_ONLY\_ONCE QoS level. This can be done
+> by using the DUPLICATES\_OK mode and then checking for duplicates at the
+> destination and discarding them. Some JMS servers provide automatic duplicate
+> message detection functionality, or this may be possible to implement on the
+> application level by maintaining a cache of received message ids on disk and
+> comparing received messages to them. The cache would only be valid for a
+> certain period of time so this approach is not as watertight as using
+> ONCE\_AND\_ONLY\_ONCE but may be a good choice depending on your specific
+> application.
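The cache-of-received-ids approach described in the note above can be sketched as follows. This is a minimal in-memory version under stated assumptions (a real one would persist the cache to disk, as the note says); the `DuplicateFilter` class and its API are invented for illustration.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal application-level duplicate detection for DUPLICATES_OK mode:
// remember recently seen message ids for a limited time and drop redeliveries.
public class DuplicateFilter {
    private final long validityMillis;
    // insertion-ordered map: oldest entries come first, so expiry is a front scan
    private final LinkedHashMap<String, Long> seen = new LinkedHashMap<>();

    public DuplicateFilter(long validityMillis) {
        this.validityMillis = validityMillis;
    }

    /** Returns true if the message id is new (process it), false if it is a duplicate. */
    public synchronized boolean firstTimeSeen(String messageId, long nowMillis) {
        // evict ids older than the validity window
        Iterator<Map.Entry<String, Long>> it = seen.entrySet().iterator();
        while (it.hasNext() && nowMillis - it.next().getValue() > validityMillis) {
            it.remove();
        }
        // record the id; a second sighting inside the window returns false
        return seen.putIfAbsent(messageId, nowMillis) == null;
    }
}
```

As the note warns, once an id falls out of the validity window a redelivery would be accepted again, which is why this is weaker than `ONCE_AND_ONLY_ONCE`.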
 
 ### Time outs and the JMS bridge
 
-There is a possibility that the target or source server will not be
-available at some point in time. If this occurs then the bridge will try
-`Max Retries` to reconnect every `Failure Retry Interval` milliseconds
-as specified in the JMS Bridge definition.
-
-However since a third party JNDI is used, in this case the JBoss naming
-server, it is possible for the JNDI lookup to hang if the network were
-to disappear during the JNDI lookup. To stop this from occurring the
-JNDI definition can be configured to time out if this occurs. To do this
-set the `jnp.timeout` and the `jnp.sotimeout` on the Initial Context
-definition. The first sets the connection timeout for the initial
-connection and the second the read timeout for the socket.
-
-> **Note**
->
-> Once the initial JNDI connection has succeeded all calls are made
-> using RMI. If you want to control the timeouts for the RMI connections
-> then this can be done via system properties. JBoss uses Sun's RMI and
-> the properties can be found
-> [here](https://docs.oracle.com/javase/8/docs/technotes/guides/rmi/sunrmiproperties.html).
-> The default connection timeout is 10 seconds and the default read
-> timeout is 18 seconds.
+There is a possibility that the target or source server will not be available
+at some point in time. If this occurs then the bridge will try `Max Retries` to
+reconnect every `Failure Retry Interval` milliseconds as specified in the JMS
+Bridge definition.
 
-If you implement your own factories for looking up JMS resources then
-you will have to bear in mind timeout issues.
+If you implement your own factories for looking up JMS resources then you will
+have to bear in mind timeout issues.
 
 ### Examples
 
-Please see [the examples chapter](examples.md) which shows how to configure and use a JMS Bridge with
-JBoss AS to send messages to the source destination and consume them
-from the target destination and how to configure and use a JMS Bridge between
-two standalone Apache ActiveMQ Artemis servers.
+Please see [JMS Bridge Example](examples.md#jms-bridge) which shows how to
+programmatically instantiate and configure a JMS Bridge to send messages to the
+source destination and consume them from the target destination between two
+standalone Apache ActiveMQ Artemis brokers.

http://git-wip-us.apache.org/repos/asf/activemq-artemis/blob/2b5d8f3b/docs/user-manual/en/jms-core-mapping.md
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/jms-core-mapping.md b/docs/user-manual/en/jms-core-mapping.md
index d7fe3fc..21c3d0c 100644
--- a/docs/user-manual/en/jms-core-mapping.md
+++ b/docs/user-manual/en/jms-core-mapping.md
@@ -1,15 +1,15 @@
 # Mapping JMS Concepts to the Core API
 
-This chapter describes how JMS destinations are mapped to Apache ActiveMQ Artemis
-addresses.
+This chapter describes how JMS destinations are mapped to Apache ActiveMQ
+Artemis addresses.
 
-Apache ActiveMQ Artemis core is JMS-agnostic. It does not have any concept of a JMS
-topic. A JMS topic is implemented in core as an address with name=(the topic name) 
-and with a MULTICAST routing type with zero or more queues bound to it. Each queue bound to that address
-represents a topic subscription. 
+Apache ActiveMQ Artemis core is JMS-agnostic. It does not have any concept of a
+JMS topic. A JMS topic is implemented in core as an address (with the same name
+as the topic) with a MULTICAST routing type and zero or more queues bound to
+it. Each queue bound to that address represents a topic subscription.
 
-Likewise, a JMS queue is implemented as an address with name=(the JMS queue name) with an ANYCAST routing type assocatied
-with it.
+Likewise, a JMS queue is implemented as an address (with the same name as the
+JMS queue) with an ANYCAST routing type associated with it.
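For illustration, that mapping corresponds to address definitions along these lines in `broker.xml` (the address and queue names here are examples, not defaults):

```xml
<addresses>
   <address name="my.topic">
      <!-- JMS topic: each queue bound to this multicast address is a subscription -->
      <multicast/>
   </address>
   <address name="my.queue">
      <anycast>
         <!-- JMS queue: a single anycast queue with the same name as its address -->
         <queue name="my.queue"/>
      </anycast>
   </address>
</addresses>
```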
 
-Note.  That whilst it is possible to configure a JMS topic and queue with the same name, it is not a recommended
-configuration for use with cross protocol. 
+**Note:** While it is possible to configure a JMS topic and queue with the same
+name, it is not a recommended configuration for cross-protocol use.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/activemq-artemis/blob/2b5d8f3b/docs/user-manual/en/large-messages.md
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/large-messages.md b/docs/user-manual/en/large-messages.md
index 0ecb866..26188af 100644
--- a/docs/user-manual/en/large-messages.md
+++ b/docs/user-manual/en/large-messages.md
@@ -1,154 +1,128 @@
 # Large Messages
 
-Apache ActiveMQ Artemis supports sending and receiving of huge messages, even when the
-client and server are running with limited memory. The only realistic
-limit to the size of a message that can be sent or consumed is the
-amount of disk space you have available. We have tested sending and
-consuming messages up to 8 GiB in size with a client and server running
-in just 50MiB of RAM!
-
-To send a large message, the user can set an `InputStream` on a message
-body, and when that message is sent, Apache ActiveMQ Artemis will read the
-`InputStream`. A `FileInputStream` could be used for example to send a
-huge message from a huge file on disk.
-
-As the `InputStream` is read the data is sent to the server as a stream
-of fragments. The server persists these fragments to disk as it receives
-them and when the time comes to deliver them to a consumer they are read
-back of the disk, also in fragments and sent down the wire. When the
-consumer receives a large message it initially receives just the message
-with an empty body, it can then set an `OutputStream` on the message to
-stream the huge message body to a file on disk or elsewhere. At no time
-is the entire message body stored fully in memory, either on the client
-or the server.
+Apache ActiveMQ Artemis supports sending and receiving of huge messages, even
+when the client and server are running with limited memory. The only realistic
+limit to the size of a message that can be sent or consumed is the amount of
+disk space you have available. We have tested sending and consuming messages up
+to 8 GiB in size with a client and server running in just 50MiB of RAM!
+
+To send a large message, the user can set an `InputStream` on a message body,
+and when that message is sent, Apache ActiveMQ Artemis will read the
+`InputStream`. A `FileInputStream` could be used for example to send a huge
+message from a huge file on disk.
+
+As the `InputStream` is read the data is sent to the server as a stream of
+fragments. The server persists these fragments to disk as it receives them and
+when the time comes to deliver them to a consumer they are read back off the
+disk, also in fragments, and sent down the wire. When the consumer receives a
+large message it initially receives just the message with an empty body; it can
+then set an `OutputStream` on the message to stream the huge message body to a
+file on disk or elsewhere. At no time is the entire message body stored fully
+in memory, either on the client or the server.
 
 ## Configuring the server
 
-Large messages are stored on a disk directory on the server side, as
-configured on the main configuration file.
+Large messages are stored in a disk directory on the server side, as configured
+in the main configuration file.
 
-The configuration property `large-messages-directory` specifies where
-large messages are stored.  For JDBC persistence the `large-message-table`
-should be configured.
+The configuration property `large-messages-directory` specifies where large
+messages are stored.  For JDBC persistence the `large-message-table` should be
+configured.
 
 ```xml
 <configuration xmlns="urn:activemq"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="urn:activemq /schema/artemis-server.xsd">
-   ...
-   <large-messages-directory>/data/large-messages</large-messages-directory>
-   ...
-</configuration
+   <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:activemq:core">
+      ...
+      <large-messages-directory>/data/large-messages</large-messages-directory>
+      ...
+   </core>
+</configuration>
 ```
 
-By default the large message directory is `data/largemessages` and `large-message-table` is
-configured as "LARGE_MESSAGE_TABLE".
+By default the large message directory is `data/largemessages` and
+`large-message-table` is configured as "LARGE_MESSAGE_TABLE".
 
-For the best performance we recommend using file store with large messages directory stored
-on a different physical volume to the message journal or paging directory.
+For the best performance we recommend using the file store with the large
+messages directory stored on a different physical volume than the message
+journal or paging directory.
 
 ## Configuring the Client
 
-Any message larger than a certain size is considered a large message.
-Large messages will be split up and sent in fragments. This is
-determined by the URL parameter `minLargeMessageSize`
+Any message larger than a certain size is considered a large message.  Large
+messages will be split up and sent in fragments. This is determined by the URL
+parameter `minLargeMessageSize`.
 
-> **Note**
+> **Note:**
 >
-> Apache ActiveMQ Artemis messages are encoded using 2 bytes per character so if the
-> message data is filled with ASCII characters (which are 1 byte) the
-> size of the resulting Apache ActiveMQ Artemis message would roughly double. This is
-> important when calculating the size of a "large" message as it may
-> appear to be less than the `minLargeMessageSize` before it is sent,
-> but it then turns into a "large" message once it is encoded.
+> Apache ActiveMQ Artemis messages are encoded using 2 bytes per character so
+> if the message data is filled with ASCII characters (which are 1 byte) the
+> size of the resulting Apache ActiveMQ Artemis message would roughly double.
+> This is important when calculating the size of a "large" message as it may
+> appear to be less than the `minLargeMessageSize` before it is sent, but it
+> then turns into a "large" message once it is encoded.
 
 The default value is 100KiB.
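For example, the parameter can be set on the client's connection URL like the following (host, port, and the chosen threshold are illustrative):

```
tcp://localhost:61616?minLargeMessageSize=250000
```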
 
-[Configuring the transport directly from the client side](configuring-transports.md)
-will provide more information on how to instantiate the core session factory
-or JMS connection factory.
+[Configuring the transport directly from the client
+side](configuring-transports.md#configuring-the-transport-directly-from-the-client)
+will provide more information on how to instantiate the core session factory or
+JMS connection factory.
 
 ## Compressed Large Messages
 
 You can choose to send large messages in compressed form using the
 `compressLargeMessages` URL parameter.
 
-If you specify the boolean URL parameter `compressLargeMessages` as true,
-The system will use the ZIP algorithm to compress the message body as
-the message is transferred to the server's side. Notice that there's no
-special treatment at the server's side, all the compressing and uncompressing
-is done at the client.
+If you specify the boolean URL parameter `compressLargeMessages` as `true`, the
+system will use the ZIP algorithm to compress the message body as the message
+is transferred to the server. Notice that there's no special treatment on the
+server side; all the compressing and uncompressing is done at the client.
 
-If the compressed size of a large message is below `minLargeMessageSize`,
-it is sent to server as regular messages. This means that the message won't
-be written into the server's large-message data directory, thus reducing the
-disk I/O.
+If the compressed size of a large message is below `minLargeMessageSize`, it is
+sent to the server as a regular message. This means that the message won't be
+written into the server's large-message data directory, thus reducing the disk
+I/O.
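Assuming the same connection URL scheme as above, compression would be enabled like this (host and port are illustrative):

```
tcp://localhost:61616?compressLargeMessages=true
```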
 
 ## Streaming large messages
 
-Apache ActiveMQ Artemis supports setting the body of messages using input and output
-streams (`java.lang.io`)
+Apache ActiveMQ Artemis supports setting the body of messages using input and
+output streams (`java.io`).
 
-These streams are then used directly for sending (input streams) and
-receiving (output streams) messages.
+These streams are then used directly for sending (input streams) and receiving
+(output streams) messages.
 
-When receiving messages there are 2 ways to deal with the output stream;
-you may choose to block while the output stream is recovered using the
-method `ClientMessage.saveOutputStream` or alternatively using the
-method `ClientMessage.setOutputstream` which will asynchronously write
-the message to the stream. If you choose the latter the consumer must be
-kept alive until the message has been fully received.
+When receiving messages there are 2 ways to deal with the output stream; you
+may choose to block while the output stream is recovered using the method
+`ClientMessage.saveOutputStream` or alternatively using the method
+`ClientMessage.setOutputStream` which will asynchronously write the message to
+the stream. If you choose the latter the consumer must be kept alive until the
+message has been fully received.
 
-You can use any kind of stream you like. The most common use case is to
-send files stored in your disk, but you could also send things like JDBC
-Blobs, `SocketInputStream`, things you recovered from `HTTPRequests`
-etc. Anything as long as it implements `java.io.InputStream` for sending
-messages or `java.io.OutputStream` for receiving them.
+You can use any kind of stream you like. The most common use case is to send
+files stored on your disk, but you could also send things like JDBC Blobs,
+`SocketInputStream`, things you recovered from `HTTPRequests`, etc. Anything as
+long as it implements `java.io.InputStream` for sending messages or
+`java.io.OutputStream` for receiving them.
 
 ### Streaming over Core API
 
-The following table shows a list of methods available at `ClientMessage`
-which are also available through JMS by the use of object properties.
-
-<table summary="org.hornetq.api.core.client.ClientMessage API" border="1">
-    <colgroup>
-        <col/>
-        <col/>
-        <col/>
-        <col/>
-    </colgroup>
-    <thead>
-    <tr>
-        <th>Name</th>
-        <th>Description</th>
-        <th>JMS Equivalent</th>
-    </tr>
-    </thead>
-    <tbody>
-    <tr>
-        <td>setBodyInputStream(InputStream)</td>
-        <td>Set the InputStream used to read a message body when sending it.</td>
-        <td>JMS_AMQ_InputStream</td>
-    </tr>
-    <tr>
-        <td>setOutputStream(OutputStream)</td>
-        <td>Set the OutputStream that will receive the body of a message. This method does not block.</td>
-        <td>JMS_AMQ_OutputStream</td>
-    </tr>
-    <tr>
-        <td>saveOutputStream(OutputStream)</td>
-        <td>Save the body of the message to the `OutputStream`. It will block until the entire content is transferred to the `OutputStream`.</td>
-        <td>JMS_AMQ_SaveStream</td>
-    </tr>
-    </tbody>
-</table>
+The following table shows a list of methods available at `ClientMessage` which
+are also available through JMS by the use of object properties.
+
+Name | Description | JMS Equivalent
+---|---|---
+setBodyInputStream(InputStream)|Set the InputStream used to read a message body when sending it.|JMS_AMQ_InputStream
+setOutputStream(OutputStream)|Set the OutputStream that will receive the body of a message. This method does not block.|JMS_AMQ_OutputStream
+saveOutputStream(OutputStream)|Save the body of the message to the `OutputStream`. It will block until the entire content is transferred to the `OutputStream`.|JMS_AMQ_SaveStream
 
 To set the output stream when receiving a core message:
 
 ``` java
 ClientMessage msg = consumer.receive(...);
 
-
 // This will block here until the stream was transferred
 msg.saveOutputStream(someOutputStream);
 
@@ -165,16 +139,15 @@ ClientMessage msg = session.createMessage();
 msg.setInputStream(dataInputStream);
 ```
 
-Notice also that for messages with more than 2GiB the getBodySize() will
-return invalid values since this is an integer (which is also exposed to
-the JMS API). On those cases you can use the message property
-_AMQ_LARGE_SIZE.
+Note also that for messages larger than 2GiB, `getBodySize()` will return
+invalid values since its return type is an integer (a limitation which is also
+exposed to the JMS API). In those cases you can use the message property
+`_AMQ_LARGE_SIZE` instead.
 
 ### Streaming over JMS
 
-When using JMS, Apache ActiveMQ Artemis maps the streaming methods on the core API (see
-ClientMessage API table above) by setting object properties . You can use the method
-`Message.setObjectProperty` to set the input and output streams.
+When using JMS, Apache ActiveMQ Artemis maps the streaming methods on the core
+API (see the ClientMessage API table above) by setting object properties. You
+can use the method `Message.setObjectProperty` to set the input and output
+streams.
 
 The `InputStream` can be defined through the JMS Object Property
 JMS_AMQ_InputStream on messages being sent:
@@ -215,16 +188,16 @@ using the property JMS_AMQ_OutputStream.
 messageReceived.setObjectProperty("JMS_AMQ_OutputStream", bufferedOutput);
 ```
 
-> **Note**
+> **Note:**
 >
 > When using JMS, Streaming large messages are only supported on
 > `StreamMessage` and `BytesMessage`.
 
 ### Streaming Alternative
 
-If you choose not to use the `InputStream` or `OutputStream` capability
-of Apache ActiveMQ Artemis You could still access the data directly in an alternative
-fashion.
+If you choose not to use the `InputStream` or `OutputStream` capability of
+Apache ActiveMQ Artemis, you can still access the data directly in an
+alternative fashion.
 
 On the Core API just get the bytes of the body as you normally would.
 
@@ -241,6 +214,7 @@ for (int i = 0 ;  i < msg.getBodySize(); i += bytes.length)
 
 If using JMS API, `BytesMessage` and `StreamMessage` also supports it
 transparently.
+
 ``` java
 BytesMessage rm = (BytesMessage)cons.receive(10000);
 
@@ -255,5 +229,5 @@ for (int i = 0; i < rm.getBodyLength(); i += 1024)
 
 ## Large message example
 
-Please see the [examples](examples.md) chapter for an example which shows
-how large message is configured and used with JMS.
+Please see the [Large Message Example](examples.md#large-message) which shows
+how large messages are configured and used with JMS.

http://git-wip-us.apache.org/repos/asf/activemq-artemis/blob/2b5d8f3b/docs/user-manual/en/last-value-queues.md
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/last-value-queues.md b/docs/user-manual/en/last-value-queues.md
index f242ff5..ea7cfc9 100644
--- a/docs/user-manual/en/last-value-queues.md
+++ b/docs/user-manual/en/last-value-queues.md
@@ -14,26 +14,26 @@ Last-Value queues can be statically configured via the `last-value`
 boolean property:
 
 ```xml
-<configuration ...>
-  <core ...>
-    ...
-    <address name="foo.bar">
-      <multicast>
-        <queue name="orders1" last-value="true"/>
-      </multicast>
-    </address>
-  </core>
-</configuration>
+<address name="foo.bar">
+   <multicast>
+      <queue name="orders1" last-value="true"/>
+   </multicast>
+</address>
 ```
 
-Specified on creating a Queue by using the CORE api specifying the parameter `lastValue` to `true`. 
+A last-value queue can also be specified when creating a queue with the core
+API by setting the `lastValue` parameter to `true`.
 
-Or on auto-create when using the JMS Client by using address parameters when creating the destination used by the consumer.
+It can also be enabled on auto-create when using the JMS client by using
+address parameters when creating the destination used by the consumer.
 
-    Queue queue = session.createQueue("my.destination.name?last-value=true");
-    Topic topic = session.createTopic("my.destination.name?last-value=true");
+```java
+Queue queue = session.createQueue("my.destination.name?last-value=true");
+Topic topic = session.createTopic("my.destination.name?last-value=true");
+```
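The last-value semantics themselves can be pictured as a map keyed by the last-value property: a newly arrived message replaces any undelivered message carrying the same key. A conceptual plain-Java sketch (not broker code; names are assumptions for illustration):

``` java
import java.util.LinkedHashMap;
import java.util.Map;

public class LastValueSketch {

    // Undelivered messages, keyed by the last-value property
    // (_AMQ_LVQ_NAME in Artemis). put() on an existing key replaces the
    // value while keeping the original queue position.
    private final Map<String, String> pending = new LinkedHashMap<>();

    public void send(String lastValueKey, String body) {
        pending.put(lastValueKey, body);
    }

    public Map<String, String> pending() {
        return pending;
    }

    public static void main(String[] args) {
        LastValueSketch queue = new LastValueSketch();
        queue.send("stock.orcl", "price=10");
        queue.send("stock.orcl", "price=12"); // supersedes the first message
        System.out.println(queue.pending().get("stock.orcl"));
    }
}
```

Only the most recent value per key remains available for delivery, which is exactly the property the configuration above enables.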
 
-Also the default for all queues under and address can be defaulted using the address-setting configuration:
+The default for all queues under an address can also be set using the
+`address-setting` configuration:
 
 ```xml
 <address-setting match="lastValueQueue">
@@ -45,7 +45,8 @@ By default, `default-last-value-queue` is false.
 Address wildcards can be used to configure Last-Value queues 
 for a set of addresses (see [here](wildcard-syntax.md)).
 
-Note that address-setting `last-value-queue` config is deprecated, please use `default-last-value-queue` instead.
+Note that the `address-setting` `last-value-queue` configuration is deprecated;
+please use `default-last-value-queue` instead.
 
 ## Last-Value Property
 
@@ -77,5 +78,5 @@ System.out.format("Received message: %s\n", messageReceived.getText());
 
 ## Example
 
-See the [examples](examples.md) chapter for an example which shows how last value queues are configured
-and used with JMS.
+See the [last-value queue example](examples.md#last-value-queue) which shows 
+how last value queues are configured and used with JMS.

http://git-wip-us.apache.org/repos/asf/activemq-artemis/blob/2b5d8f3b/docs/user-manual/en/libaio.md
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/libaio.md b/docs/user-manual/en/libaio.md
index ad4aba5..51023ef 100644
--- a/docs/user-manual/en/libaio.md
+++ b/docs/user-manual/en/libaio.md
@@ -13,8 +13,8 @@ please see [Persistence](persistence.md).
 
 These are the native libraries distributed by Apache ActiveMQ Artemis:
 
--   libartemis-native-64.so - x86 64 bits
--   We distributed a 32-bit version until early 2017. While it's not available on the distribution any longer it should still be possible to compile to a 32-bit environment if needed.
+- libartemis-native-64.so - x86 64 bits
+- We distributed a 32-bit version until early 2017. While it is no longer included in the distribution, it should still be possible to compile it for a 32-bit environment if needed.
 
 When using libaio, Apache ActiveMQ Artemis will always try loading these files as long
 as they are on the [library path](using-server.md#library-path)
@@ -28,12 +28,15 @@ You can install libaio using the following steps as the root user:
 
 Using yum, (e.g. on Fedora or Red Hat Enterprise Linux):
 
-    yum install libaio
+```
+yum install libaio
+```
 
 Using aptitude, (e.g. on Ubuntu or Debian system):
 
-    apt-get install libaio
-
+```
+apt-get install libaio
+```
 
 ## Compiling the native libraries
 
@@ -44,26 +47,26 @@ those platforms with the release.
 
 ## Compilation dependencies
 
-> **Note**
+> **Note:**
 >
 > The native layer is only available on Linux. If you are
 > in a platform other than Linux the native compilation will not work
 
 These are the required linux packages to be installed for the compilation to work:
 
--   gcc - C Compiler
+- gcc - C Compiler
 
--   gcc-c++ or g++ - Extension to gcc with support for C++
+- gcc-c++ or g++ - Extension to gcc with support for C++
 
--   libtool - Tool for link editing native libraries
+- libtool - Tool for link editing native libraries
 
--   libaio - library to disk asynchronous IO kernel functions
+- libaio - library to disk asynchronous IO kernel functions
 
--   libaio-dev - Compilation support for libaio
+- libaio-dev - Compilation support for libaio
 
--   cmake
+- cmake
 
--   A full JDK installed with the environment variable JAVA\_HOME set to
+- A full JDK installed with the environment variable JAVA\_HOME set to
     its location
 
 To perform this installation on RHEL or Fedora, you can simply type this at a command line:
@@ -74,7 +77,7 @@ Or on Debian systems:
 
     sudo apt-get install libtool gcc-g++ gcc libaio libaio- cmake
 
-> **Note**
+> **Note:**
 >
 > You could find a slight variation of the package names depending on
 > the version and Linux distribution. (for example gcc-c++ on Fedora

http://git-wip-us.apache.org/repos/asf/activemq-artemis/blob/2b5d8f3b/docs/user-manual/en/logging.md
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/logging.md b/docs/user-manual/en/logging.md
index 9cad817..662218c 100644
--- a/docs/user-manual/en/logging.md
+++ b/docs/user-manual/en/logging.md
@@ -7,63 +7,34 @@ the console and to a file.
 
 There are 6 loggers available which are as follows:
 
-<table summary="Loggers" border="1">
-    <colgroup>
-        <col/>
-        <col/>
-    </colgroup>
-    <thead>
-    <tr>
-        <th>Logger</th>
-        <th>Logger Description</th>
-    </tr>
-    </thead>
-    <tbody>
-    <tr>
-        <td>org.jboss.logging</td>
-        <td>Logs any calls not handled by the Apache ActiveMQ Artemis loggers</td>
-    </tr>
-    <tr>
-        <td>org.apache.activemq.artemis.core.server</td>
-        <td>Logs the core server</td>
-    </tr>
-    <tr>
-        <td>org.apache.activemq.artemis.utils</td>
-        <td>Logs utility calls</td>
-    </tr>
-    <tr>
-        <td>org.apache.activemq.artemis.journal</td>
-        <td>Logs Journal calls</td>
-    </tr>
-    <tr>
-        <td>org.apache.activemq.artemis.jms</td>
-        <td>Logs JMS calls</td>
-    </tr>
-    <tr>
-        <td>org.apache.activemq.artemis.integration.bootstrap </td>
-        <td>Logs bootstrap calls</td>
-    </tr>
-    </tbody>
-</table>
-
-  : Global Configuration Properties
+Logger | Description
+---|---
+org.jboss.logging|Logs any calls not handled by the Apache ActiveMQ Artemis loggers
+org.apache.activemq.artemis.core.server|Logs the core server
+org.apache.activemq.artemis.utils|Logs utility calls
+org.apache.activemq.artemis.journal|Logs Journal calls
+org.apache.activemq.artemis.jms|Logs JMS calls
+org.apache.activemq.artemis.integration.bootstrap|Logs bootstrap calls
+
 
 ## Logging in a client or with an Embedded server
 
 Firstly, if you want to enable logging on the client side you need to
-include the JBoss logging jars in your library. If you are using maven
-add the following dependencies.
-
-    <dependency>
-       <groupId>org.jboss.logmanager</groupId>
-       <artifactId>jboss-logmanager</artifactId>
-       <version>1.5.3.Final</version>
-    </dependency>
-    <dependency>
-       <groupId>org.apache.activemq</groupId>
-       <artifactId>activemq-core-client</artifactId>
-       <version>1.0.0.Final</version>
-    </dependency>
+include the JBoss logging jars in your library. If you are using Maven, add
+the following dependencies:
+
+```xml
+<dependency>
+   <groupId>org.jboss.logmanager</groupId>
+   <artifactId>jboss-logmanager</artifactId>
+   <version>2.0.3.Final</version>
+</dependency>
+<dependency>
+   <groupId>org.apache.activemq</groupId>
+   <artifactId>activemq-core-client</artifactId>
+   <version>2.5.0</version>
+</dependency>
+```
 
 There are 2 properties you need to set when starting your java program,
 the first is to set the Log Manager to use the JBoss Log Manager, this
@@ -74,41 +45,43 @@ The second is to set the location of the logging.properties file to use,
 this is done via the `-Dlogging.configuration` for instance
 `-Dlogging.configuration=file:///home/user/projects/myProject/logging.properties`.
 
-> **Note**
+> **Note:**
 >
-> The value for this needs to be valid URL
+> The `logging.configuration` system property needs to be a valid URL
 
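Putting the two system properties together, launching a client might look like the following (the classpath entry and main class are placeholders, not part of the distribution):

```shell
java -Djava.util.logging.manager=org.jboss.logmanager.LogManager \
     -Dlogging.configuration=file:///home/user/projects/myProject/logging.properties \
     -cp myclient.jar com.example.MyClient
```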
 The following is a typical `logging.properties` for a client:
 
-    # Root logger option
-    loggers=org.jboss.logging,org.apache.activemq.artemis.core.server,org.apache.activemq.artemis.utils,org.apache.activemq.artemis.journal,org.apache.activemq.artemis.jms,org.apache.activemq.artemis.ra
-
-    # Root logger level
-    logger.level=INFO
-    # Apache ActiveMQ Artemis logger levels
-    logger.org.apache.activemq.artemis.core.server.level=INFO
-    logger.org.apache.activemq.artemis.utils.level=INFO
-    logger.org.apache.activemq.artemis.jms.level=DEBUG
-
-    # Root logger handlers
-    logger.handlers=FILE,CONSOLE
-
-    # Console handler configuration
-    handler.CONSOLE=org.jboss.logmanager.handlers.ConsoleHandler
-    handler.CONSOLE.properties=autoFlush
-    handler.CONSOLE.level=FINE
-    handler.CONSOLE.autoFlush=true
-    handler.CONSOLE.formatter=PATTERN
-
-    # File handler configuration
-    handler.FILE=org.jboss.logmanager.handlers.FileHandler
-    handler.FILE.level=FINE
-    handler.FILE.properties=autoFlush,fileName
-    handler.FILE.autoFlush=true
-    handler.FILE.fileName=activemq.log
-    handler.FILE.formatter=PATTERN
-
-    # Formatter pattern configuration
-    formatter.PATTERN=org.jboss.logmanager.formatters.PatternFormatter
-    formatter.PATTERN.properties=pattern
-    formatter.PATTERN.pattern=%d{HH:mm:ss,SSS} %-5p [%c] %s%E%n
+```
+# Root logger option
+loggers=org.jboss.logging,org.apache.activemq.artemis.core.server,org.apache.activemq.artemis.utils,org.apache.activemq.artemis.journal,org.apache.activemq.artemis.jms,org.apache.activemq.artemis.ra
+
+# Root logger level
+logger.level=INFO
+# Apache ActiveMQ Artemis logger levels
+logger.org.apache.activemq.artemis.core.server.level=INFO
+logger.org.apache.activemq.artemis.utils.level=INFO
+logger.org.apache.activemq.artemis.jms.level=DEBUG
+
+# Root logger handlers
+logger.handlers=FILE,CONSOLE
+
+# Console handler configuration
+handler.CONSOLE=org.jboss.logmanager.handlers.ConsoleHandler
+handler.CONSOLE.properties=autoFlush
+handler.CONSOLE.level=FINE
+handler.CONSOLE.autoFlush=true
+handler.CONSOLE.formatter=PATTERN
+
+# File handler configuration
+handler.FILE=org.jboss.logmanager.handlers.FileHandler
+handler.FILE.level=FINE
+handler.FILE.properties=autoFlush,fileName
+handler.FILE.autoFlush=true
+handler.FILE.fileName=activemq.log
+handler.FILE.formatter=PATTERN
+
+# Formatter pattern configuration
+formatter.PATTERN=org.jboss.logmanager.formatters.PatternFormatter
+formatter.PATTERN.properties=pattern
+formatter.PATTERN.pattern=%d{HH:mm:ss,SSS} %-5p [%c] %s%E%n
+```
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/activemq-artemis/blob/2b5d8f3b/docs/user-manual/en/management-console.md
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/management-console.md b/docs/user-manual/en/management-console.md
index f542a01..d0ec2d8 100644
--- a/docs/user-manual/en/management-console.md
+++ b/docs/user-manual/en/management-console.md
@@ -4,7 +4,6 @@ Apache ActiveMQ Artemis ships by default with a management console. It is powere
 
 Its purpose is to expose the [Management API](management.md "Management API") via a user friendly web ui. 
 
-
 ## Login
 
 To access the management console use a browser and go to the URL [http://localhost:8161/console]().
@@ -30,29 +29,27 @@ Once logged in you should be presented with a screen similar to.
 
 On the top right is small menu area you will see some icons.
 
--    `question mark` This will load the artemis documentation in the console main window
--    `person` will provide a drop down menu with
-- -  `about` this will load an about screen, here you will be able to see and validate versions
-- -  `log out` self descriptive.
+- `question mark` This will load the Artemis documentation in the console main window
+- `person` This will provide a drop down menu with:
+  - `about` This will load an about screen where you can see and validate versions
+  - `log out` Self descriptive.
 
 #### Navigation Tabs
 
 Running below the Navigation Menu you will see several default feature tabs.
  
--    `Artemis` This is the core tab for Apache ActiveMQ Artemis specific functionality. The rest of this document will focus on this.
+- `Artemis` This is the core tab for Apache ActiveMQ Artemis specific functionality. The rest of this document will focus on this.
 
--    `Connect` This allows you to connect to a remote broker from the same console.
+- `Connect` This allows you to connect to a remote broker from the same console.
 
--    `Dashboard` Here you can create and save graphs and tables of metrics available via JMX, a default jvm health dashboard is provided. 
+- `Dashboard` Here you can create and save graphs and tables of metrics available via JMX, a default jvm health dashboard is provided. 
 
--    `JMX` This exposes the raw Jolokia JMX so you can browse/access all the JMX endpoints exposed by the JVM.
+- `JMX` This exposes the raw Jolokia JMX so you can browse/access all the JMX endpoints exposed by the JVM.
 
--    `Threads` This allows you to monitor the thread usage and their state.
+- `Threads` This allows you to monitor the thread usage and their state.
 
 You can install further hawtio plugins if you wish to have further functionality.
 
-
-
 ## Artemis Tab
 
 Click `Artemis` in the top navigation bar to see the Artemis specific plugin. (The Artemis tab won't appear if there is no broker in this JVM).  The Artemis plugin works very much the same as the JMX plugin however with a focus on interacting with an Artemis broker.
@@ -71,8 +68,6 @@ This expands to show the current configured available `addresses`.
 
 Under the address you can expand to find the `queues` for the address exposing attributes
 
-
-
 ### Key Operations
 
 #### Creating a new Address

