activemq-commits mailing list archives

From clebertsuco...@apache.org
Subject [08/16] activemq-artemis git commit: ARTEMIS-1912 big doc refactor
Date Thu, 07 Jun 2018 15:26:50 GMT
http://git-wip-us.apache.org/repos/asf/activemq-artemis/blob/2b5d8f3b/docs/user-manual/en/message-grouping.md
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/message-grouping.md b/docs/user-manual/en/message-grouping.md
index 5fb4255..0508327 100644
--- a/docs/user-manual/en/message-grouping.md
+++ b/docs/user-manual/en/message-grouping.md
@@ -1,106 +1,106 @@
 # Message Grouping
 
-Message groups are sets of messages that have the following
-characteristics:
+Message groups are sets of messages that have the following characteristics:
 
--   Messages in a message group share the same group id, i.e. they have
-    same group identifier property (`JMSXGroupID` for JMS,
-    `_AMQ_GROUP_ID` for Apache ActiveMQ Artemis Core API).
+- Messages in a message group share the same group id, i.e. they have the same
+  group identifier property (`JMSXGroupID` for JMS, `_AMQ_GROUP_ID` for Apache
+  ActiveMQ Artemis Core API).
 
--   Messages in a message group are always consumed by the same
-    consumer, even if there are many consumers on a queue. They pin all
-    messages with the same group id to the same consumer. If that
-    consumer closes another consumer is chosen and will receive all
-    messages with the same group id.
+- Messages in a message group are always consumed by the same consumer, even if
+  there are many consumers on a queue. They pin all messages with the same
+  group id to the same consumer. If that consumer closes, another consumer is
+  chosen and will receive all messages with the same group id.
 
-Message groups are useful when you want all messages for a certain value
-of the property to be processed serially by the same consumer.
+Message groups are useful when you want all messages for a certain value of the
+property to be processed serially by the same consumer.
 
-An example might be orders for a certain stock. You may want orders for
-any particular stock to be processed serially by the same consumer. To
-do this you can create a pool of consumers (perhaps one for each stock,
-but less will work too), then set the stock name as the value of the
-_AMQ_GROUP_ID property.
+An example might be orders for a certain stock. You may want orders for any
+particular stock to be processed serially by the same consumer. To do this you
+can create a pool of consumers (perhaps one for each stock, but fewer will work
+too), then set the stock name as the value of the `_AMQ_GROUP_ID` property.
 
 This will ensure that all messages for a particular stock will always be
 processed by the same consumer.
 
-> **Note**
+> **Note:**
 >
-> Grouped messages can impact the concurrent processing of non-grouped
-> messages due to the underlying FIFO semantics of a queue. For example,
-> if there is a chunk of 100 grouped messages at the head of a queue
-> followed by 1,000 non-grouped messages then all the grouped messages
-> will need to be sent to the appropriate client (which is consuming
-> those grouped messages serially) before any of the non-grouped
-> messages can be consumed. The functional impact in this scenario is a
-> temporary suspension of concurrent message processing while all the
-> grouped messages are processed. This can be a performance bottleneck
-> so keep it in mind when determining the size of your message groups,
-> and consider whether or not you should isolate your grouped messages
+> Grouped messages can impact the concurrent processing of non-grouped messages
+> due to the underlying FIFO semantics of a queue. For example, if there is a
+> chunk of 100 grouped messages at the head of a queue followed by 1,000
+> non-grouped messages then all the grouped messages will need to be sent to
+> the appropriate client (which is consuming those grouped messages serially)
+> before any of the non-grouped messages can be consumed. The functional impact
+> in this scenario is a temporary suspension of concurrent message processing
+> while all the grouped messages are processed. This can be a performance
+> bottleneck so keep it in mind when determining the size of your message
+> groups, and consider whether or not you should isolate your grouped messages
 > from your non-grouped messages.
 
 ## Using Core API
 
-The property name used to identify the message group is `"_AMQ_GROUP_ID"`
-(or the constant `MessageImpl.HDR_GROUP_ID`). Alternatively, you can set
-`autogroup` to true on the `SessionFactory` which will pick a random
-unique id.
+The property name used to identify the message group is `"_AMQ_GROUP_ID"` (or
+the constant `MessageImpl.HDR_GROUP_ID`). Alternatively, you can set
+`autogroup` to true on the `SessionFactory` which will pick a random unique id.
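+
+To make this concrete, here is a brief, hedged sketch of setting the group id
+with the Core client API. The connection URL and address name are made up for
+the example, and `Message.HDR_GROUP_ID` is simply the constant for the
+`_AMQ_GROUP_ID` property mentioned above.
+
+```java
+import org.apache.activemq.artemis.api.core.Message;
+import org.apache.activemq.artemis.api.core.SimpleString;
+import org.apache.activemq.artemis.api.core.client.ActiveMQClient;
+import org.apache.activemq.artemis.api.core.client.ClientMessage;
+import org.apache.activemq.artemis.api.core.client.ClientProducer;
+import org.apache.activemq.artemis.api.core.client.ClientSession;
+import org.apache.activemq.artemis.api.core.client.ClientSessionFactory;
+import org.apache.activemq.artemis.api.core.client.ServerLocator;
+
+ServerLocator locator = ActiveMQClient.createServerLocator("tcp://localhost:61616");
+ClientSessionFactory factory = locator.createSessionFactory();
+ClientSession session = factory.createSession();
+ClientProducer producer = session.createProducer("stocks.orders");
+
+// both messages carry the same group id, so the same consumer receives both
+ClientMessage message = session.createMessage(true);
+message.putStringProperty(Message.HDR_GROUP_ID, SimpleString.toSimpleString("IBM"));
+producer.send(message);
+
+message = session.createMessage(true);
+message.putStringProperty(Message.HDR_GROUP_ID, SimpleString.toSimpleString("IBM"));
+producer.send(message);
+
+session.close();
+```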
 
 ## Using JMS
 
 The property name used to identify the message group is `JMSXGroupID`.
 
-     // send 2 messages in the same group to ensure the same
-     // consumer will receive both
-     Message message = ...
-     message.setStringProperty("JMSXGroupID", "Group-0");
-     producer.send(message);
+```java
+// send 2 messages in the same group to ensure the same
+// consumer will receive both
+Message message = ...
+message.setStringProperty("JMSXGroupID", "Group-0");
+producer.send(message);
 
-     message = ...
-     message.setStringProperty("JMSXGroupID", "Group-0");
-     producer.send(message);
+message = ...
+message.setStringProperty("JMSXGroupID", "Group-0");
+producer.send(message);
+```
 
 Alternatively, you can set `autogroup` to true on the
-`ActiveMQConnectonFactory` which will pick a random unique id. This can
-also be set in the JNDI context environment, e.g. `jndi.properties`.
-Here's a simple example using the "ConnectionFactory" connection factory
-which is available in the context by default
-
-    java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory
-    connectionFactory.myConnectionFactory=tcp://localhost:61616?autoGroup=true
+`ActiveMQConnectionFactory` which will pick a random unique id. This can also
+be set in the JNDI context environment, e.g. `jndi.properties`. Here's a simple
+example using the "ConnectionFactory" connection factory which is available in
+the context by default:
+
+```properties
+java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory
+connectionFactory.myConnectionFactory=tcp://localhost:61616?autoGroup=true
+```
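+
+As a hedged illustration (the lookup name matches the `myConnectionFactory`
+entry defined above, and the rest is plain JNDI/JMS), the factory configured
+this way can then be looked up and used like any other connection factory:
+
+```java
+import javax.jms.Connection;
+import javax.jms.ConnectionFactory;
+import javax.naming.InitialContext;
+
+// reads jndi.properties from the classpath
+InitialContext context = new InitialContext();
+ConnectionFactory cf = (ConnectionFactory) context.lookup("myConnectionFactory");
+Connection connection = cf.createConnection();
+```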
 
-Alternatively you can set the group id via the connection factory. All
-messages sent with producers created via this connection factory will
-set the `JMSXGroupID` to the specified value on all messages sent. This
-can also be set in the JNDI context environment, e.g. `jndi.properties`.
-Here's a simple example using the "ConnectionFactory" connection factory
-which is available in the context by default:
+Alternatively you can set the group id via the connection factory. All messages
+sent with producers created via this connection factory will have the
+`JMSXGroupID` set to the specified value. This can also be set in the JNDI
+context environment, e.g. `jndi.properties`. Here's a simple example using the
+"ConnectionFactory" connection factory which is available in the context by
+default:
 
-    java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory
-    connectionFactory.myConnectionFactory=tcp://localhost:61616?groupID=Group-0
+```properties
+java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory
+connectionFactory.myConnectionFactory=tcp://localhost:61616?groupID=Group-0
+```
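+
+If you prefer not to use JNDI, the same options can be set programmatically.
+This is an illustrative sketch assuming the `setAutoGroup` and `setGroupID`
+setters, which mirror the `autoGroup` and `groupID` URL parameters shown above.
+
+```java
+import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;
+
+ActiveMQConnectionFactory cf =
+   new ActiveMQConnectionFactory("tcp://localhost:61616");
+
+// either let the factory pick a random unique group id...
+cf.setAutoGroup(true);
+
+// ...or force a fixed group id on every message sent through this factory
+cf.setGroupID("Group-0");
+```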
 
 ## Example
 
-See the [examples](examples.md} chapter for an example which shows how message groups are configured and used with JMS and via a connection factory.
+See the [Message Group Example](examples.md#message-group) which shows how
+message groups are configured and used with JMS and via a connection factory.
 
 ## Clustered Grouping
 
 Using message groups in a cluster is a bit more complex. This is because
-messages with a particular group id can arrive on any node so each node
-needs to know about which group id's are bound to which consumer on
-which node. The consumer handling messages for a particular group id may
-be on a different node of the cluster, so each node needs to know this
-information so it can route the message correctly to the node which has
-that consumer.
+messages with a particular group id can arrive on any node so each node needs
+to know about which group ids are bound to which consumer on which node. The
+consumer handling messages for a particular group id may be on a different node
+of the cluster, so each node needs to know this information so it can route the
+message correctly to the node which has that consumer.
 
-To solve this there is the notion of a grouping handler. Each node will
-have its own grouping handler and when a messages is sent with a group
-id assigned, the handlers will decide between them which route the
-message should take.
+To solve this there is the notion of a grouping handler. Each node will have
+its own grouping handler and when a message is sent with a group id assigned,
+the handlers will decide between them which route the message should take.
 
-Here is a sample config for each type of handler. This should be 
-configured in `broker.xml`.
+Here is a sample config for each type of handler. This should be configured in
+`broker.xml`.
 
 ```xml
 <grouping-handler name="my-grouping-handler">
@@ -116,71 +116,66 @@ configured in `broker.xml`.
 </grouping-handler>
 ```
     
- - `type` two types of handlers are supported - `LOCAL` and `REMOTE`. 
-   Each cluster should choose 1 node to have a `LOCAL` grouping handler
-   and all the other nodes should have `REMOTE` handlers. It's the `LOCAL`
-   handler that actually makes the decision as to what route should be
-   used, all the other `REMOTE` handlers converse with this. 
-
- - `address` refers to a [cluster connection and the address
-   it uses](clusters.md#configuring-cluster-connections). Refer to the 
-   clustering section on how to configure clusters.
+- `type` two types of handlers are supported - `LOCAL` and `REMOTE`. Each
+  cluster should choose 1 node to have a `LOCAL` grouping handler and all the
+  other nodes should have `REMOTE` handlers. It's the `LOCAL` handler that
+  actually makes the decision as to what route should be used; all the other
+  `REMOTE` handlers converse with this one.
+
+- `address` refers to a [cluster connection and the address it
+  uses](clusters.md#configuring-cluster-connections). Refer to the clustering
+  section on how to configure clusters.
     
- - `timeout` how long to wait for a decision to be made. An exception 
-   will be thrown during the send if this timeout is reached, this 
-   ensures that strict ordering is kept.
-
-The decision as to where a message should be routed to is initially
-proposed by the node that receives the message. The node will pick a
-suitable route as per the normal clustered routing conditions, i.e.
-round robin available queues, use a local queue first and choose a queue
-that has a consumer. If the proposal is accepted by the grouping
-handlers the node will route messages to this queue from that point on,
-if rejected an alternative route will be offered and the node will again
-route to that queue indefinitely. All other nodes will also route to the
-queue chosen at proposal time. Once the message arrives at the queue
-then normal single server message group semantics take over and the
+- `timeout` how long to wait for a decision to be made. An exception will be
+  thrown during the send if this timeout is reached; this ensures that strict
+  ordering is kept.
+
+The decision as to where a message should be routed is initially proposed by
+the node that receives the message. The node will pick a suitable route as per
+the normal clustered routing conditions, i.e. round robin available queues, use
+a local queue first and choose a queue that has a consumer. If the proposal is
+accepted by the grouping handlers the node will route messages to this queue
+from that point on; if rejected, an alternative route will be offered and the
+node will again route to that queue indefinitely. All other nodes will also
+route to the queue chosen at proposal time. Once the message arrives at the
+queue then normal single server message group semantics take over and the
 message is pinned to a consumer on that queue.
 
-You may have noticed that there is a single point of failure with the
-single local handler. If this node crashes then no decisions will be
-able to be made. Any messages sent will be not be delivered and an
-exception thrown. To avoid this happening Local Handlers can be
-replicated on another backup node. Simple create your back up node and
-configure it with the same Local handler.
+You may have noticed that there is a single point of failure with the single
+local handler. If this node crashes then no decisions will be able to be made.
+Any messages sent will not be delivered and an exception thrown. To avoid this
+happening, Local Handlers can be replicated on another backup node. Simply
+create your backup node and configure it with the same Local handler.
 
 ## Clustered Grouping Best Practices
 
 Some best practices should be followed when using clustered grouping:
 
-1.  Make sure your consumers are distributed evenly across the different
-    nodes if possible. This is only an issue if you are creating and
-    closing consumers regularly. Since messages are always routed to the
-    same queue once pinned, removing a consumer from this queue may
-    leave it with no consumers meaning the queue will just keep
-    receiving the messages. Avoid closing consumers or make sure that
-    you always have plenty of consumers, i.e., if you have 3 nodes have
-    3 consumers.
-
-2.  Use durable queues if possible. If queues are removed once a group
-    is bound to it, then it is possible that other nodes may still try
-    to route messages to it. This can be avoided by making sure that the
-    queue is deleted by the session that is sending the messages. This
-    means that when the next message is sent it is sent to the node
-    where the queue was deleted meaning a new proposal can successfully
-    take place. Alternatively you could just start using a different
-    group id.
-
-3.  Always make sure that the node that has the Local Grouping Handler
-    is replicated. These means that on failover grouping will still
-    occur.
-
-4.  In case you are using group-timeouts, the remote node should have a
-    smaller group-timeout with at least half of the value on the main
-    coordinator. This is because this will determine how often the
-    last-time-use value should be updated with a round trip for a
-    request to the group between the nodes.
+1. Make sure your consumers are distributed evenly across the different nodes
+   if possible. This is only an issue if you are creating and closing
+   consumers regularly. Since messages are always routed to the same queue once
+   pinned, removing a consumer from this queue may leave it with no consumers
+   meaning the queue will just keep receiving the messages. Avoid closing
+   consumers or make sure that you always have plenty of consumers, i.e., if
+   you have 3 nodes, have 3 consumers.
+
+2. Use durable queues if possible. If a queue is removed once a group is bound
+   to it, then it is possible that other nodes may still try to route messages
+   to it. This can be avoided by making sure that the queue is deleted by the
+   session that is sending the messages. This means that when the next message
+   is sent it is sent to the node where the queue was deleted, meaning a new
+   proposal can successfully take place. Alternatively you could just start
+   using a different group id.
+
+3. Always make sure that the node that has the Local Grouping Handler is
+   replicated. This means that on failover grouping will still occur.
+
+4. If you are using group-timeouts, the remote node should have a smaller
+   group-timeout, with at least half of the value on the main coordinator. This
+   is because this determines how often the last-time-use value is updated with
+   a round trip for a request to the group between the nodes.
 
 ## Clustered Grouping Example
 
-See the [examples](examples.md) chapter for an example of how to configure message groups with a ActiveMQ Artemis Cluster.
+See the [Clustered Grouping Example](examples.md#clustered-grouping) which
+shows how to configure message groups with an ActiveMQ Artemis cluster.

http://git-wip-us.apache.org/repos/asf/activemq-artemis/blob/2b5d8f3b/docs/user-manual/en/messaging-concepts.md
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/messaging-concepts.md b/docs/user-manual/en/messaging-concepts.md
index 582b85d..ab829cc 100644
--- a/docs/user-manual/en/messaging-concepts.md
+++ b/docs/user-manual/en/messaging-concepts.md
@@ -1,240 +1,240 @@
 # Messaging Concepts
 
-Apache ActiveMQ Artemis is an asynchronous messaging system, an example of [Message
-Oriented
-Middleware](https://en.wikipedia.org/wiki/Message-oriented_middleware) ,
-we'll just call them messaging systems in the remainder of this book.
+Apache ActiveMQ Artemis is an asynchronous messaging system, an example of
+[Message Oriented
+Middleware](https://en.wikipedia.org/wiki/Message-oriented_middleware); we'll
+just call these messaging systems in the remainder of this book.
 
-We'll first present a brief overview of what kind of things messaging
-systems do, where they're useful and the kind of concepts you'll hear
-about in the messaging world.
+We'll first present a brief overview of what kind of things messaging systems
+do, where they're useful and the kind of concepts you'll hear about in the
+messaging world.
 
 If you're already familiar with what a messaging system is and what it's
 capable of, then you can skip this chapter.
 
 ## General Concepts
 
-Messaging systems allow you to loosely couple heterogeneous systems
-together, whilst typically providing reliability, transactions and many
-other features.
+Messaging systems allow you to loosely couple heterogeneous systems together,
+whilst typically providing reliability, transactions and many other features.
 
 Unlike systems based on a [Remote Procedure
 Call](https://en.wikipedia.org/wiki/Remote_procedure_call) (RPC) pattern,
-messaging systems primarily use an asynchronous message passing pattern
-with no tight relationship between requests and responses. Most
-messaging systems also support a request-response mode but this is not a
-primary feature of messaging systems.
-
-Designing systems to be asynchronous from end-to-end allows you to
-really take advantage of your hardware resources, minimizing the amount
-of threads blocking on IO operations, and to use your network bandwidth
-to its full capacity. With an RPC approach you have to wait for a
-response for each request you make so are limited by the network round
-trip time, or *latency* of your network. With an asynchronous system you
-can pipeline flows of messages in different directions, so are limited
-by the network *bandwidth* not the latency. This typically allows you to
-create much higher performance applications.
+messaging systems primarily use an asynchronous message passing pattern with no
+tight relationship between requests and responses. Most messaging systems also
+support a request-response mode but this is not a primary feature of messaging
+systems.
+
+Designing systems to be asynchronous from end-to-end allows you to really take
+advantage of your hardware resources, minimizing the number of threads blocking
+on IO operations, and to use your network bandwidth to its full capacity. With
+an RPC approach you have to wait for a response for each request you make so
+are limited by the network round trip time, or *latency* of your network. With
+an asynchronous system you can pipeline flows of messages in different
+directions, so are limited by the network *bandwidth* not the latency. This
+typically allows you to create much higher performance applications.
 
 Messaging systems decouple the senders of messages from the consumers of
-messages. The senders and consumers of messages are completely
-independent and know nothing of each other. This allows you to create
-flexible, loosely coupled systems.
-
-Often, large enterprises use a messaging system to implement a message
-bus which loosely couples heterogeneous systems together. Message buses
-often form the core of an [Enterprise Service
-Bus](https://en.wikipedia.org/wiki/Enterprise_service_bus). (ESB). Using
-a message bus to de-couple disparate systems can allow the system to
-grow and adapt more easily. It also allows more flexibility to add new
-systems or retire old ones since they don't have brittle dependencies on
-each other.
+messages. The senders and consumers of messages are completely independent and
+know nothing of each other. This allows you to create flexible, loosely coupled
+systems.
+
+Often, large enterprises use a messaging system to implement a message bus
+which loosely couples heterogeneous systems together. Message buses often form
+the core of an [Enterprise Service
+Bus](https://en.wikipedia.org/wiki/Enterprise_service_bus) (ESB). Using a
+message bus to de-couple disparate systems can allow the system to grow and
+adapt more easily. It also allows more flexibility to add new systems or retire
+old ones since they don't have brittle dependencies on each other.
 
 ## Messaging styles
 
-Messaging systems normally support two main styles of asynchronous
-messaging: [message queue](https://en.wikipedia.org/wiki/Message_queue)
-messaging (also known as *point-to-point messaging*) and [publish
-subscribe](https://en.wikipedia.org/wiki/Publish_subscribe) messaging.
-We'll summarise them briefly here:
+Messaging systems normally support two main styles of asynchronous messaging:
+[message queue](https://en.wikipedia.org/wiki/Message_queue) messaging (also
+known as *point-to-point messaging*) and [publish
+subscribe](https://en.wikipedia.org/wiki/Publish_subscribe) messaging.  We'll
+summarise them briefly here:
 
 ### Point-to-Point
 
-With this type of messaging you send a message to a queue. The message
-is then typically persisted to provide a guarantee of delivery, then
-some time later the messaging system delivers the message to a consumer.
-The consumer then processes the message and when it is done, it
-acknowledges the message. Once the message is acknowledged it disappears
-from the queue and is not available to be delivered again. If the system
-crashes before the messaging server receives an acknowledgement from the
-consumer, then on recovery, the message will be available to be
-delivered to a consumer again.
-
-With point-to-point messaging, there can be many consumers on the queue
-but a particular message will only ever be consumed by a maximum of one
-of them. Senders (also known as *producers*) to the queue are completely
-decoupled from receivers (also known as *consumers*) of the queue - they
-do not know of each other's existence.
-
-A classic example of point to point messaging would be an order queue in
-a company's book ordering system. Each order is represented as a message
-which is sent to the order queue. Let's imagine there are many front end
-ordering systems which send orders to the order queue. When a message
-arrives on the queue it is persisted - this ensures that if the server
-crashes the order is not lost. Let's also imagine there are many
-consumers on the order queue - each representing an instance of an order
-processing component - these can be on different physical machines but
-consuming from the same queue. The messaging system delivers each
-message to one and only one of the ordering processing components.
-Different messages can be processed by different order processors, but a
-single order is only processed by one order processor - this ensures
+With this type of messaging you send a message to a queue. The message is then
+typically persisted to provide a guarantee of delivery, then some time later
+the messaging system delivers the message to a consumer.  The consumer then
+processes the message and when it is done, it acknowledges the message. Once
+the message is acknowledged it disappears from the queue and is not available
+to be delivered again. If the system crashes before the messaging server
+receives an acknowledgement from the consumer, then on recovery, the message
+will be available to be delivered to a consumer again.
+
+With point-to-point messaging, there can be many consumers on the queue but a
+particular message will only ever be consumed by a maximum of one of them.
+Senders (also known as *producers*) to the queue are completely decoupled from
+receivers (also known as *consumers*) of the queue - they do not know of each
+other's existence.
+
+A classic example of point to point messaging would be an order queue in a
+company's book ordering system. Each order is represented as a message which is
+sent to the order queue. Let's imagine there are many front end ordering
+systems which send orders to the order queue. When a message arrives on the
+queue it is persisted - this ensures that if the server crashes the order is
+not lost. Let's also imagine there are many consumers on the order queue - each
+representing an instance of an order processing component - these can be on
+different physical machines but consuming from the same queue. The messaging
+system delivers each message to one and only one of the order processing
+components. Different messages can be processed by different order processors,
+but a single order is only processed by one order processor - this ensures
 orders aren't processed twice.
 
-As an order processor receives a message, it fulfills the order, sends
-order information to the warehouse system and then updates the order
-database with the order details. Once it's done that it acknowledges the
-message to tell the server that the order has been processed and can be
-forgotten about. Often the send to the warehouse system, update in
-database and acknowledgement will be completed in a single transaction
-to ensure [ACID](https://en.wikipedia.org/wiki/ACID) properties.
+As an order processor receives a message, it fulfills the order, sends order
+information to the warehouse system and then updates the order database with
+the order details. Once it's done that it acknowledges the message to tell the
+server that the order has been processed and can be forgotten about. Often the
+send to the warehouse system, update in database and acknowledgement will be
+completed in a single transaction to ensure
+[ACID](https://en.wikipedia.org/wiki/ACID) properties.
 
 ### Publish-Subscribe
 
-With publish-subscribe messaging many senders can send messages to an
-entity on the server, often called a *topic* (e.g. in the JMS world).
+With publish-subscribe messaging many senders can send messages to an entity on
+the server, often called a *topic* (e.g. in the JMS world).
 
-There can be many *subscriptions* on a topic, a subscription is just
-another word for a consumer of a topic. Each subscription receives a
-*copy* of *each* message sent to the topic. This differs from the
-message queue pattern where each message is only consumed by a single
-consumer.
+There can be many *subscriptions* on a topic; a subscription is just another
+word for a consumer of a topic. Each subscription receives a *copy* of *each*
+message sent to the topic. This differs from the message queue pattern where
+each message is only consumed by a single consumer.
 
-Subscriptions can optionally be *durable* which means they retain a copy
-of each message sent to the topic until the subscriber consumes them -
-even if the server crashes or is restarted in between. Non-durable
-subscriptions only last a maximum of the lifetime of the connection that
-created them.
+Subscriptions can optionally be *durable* which means they retain a copy of
+each message sent to the topic until the subscriber consumes them - even if the
+server crashes or is restarted in between. Non-durable subscriptions only last
+a maximum of the lifetime of the connection that created them.
 
 An example of publish-subscribe messaging would be a news feed. As news
-articles are created by different editors around the world they are sent
-to a news feed topic. There are many subscribers around the world who
-are interested in receiving news items - each one creates a subscription
-and the messaging system ensures that a copy of each news message is
-delivered to each subscription.
+articles are created by different editors around the world they are sent to a
+news feed topic. There are many subscribers around the world who are interested
+in receiving news items - each one creates a subscription and the messaging
+system ensures that a copy of each news message is delivered to each
+subscription.
 
 ## Delivery guarantees
 
-A key feature of most messaging systems is *reliable messaging*. With
-reliable messaging the server gives a guarantee that the message will be
-delivered once and only once to each consumer of a queue or each durable
-subscription of a topic, even in the event of system failure. This is
-crucial for many businesses; e.g. you don't want your orders fulfilled
-more than once or any of your orders to be lost.
+A key feature of most messaging systems is *reliable messaging*. With reliable
+messaging the server gives a guarantee that the message will be delivered once
+and only once to each consumer of a queue or each durable subscription of a
+topic, even in the event of system failure. This is crucial for many
+businesses; e.g. you don't want your orders fulfilled more than once or any of
+your orders to be lost.
 
-In other cases you may not care about a once and only once delivery
-guarantee and are happy to cope with duplicate deliveries or lost
-messages - an example of this might be transient stock price updates -
-which are quickly superseded by the next update on the same stock. The
-messaging system allows you to configure which delivery guarantees you
-require.
+In other cases you may not care about a once and only once delivery guarantee
+and are happy to cope with duplicate deliveries or lost messages - an example
+of this might be transient stock price updates - which are quickly superseded
+by the next update on the same stock. The messaging system allows you to
+configure which delivery guarantees you require.
 
 ## Transactions
 
-Messaging systems typically support the sending and acknowledgement of
-multiple messages in a single local transaction. Apache ActiveMQ Artemis also supports
+Messaging systems typically support the sending and acknowledgement of multiple
+messages in a single local transaction. Apache ActiveMQ Artemis also supports
 the sending and acknowledgement of messages as part of a large global
 transaction - using the Java mapping of XA: JTA.
 
 ## Durability
 
-Messages are either durable or non durable. Durable messages will be
-persisted in permanent storage and will survive server failure or
-restart. Non durable messages will not survive server failure or
-restart. Examples of durable messages might be orders or trades, where
-they cannot be lost. An example of a non durable message might be a
-stock price update which is transitory and doesn't need to survive a
-restart.
+Messages are either durable or non durable. Durable messages will be persisted
+in permanent storage and will survive server failure or restart. Non durable
+messages will not survive server failure or restart. Examples of durable
+messages might be orders or trades, where they cannot be lost. An example of a
+non durable message might be a stock price update which is transitory and
+doesn't need to survive a restart.
 
 ## Messaging APIs and protocols
 
-How do client applications interact with messaging systems in order to
-send and consume messages?
+How do client applications interact with messaging systems in order to send and
+consume messages?
 
-Several messaging systems provide their own proprietary APIs with which
-the client communicates with the messaging system.
+Several messaging systems provide their own proprietary APIs with which the
+client communicates with the messaging system.
 
-There are also some standard ways of operating with messaging systems
-and some emerging standards in this space.
+There are also some standard ways of operating with messaging systems and some
+emerging standards in this space.
 
 Let's take a brief look at these:
 
 ### Java Message Service (JMS)
 
-[JMS](https://en.wikipedia.org/wiki/Java_Message_Service) is part of
-Oracle's Java EE specification. It's a Java API that encapsulates both message
-queue and publish-subscribe messaging patterns. JMS is a lowest common
-denominator specification - i.e. it was created to encapsulate common
-functionality of the already existing messaging systems that were
-available at the time of its creation.
+[JMS](https://en.wikipedia.org/wiki/Java_Message_Service) is part of Oracle's
+Java EE specification. It's a Java API that encapsulates both message queue and
+publish-subscribe messaging patterns. JMS is a lowest common denominator
+specification - i.e. it was created to encapsulate common functionality of the
+already existing messaging systems that were available at the time of its
+creation.
 
-JMS is a very popular API and is implemented by most messaging systems.
-JMS is only available to clients running Java.
+JMS is a very popular API and is implemented by most messaging systems.  JMS is
+only available to clients running Java.
 
-JMS does not define a standard wire format - it only defines a
-programmatic API so JMS clients and servers from different vendors
-cannot directly interoperate since each will use the vendor's own
-internal wire protocol.
+JMS does not define a standard wire format - it only defines a programmatic API
+so JMS clients and servers from different vendors cannot directly interoperate
+since each will use the vendor's own internal wire protocol.
 
-Apache ActiveMQ Artemis provides a fully compliant JMS 1.1 and JMS 2.0 API.
+Apache ActiveMQ Artemis provides a fully compliant [JMS 1.1 and JMS 2.0 client
+implementation](using-jms.md).
 
 ### System specific APIs
 
-Many systems provide their own programmatic API for which to interact
-with the messaging system. The advantage of this it allows the full set
-of system functionality to be exposed to the client application. API's
-like JMS are not normally rich enough to expose all the extra features
-that most messaging systems provide.
+Many systems provide their own programmatic API with which to interact with the
+messaging system. The advantage of this is that it allows the full set of
+system functionality to be exposed to the client application. APIs like JMS are
+not normally rich enough to expose all the extra features that most messaging
+systems provide.
 
-Apache ActiveMQ Artemis provides its own core client API for clients to use if they
-wish to have access to functionality over and above that accessible via
+Apache ActiveMQ Artemis provides its own core client API for clients to use if
+they wish to have access to functionality over and above that accessible via
 the JMS API.
 
+Please see [Core](core.md) for using the Core API with Apache ActiveMQ Artemis.
+
 ### RESTful API
 
 [REST](https://en.wikipedia.org/wiki/Representational_State_Transfer)
 approaches to messaging are attracting a lot of interest recently.
 
-It seems plausible that API standards for cloud computing may converge
-on a REST style set of interfaces and consequently a REST messaging
-approach is a very strong contender for becoming the de-facto method for
-messaging interoperability.
+It seems plausible that API standards for cloud computing may converge on a
+REST style set of interfaces and consequently a REST messaging approach is a
+very strong contender for becoming the de-facto method for messaging
+interoperability.
 
-With a REST approach messaging resources are manipulated as resources
-defined by a URI and typically using a simple set of operations on those
-resources, e.g. PUT, POST, GET etc. REST approaches to messaging often
-use HTTP as their underlying protocol.
+With a REST approach messaging resources are manipulated as resources defined
+by a URI and typically using a simple set of operations on those resources,
+e.g. PUT, POST, GET etc. REST approaches to messaging often use HTTP as their
+underlying protocol.
 
-The advantage of a REST approach with HTTP is in its simplicity and the
-fact the internet is already tuned to deal with HTTP optimally.
+The advantage of a REST approach with HTTP is in its simplicity and the fact
+the internet is already tuned to deal with HTTP optimally.
 
-Please see [Rest Interface](rest.md) for using Apache ActiveMQ Artemis's RESTful interface.
+Please see [Rest Interface](rest.md) for using Apache ActiveMQ Artemis's
+RESTful interface.
 
 ### AMQP
 
-[AMQP](https://en.wikipedia.org/wiki/AMQP) is a specification for
-interoperable messaging. It also defines a wire format, so any AMQP
-client can work with any messaging system that supports AMQP. AMQP
-clients are available in many different programming languages.
+[AMQP](https://en.wikipedia.org/wiki/AMQP) is a specification for interoperable
+messaging. It also defines a wire format, so any AMQP client can work with any
+messaging system that supports AMQP. AMQP clients are available in many
+different programming languages.
 
 Apache ActiveMQ Artemis implements the [AMQP
 1.0](https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=amqp)
-specification. Any client that supports the 1.0 specification will be
-able to interact with Apache ActiveMQ Artemis.
+specification. Any client that supports the 1.0 specification will be able to
+interact with Apache ActiveMQ Artemis.
+
+Please see [AMQP](amqp.md) for using AMQP with Apache ActiveMQ Artemis.
 
 ### MQTT
-[MQTT](https://mqtt.org/) is a lightweight connectivity protocol.  It is designed
-to run in environments where device and networks are constrained.  Out of the box
-Apache ActiveMQ Artemis supports version MQTT 3.1.1.  Any client supporting this
-version of the protocol will work against Apache ActiveMQ Artemis.
+
+[MQTT](https://mqtt.org/) is a lightweight connectivity protocol.  It is
+designed to run in environments where devices and networks are constrained.
+Out of the box Apache ActiveMQ Artemis supports MQTT version 3.1.1.  Any client
+supporting this version of the protocol will work against Apache ActiveMQ
+Artemis.
+
+Please see [MQTT](mqtt.md) for using MQTT with Apache ActiveMQ Artemis.
 
 ### STOMP
 
@@ -244,64 +244,67 @@ theoretically any Stomp client can work with any messaging system that
 supports Stomp. Stomp clients are available in many different
 programming languages.
 
-Please see [Stomp](protocols-interoperability.md) for using STOMP with Apache ActiveMQ Artemis.
+Please see [Stomp](stomp.md) for using STOMP with Apache ActiveMQ Artemis.
+
+### OpenWire
 
-### OPENWIRE
+ActiveMQ 5.x defines its own wire protocol: OpenWire.  In order to support
+ActiveMQ 5.x clients, Apache ActiveMQ Artemis supports OpenWire.  Any ActiveMQ
+5.12.x or higher client can be used with Apache ActiveMQ Artemis.
 
-ActiveMQ 5.x defines it's own wire Protocol "OPENWIRE".  In order to support 
-ActiveMQ 5.x clients, Apache ActiveMQ Artemis supports OPENWIRE.  Any ActiveMQ 5.12.x
-or higher can be used with Apache ActiveMQ Artemis.
+Please see [OpenWire](openwire.md) for using OpenWire with Apache ActiveMQ
+Artemis.
 
 ## High Availability
 
-High Availability (HA) means that the system should remain operational
-after failure of one or more of the servers. The degree of support for
-HA varies between various messaging systems.
+High Availability (HA) means that the system should remain operational after
+failure of one or more of the servers. The degree of support for HA varies
+between various messaging systems.
 
 Apache ActiveMQ Artemis provides automatic failover where your sessions are
-automatically reconnected to the backup server on event of live server
-failure.
+automatically reconnected to the backup server in the event of live server failure.
 
 For more information on HA, please see [High Availability and Failover](ha.md).
 
 ## Clusters
 
-Many messaging systems allow you to create groups of messaging servers
-called *clusters*. Clusters allow the load of sending and consuming
-messages to be spread over many servers. This allows your system to
-scale horizontally by adding new servers to the cluster.
+Many messaging systems allow you to create groups of messaging servers called
+*clusters*. Clusters allow the load of sending and consuming messages to be
+spread over many servers. This allows your system to scale horizontally by
+adding new servers to the cluster.
 
-Degrees of support for clusters varies between messaging systems, with
-some systems having fairly basic clusters with the cluster members being
-hardly aware of each other.
+The degree of support for clusters varies between messaging systems, with some
+systems having fairly basic clusters with the cluster members being hardly
+aware of each other.
 
-Apache ActiveMQ Artemis provides very configurable state-of-the-art clustering model
-where messages can be intelligently load balanced between the servers in
-the cluster, according to the number of consumers on each node, and
-whether they are ready for messages.
+Apache ActiveMQ Artemis provides a very configurable, state-of-the-art
+clustering model where messages can be intelligently load balanced between the
+servers in the cluster, according to the number of consumers on each node, and
+whether they are ready for messages.
 
-Apache ActiveMQ Artemis also has the ability to automatically redistribute messages
-between nodes of a cluster to prevent starvation on any particular node.
+Apache ActiveMQ Artemis also has the ability to automatically redistribute
+messages between nodes of a cluster to prevent starvation on any particular
+node.
 
 For full details on clustering, please see [Clusters](clusters.md).
 
 ## Bridges and routing
 
-Some messaging systems allow isolated clusters or single nodes to be
-bridged together, typically over unreliable connections like a wide area
-network (WAN), or the internet.
+Some messaging systems allow isolated clusters or single nodes to be bridged
+together, typically over unreliable connections like a wide area network (WAN),
+or the internet.
 
-A bridge normally consumes from a queue on one server and forwards
-messages to another queue on a different server. Bridges cope with
-unreliable connections, automatically reconnecting when the connections
-becomes available again.
+A bridge normally consumes from a queue on one server and forwards messages to
+another queue on a different server. Bridges cope with unreliable connections,
+automatically reconnecting when the connection becomes available again.
 
-Apache ActiveMQ Artemis bridges can be configured with filter expressions to only
-forward certain messages, and transformation can also be hooked in.
+Apache ActiveMQ Artemis bridges can be configured with filter expressions to
+only forward certain messages, and transformation can also be hooked in.
 
-Apache ActiveMQ Artemis also allows routing between queues to be configured in server
-side configuration. This allows complex routing networks to be set up
-forwarding or copying messages from one destination to another, forming
-a global network of interconnected brokers.
+Apache ActiveMQ Artemis also allows routing between queues to be configured in
+server side configuration. This allows complex routing networks to be set up
+forwarding or copying messages from one destination to another, forming a
+global network of interconnected brokers.
 
-For more information please see [Core Bridges](core-bridges.md) and [Diverting and Splitting Message Flows](diverts.md).
+For more information please see [Core Bridges](core-bridges.md) and [Diverting
+and Splitting Message Flows](diverts.md).

http://git-wip-us.apache.org/repos/asf/activemq-artemis/blob/2b5d8f3b/docs/user-manual/en/mqtt.md
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/mqtt.md b/docs/user-manual/en/mqtt.md
new file mode 100644
index 0000000..dfb8da0
--- /dev/null
+++ b/docs/user-manual/en/mqtt.md
@@ -0,0 +1,137 @@
+# MQTT
+
+MQTT is a lightweight, client-to-server, publish/subscribe messaging protocol.
+MQTT has been specifically designed to reduce transport overhead (and thus
+network traffic) and code footprint on client devices.  For this reason MQTT is
+ideally suited to constrained devices such as sensors and actuators and is
+quickly becoming the de facto standard communication protocol for IoT.
+
+Apache ActiveMQ Artemis supports MQTT v3.1.1 (and also the older v3.1 code
+message format). By default there are `acceptor` elements configured to accept
+MQTT connections on ports `61616` and `1883`.
+
+See the general [Protocols and Interoperability](protocols-interoperability.md)
+chapter for details on configuring an `acceptor` for MQTT.
+
+The best source of information on the MQTT protocol is in the [3.1.1
+specification](https://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html).
+
+Refer to the MQTT examples for a look at some of this functionality in action.
+
+## MQTT Quality of Service
+
+MQTT offers 3 quality of service levels.
+
+Each message (or topic subscription) can define a quality of service that is
+associated with it.  The quality of service level defined on a topic is the
+maximum level a client is willing to accept.  The quality of service level on a
+message is the desired quality of service level for this message.  The broker
+will attempt to deliver messages to subscribers at the highest quality of
+service level based on what is defined on the message and topic subscription.
+
+Each quality of service level offers a level of guarantee by which a message is
+sent or received:
+
+- QoS 0: `AT MOST ONCE`
+
+  Guarantees that a particular message is only ever received by the subscriber
+  a maximum of one time. This does mean that the message may never arrive.  The
+  sender and the receiver will attempt to deliver the message, but if something
+  fails and the message does not reach its destination (say, due to a network
+  connection failure) the message may be lost. This QoS has the least network
+  traffic overhead and the least burden on the client and the broker and is
+  often useful for telemetry data where it doesn't matter if some of the data
+  is lost.
+
+- QoS 1: `AT LEAST ONCE`
+
+  Guarantees that a message will reach its intended recipient one or more
+  times.  The sender will continue to send the message until it receives an
+  acknowledgment from the recipient, confirming it has received the message. The
+  result of this QoS is that the recipient may receive the message multiple
+  times, and the network overhead is also higher than QoS 0 (due to acks).  In
+  addition more burden is placed on the sender as it needs to store the message
+  and retry should it fail to receive an ack in a reasonable time.
+
+- QoS 2: `EXACTLY ONCE`
+
+  The most costly of the QoS levels (in terms of network traffic and burden on
+  sender and receiver), this QoS will ensure that the message is received by a
+  recipient exactly one time.  This ensures that the receiver never gets any
+  duplicate copies of the message and will eventually get it, but at the extra
+  cost of network overhead and complexity required on the sender and receiver.
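+
+As an illustration of how a client chooses a QoS level, here is a small sketch
+using the Eclipse Paho MQTT v3 client. Paho is not required by the broker - it
+is just one common client library - and the host, client id and topic names are
+made up for the example.
+
+```java
+import org.eclipse.paho.client.mqttv3.MqttClient;
+import org.eclipse.paho.client.mqttv3.MqttMessage;
+import org.eclipse.paho.client.mqttv3.persistence.MemoryPersistence;
+
+MqttClient client = new MqttClient("tcp://localhost:1883", "sensor-1", new MemoryPersistence());
+client.connect();
+
+MqttMessage reading = new MqttMessage("21.5".getBytes());
+reading.setQos(0); // AT MOST ONCE - acceptable for disposable telemetry data
+client.publish("building/floor1/temperature", reading);
+
+MqttMessage alarm = new MqttMessage("overheat".getBytes());
+alarm.setQos(2); // EXACTLY ONCE - client and broker ensure a single delivery
+client.publish("building/floor1/alarms", alarm);
+
+client.disconnect();
+```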
+
+## MQTT Retain Messages
+
+MQTT has an interesting feature in which messages can be "retained" for a
+particular address.  This means that once a retain message has been sent to an
+address, any new subscribers to that address will receive the last sent retain
+message before any other messages; this happens even if the retained message
+was sent before a client has connected or subscribed.  An example of where this
+feature might be useful is in environments such as IoT where devices need to
+quickly get the current state of a system when they are onboarded into a
+system.
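+
+Continuing the hedged Paho-based sketch from the QoS section above (Paho is
+just an example client, not something the broker requires), a retained message
+simply sets the retain flag before publishing:
+
+```java
+import org.eclipse.paho.client.mqttv3.MqttClient;
+import org.eclipse.paho.client.mqttv3.MqttMessage;
+
+MqttClient client = new MqttClient("tcp://localhost:1883", "state-publisher");
+client.connect();
+
+MqttMessage state = new MqttMessage("ONLINE".getBytes());
+state.setRetained(true); // new subscribers to this address receive this message first
+client.publish("devices/pump-7/state", state);
+
+client.disconnect();
+```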
+
+## Will Messages
+
+A will message can be sent when a client initially connects to a broker.
+Clients are able to set a "will message" as part of the connect packet.  If the
+client abnormally disconnects, say due to a device or network failure, the
+broker will proceed to publish the will message to the specified address (as
+defined also in the connect packet). Other subscribers to the will topic will
+receive the will message and can react accordingly. This feature can be useful
+in an IoT style scenario to detect errors across a potentially large scale
+deployment of devices.
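+
+As another hedged sketch (again using the Paho client purely for illustration),
+the will message is supplied in the connect options rather than published
+explicitly:
+
+```java
+import org.eclipse.paho.client.mqttv3.MqttClient;
+import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
+
+MqttClient client = new MqttClient("tcp://localhost:1883", "device-42");
+
+MqttConnectOptions options = new MqttConnectOptions();
+// published by the broker only if this client disconnects abnormally
+options.setWill("devices/device-42/status", "OFFLINE".getBytes(), 1, true);
+client.connect(options);
+```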
+
+## Debug Logging
+
+Detailed protocol logging (e.g. packets in/out) can be activated via the
+following steps:
+
+1. Open `<ARTEMIS_INSTANCE>/etc/logging.properties`
+
+2. Add `org.apache.activemq.artemis.core.protocol.mqtt` to the `loggers` list.
+
+3. Add this line to enable `TRACE` logging for this new logger: 
+   `logger.org.apache.activemq.artemis.core.protocol.mqtt.level=TRACE`
+
+4. Ensure the `level` for the `handler` you want to log the messages with
+   doesn't block the `TRACE` logging. For example, modify the `level` of the
+   `CONSOLE` `handler` like so: `handler.CONSOLE.level=TRACE`.
+
+The MQTT specification doesn't dictate the format of the payloads which clients
+publish. As far as the broker is concerned a payload is just an array of
+bytes. However, to facilitate logging the broker will encode the payloads as
+UTF-8 strings and print them up to 256 characters. Payload logging is limited
+to avoid filling the logs with potentially hundreds of megabytes of unhelpful
+information.
+
+
+## Wild card subscriptions
+
+MQTT addresses are hierarchical much like a file system, and they use a special
+character (i.e. `/` by default) to separate hierarchical levels. Subscribers
+are able to subscribe to specific topics or to whole branches of a hierarchy.
+
+To subscribe to branches of an address hierarchy a subscriber can use wild
+cards. These wild cards (including the aforementioned separator) are
+configurable. See the [Wildcard
+Syntax](wildcard-syntax.md#customizing-the-syntax) chapter for details about
+how to configure custom wild cards.
+
+There are 2 types of wild cards in MQTT:
+
+- **Multi level** (`#` by default)
+
+  Adding this wild card to an address would match all branches of the address
+  hierarchy under a specified node.  For example, `/uk/#` would match
+  `/uk/cities`, `/uk/cities/newcastle` and also `/uk/rivers/tyne`. Subscribing to
+  an address `#` would result in subscribing to all topics in the broker.  This
+  can be useful, but should be done with care since it has significant
+  performance implications.
+
+- **Single level** (`+` by default)
+
+  Matches a single level in the address hierarchy. For example `/uk/+/stores`
+  would match `/uk/newcastle/stores` but not `/uk/cities/newcastle/stores`.
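+
+To make the wild cards concrete, here is a short subscription sketch (Paho is
+used once more purely as an example client; the addresses are the ones from the
+text above):
+
+```java
+import org.eclipse.paho.client.mqttv3.MqttClient;
+
+MqttClient client = new MqttClient("tcp://localhost:1883", "uk-watcher");
+client.connect();
+
+// multi level: everything under /uk, e.g. /uk/cities/newcastle and /uk/rivers/tyne
+client.subscribe("/uk/#", 1);
+
+// single level: matches /uk/newcastle/stores but not /uk/cities/newcastle/stores
+client.subscribe("/uk/+/stores", 1);
+```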
+

http://git-wip-us.apache.org/repos/asf/activemq-artemis/blob/2b5d8f3b/docs/user-manual/en/network-isolation.md
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/network-isolation.md b/docs/user-manual/en/network-isolation.md
index 78426e3..d864d7b 100644
--- a/docs/user-manual/en/network-isolation.md
+++ b/docs/user-manual/en/network-isolation.md
@@ -1,15 +1,18 @@
 # Network Isolation (Split Brain)
 
-It is possible that if a replicated live or backup server becomes isolated in a network that failover will occur and you will end up
-with 2 live servers serving messages in a cluster, this we call split brain. There are different configurations you can choose
-from that will help mitigate this problem
+It is possible that if a replicated live or backup server becomes isolated in a
+network, failover will occur and you will end up with 2 live servers serving
+messages in a cluster; this is what we call split brain. There are different
+configurations you can choose from that will help mitigate this problem.
 
 ## Quorum Voting
 
-Quorum voting is used by both the live and the backup to decide what to do if a replication connection is disconnected. 
-Basically the server will request each live server in the cluster to vote as to whether it thinks the server it is replicating 
-to or from is still alive. You can also configure the time for which the quorum manager will wait for the quorum vote response.
-The default time is 30 sec you can configure like so for master and also for the slave: 
+Quorum voting is used by both the live and the backup to decide what to do if a
+replication connection is disconnected.  Basically the server will request each
+live server in the cluster to vote as to whether it thinks the server it is
+replicating to or from is still alive. You can also configure the time for which
+the quorum manager will wait for the quorum vote response. The default time is
+30 seconds; you can configure it like so for the master and also for the slave:
 
 ```xml
 <ha-policy>
@@ -21,18 +24,23 @@ The default time is 30 sec you can configure like so for master and also for the
 </ha-policy>
 ```
 
-This being the case the minimum number of live/backup pairs needed is 3. If less than 3 pairs 
-are used then the only option is to use a Network Pinger which is explained later in this chapter or choose how you want each server to 
-react which the following details:
- 
+This being the case, the minimum number of live/backup pairs needed is 3. If
+fewer than 3 pairs are used then the only option is to use a Network Pinger,
+which is explained later in this chapter, or to choose how you want each server
+to react, as detailed in the following sections:
+
 ### Backup Voting
 
-By default if a replica loses its replication connection to the live broker it makes a decision as to whether to start or not
-with a quorum vote. This of course requires that there be at least 3 pairs of live/backup nodes in the cluster. For a 3 node 
-cluster it will start if it gets 2 votes back saying that its live server is no longer available, for 4 nodes this would be 
-3 votes and so on. When a backup loses connection to the master it will keep voting for a quorum until it either receives a vote 
-allowing it to start or it detects that the master is still live. for the latter it will then restart as a backup. How many votes 
-and how long between each vote the backup should wait is configured like so:
+By default if a replica loses its replication connection to the live broker it
+makes a decision as to whether to start or not with a quorum vote. This of
+course requires that there be at least 3 pairs of live/backup nodes in the
+cluster. For a 3 node cluster it will start if it gets 2 votes back saying that
+its live server is no longer available; for 4 nodes this would be 3 votes and
+so on. When a backup loses connection to the master it will keep voting for a
+quorum until it either receives a vote allowing it to start or it detects that
+the master is still live. In the latter case it will then restart as a backup.
+How many votes, and how long the backup should wait between each vote, is
+configured like so:
 
 ```xml
 <ha-policy>
@@ -45,8 +53,9 @@ and how long between each vote the backup should wait is configured like so:
 </ha-policy>
 ```
 
-It's also possible to statically set the quorum size that should be used for the case where the cluster size is known up front,
-this is done on the Replica Policy like so:
+It's also possible to statically set the quorum size that should be used for
+the case where the cluster size is known up front, this is done on the Replica
+Policy like so:
 
 ```xml
 <ha-policy>
@@ -58,16 +67,18 @@ this is done on the Replica Policy like so:
 </ha-policy>
 ```
 
-In this example the quorum size is set to 2 so if you were using a single pair and the backup lost connectivity it would 
-never start.
+In this example the quorum size is set to 2 so if you were using a single pair
+and the backup lost connectivity it would never start.
 
 ### Live Voting
 
-By default, if the live server loses its replication connection then it will just carry on and wait for a backup to reconnect 
-and start replicating again. In the event of a possible split brain scenario this may mean that the live stays live even though
-the backup has been activated. It is possible to configure the live server to vote for a quorum if this happens, in this way
-if the live server doesn't not receive a majority vote then it will shutdown. This is done by setting the _vote-on-replication-failure_ 
-to true.
+By default, if the live server loses its replication connection then it will
+just carry on and wait for a backup to reconnect and start replicating again.
+In the event of a possible split-brain scenario this may mean that the live
+server stays live even though the backup has been activated. It is possible to
+configure the live server to vote for a quorum if this happens; in this way, if
+the live server does not receive a majority vote then it will shut down. This
+is done by setting `vote-on-replication-failure` to `true`.
 
 ```xml
 <ha-policy>
@@ -79,22 +90,24 @@ to true.
   </replication>
 </ha-policy>
 ```
-As in the backup policy it is also possible to statically configure the quorum size.
+As in the backup policy, it is also possible to statically configure the quorum
+size.
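+
+A minimal sketch of such a configuration (assuming the `quorum-size` element is
+accepted under `master` just as it is on the Replica Policy):
+
+```xml
+<ha-policy>
+  <replication>
+    <master>
+      <vote-on-replication-failure>true</vote-on-replication-failure>
+      <quorum-size>2</quorum-size>
+    </master>
+  </replication>
+</ha-policy>
+```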
 
 ## Pinging the network
 
-You may configure one more addresses on the broker.xml that are part of your network topology, that will be pinged through the life cycle of the server.
+You may configure one or more addresses in broker.xml that are part of your
+network topology, which will be pinged throughout the life cycle of the server.
 
 In such a case, the server will stop itself until the network is back.
 
-If you execute the create command passing a -ping argument, you will create a default xml that is ready to be used with network checks:
+If you execute the create command passing a `--ping` argument, you will create
+a default XML that is ready to be used with network checks:
 
 
 ```
 ./artemis create /myDir/myServer --ping 10.0.0.1
 ```
 
-
 This XML part will be added to your broker.xml:
 
 ```xml
@@ -126,10 +139,8 @@ Use this to use an HTTP server to validate the network
 
 ```
 
-
-Once you lose connectivity towards 10.0.0.1 on the given example
-, you will see see this output at the server:
-
+Once you lose connectivity towards 10.0.0.1 in the given example, you will see
+this output at the server:
 
 ```
 09:49:24,562 WARN  [org.apache.activemq.artemis.core.server.NetworkHealthCheck] Ping Address /10.0.0.1 wasn't reacheable
@@ -178,8 +189,9 @@ Once you re establish your network connections towards the configured check list
 09:53:23,556 INFO  [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 1.6.0 [0.0.0.0, nodeID=04fd5dd8-b18c-11e6-9efe-6a0001921ad0] 
 ```
 
-# Warning
-
-> Make sure you understand your network topology as this is meant to validate your network.
-> Using IPs that could eventually disappear or be partially visible may defeat the purpose.
-> You can use a list of multiple IPs. Any successful ping will make the server OK to continue running
+> ## Warning
+>
+> Make sure you understand your network topology as this is meant to validate
+> your network. Using IPs that could eventually disappear or be partially
+> visible may defeat the purpose. You can use a list of multiple IPs; any
+> successful ping will allow the server to continue running.

http://git-wip-us.apache.org/repos/asf/activemq-artemis/blob/2b5d8f3b/docs/user-manual/en/openwire.md
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/openwire.md b/docs/user-manual/en/openwire.md
new file mode 100644
index 0000000..31ada92
--- /dev/null
+++ b/docs/user-manual/en/openwire.md
@@ -0,0 +1,112 @@
+# OpenWire
+
+Apache ActiveMQ Artemis supports the
+[OpenWire](http://activemq.apache.org/openwire.html) protocol so that an Apache
+ActiveMQ 5.x JMS client can talk directly to an Apache ActiveMQ Artemis server.
+By default there is an `acceptor` configured to accept OpenWire connections on
+port `61616`.
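+
+For illustration, a minimal acceptor accepting only OpenWire might look like
+this (a sketch; the default `broker.xml` generated by `artemis create` normally
+lists several protocols on the same acceptor):
+
+```xml
+<acceptor name="artemis">tcp://0.0.0.0:61616?protocols=OPENWIRE</acceptor>
+```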
+
+See the general [Protocols and Interoperability](protocols-interoperability.md)
+chapter for details on configuring an `acceptor` for OpenWire.
+
+Refer to the OpenWire examples for a look at this functionality in action.
+
+## Connection Monitoring
+
+OpenWire has a few parameters to control how each connection is monitored; they
+are:
+
+- `maxInactivityDuration`
+
+  It specifies the time (milliseconds) after which the connection is closed by
+  the broker if no data was received.  Default value is 30000.
+
+- `maxInactivityDurationInitalDelay`
+
+  It specifies the maximum delay (milliseconds) before inactivity monitoring is
+  started on the connection. It can be useful if a broker is under load with many
+  connections being created concurrently. Default value is 10000.
+
+- `useInactivityMonitor`
+
+  A value of false disables the InactivityMonitor completely and connections
+  will never time out. By default it is enabled. On the broker side you don't
+  need to set this; instead you can set the `connection-ttl` to `-1`.
+
+- `useKeepAlive`
+
+  Whether or not to send a KeepAliveInfo on an idle connection to prevent it
+  from timing out. Enabled by default. Even if the keep alive is disabled,
+  connections will still time out if no data was received on the connection for
+  the specified amount of time.
+
+Note that at the beginning the InactivityMonitor negotiates the appropriate
+`maxInactivityDuration` and `maxInactivityDurationInitalDelay`. The shortest
+duration is taken for the connection.
+
+For more details please see [ActiveMQ
+InactivityMonitor](http://activemq.apache.org/activemq-inactivitymonitor.html).
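+
+For example, an ActiveMQ 5.x client would typically tune these via
+`wireFormat.` options on its connection URI (a sketch; these option names come
+from the 5.x client, not from the broker configuration):
+
+```
+tcp://localhost:61616?wireFormat.maxInactivityDuration=30000&wireFormat.maxInactivityDurationInitalDelay=10000
+```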
+
+## Disable/Enable Advisories
+
+By default, advisory topics ([ActiveMQ
+Advisory](http://activemq.apache.org/advisory-message.html)) are created in
+order to send certain type of advisory messages to listening clients. As a
+result, advisory addresses and queues will be displayed on the management
+console, along with user-deployed addresses and queues. This sometimes causes
+confusion because the advisory objects are internally managed without the user
+being aware of them. In addition, users may not want the advisory topics at all
+(they consume extra resources and incur a performance penalty) and it is
+convenient to disable them altogether on the broker side.
+
+The protocol provides two parameters to control advisory behaviors on the
+broker side.
+
+- `supportAdvisory`
+
+  Whether or not the broker supports advisory messages. If the value is true,
+  advisory addresses/queues will be created.  If the value is false, no advisory
+  addresses/queues are created. Default value is `true`. 
+
+- `suppressInternalManagementObjects`
+
+  Whether or not the advisory addresses/queues, if any, will be registered with
+  the management service (e.g. the JMX registry). If set to true, no advisory
+  addresses/queues will be registered. If set to false, they are registered and
+  will be displayed on the management console. Default value is `true`.
+
+The two parameters are configured on an OpenWire `acceptor`, e.g.:
+
+```xml
+<acceptor name="artemis">tcp://localhost:61616?protocols=OPENWIRE;supportAdvisory=true;suppressInternalManagementObjects=false</acceptor>
+```
+
+## Virtual Topic Consumer Destination Translation
+
+For existing OpenWire consumers of virtual topic destinations it is possible to
+configure a mapping function that will translate the virtual topic consumer
+destination into a FQQN address. This address then represents the consumer as a
+multicast binding to an address representing the virtual topic. 
+
+The configuration string property `virtualTopicConsumerWildcards` has two parts
+separated by a `;`. The first is the 5.x style destination filter that
+identifies the destination as belonging to a virtual topic. The second
+identifies the number of `paths` that identify the consumer queue such that it
+can be parsed from the destination. For example, the default 5.x virtual topic
+with consumer prefix of `Consumer.*.` would require a
+`virtualTopicConsumerWildcards` filter of `Consumer.*.>;2`. As a URL parameter
+this transforms to `Consumer.*.%3E%3B2` when the URL-significant characters
+`>;` are escaped with their hex code points. In an `acceptor` URL it would be:
+
+```xml
+<acceptor name="artemis">tcp://localhost:61616?protocols=OPENWIRE;virtualTopicConsumerWildcards=Consumer.*.%3E%3B2</acceptor>
+```
+
+This will translate `Consumer.A.VirtualTopic.Orders` into an FQQN of
+`VirtualTopic.Orders::Consumer.A`, using the int component `2` of the
+configuration to identify the consumer queue as the first two paths of the
+destination. `virtualTopicConsumerWildcards` is multi-valued, using a `,`
+separator.
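+
+For example, a multi-valued filter might look like this (the second pattern,
+`VirtualTopicConsumers.*.>;3`, is purely hypothetical and only illustrates the
+`,` separator; it is URL-escaped in the same way):
+
+```xml
+<acceptor name="artemis">tcp://localhost:61616?protocols=OPENWIRE;virtualTopicConsumerWildcards=Consumer.*.%3E%3B2,VirtualTopicConsumers.*.%3E%3B3</acceptor>
+```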
+
+Please see Virtual Topic Mapping example contained in the OpenWire
+[examples](examples.md).
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/activemq-artemis/blob/2b5d8f3b/docs/user-manual/en/paging.md
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/paging.md b/docs/user-manual/en/paging.md
index a6ce648..60e58d0 100644
--- a/docs/user-manual/en/paging.md
+++ b/docs/user-manual/en/paging.md
@@ -1,77 +1,70 @@
 # Paging
 
-Apache ActiveMQ Artemis transparently supports huge queues containing millions of
-messages while the server is running with limited memory.
+Apache ActiveMQ Artemis transparently supports huge queues containing millions
+of messages while the server is running with limited memory.
 
-In such a situation it's not possible to store all of the queues in
-memory at any one time, so Apache ActiveMQ Artemis transparently *pages* messages into
-and out of memory as they are needed, thus allowing massive queues with
-a low memory footprint.
+In such a situation it's not possible to store all of the queues in memory at
+any one time, so Apache ActiveMQ Artemis transparently *pages* messages into
+and out of memory as they are needed, thus allowing massive queues with a low
+memory footprint.
 
-Apache ActiveMQ Artemis will start paging messages to disk, when the size of all
-messages in memory for an address exceeds a configured maximum size.
+Apache ActiveMQ Artemis will start paging messages to disk when the size of
+all messages in memory for an address exceeds a configured maximum size.
 
 By default, the Artemis configuration has paging enabled for destinations.
 
 ## Page Files
 
 Messages are stored per address on the file system. Each address has an
-individual folder where messages are stored in multiple files (page
-files). Each file will contain messages up to a max configured size
-(`page-size-bytes`). The system will navigate on the files as needed,
-and it will remove the page file as soon as all the messages are
-acknowledged up to that point.
+individual folder where messages are stored in multiple files (page files).
+Each file will contain messages up to a max configured size
+(`page-size-bytes`). The system will navigate the files as needed, and it
+will remove the page file as soon as all the messages are acknowledged up to
+that point.
 
 Browsers will read through the page-cursor system.
 
-Consumers with selectors will also navigate through the page-files and it will ignore messages that don't match the criteria.
+Consumers with selectors will also navigate through the page-files and will
+ignore messages that don't match the criteria.
+
 > *Warning:*
-> When you have a queue, and consumers filtering the queue with a very restrictive selector you may get into a situation where you won't be able to read more data from paging until you consume messages from the queue.
 >
-> Example: in one consumer you make a selector as 'color="red"'
-> but you only have one color red 1 millions messages after blue, you won't be able to consume red until you consume blue ones.
+> When you have a queue and consumers filtering the queue with a very
+> restrictive selector, you may get into a situation where you won't be able to
+> read more data from paging until you consume messages from the queue.
+>
+> Example: one consumer uses the selector 'color="red"', but the only red
+> messages sit behind 1 million blue messages at the head of the queue; you
+> won't be able to consume the red messages until you consume the blue ones.
 >
-> This is different to browsing as we will "browse" the entire queue looking for messages and while we "depage" messages while feeding the queue.
+> This is different from browsing, as when browsing we will "browse" the entire
+> queue looking for messages, depaging messages as we feed the queue.
 
 
 
 ### Configuration
 
-You can configure the location of the paging folder
-
-Global paging parameters are specified on the main configuration file
-(`broker.xml`).
-
-    <configuration xmlns="urn:activemq"
-       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-       xsi:schemaLocation="urn:activemq /schema/artemis-server.xsd">
-    ...
-    <paging-directory>/somewhere/paging-directory</paging-directory>
-    ...
+You can configure the location of the paging folder in `broker.xml`.
 
-  Property Name        Description                                                                                                                 Default
-  -------------------- --------------------------------------------------------------------------------------------------------------------------- -------------
-  `paging-directory`   Where page files are stored. Apache ActiveMQ Artemis will create one folder for each address being paged under this configured location.   data/paging
-
-  : Paging Configuration Parameters
+- `paging-directory` Where page files are stored. Apache ActiveMQ Artemis will
+  create one folder for each address being paged under this configured
+  location. Default is `data/paging`.
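+
+  For example, in `broker.xml`:
+
+  ```xml
+  <paging-directory>/somewhere/paging-directory</paging-directory>
+  ```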
 
 ## Paging Mode
 
 As soon as messages delivered to an address exceed the configured size,
 that address alone goes into page mode.
 
-> **Note**
+> **Note:**
 >
-> Paging is done individually per address. If you configure a
-> max-size-bytes for an address, that means each matching address will
-> have a maximum size that you specified. It DOES NOT mean that the
-> total overall size of all matching addresses is limited to
-> max-size-bytes.
+> Paging is done individually per address. If you configure a max-size-bytes
+> for an address, that means each matching address will have a maximum size
+> that you specified. It DOES NOT mean that the total overall size of all
+> matching addresses is limited to max-size-bytes.
 
 ### Configuration
 
-Configuration is done at the address settings, done at the main
-configuration file (`broker.xml`).
+Configuration is done at the address settings in `broker.xml`.
 
 ```xml
 <address-settings>
@@ -85,117 +78,90 @@ configuration file (`broker.xml`).
 
 This is the list of available parameters on the address settings.
 
-<table summary="Server Configuration" border="1">
-    <colgroup>
-        <col/>
-        <col/>
-        <col/>
-    </colgroup>
-    <thead>
-    <tr>
-        <th>Property Name</th>
-        <th>Description</th>
-        <th>Default</th>
-    </tr>
-    </thead>
-    <tbody>
-    <tr>
-        <td>`max-size-bytes`</td>
-        <td>What's the max memory the address could have before entering on page mode.</td>
-        <td>-1 (disabled)</td>
-    </tr>
-    <tr>
-        <td>`page-size-bytes`</td>
-        <td>The size of each page file used on the paging system</td>
-        <td>10MiB (10 \* 1024 \* 1024 bytes)</td>
-    </tr>
-    <tr>
-        <td>`address-full-policy`</td>
-        <td>This must be set to PAGE for paging to enable. If the value is PAGE then further messages will be paged to disk. If the value is DROP then further messages will be silently dropped. If the value is FAIL then the messages will be dropped and the client message producers will receive an exception. If the value is BLOCK then client message producers will block when they try and send further messages.</td>
-        <td>PAGE</td>
-    </tr>
-    <tr>
-        <td>`page-max-cache-size`</td>
-        <td>The system will keep up to `page-max-cache-size` page files in memory to optimize IO during paging navigation.</td>
-        <td>5</td>
-    </tr>
-    </tbody>
-</table>
+Property Name|Description|Default
+---|---|---
+`max-size-bytes`|The maximum memory the address can use before entering page mode.|-1 (disabled)
+`page-size-bytes`|The size of each page file used on the paging system|10MiB
+`address-full-policy`|This must be set to `PAGE` for paging to be enabled. If the value is `PAGE` then further messages will be paged to disk. If the value is `DROP` then further messages will be silently dropped. If the value is `FAIL` then the messages will be dropped and the client message producers will receive an exception. If the value is `BLOCK` then client message producers will block when they try and send further messages.|`PAGE`
+`page-max-cache-size`|The system will keep up to `page-max-cache-size` page files in memory to optimize IO during paging navigation.|5
 
 ## Global Max Size
 
-Beyond the max-size-bytes on the address you can also set the global-max-size on the main configuration. If you set max-size-bytes = -1 on paging the global-max-size can still be used.
+Beyond the `max-size-bytes` on the address you can also set the
+`global-max-size` in the main configuration. If you set `max-size-bytes` to
+`-1` for paging, the `global-max-size` can still be used.
 
-When you have more messages than what is configured global-max-size any new produced message will make that destination to go through its paging policy. 
+When you have more messages than what is configured in `global-max-size`, any
+newly produced message will make that destination go through its paging policy.
 
-global-max-size is calculated as half of the max memory available to the Java Virtual Machine, unless specified on the broker.xml configuration.
+`global-max-size` is calculated as half of the max memory available to the Java
+Virtual Machine, unless specified in the `broker.xml` configuration.
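+
+A sketch of setting it explicitly in `broker.xml` (the value is an illustrative
+100 MiB, expressed in bytes):
+
+```xml
+<global-max-size>104857600</global-max-size>
+```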
 
 ## Dropping messages
 
-Instead of paging messages when the max size is reached, an address can
-also be configured to just drop messages when the address is full.
+Instead of paging messages when the max size is reached, an address can also be
+configured to just drop messages when the address is full.
 
-To do this just set the `address-full-policy` to `DROP` in the address
-settings
+To do this just set the `address-full-policy` to `DROP` in the address settings.
 
 ## Dropping messages and throwing an exception to producers
 
-Instead of paging messages when the max size is reached, an address can
-also be configured to drop messages and also throw an exception on the
-client-side when the address is full.
+Instead of paging messages when the max size is reached, an address can also be
+configured to drop messages and also throw an exception on the client-side when
+the address is full.
 
-To do this just set the `address-full-policy` to `FAIL` in the address
-settings
+To do this just set the `address-full-policy` to `FAIL` in the address settings.
 
 ## Blocking producers
 
-Instead of paging messages when the max size is reached, an address can
-also be configured to block producers from sending further messages when
-the address is full, thus preventing the memory being exhausted on the
-server.
+Instead of paging messages when the max size is reached, an address can also be
+configured to block producers from sending further messages when the address is
+full, thus preventing the memory being exhausted on the server.
 
-When memory is freed up on the server, producers will automatically
-unblock and be able to continue sending.
+When memory is freed up on the server, producers will automatically unblock and
+be able to continue sending.
 
 To do this just set the `address-full-policy` to `BLOCK` in the address
 settings
 
-In the default configuration, all addresses are configured to block
-producers after 10 MiB of data are in the address.
+In the default configuration, all addresses are configured to block producers
+after 10 MiB of data are in the address.
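+
+A sketch of an address setting matching the behaviour described above (element
+names as used elsewhere in this chapter; the 10 MiB value is expressed in
+bytes):
+
+```xml
+<address-settings>
+  <address-setting match="#">
+    <max-size-bytes>10485760</max-size-bytes>
+    <address-full-policy>BLOCK</address-full-policy>
+  </address-setting>
+</address-settings>
+```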
 
 ## Caution with Addresses with Multiple Multicast Queues
 
-When a message is routed to an address that has multiple multicast queues bound to
-it, e.g. a JMS subscription in a Topic, there is only 1 copy of the
-message in memory. Each queue only deals with a reference to this.
-Because of this the memory is only freed up once all queues referencing
-the message have delivered it.
+When a message is routed to an address that has multiple multicast queues bound
+to it, e.g. a JMS subscription in a Topic, there is only 1 copy of the message
+in memory. Each queue only deals with a reference to this.  Because of this the
+memory is only freed up once all queues referencing the message have delivered
+it.
 
-If you have a single lazy subscription, the entire address will suffer
-IO performance hit as all the queues will have messages being sent
-through an extra storage on the paging system.
+If you have a single lazy subscription, the entire address will suffer an IO
+performance hit as all the queues will have messages being sent through extra
+storage on the paging system.
 
 For example:
 
--   An address has 10 multicast queues
+- An address has 10 multicast queues
 
--   One of the queues does not deliver its messages (maybe because of a
-    slow consumer).
+- One of the queues does not deliver its messages (maybe because of a
+  slow consumer).
 
--   Messages continually arrive at the address and paging is started.
+- Messages continually arrive at the address and paging is started.
 
--   The other 9 queues are empty even though messages have been sent.
+- The other 9 queues are empty even though messages have been sent.
 
-In this example all the other 9 queues will be consuming messages from
-the page system. This may cause performance issues if this is an
-undesirable state.
+In this example all the other 9 queues will be consuming messages from the page
+system. This may cause performance issues if this is an undesirable state.
 
 ## Max Disk Usage
 
-The System will perform scans on the disk to determine if the disk is beyond a configured limit. 
-These are configured through 'max-disk-usage' in percentage. Once that limit is reached any 
-message will be blocked. (unless the protocol doesn't support flow control on which case there will be an exception thrown and the connection for those clients dropped).
+The system will perform scans on the disk to determine if the disk usage is
+beyond a configured limit. This is configured through `max-disk-usage`, as a
+percentage. Once that limit is reached any message will be blocked (unless the
+protocol doesn't support flow control, in which case an exception will be
+thrown and the connection for those clients dropped).
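+
+A sketch of configuring this in `broker.xml` (`90` is illustrative and means
+messages are blocked once the disk is more than 90% full):
+
+```xml
+<max-disk-usage>90</max-disk-usage>
+```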
 
 ## Example
 
-See the [examples](examples.md) chapter for an example which shows how to use paging with Apache ActiveMQ Artemis.
+See the [Paging Example](examples.md#paging) which shows how to use paging with 
+Apache ActiveMQ Artemis.

