qpid-dev mailing list archives

From "Keith Wall (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (QPID-7753) Sparsely occupied message buffers may lead to java.lang.OutOfMemoryError: Direct buffer memory
Date Wed, 10 May 2017 10:20:04 GMT

     [ https://issues.apache.org/jira/browse/QPID-7753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Keith Wall updated QPID-7753:
-----------------------------
    Fix Version/s: qpid-java-broker-7.0.0

> Sparsely occupied message buffers may lead to java.lang.OutOfMemoryError: Direct buffer memory
> ----------------------------------------------------------------------------------------------
>
>                 Key: QPID-7753
>                 URL: https://issues.apache.org/jira/browse/QPID-7753
>             Project: Qpid
>          Issue Type: Bug
>          Components: Java Broker
>    Affects Versions: qpid-java-6.0, qpid-java-6.1
>            Reporter: Keith Wall
>            Assignee: Keith Wall
>             Fix For: qpid-java-broker-7.0.0
>
>         Attachments: flow-to-disk-based-on-used-direct-memory-6-0-x.diff
>
>
> Some Broker usage patterns can lead to the Broker failing with a "java.lang.OutOfMemoryError: Direct buffer memory" error.
> For the condition to manifest, the producing application must send all of its messages over a single connection, and some of those messages must be consumed quickly whilst others remain on the Broker. This pattern might result from:
> # the consuming application using message selectors to consume some messages whilst leaving others on the Broker;
> # the use of 'out of order' queue types (priority, sorted, etc.) where lower priority items are left on the Broker;
> # the producing application routing messages to multiple queues and the consuming application draining some queues but not others.
> The problem arises from the buffering strategy in the IO layer.
> {{NonBlockingConnection}} allocates a 256K {{netInputBuffer}} from pooled direct memory. This buffer is used for all network reads until the space remaining in it is less than the amount required to complete the AMQP frame currently being read, at which point a new {{netInputBuffer}} is allocated. While parsing AMQP frames, the protocol layers identify the message payload/message headers and create byte-buffer *views* onto the original input buffer. These views are retained by the store until the message is consumed. In the usage pattern described above, a single long-lived message amongst a stream of shorter-lived messages causes the entire 256K buffer chunk to be retained: Qpid cannot dispose of or reuse the buffer until it is entirely unoccupied.
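The retention mechanism above can be illustrated with a minimal, self-contained sketch (this is illustrative JDK-only code, not the Broker's actual implementation): a {{ByteBuffer.slice()}} view shares its parent's backing memory, so even a tiny view keeps the whole direct allocation alive.

```java
import java.nio.ByteBuffer;

public class BufferViewRetention {
    // Hypothetical stand-in for the pooled 256K netInputBuffer chunk.
    static final int CHUNK_SIZE = 256 * 1024;

    public static void main(String[] args) {
        ByteBuffer chunk = ByteBuffer.allocateDirect(CHUNK_SIZE);

        // "Parse" a small message out of the chunk as a view; slice()
        // shares the chunk's backing memory rather than copying it.
        chunk.position(0).limit(100);
        ByteBuffer messageView = chunk.slice();

        // While the 100-byte view remains reachable (e.g. held by the
        // message store), the JVM cannot free the 256K direct allocation.
        System.out.println("view capacity:  " + messageView.capacity());
        System.out.println("chunk capacity: " + chunk.capacity());
        System.out.println("view is direct: " + messageView.isDirect());
    }
}
```

This is why a single long-lived message pins a whole chunk: the pool sees the chunk as occupied until every view onto it has been released.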
> The flow to disk feature is designed to prevent excessive direct memory use by flushing messages to disk when thresholds are breached. It does not help in the scenario described above because its algorithm considers only the total payload size of live messages, not the direct memory actually occupied by their underlying buffers.
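The gap between the two accounting schemes can be made concrete with a back-of-the-envelope sketch (the numbers and names are illustrative, not taken from the Broker): with sparse occupancy, the payload total that flow to disk compares against its threshold stays tiny while the direct memory actually held grows by a full 256K chunk per retained message.

```java
public class FlowToDiskAccounting {
    // Hypothetical figures: 1000 chunks, each pinned by one 100-byte
    // long-lived message left behind by the usage patterns above.
    static final int CHUNK_SIZE = 256 * 1024;

    public static void main(String[] args) {
        int retainedMessages = 1000;
        int payloadBytes = 100;

        // What the flow-to-disk algorithm sums (live message payloads).
        long payloadTotal = (long) retainedMessages * payloadBytes;
        // What the JVM actually holds (one whole chunk per sparse message).
        long directMemoryHeld = (long) retainedMessages * CHUNK_SIZE;

        System.out.println("payload total (bytes):      " + payloadTotal);
        System.out.println("direct memory held (bytes): " + directMemoryHeld);
        // ~100 KB of live payload pins ~256 MB of direct memory, so a
        // payload-based threshold may never trip before the OOM error.
    }
}
```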





