[https://issues.apache.org/jira/browse/TIKA-2802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16733511#comment-16733511]
Hudson commented on TIKA-2802:
------------------------------
UNSTABLE: Integrated in Jenkins build Tika-trunk #1614 (See [https://builds.apache.org/job/Tika-trunk/1614/])
TIKA-2802 -- try to clear the XMLReader's resources to avoid OOM (tallison: [https://github.com/apache/tika/commit/a0688825b15b8d3f1672236b0f1f6536c8a863c4])
* (edit) tika-core/src/main/java/org/apache/tika/utils/XMLReaderUtils.java
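
For readers following along, here is a minimal, hedged sketch of the general idea behind "clearing the XMLReader's resources" after use. It is not the actual XMLReaderUtils change; the class and method names are hypothetical. The point is simply that a pooled SAXParser whose internal buffers have grown (e.g. Xerces' fDTDDecl char[]) should be reset and, defensively, replaced rather than reused as-is, so the oversized buffers can be garbage collected.

{code:java}
import javax.xml.parsers.ParserConfigurationException;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;

import org.xml.sax.SAXException;

/**
 * Illustrative pool-recycling helper (hypothetical names, not Tika's
 * XMLReaderUtils): instead of returning a used parser to the pool with its
 * inflated internal buffers, reset it and hand back a freshly built parser.
 */
public class ParserRecycler {

    private final SAXParserFactory factory;

    public ParserRecycler() {
        factory = SAXParserFactory.newInstance();
        factory.setNamespaceAware(true);
    }

    /** Build a brand-new parser with default-sized internal buffers. */
    public SAXParser newParser() throws ParserConfigurationException, SAXException {
        return factory.newSAXParser();
    }

    /**
     * reset() restores the parser's original configuration, but it is not
     * guaranteed to shrink internal buffers, so the used instance is dropped
     * and a new one is returned in its place.
     */
    public SAXParser recycle(SAXParser used) throws ParserConfigurationException, SAXException {
        used.reset();        // restore original configuration
        return newParser();  // discard the old instance so large buffers can be GC'd
    }
}
{code}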
> Out of memory issues when extracting large files (pst)
> ------------------------------------------------------
>
> Key: TIKA-2802
> URL: https://issues.apache.org/jira/browse/TIKA-2802
> Project: Tika
> Issue Type: Bug
> Components: parser
> Affects Versions: 1.20, 1.19.1
> Environment: Reproduced on Windows 2012 R2 and Ubuntu 18.04.
> Java: jdk1.8.0_151
>
> Reporter: Caleb Ott
> Priority: Critical
> Attachments: Selection_111.png
>
>
> I have an application that extracts text from multiple files on a file share. I've been running into issues with the application running out of memory (~26g dedicated to the heap).
> In the heap dumps I found an "fDTDDecl" buffer that creates very large char arrays and never releases that memory. The attached picture shows the heap dump with 4 SAXParsers holding onto a large chunk of memory; the fourth one is expanded to show it is all being held by the "fDTDDecl" field. This dump is from a scaled-down execution (not a 26g heap).
> It looks like that DTD field should never be that large; I'm wondering if this is a bug in Xerces instead. I can easily reproduce the issue by attempting to extract text from large .pst files.
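
For context, below is a minimal sketch of the kind of extraction that reproduces this, using the standard Tika facade API (AutoDetectParser + BodyContentHandler). The .pst path is hypothetical; this is only an illustration, not the reporter's application.

{code:java}
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.AutoDetectParser;
import org.apache.tika.sax.BodyContentHandler;

public class ExtractPst {
    public static void main(String[] args) throws Exception {
        // Hypothetical path to a large .pst file
        Path pst = Paths.get("/data/mailboxes/archive.pst");

        AutoDetectParser parser = new AutoDetectParser();
        BodyContentHandler handler = new BodyContentHandler(-1); // no write limit
        Metadata metadata = new Metadata();

        try (InputStream stream = Files.newInputStream(pst)) {
            parser.parse(stream, handler, metadata);
        }
        System.out.println(handler.toString().length() + " characters extracted");
    }
}
{code}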
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)