tika-dev mailing list archives

From "Caleb Ott (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (TIKA-2802) Out of memory issues when extracting large files (pst)
Date Fri, 04 Jan 2019 16:46:00 GMT

    [ https://issues.apache.org/jira/browse/TIKA-2802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734324#comment-16734324 ]
Caleb Ott commented on TIKA-2802:

[~tallison@apache.org], I was able to use the GitHub master branch as a dependency using [https://jitpack.io/#apache/tika/master-SNAPSHOT].

I am still seeing the `fDTDDecl` field holding a large char[] in memory. It looks like it
is being held in the document source field, which I don't think is getting cleared out by your
fix.

I think the reason the `fDTDDecl` field is never cleared during a reset is that, if it only
holds the XML DTD, it should never grow very large at all. I'm wondering if it is a Xerces bug
that somehow the entire Excel workbook is being stored in the DTD field.

I have figured out a temporary solution that is working for me. I create my own cache of SAX
parsers and add my own parser to the ParseContext before calling parse on Tika. Then I can
manually clear or reset the parser after Tika is done parsing the file. It's not a great
solution, but it is keeping my application from running out of memory for the time being.
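For anyone else hitting this, the shape of the workaround is roughly the following. This is a
minimal JDK-only sketch, not my actual code: the Tika wiring is omitted (in Tika you would put
the parser into the ParseContext, e.g. `context.set(SAXParser.class, parser)`, before calling
parse), and the class and method names here are illustrative. The key step is calling `reset()`
on the long-lived parser after every document so internal buffers like `fDTDDecl` get released:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;

import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class ResettingParserDemo {

    // One long-lived parser, reused across documents (in my app this lives in a cache).
    private static final SAXParser PARSER = newParser();

    private static SAXParser newParser() {
        try {
            return SAXParserFactory.newInstance().newSAXParser();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    /** Parse one document, collecting element names, then reset the parser. */
    static List<String> elementNames(String xml) throws Exception {
        List<String> names = new ArrayList<>();
        try {
            PARSER.parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)),
                    new DefaultHandler() {
                        @Override
                        public void startElement(String uri, String localName,
                                                 String qName, Attributes attrs) {
                            names.add(qName);
                        }
                    });
        } finally {
            PARSER.reset();  // the key step: clear internal state between documents
        }
        return names;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(elementNames("<a><b/></a>"));  // [a, b]
        System.out.println(elementNames("<c/>"));         // parser still usable after reset()
    }
}
```

Without the `reset()` the parser keeps working, but its internal buffers only grow; with it,
each document starts from a clean parser.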

> Out of memory issues when extracting large files (pst)
> ------------------------------------------------------
>                 Key: TIKA-2802
>                 URL: https://issues.apache.org/jira/browse/TIKA-2802
>             Project: Tika
>          Issue Type: Bug
>          Components: parser
>    Affects Versions: 1.20, 1.19.1
>         Environment: Reproduced on Windows 2012 R2 and Ubuntu 18.04.
> Java: jdk1.8.0_151
>            Reporter: Caleb Ott
>            Priority: Critical
>         Attachments: Selection_111.png, Selection_117.png
> I have an application that extracts text from multiple files on a file share. I've been
> running into issues with the application running out of memory (~26g dedicated to the heap).
> I found in the heap dumps there is a "fDTDDecl" buffer which is creating very large char
> arrays and never releasing that memory. In the picture you can see the heap dump with 4 SAXParsers
> holding onto a large chunk of memory. The fourth one is expanded to show it is all being held
> by the "fDTDDecl" field. This dump is from a scaled down execution (not a 26g heap).
> It looks like that DTD field should never be that large, I'm wondering if this is a bug
> with xerces instead? I can easily reproduce the issue by attempting to extract text from large
> .pst files.

This message was sent by Atlassian JIRA
