lucene-dev mailing list archives

From "Varun Thacker (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SOLR-9764) Design a memory efficient DocSet if a query returns all docs
Date Fri, 03 Feb 2017 11:01:51 GMT

    [ https://issues.apache.org/jira/browse/SOLR-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15851332#comment-15851332 ]

Varun Thacker commented on SOLR-9764:
-------------------------------------

I tried running a small benchmark to see how much memory this saves:

Indexed 10M documents and started Solr with a 4G heap. Then, on this static index, I fired
10k queries: {code}{!cache=false}*:*{code}
Freed memory was calculated by firing the 10k queries, then forcing a GC and reading the
freed memory in GC viewer.
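
A minimal SolrJ sketch of such a query loop (the base URL and collection name are assumptions, not the actual test setup):
{code}
// Sketch of the benchmark's query loop using SolrJ.
// "http://localhost:8983/solr/collection1" is a placeholder for the test instance.
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class MatchAllQueryLoop {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build()) {
      SolrQuery q = new SolrQuery("{!cache=false}*:*");
      for (int i = 0; i < 10_000; i++) {
        client.query(q); // fire the uncached match-all query 10k times
      }
    }
  }
}
{code}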

Freed memory:
Trunk with this patch: 1301 MB
Solr 6.3:              1290 MB

A FixedBitSet of 10M entries translates to a long array of size 156,250, i.e. about 1.2 MB.
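
That arithmetic can be checked directly (a minimal sketch, assuming Lucene's FixedBitSet is on the classpath):
{code}
// Quick check of the bit-array footprint for 10M documents.
import org.apache.lucene.util.FixedBitSet;

public class BitSetFootprint {
  public static void main(String[] args) {
    int numDocs = 10_000_000;
    int words = FixedBitSet.bits2words(numDocs); // ceil(10M / 64) = 156,250 longs
    long bytes = (long) words * Long.BYTES;      // 156,250 * 8 = 1,250,000 bytes ~= 1.2 MB
    System.out.println(words + " longs = " + bytes + " bytes");
  }
}
{code}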

The filterCache/queryResultCache didn't have any entries, but maybe I'm missing something here.
I'll look into the test setup over the next couple of days to see what's wrong.

> Design a memory efficient DocSet if a query returns all docs
> ------------------------------------------------------------
>
>                 Key: SOLR-9764
>                 URL: https://issues.apache.org/jira/browse/SOLR-9764
>             Project: Solr
>          Issue Type: Improvement
>      Security Level: Public(Default Security Level. Issues are Public) 
>            Reporter: Michael Sun
>            Assignee: Yonik Seeley
>             Fix For: 6.5, master (7.0)
>
>         Attachments: SOLR_9764_no_cloneMe.patch, SOLR-9764.patch, SOLR-9764.patch, SOLR-9764.patch,
> SOLR-9764.patch, SOLR-9764.patch, SOLR-9764.patch, SOLR-9764.patch, SOLR-9764.patch
>
>
> In some use cases, particularly those with time series data that use a collection alias
> and partition data into multiple small collections by timestamp, a filter query can
> match all documents in a collection. Currently a BitDocSet is used, which contains a large
> array of long integers with every bit set to 1. After querying, the resulting DocSet saved
> in the filter cache is large and becomes one of the main memory consumers in these use cases.
> For example, suppose a Solr setup has 14 collections for data from the last 14 days, each
> collection with one day of data. A filter query for the last week of data would result in at
> least six DocSets in the filter cache, each matching all documents in one of six collections.
> The goal is to design a new DocSet that is memory efficient for such a use case. The new
> DocSet removes the large array, reducing memory usage and GC pressure without losing the
> advantage of a large filter cache.
> In particular, for use cases with time series data that use a collection alias and partition
> data into multiple small collections by timestamp, the gain can be large.
> For further optimization, it may be helpful to design a DocSet with run-length encoding.
> Thanks [~mmokhtar] for the suggestion.
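
A rough sketch of the core idea: when a set is known to match every document, only the doc count needs to be stored, not a bit per document. (MatchAllDocSet is a hypothetical name for illustration, not necessarily the class in the patch.)
{code}
// Hypothetical illustration only: a "match all" DocSet needs just maxDoc,
// not the ~1.2 MB bit array a FixedBitSet over 10M docs would carry.
public class MatchAllDocSet {
  private final int maxDoc; // number of documents in the index

  public MatchAllDocSet(int maxDoc) {
    this.maxDoc = maxDoc;
  }

  public int size() {
    return maxDoc; // every document matches
  }

  public boolean exists(int docId) {
    // membership test without any backing array
    return docId >= 0 && docId < maxDoc;
  }

  public long ramBytesUsed() {
    return 16; // roughly an object header plus one int, independent of index size
  }
}
{code}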



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org

