lucene-solr-user mailing list archives

From Shawn Heisey <>
Subject Re: Recovery Issue - Solr 6.6.1 and HDFS
Date Wed, 22 Nov 2017 16:44:43 GMT
On 11/22/2017 6:44 AM, Joe Obernberger wrote:
> Right now, we have a relatively small block cache due to the
> requirements that the servers run other software.  We tried to find
> the best balance between block cache size, and RAM for programs, while
> still giving enough for local FS cache.  This came out to be 84 128M
> blocks - or about 10G for the cache per node (45 nodes total).

How much data is being handled on a server with 10GB allocated for
caching HDFS data?

The first message in this thread says the index size is 31TB, which is
*enormous*.  You have also said that the index takes 93TB of disk
space.  If the data is distributed somewhat evenly, then the answer to
my question would be that each of those 45 Solr servers would be
handling over 2TB of data.  A 10GB cache is *nothing* compared to 2TB.

When index data that Solr needs to access for an operation is not in the
cache and Solr must actually wait for disk and/or network I/O, the
resulting performance usually isn't very good.  In most cases you don't
need to have enough memory to fully cache the index data ... but less
than half a percent is not going to be enough.
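The "less than half a percent" figure follows directly from the numbers quoted in the thread. A quick back-of-envelope check (figures from the thread: 84 block-cache slabs of 128 MB, 93 TB of index across 45 nodes):

```python
# Back-of-envelope check of the cache-to-index ratio discussed above.
# All inputs are figures quoted in the thread, not measured values.

CACHE_BLOCKS = 84        # HDFS block cache slabs per node
BLOCK_SIZE_MB = 128      # size of each slab
TOTAL_INDEX_TB = 93      # total index size on disk
NODES = 45               # Solr servers in the cluster

cache_gb = CACHE_BLOCKS * BLOCK_SIZE_MB / 1024      # per-node cache, GB
index_per_node_gb = TOTAL_INDEX_TB * 1024 / NODES   # per-node index, GB
ratio_pct = 100 * cache_gb / index_per_node_gb

print(f"cache per node:  {cache_gb:.1f} GB")
print(f"index per node:  {index_per_node_gb:.0f} GB (~{index_per_node_gb / 1024:.1f} TB)")
print(f"cache covers:    {ratio_pct:.2f}% of the per-node index")
```

This prints a per-node cache of about 10.5 GB against roughly 2.1 TB of index per node, i.e. the cache covers just under 0.5% of the data it is caching.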
