lucene-dev mailing list archives

From "Mark Miller (JIRA)" <j...@apache.org>
Subject [jira] [Created] (SOLR-6089) When using the HDFS block cache, a deleted file's underlying data entries in the block cache are not removed, which is a problem with the global block cache option.
Date Mon, 19 May 2014 01:57:37 GMT
Mark Miller created SOLR-6089:
---------------------------------

             Summary: When using the HDFS block cache, a deleted file's underlying
data entries in the block cache are not removed, which is a problem with the global
block cache option.
                 Key: SOLR-6089
                 URL: https://issues.apache.org/jira/browse/SOLR-6089
             Project: Solr
          Issue Type: Bug
          Components: hdfs
            Reporter: Mark Miller
            Assignee: Mark Miller


Patrick Hunt noticed this. Without the global block cache, a block cache was not reused
after its directory was closed. Now that the cache is reused under the global block cache
option, leaving the underlying entries in place is a problem if that directory is created
again, because blocks from the previous directory may be read. This can happen when you
remove a SolrCore and recreate it with the same data directory (or a collection with the
same name). I could only reproduce it easily using index merges (core admin) with the
sequence: merge index, delete collection, create collection, merge index. Reads on the
final merged index can look corrupt, or queries may simply return no results.
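
A minimal sketch of the idea, assuming a hypothetical cache keyed by (file, block index);
this is not the actual Solr/Blur BlockCache API. The missing piece the issue describes is
the purge-on-delete step: without it, a file later recreated under the same name can be
served stale blocks left behind by the previous directory.

import java.util.Map;
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;

public class GlobalBlockCacheSketch {

  // Hypothetical cache key: identifies one block of one file.
  static final class BlockKey {
    final String file;
    final long blockIndex;

    BlockKey(String file, long blockIndex) {
      this.file = file;
      this.blockIndex = blockIndex;
    }

    @Override public boolean equals(Object o) {
      if (!(o instanceof BlockKey)) return false;
      BlockKey other = (BlockKey) o;
      return blockIndex == other.blockIndex && file.equals(other.file);
    }

    @Override public int hashCode() {
      return Objects.hash(file, blockIndex);
    }
  }

  private final Map<BlockKey, byte[]> cache = new ConcurrentHashMap<>();

  void put(String file, long blockIndex, byte[] data) {
    cache.put(new BlockKey(file, blockIndex), data);
  }

  byte[] get(String file, long blockIndex) {
    return cache.get(new BlockKey(file, blockIndex));
  }

  // Purge all of a deleted file's blocks so a recreated file (e.g. a core or
  // collection recreated with the same data directory) cannot hit stale data.
  void onFileDelete(String file) {
    cache.keySet().removeIf(key -> key.file.equals(file));
  }

  public static void main(String[] args) {
    GlobalBlockCacheSketch cache = new GlobalBlockCacheSketch();
    cache.put("_0.cfs", 0, new byte[]{1, 2, 3}); // old directory's data
    // Without the purge, a recreated "_0.cfs" would read the old bytes:
    cache.onFileDelete("_0.cfs");
    assert cache.get("_0.cfs", 0) == null;       // stale block is gone
  }
}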



--
This message was sent by Atlassian JIRA
(v6.2#6252)

