lucene-dev mailing list archives

From "Jason Rutherglen (JIRA)" <>
Subject [jira] Commented: (LUCENE-2324) Per thread DocumentsWriters that write their own private segments
Date Thu, 06 Jan 2011 01:00:51 GMT


Jason Rutherglen commented on LUCENE-2324:

We seem to be going to great lengths to emulate a producer-consumer queue (e.g., ordering of calls with sequence ids, thread pooling) without actually implementing one. A fixed-size blocking queue would simply block threads as needed and would probably look cleaner in code. We could still implement thread affinities, though I can't see most applications requiring affinity, so perhaps we can leave it out for now and add it back later?
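To make the idea concrete, here's a minimal sketch of the blocking-queue approach, assuming a hypothetical pool class (these names are illustrative, not Lucene's actual API): an indexing thread takes a DWPT from a fixed-size queue, blocking when all writers are checked out or flushing, and returns it when done, with no sequence ids or explicit thread pooling required.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch: a pool of per-thread writers backed by a
// fixed-size blocking queue. Callers block in checkout() when every
// writer is busy or flushing, which replaces sequence-id ordering.
class DWPTPool {
  private final BlockingQueue<DocumentsWriterPerThread> pool;

  DWPTPool(int size) {
    pool = new ArrayBlockingQueue<>(size);
    for (int i = 0; i < size; i++) {
      pool.add(new DocumentsWriterPerThread(i));
    }
  }

  DocumentsWriterPerThread checkout() throws InterruptedException {
    return pool.take(); // blocks while all DWPTs are checked out
  }

  void release(DocumentsWriterPerThread dwpt) {
    pool.add(dwpt); // makes the writer available to other threads again
  }
}

// Stand-in for Lucene's real per-thread writer.
class DocumentsWriterPerThread {
  final int id;
  DocumentsWriterPerThread(int id) { this.id = id; }
}
```

A flushing DWPT would simply not be released back into the queue until its flush completes, so the "binding logic avoids unavailable DWPTs" behavior falls out of the queue semantics for free.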

{quote}I think flush control must be global? Ie when we've used too much RAM we
start flushing?{quote}

Right, it should. I'm just not sure we still need FC's global wait during flush; that would seem to go away because the RAM usage tracking is in DW. If we record the incremental RAM used per add/update/delete (which I think we do), then we can enable a pluggable, user-defined flush policy.
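A pluggable policy could be as small as the following sketch; the interface and class names here are hypothetical, assuming DW reports the RAM delta on each add/update/delete:

```java
// Hypothetical sketch of a pluggable, user-defined flush policy. If DW
// reports the incremental RAM delta per add/update/delete, a policy only
// needs the running totals to decide when to flush -- no global waiting.
interface FlushPolicy {
  /** Called after each add/update/delete with the RAM delta in bytes. */
  boolean shouldFlush(long ramDeltaBytes, long totalRamBytes);
}

// Example policy: flush once total buffered RAM crosses a fixed budget.
class RamBudgetFlushPolicy implements FlushPolicy {
  private final long budgetBytes;

  RamBudgetFlushPolicy(long budgetBytes) {
    this.budgetBytes = budgetBytes;
  }

  public boolean shouldFlush(long ramDeltaBytes, long totalRamBytes) {
    return totalRamBytes >= budgetBytes;
  }
}
```

Users could then swap in policies keyed on doc count, RAM, or wall-clock time without touching the DW internals.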

{quote} If a given DWPT is flushing then we pick another? Ie the binding logic
would naturally avoid DWPTs that are not available - either because another
thread has it, or it's flushing. But it would prefer to use the same DWPT it
used last time, if possible (affinity). {quote}

However, once the affinity DWPT's flush completed, we'd need logic to revert back to the original?

I think the 5% model of LUCENE-2573 may typically yield flushes that occur at near-identical intervals, i.e., aggregate indexing will slow down if the DWPTs are all flushing on top of each other. Maybe we should start at 60%, then stagger by multiples of 40% divided by maxThreadStates - 1? Ideally we'd statistically optimize the flush interval per machine; e.g., SSDs and RAM disks will likely require only a small flush-percentage interval.
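The staggering arithmetic above can be sketched as follows (a hypothetical helper, not existing Lucene code): the first DWPT flushes at 60% of the RAM budget and each subsequent one at an additional 40% / (maxThreadStates - 1), so the last writer's threshold lands at 100% and no two writers flush at the same fill level.

```java
// Hypothetical sketch of the staggered flush thresholds proposed above:
// start at 60% of the RAM budget, then space the remaining DWPTs evenly
// across the last 40% so their flushes don't all trigger at once.
class StaggeredThresholds {
  static double[] thresholds(int maxThreadStates) {
    double[] t = new double[maxThreadStates];
    double step =
        maxThreadStates > 1 ? 0.40 / (maxThreadStates - 1) : 0.0;
    for (int i = 0; i < maxThreadStates; i++) {
      t[i] = 0.60 + i * step; // e.g. 5 states: 0.60, 0.70, ..., 1.00
    }
    return t;
  }
}
```

A per-machine tuning pass could then replace the fixed 0.60/0.40 split with measured flush durations for the local disk.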

> Per thread DocumentsWriters that write their own private segments
> -----------------------------------------------------------------
>                 Key: LUCENE-2324
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Index
>            Reporter: Michael Busch
>            Assignee: Michael Busch
>            Priority: Minor
>             Fix For: Realtime Branch
>         Attachments: LUCENE-2324-SMALL.patch, LUCENE-2324-SMALL.patch, lucene-2324.patch,
> lucene-2324.patch, LUCENE-2324.patch, test.out
> See LUCENE-2293 for motivation and more details.
> I'm copying here Mike's summary he posted on 2293:
> Change the approach for how we buffer in RAM to a more isolated
> approach, whereby IW has N fully independent RAM segments
> in-process and when a doc needs to be indexed it's added to one of
> them. Each segment would also write its own doc stores and
> "normal" segment merging (not the inefficient merge we now do on
> flush) would merge them. This should be a good simplification in
> the chain (eg maybe we can remove the *PerThread classes). The
> segments can flush independently, letting us make much better
> concurrent use of IO & CPU.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
