lucene-dev mailing list archives

From "Otis Gospodnetic (JIRA)" <>
Subject [jira] Commented: (LUCENE-759) Add n-gram tokenizers to contrib/analyzers
Date Sat, 03 Mar 2007 16:45:50 GMT


Otis Gospodnetic commented on LUCENE-759:

Ah, I didn't see your comments here earlier, Doron.  Yes, I think you are correct about the
1024 limit.  When I wrote that Tokenizer I was thinking of a TokenFilter, and thus assumed
that the input Reader represents a single Token, which was wrong.  So I thought, "oh, 1024
chars/token, that will be enough".  I ended up needing TokenFilters for SOLR-81, so that's
what I checked in.  Those operate on individual tokens and don't have the 1024-char limitation.

Anyhow, feel free to slap your test + the fix in and thanks for checking!
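
For reference, a minimal sketch of the per-token approach in the Lucene 2.x TokenStream
API (this is not the committed LUCENE-759/SOLR-81 code; the class name SketchNGramFilter
and its details are made up for illustration).  Because the filter slices grams out of each
incoming Token's own text, gram generation is bounded only by the token length, not by a
fixed 1024-char buffer read from the Reader the way the original Tokenizer was:

    import java.io.IOException;
    import org.apache.lucene.analysis.Token;
    import org.apache.lucene.analysis.TokenFilter;
    import org.apache.lucene.analysis.TokenStream;

    // Illustrative sketch only: emits all n-grams of size n from each token.
    public class SketchNGramFilter extends TokenFilter {
      private final int n;     // gram size, e.g. 2 for bigrams
      private Token current;   // token currently being sliced
      private int pos;         // next gram start offset within current

      public SketchNGramFilter(TokenStream input, int n) {
        super(input);
        this.n = n;
      }

      public Token next() throws IOException {
        // Advance to a token that still has a full gram left in it.
        while (current == null || pos + n > current.termText().length()) {
          current = input.next();   // pull the next token from upstream
          if (current == null) {
            return null;            // stream exhausted
          }
          pos = 0;
        }
        String text = current.termText();
        String gram = text.substring(pos, pos + n);
        Token t = new Token(gram,
            current.startOffset() + pos,
            current.startOffset() + pos + n);
        pos++;                      // slide the window by one character
        return t;
      }
    }

A Tokenizer, by contrast, has to manage the Reader itself, which is where a fixed-size
read buffer can silently truncate the input.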

> Add n-gram tokenizers to contrib/analyzers
> ------------------------------------------
>                 Key: LUCENE-759
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Analysis
>            Reporter: Otis Gospodnetic
>         Assigned To: Otis Gospodnetic
>            Priority: Minor
>             Fix For: 2.2
>         Attachments: LUCENE-759-filters.patch, LUCENE-759.patch, LUCENE-759.patch, LUCENE-759.patch
> It would be nice to have some n-gram-capable tokenizers in contrib/analyzers.  Patch
> coming shortly.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.

