lucene-dev mailing list archives

From "Mark Harwood (JIRA)" <>
Subject [jira] [Updated] (LUCENE-6747) FingerprintFilter - a TokenFilter for clustering/linking purposes
Date Thu, 20 Aug 2015 08:42:45 GMT


Mark Harwood updated LUCENE-6747:
    Attachment: fingerprintv2.patch

Thanks for taking a look, Adrien.
Added a v2 patch with the following changes:

1) Added a call to input.end() to get the final offset state
2) The final state is retained using captureState()
3) Added a FingerprintFilterFactory class
As for the alternative hashing idea: it would be nice for speed, but it reduces the readability of results if you
want to debug collisions or otherwise display connections.

For compactness reasons (storing in doc values etc.) it would always be possible to chain a
conventional hashing algorithm in a TokenFilter on the end of this text-normalizing filter. (Do
we already have a conventional hashing TokenFilter?)
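The chaining idea above can be sketched in standalone Java, outside any TokenStream API (the class and method names here are illustrative, and MD5 stands in for whatever "conventional hashing" step would actually be used):

```java
import java.security.MessageDigest;

public class HashChainDemo {
    // A conventional hashing step that could be chained after the
    // text-normalizing fingerprint filter when compact storage
    // (e.g. in doc values) matters more than readability.
    static String hashHex(String fingerprint) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(fingerprint.getBytes("UTF-8"))) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // The readable fingerprint keeps collisions debuggable;
        // the digest trades that for a fixed, compact size.
        String fp = "fox quick the";
        System.out.println(fp);
        System.out.println(hashHex(fp)); // 32 hex characters
    }
}
```

The trade-off is exactly the one described above: the digest is fixed-size and compact, but two colliding inputs can no longer be inspected by eye.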

> FingerprintFilter - a TokenFilter for clustering/linking purposes
> -----------------------------------------------------------------
>                 Key: LUCENE-6747
>                 URL:
>             Project: Lucene - Core
>          Issue Type: New Feature
>          Components: modules/analysis
>            Reporter: Mark Harwood
>            Priority: Minor
>         Attachments: fingerprintv1.patch, fingerprintv2.patch
> A TokenFilter that emits a single token which is a sorted, de-duplicated set of the input tokens.
> This approach to normalizing text is used in tools like OpenRefine [1] and elsewhere [2]
> to help in clustering or linking texts.
> The implementation proposed here has an upper limit on the size of the combined token
> which is output.
> [1]
> [2]
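As a rough illustration of the behavior the issue describes (a standalone sketch, not the attached patch; in particular, returning null when the cap is exceeded is an assumption about overflow handling, not something the patch is known to do):

```java
import java.util.SortedSet;
import java.util.TreeSet;

public class FingerprintSketch {
    // Builds the single fingerprint token: input terms sorted and
    // de-duplicated, then joined with a separator. Returns null when
    // the combined token would exceed the size cap (an illustrative
    // choice; the patch's actual overflow behavior is not shown here).
    static String fingerprint(String[] terms, char sep, int maxSize) {
        SortedSet<String> unique = new TreeSet<>();
        for (String t : terms) unique.add(t);
        int len = unique.size() - 1; // one separator between each pair
        for (String t : unique) len += t.length();
        if (len > maxSize) return null;
        StringBuilder sb = new StringBuilder(len);
        for (String t : unique) {
            if (sb.length() > 0) sb.append(sep);
            sb.append(t);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(fingerprint(
            new String[] {"the", "quick", "fox", "the"}, ' ', 1024));
        // -> "fox quick the"
    }
}
```

Because the output is sorted and de-duplicated, texts that differ only in word order or repetition map to the same fingerprint, which is what makes it usable for clustering and linking.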

This message was sent by Atlassian JIRA
