lucene-dev mailing list archives

From "Erik Hatcher (JIRA)" <>
Subject [jira] Closed: (LUCENE-444) StandardTokenizer loses Korean characters
Date Wed, 05 Oct 2005 10:41:55 GMT
Erik Hatcher closed LUCENE-444:

I'm closing this issue... but some unit tests would be nice to go along with this too, eventually

> StandardTokenizer loses Korean characters
> -----------------------------------------
>          Key: LUCENE-444
>          URL:
>      Project: Lucene - Java
>         Type: Bug
>   Components: Analysis
>     Reporter: Cheolgoo Kang
>     Priority: Minor
>      Fix For: 1.9
>  Attachments: StandardTokenizer_Korean.patch
> While using StandardAnalyzer, esp. StandardTokenizer, with a Korean text stream, StandardTokenizer
> ignores the Korean characters. This is because the definition of the CJK token in the StandardTokenizer.jj
> JavaCC file doesn't cover the range of Korean syllables defined in Unicode.
> This patch adds one line covering 0xAC00~0xD7AF, the Korean syllables range, to StandardTokenizer.jj.

This message is automatically generated by JIRA.
If you think it was sent incorrectly contact one of the administrators:
For more information on JIRA, see:

To unsubscribe, e-mail:
For additional commands, e-mail:
