lucene-solr-user mailing list archives

From Erick Erickson <erickerick...@gmail.com>
Subject Re: Can Apache Solr Handle TeraByte Large Data
Date Mon, 03 Aug 2015 18:29:40 GMT
Ahhh, listen to Hatcher if you're not indexing the _contents_ of the
files, just the filenames....

Erick
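The filename-only approach described in this thread can be sketched as a small crawler: walk the directory tree, split each file name on underscores, and post the resulting documents to Solr's JSON update endpoint without ever reading file contents. This is a minimal sketch, not code from the thread; the field names (`id`, `part_s`, `filename_s`) and the update URL are assumptions you would adapt to your own schema.

```python
import json
import os
import urllib.request

def filename_to_doc(filename):
    # Split e.g. "ARIA_SSN10_0007_LOCATION_0000129.pdf" on underscores.
    stem, _ext = os.path.splitext(filename)
    parts = stem.split("_")
    # Field names are hypothetical -- adjust to your Solr schema.
    return {
        "id": stem,
        "part_s": parts,        # multivalued field holding every token
        "filename_s": filename,
    }

def crawl_and_post(root_dir, solr_update_url):
    # Walk the tree and index only the file names; contents are never read.
    docs = [filename_to_doc(f)
            for _dirpath, _dirs, files in os.walk(root_dir)
            for f in files]
    req = urllib.request.Request(
        solr_update_url + "?commit=true",
        data=json.dumps(docs).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

For 40 million files you would batch the documents (say, 10,000 per POST) and commit once at the end rather than per request, but the parsing step stays the same.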

On Mon, Aug 3, 2015 at 2:22 PM, Erik Hatcher <erik.hatcher@gmail.com> wrote:
> Most definitely yes, given your criteria below.  If you don’t need the text within the
files parsed and indexed, it sounds like a simple file system crawler that just reads the
directory listings and posts the file names, split as you’d like, to Solr would suffice.
> —
> Erik Hatcher, Senior Solutions Architect
> http://www.lucidworks.com <http://www.lucidworks.com/>
>
>
>
>
>> On Aug 3, 2015, at 1:56 PM, Mugeesh Husain <mugeesh@gmail.com> wrote:
>>
>> Hi Alexandre,
>> I have 40 million files stored in a file system,
>> with filenames saved in the form ARIA_SSN10_0007_LOCATION_0000129.pdf.
>> 1.) I have to split each filename on the underscores, and these values have
>> to be indexed into Solr.
>> 2.) The file contents (text) do not need to be indexed.
>>
>> You told me "the answer is Yes", but I didn't get in which way you meant Yes.
>>
>> Thanks
>>
>>
>>
>>
>> --
>> View this message in context: http://lucene.472066.n3.nabble.com/Can-Apache-Solr-Handle-TeraByte-Large-Data-tp3656484p4220527.html
>> Sent from the Solr - User mailing list archive at Nabble.com.
>
