lucene-solr-user mailing list archives

From Steven White <swhite4...@gmail.com>
Subject Scaling data extractor with Solr
Date Mon, 03 Oct 2016 22:13:16 GMT
Hi everyone,

I'm up to speed on how Solr can be set up to provide high availability (if
one Solr server goes down, a backup takes over).  My question is how to make
my custom crawler play "nice" with Solr in this environment.

Let's say I set up Solr with 3 servers so that if one fails, another takes
over.  Let's say I also set up my crawler on 3 servers so that if one goes
down, another takes over.  But how should my crawlers work?  Can each one run
unaware of the others and send the same data to Solr, or must my crawlers
synchronize with each other so that only one is actively sending data to Solr
while the others stay on standby?
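To make the question concrete, here is roughly what each of my crawlers would
do when it pushes a document to Solr -- a minimal SolrJ sketch (the Solr URL,
the collection name "my_collection", and the field names are just
placeholders, not my real setup):

  import org.apache.solr.client.solrj.SolrClient;
  import org.apache.solr.client.solrj.impl.HttpSolrClient;
  import org.apache.solr.common.SolrInputDocument;

  public class CrawlerIndexer {
      // Placeholder endpoint; the real crawler would point at its own
      // Solr host and collection.
      private static final String SOLR_URL =
          "http://solr-host:8983/solr/my_collection";

      public static void main(String[] args) throws Exception {
          try (SolrClient solr = new HttpSolrClient.Builder(SOLR_URL).build()) {
              // Derive the uniqueKey ("id") from the crawled URL.  If two
              // crawlers index the same page, the second add overwrites the
              // first, so no duplicate document is created -- only
              // duplicate work.
              SolrInputDocument doc = new SolrInputDocument();
              doc.addField("id", "http://example.com/page1");
              doc.addField("title", "Example page");
              doc.addField("content", "Body text extracted by the crawler ...");

              solr.add(doc);
              solr.commit();  // in production, commitWithin or autoCommit
          }
      }
  }

Since Solr overwrites documents that share the same uniqueKey, I'm wondering
whether letting all three crawlers run unaware of each other is merely
wasteful (repeated adds of the same id) or actually harmful, versus doing
some form of leader election so only one crawler indexes at a time.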

I'd like to hear how others have solved this problem so I don't end up
re-inventing the wheel.

Thanks.

Steve
