manifoldcf-user mailing list archives

From Karl Wright <>
Subject Re: Manifold Crawler Crashes
Date Thu, 20 Jun 2019 10:06:07 GMT
Hi Priya,

Being unable to reach the web interface sounds like either a network issue
or a problem with the app server.

Can you describe the configuration you are running in?  Is this a
multiprocess deployment or a single-process deployment?

When your docker container dies, can you still reach it via the standard
in-container bash tools?  What is happening there?
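In case it's useful, a common way to inspect a Docker container that has stopped responding is sketched below (the container name `mcf` is an assumption; substitute the name shown by `docker ps -a`):

```shell
# List all containers, including exited ones, to see whether the
# container actually died or is merely unresponsive.
docker ps -a

# Show the container's state, exit code, and whether the kernel
# OOM-killed it (container name "mcf" is assumed).
docker inspect --format '{{.State.Status}} exit={{.State.ExitCode}} oom={{.State.OOMKilled}}' mcf

# Tail recent stdout/stderr for a crash message or stack trace.
docker logs --tail 200 mcf

# If the container is still running, open a shell inside it.
docker exec -it mcf /bin/bash
```

The `OOMKilled` flag is worth checking first: if the JVM inside the container exceeds the container's memory limit, the kernel kills the process abruptly and nothing appears in manifoldcf.log.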


On Thu, Jun 20, 2019 at 5:54 AM Priya Arora <> wrote:

> Hi Karl,
> "Crash" here means that a "This site can't be reached" kind of HTML page
> appears when accessing http://localhost:3000/mcf-crawler-ui/index.jsp.
> Explanation: when running certain jobs on the ManifoldCF server (2.13),
> after some time in a successful running state, the browser suddenly shows
> "This site can't be reached" and the page does not load again until I
> restart the container through a docker command.
> Once I restart the container, MCF loads again.
> Thanks
> Priya
> On Thu, Jun 20, 2019 at 3:08 PM Karl Wright <> wrote:
>> Please describe what you mean by "crash".  What actually happens?
>> Karl
>> On Thu, Jun 20, 2019, 2:04 AM Priya Arora <> wrote:
>>> Hi,
>>> I am running multiple jobs (2-3) simultaneously on the ManifoldCF
>>> server, and the configuration is:
>>> 1) Crawler server - 16 GB RAM and 8-core Intel(R) Xeon(R) CPU
>>> E5-2660 v3 @ 2.60GHz
>>> 2) Elasticsearch server - 48 GB RAM and 1-core Intel(R) Xeon(R) CPU
>>> E5-2660 v3 @ 2.60GHz
>>> The jobs fetch data from some public and intranet sites and then
>>> ingest it into Elasticsearch.
>>> The maximum connection count on both the repository connections and the
>>> output connection is 48 (for all 3 jobs).
>>> The problem I am facing is that when I run multiple jobs, ManifoldCF
>>> crashes after some time, and there is nothing in the manifold.log
>>> files that hints at an error.
>>> Does the maximum connection count add up (48+48+48) while running all
>>> three jobs together?
>>> Do I need to divide the max connections (48) among all three jobs?
>>> What is the maximum number of connections we can have when running jobs
>>> individually and simultaneously?
>>> What should be the maximum allowed number of handles in the
>>> properties.xml file and the postgres config file?
>>> So the problem is to figure out the reason for the crawler crash.
>>> Can you please help me with that as soon as possible?
>>> Thanks and regards
>>> Priya
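For reference, the handle limit asked about above is configured in ManifoldCF's properties.xml; a minimal sketch follows (the value 50 is illustrative, not a recommendation):

```xml
<!-- properties.xml: cap on the number of database connection handles
     ManifoldCF will open. 50 is an illustrative value; it must not
     exceed what the database server itself allows. -->
<property name="org.apache.manifoldcf.database.maxhandles" value="50"/>
```

On the PostgreSQL side, the corresponding setting is `max_connections` in postgresql.conf, which should be at least the maxhandles value plus headroom for any other clients of the same database.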
