manifoldcf-user mailing list archives

From: Karl Wright <daddy...@gmail.com>
Subject: Re: dbname.data is huge
Date: Wed, 20 May 2015 01:34:20 GMT
Hi Deanna,

HSQLDB is not great for production use for a number of reasons; it's also
unconstrained in memory consumption.

Indexing 30 rows over and over should not create a huge table; I suspect
that if you queried it you would find the number of rows to be tiny. The
file gets big mainly because HSQLDB's disk-space management algorithms are
not very good. My suggestion is to try MySQL or PostgreSQL instead.
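
For what it's worth, pointing the example at PostgreSQL is mostly a matter of
editing properties.xml. A minimal sketch (property names as I recall them from
the ManifoldCF deployment docs; the database name, user, and password shown are
placeholders, so check everything against your version):

    <!-- properties.xml: use PostgreSQL instead of the embedded HSQLDB -->
    <property name="org.apache.manifoldcf.databaseimplementationclass"
              value="org.apache.manifoldcf.core.database.DBInterfacePostgreSQL"/>
    <property name="org.apache.manifoldcf.database.name" value="dbname"/>
    <property name="org.apache.manifoldcf.database.username" value="manifoldcf"/>
    <property name="org.apache.manifoldcf.database.password" value="local_pg"/>

You'd also need a running PostgreSQL instance and its JDBC driver on the
classpath before restarting the example.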

Karl


On Tue, May 19, 2015 at 6:10 PM, Delapasse, Deanna <
ddelapasse@oceaneering.com> wrote:

> Sorry for all these emails!  One more really easy question and I promise
> to stop bothering y'all for a while.
>
> While running locally on my laptop using the simple example, my dbname.data
> file (I guess this is my HSQLDB database) gets huge!!!  I'm only indexing 30
> rows over and over (working on some connector enhancements and learning
> Elasticsearch), but my dbname.data is already > 11GB.
>
> I don't need to keep any of this data.  Is there a fast, easy way to just
> 'reset' my db? I don't mind recreating my output/repo/jobs.
>
> I'm in a meeting right now listening to my boss tell some bigwigs how
> we've got this all figured out (which explains all my stress-filled emails)
>  :-).
>
> Thanks!!!
> Deanna
>
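
P.S. On the "reset" question quoted above: for the single-process example I
don't know of a dedicated reset command, but since you don't need the data you
can stop the example and simply delete the HSQLDB files; the example should
rebuild the schema on the next start. A sketch, assuming the default file names
in the example directory (you will have to recreate your output and repository
connections and jobs afterwards, since those live in the database too):

    cd example
    # stop ManifoldCF first, then remove the embedded database files
    rm -f dbname.data dbname.script dbname.properties dbname.log dbname.lobs dbname.backup
    java -jar start.jar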
