db-derby-user mailing list archives

From "Bergquist, Brett" <Br...@canoga.com>
Subject Thoughts on a design to help a use case with continuously inserted and deleted records
Date Wed, 04 Aug 2010 17:08:20 GMT
Okay after some testing, it seems that Derby is reusing the space of deleted records before
allocating more space from the file system.  If this is the case then what use does the call:

                call SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE(SCHEMA, TABLE, 1, 0, 0);

have?  Basically, what does the PURGE_ROWS option do beyond what Derby already does to
reuse the space of deleted records?
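For anyone following along, the three numeric arguments to the in-place compress are the PURGE_ROWS, DEFRAGMENT_ROWS, and TRUNCATE_END flags; the call above enables only the purge phase. A call enabling all three phases (schema and table names are placeholders here) would look like:

```sql
-- Purge committed deleted rows, defragment the remaining rows,
-- and return free space at the end of the file to the OS.
-- 'SCHEMA' and 'TABLE' stand in for the actual schema/table names.
CALL SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('SCHEMA', 'TABLE', 1, 1, 1);
```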

Also, after testing we are seeing the following.  With a database containing no deleted rows,
my test application inserts about 150 records/second into a table.  I let this run for about
2 million records and the insert rate is consistent.  Now I purge out 1.5 million records
and run the test again.  The insert rate is now about 25 records/second.  Running the
above compress with the PURGE_ROWS option and rerunning the test still yields about
25 records/second.  Running the full SYSCS_UTIL.SYSCS_COMPRESS_TABLE and rerunning the
test brings the insert rate back to 150 records/second.
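The full compress referred to here would be, with the same placeholder schema/table names, something like:

```sql
-- Rebuilds the table and its indexes into newly allocated space and
-- releases the old space back to the file system.  Note that
-- SYSCS_COMPRESS_TABLE holds an exclusive table lock while it runs.
-- The third argument (SEQUENTIAL = 1) trades speed for lower
-- memory and temporary disk usage during the rebuild.
CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('SCHEMA', 'TABLE', 1);
```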

The reduced insert rate caused by deleted records is very much a problem for us.  We have a
table that gets about 700K records inserted per day and purges out data older than 30 days,
also at about 700K records per day, so the deleted rows badly degrade our insert rate.  Why
is there such a big hit from deleted records, and can anything other than a compress help?
This process has no downtime window, so running a compress can only be done maybe once a month.

Thanks for any feedback.
