db-derby-user mailing list archives

From "Bergquist, Brett" <BBergqu...@canoga.com>
Subject RE: Compress large table
Date Wed, 14 Mar 2012 12:54:45 GMT
The two procedures for compacting (SYSCS_COMPRESS_TABLE and SYSCS_INPLACE_COMPRESS_TABLE) are
going to hold a lock on the table for the full duration of the compression, which for 20 million
rows is going to be a long time.   Not the answer that you want to hear, but that is what it is.
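For reference, the two calls look something like this ('MYSCHEMA' and 'MYTABLE' are placeholders):

```sql
-- Rebuilds the table and its indexes into new conglomerates.
-- SEQUENTIAL = 1 compresses one index at a time (less memory, slower).
CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('MYSCHEMA', 'MYTABLE', 1);

-- In-place variant; the three SMALLINT flags enable the purge,
-- defragment, and truncate-at-end steps respectively (1 = perform).
CALL SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('MYSCHEMA', 'MYTABLE', 1, 1, 1);
```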

Have you tried querying SYSCS_DIAG.SPACE_TABLE to make sure that you actually have space to be
reclaimed?  You might find that compressing the table is not going to return much space in
any case.


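Assuming Derby 10.5 or later, the space check is a query along these lines (ESTIMSPACESAVING is the estimated number of reclaimable bytes per conglomerate):

```sql
-- One row per conglomerate (base table plus each index).
SELECT CONGLOMERATENAME, ISINDEX, NUMALLOCATEDPAGES, NUMFREEPAGES, ESTIMSPACESAVING
  FROM TABLE(SYSCS_DIAG.SPACE_TABLE('MYSCHEMA', 'MYTABLE')) T;
```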
Replace 'MYSCHEMA' and 'MYTABLE' with your particular ones.

A couple of thoughts: it might be that you could copy the rows from the old table into a new
table, then drop the original table and rename the new table back.  Of course, this requires
roughly twice the disk usage while the copy runs, and you would need to make sure that any
rows inserted after the copy starts are also moved to the new table, but it would allow the
old table to remain in use while you are building the new, compressed table.
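A rough sketch of that copy-and-swap, assuming a table named MYTABLE with no foreign keys pointing at it (MYTABLE_NEW is a hypothetical name):

```sql
-- Clone the column definitions only, then bulk-copy the rows.
CREATE TABLE MYTABLE_NEW AS SELECT * FROM MYTABLE WITH NO DATA;
INSERT INTO MYTABLE_NEW SELECT * FROM MYTABLE;

-- Recreate any indexes and constraints on MYTABLE_NEW here,
-- and re-copy rows inserted into MYTABLE since the copy began.
DROP TABLE MYTABLE;
RENAME TABLE MYTABLE_NEW TO MYTABLE;
```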

Hope this helps.


From: Stefan R. [mailto:elstefanito@gmail.com]
Sent: Tuesday, March 13, 2012 12:59 PM
To: derby-user@db.apache.org
Subject: Compress large table

In one of our current projects we are using Derby and have some large db tables (over 20 million
rows and some indexes).
We decided to clean this table with a scheduled task. Now, to be able to free some disk space,
we want to compress the table files using SYSCS_COMPRESS_TABLE. What would be the best way to
use this procedure while keeping the table lock as short as possible? Our app needs to keep
running while the compression is taking place.
Do you have any suggestions?

Thank you,
