db-derby-user mailing list archives

From Sundar Narayanaswamy <sundar...@yahoo.com>
Date Fri, 28 Oct 2011 04:26:54 GMT
I am trying to use the Derby database in embedded mode in an application. I wrote a test program
with a table (named LOCATION) with 4 columns, as below:

create table location(id int, num int, addr varchar(40), zip int, primary key(id, zip))
create index loc_index on location (num)

I insert 10000 rows into the table, then delete all of those rows. I then call
SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE with just the option to purge rows, so that the space
left behind by deleted rows can be reused for future inserts. I run the inserts, deletes and
compress in a loop. After running through the loop a few times, I am noticing that the number
of allocated and unfilled pages for the primary key keeps growing (causing the database size
to grow over time). The longer I run the loop, the larger these numbers are. That does not
happen for the table or the index files, though:
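For reference, the purge-only call I am describing looks like this (assuming the default
APP schema; the last three SMALLINT arguments are PURGE_ROWS, DEFRAGMENT_ROWS, TRUNCATE_END):

CALL SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('APP', 'LOCATION', 1, 0, 0);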

CONGLOMERATE          NumAllocatedPages   NumFreePages   NumUnFilledPages
LOCATION                       1               831               0
SQL111027234806120          1342               294
LOC_INDEX                      1               521               1
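(Figures like the above can be read from Derby's space diagnostic table, for example with a
query along these lines, again assuming the APP schema:

SELECT conglomeratename, numallocatedpages, numfreepages, numunfilledpages
    FROM TABLE(SYSCS_DIAG.SPACE_TABLE('APP', 'LOCATION')) AS t;
)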

The primary key space continues to grow even when I include the defragment_rows and
truncate_end options in the call to the above function.
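That is, with all three options enabled the call looks like this (APP schema assumed):

CALL SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('APP', 'LOCATION', 1, 1, 1);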

CONGLOMERATE          NumAllocatedPages   NumFreePages   NumUnFilledPages
LOCATION                       1                 0               0
SQL111027233119770          1674                47
LOC_INDEX                     13               357               3

The SYSCS_UTIL.SYSCS_COMPRESS_TABLE function works well and leaves no free/unfilled pages,
as expected. However, I am concerned about the efficiency (time taken) of using that compress
function, since there could be tens of millions of rows in a production setup. It seems that
the SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE function with just the purge_rows option would
work best for my situation. Is this function not expected to free up deleted space in primary
key files?
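For comparison, the full-compress call I tested is the one below (APP schema assumed; the
third argument is the SEQUENTIAL flag, which trades memory use for a slower, single-pass
rebuild when non-zero):

CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP', 'LOCATION', 1);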

Thanks very much for your help,