db-derby-user mailing list archives

From "Peter Ondruška" <peter.ondru...@gmail.com>
Subject Re: faster inserts in big tables
Date Tue, 25 Nov 2008 22:22:47 GMT
Go with the latest Derby. If you load data from an external file, use
bulk import. Using a larger log file may also help.
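
For example, a minimal sketch of calling Derby's bulk-import procedure
from JDBC (the database mydb, schema APP, table MYTABLE, and file
data.del are placeholders, not anything from your setup):

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Types;

    public class BulkImport {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
            Connection conn = DriverManager.getConnection("jdbc:derby:mydb");
            // Derby's built-in import procedure.
            CallableStatement cs = conn.prepareCall(
                    "CALL SYSCS_UTIL.SYSCS_IMPORT_TABLE(?, ?, ?, ?, ?, ?, ?)");
            cs.setString(1, "APP");       // schema name
            cs.setString(2, "MYTABLE");   // table name
            cs.setString(3, "data.del");  // delimited data file
            cs.setNull(4, Types.CHAR);    // column delimiter (default ,)
            cs.setNull(5, Types.CHAR);    // character delimiter (default ")
            cs.setNull(6, Types.VARCHAR); // codeset (default = platform)
            cs.setShort(7, (short) 0);    // 0 = append, non-zero = replace
            cs.execute();
            cs.close();
            conn.close();
        }
    }

Import bypasses much of the per-statement overhead, so it is usually
much faster than individual INSERTs.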

2008/11/25, publicayers@verizon.net <publicayers@verizon.net>:
> I have tens of thousands of rows to add to a table, possibly hundreds
> of thousands, and I'm wondering if there's anything else I can do to
> speed this up. The table could end up having a couple of million rows.
> This is what I've done so far:
> * Using a PreparedStatement that gets reused with each insert.
> * Set locking level to TABLE for that table.
> * Turned off autocommit.
> * Set the connection isolation to READ_COMMITTED (sketched below).
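> A rough sketch of these settings together; MYTABLE and its columns are
> illustrative, not my real schema:
>
>     import java.sql.Connection;
>     import java.sql.DriverManager;
>     import java.sql.PreparedStatement;
>     import java.sql.Statement;
>
>     public class FastInserts {
>         public static void main(String[] args) throws Exception {
>             Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
>             Connection conn = DriverManager.getConnection("jdbc:derby:mydb");
>             conn.setAutoCommit(false);
>             conn.setTransactionIsolation(
>                     Connection.TRANSACTION_READ_COMMITTED);
>             // One way to get table-level locking: take the lock up
>             // front; it is held until commit.
>             Statement st = conn.createStatement();
>             st.execute("LOCK TABLE MYTABLE IN EXCLUSIVE MODE");
>             st.close();
>             // One PreparedStatement, reused for every insert.
>             PreparedStatement ps = conn.prepareStatement(
>                     "INSERT INTO MYTABLE (A, B, PAYLOAD) VALUES (?, ?, ?)");
>             for (int i = 0; i < 100000; i++) { // dummy data for the sketch
>                 ps.setLong(1, i);
>                 ps.setLong(2, (long) i * 2);
>                 ps.setBytes(3, new byte[] { 1, 2, 3 });
>                 ps.executeUpdate();
>             }
>             conn.commit();
>             ps.close();
>             conn.close();
>         }
>     }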
> In addition to that, I'm also setting these system parameters, though
> not necessarily to improve insert performance:
> * derby.system.durability=test
> * derby.storage.pageSize=32768
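> These go in derby.properties in the Derby system directory (or can be
> passed as -D JVM options); note that pageSize only affects tables and
> indexes created after it is set:
>
>     # derby.properties
>     derby.system.durability=test
>     derby.storage.pageSize=32768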
> The table has one odd feature: The last column is a VARCHAR(32672) FOR
> BIT DATA. I've tried setting the length to something smaller, but it
> didn't really seem to matter.
> The primary key is an auto-generated INT with another 2-column index on
> two BIGINT columns. Something I found interesting is that the inserts
> seem to go 2x faster if I have the 2-column index in place than if I
> have just the primary-key index.
> I'm running
>     Derby 10.2.2
>     JRE 1.6.0_07
>     Windows XP SP2
> Is there anything else I can do to speed up row inserts?
> Thanks,
> Brian
