db-derby-user mailing list archives

From yarono <yar...@il.ibm.com>
Subject Re: Derby db - need to disable improved performance
Date Sun, 03 Jun 2007 16:14:09 GMT

It doesn't make sense that this parameter affected the performance, since I
ran all tests on the same device.

On one hand, the synchronous write test resulted in 116 writes of 24 bytes per
second. ODBC-DB2 (using Derby db from C) performed 232 transactions (incl.
commit) per second, and the embedded version of Derby db in Java performed
about 290 transactions per second.

All the above tests wrote to the same device, so the difference can't be
caused by the device itself.

Here's the main loop of the synchronous write test (xstring_len = 24 and
xstring is char*):
    for (i = 0; i < n_ins; ++i)
        write(file_descriptor, xstring, xstring_len);

yarono wrote:
> My OS is SUSE 10.
> my JVM is IBM's version 1.4.2 as detailed in the following link:
> http://www.novell.com/products/linuxpackages/server10/i386/java-1_4_2-ibm-jdbc.html
> The app is indeed single-threaded, so group commit is not the issue.
> The synchronous write was measured in C (not in Java).
> Is there a way to control or configure the synchronization of writes of
> the JVM?
> Mike Matrigali wrote:
>> Is your app single-threaded? If so, group commit is not the issue.
>> What is your OS?  What is your JVM?  Derby may use different syncing
>> algorithms depending on the JVM version.
>> How did you measure the synchronous write, i.e. did you
>> write a Java program and execute it against the same JVM that Derby is
>> running in?
>> The disk that contains the log directory is the one of interest.
>> Each transaction is made up of a number of log records.  From your
>> description each transaction will have the following:
>> begin log record
>> insert log record for row into base table
>> insert log record for row into primary key index
>> commit log record
>> yarono wrote:
>>> Hello,
>>> I'm working on a simple db. Each record is composed of 3 long values.
>>> The first two are the primary key.
>>> I have to measure the performance of the insertions. Each insertion is
>>> wrapped in a transaction, which is committed with only one insertion in
>>> it.
>>> I've measured the performance of both Berkeley DB and Postgres and got
>>> about 110-115 insertions per second.
>>> Now in Derby db (both in embedded mode and server mode) I get better
>>> performance: about 250-300 insertions per second. This obviously results
>>> from some kind of group commit, although I get these results both when
>>> auto-committing and when manually committing after each insertion.
>>> I've performed a simple test of synchronously writing 24 bytes (3 * 8
>>> bytes) to the disk. It measured 117 writes per second, and I believe
>>> this is the upper bound of any db performance.
>>> So, I don't understand why I get such good performance, even though I
>>> commit after each insertion.
>>> I examined the .dat files in both the /log and /seg0 folders. None of
>>> them grow in 24-byte increments, but rather in bigger segments.
>>> So, my questions are:
>>> 1. Which log file in /log or /seg0 should I examine to analyze the
>>> number of bytes written to disk on each write?
>>> 2. How do I disable group commit, or whatever attribute causes this
>>> communal write? How do I make each transaction be written to disk on
>>> its own?
>>> Thanks in advance,
>>> Yaron

View this message in context: http://www.nabble.com/Derby-db---need-to-disable-improved-performance-tf3796921.html#a10937992
