db-derby-user mailing list archives

From <de...@segel.com>
Subject RE: OutOfMemoryErrors when testing Derby with DOTS
Date Wed, 01 Feb 2006 15:43:12 GMT

> -----Original Message-----
> From: John Embretsen [mailto:John.Embretsen@Sun.COM]
> Sent: Wednesday, February 01, 2006 9:12 AM
> To: Derby Discussion
> Subject: Re: OutOfMemoryErrors when testing Derby with DOTS
> Michael Segel wrote:
> > I think that there are two issues. One is how Derby handles itself and
> > attempts to clean up stale objects.
> >
> > The second is that whoever wrote the test didn't know what they were
> > doing. So the question is: "Should Derby be smart enough to protect bad
> > programmers from themselves?"
> And representatives from both the "yes" and "no" camps have provided
> arguments in this thread, which is good I guess :) I believe this is not
> an either/or issue, but that it has to be considered on a case-by-case
> basis.
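To make the pattern under debate concrete, here is a minimal, self-contained sketch of what "not cleaning up stale objects" looks like in JDBC client code. This is purely illustrative: the `LeakSketch` class, the counting proxy, and the `SELECT 1 FROM T` query are hypothetical stand-ins, not the actual DOTS test or Derby internals. The proxy just counts handles so the leak is visible without a real database.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.concurrent.atomic.AtomicInteger;

public class LeakSketch {

    // Counts JDBC handles that were created but not yet close()d.
    static final AtomicInteger openHandles = new AtomicInteger();

    // Dynamic proxy standing in for a JDBC object: bumps the counter on
    // creation, decrements it on close(), and hands out further tracked
    // proxies for createStatement()/executeQuery().
    @SuppressWarnings("unchecked")
    static <T> T tracked(Class<T> iface) {
        openHandles.incrementAndGet();
        InvocationHandler h = (proxy, method, args) -> {
            switch (method.getName()) {
                case "close":           openHandles.decrementAndGet(); return null;
                case "createStatement": return tracked(Statement.class);
                case "executeQuery":    return tracked(ResultSet.class);
                case "next":            return false;  // pretend the result set is empty
                default:                return null;
            }
        };
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
                new Class<?>[] { iface }, h);
    }

    // The leaky shape: statements and result sets opened in a loop and
    // never closed -- each iteration strands two handles.
    static void leaky(Connection conn, int iterations) throws Exception {
        for (int i = 0; i < iterations; i++) {
            Statement stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT 1 FROM T"); // placeholder query
            while (rs.next()) { /* consume */ }
            // missing: rs.close(); stmt.close();
        }
    }

    // The defensive shape: close both handles in finally blocks so the
    // driver can release its per-statement bookkeeping either way.
    static void careful(Connection conn, int iterations) throws Exception {
        for (int i = 0; i < iterations; i++) {
            Statement stmt = conn.createStatement();
            try {
                ResultSet rs = stmt.executeQuery("SELECT 1 FROM T");
                try {
                    while (rs.next()) { /* consume */ }
                } finally { rs.close(); }
            } finally { stmt.close(); }
        }
    }

    public static void main(String[] args) throws Exception {
        Connection conn = tracked(Connection.class);
        int before = openHandles.get();
        leaky(conn, 100);
        System.out.println("leaky stranded " + (openHandles.get() - before) + " handles");
        before = openHandles.get();
        careful(conn, 100);
        System.out.println("careful stranded " + (openHandles.get() - before) + " handles");
    }
}
```

Whether the driver should clean up after the leaky variant on the client's behalf, or let it run out of memory, is exactly the footprint-versus-forgiveness question discussed above.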

I think you're on the right track. And this is kind of the confusing part.
At a macro level, one has to decide the future of Derby/Cloudscape/JavaDB.
(Since they are all the same code stream...)

Do you want to have a small footprint that relies on the developer/DBA
knowing what they are doing? Or do you want something that is self-healing,
or that has advanced features, at the cost of a larger footprint?

This is critical, in that it has an influence on the design of the smaller
components as well.

At the micro level, you are correct. Each component's design needs to
consider size and how much functionality to provide, including potential
extensibility. At this level it is a case-by-case decision.

> There is also the question of protecting _other_ Derby users from "bad
> programmers" (e.g. in a multi-user Client/Server environment where one
> Derby Network Server is shared by multiple users with their own
> databases and/or JDBC client applications accessing these databases),
> which I think is important to consider.
True, no disagreement there.
Yet outside of patches, what else should be done?
Keep in mind that with an Apache product, any IP you add is given away
freely, so that limits what some are willing to contribute. Also, what
about the quality of the work being done? Not everyone here is at the
same level of coding or design.

> > With respect to both issues, any solution will increase the footprint of
> > Derby.  This may be a bad thing.
> As you can see from my recent comment to DERBY-210, the patch Deepa has
> uploaded to that issue eliminated the OutOfMemory error I was reporting,
> at least for the first 24 hours (using the same test setup) of the test
> run.
Ok, but a patch is one thing. Adding an advanced SQL optimizer, using
tablespaces/chunks (raw I/O), etc. ... all these features are worthwhile if
you're developing a first-tier database, yet they all add real estate.

> As far as I know, the patch changes just a few things in the client,
> which in my case increased the footprint of derbyclient.jar by a
> negligible amount (around 50 bytes). So in this particular case, it was
> not a very bad thing, in my opinion. I understand your concern, though.
> > So the better question is... do you blame the hammer or the carpenter
> > when he can't hit a nail straight into the wood?
> Who to blame is IMHO not the most important thing if the house falls
> down on the person who lives there because of this.
Again, do you blame the carpenter or the products he uses? (Keep in mind
that other carpenters are using the same product too. ;-)

I don't think we disagree at all on this.
Patches and bug fixes are necessary, and I don't consider them to be
something that increases the size of the footprint. What I am more
concerned about is the feature requests that we will want to consider.
(Sun has their agenda, IBM has theirs.)

Does this make sense?

> --
> John
