db-derby-user mailing list archives

From Satheesh Bandaram <sathe...@Sourcery.Org>
Subject Re: Question about delete-connected restrictions
Date Thu, 02 Feb 2006 03:24:25 GMT
Glad you fixed your own problem... the power of open source! If you have
sufficient interest, you could merge the patch that is already present
into the current codeline, fix the one issue posted as a comment, and
submit it to Apache. You might as well experience all parts of open
source....

Thanks for using Derby


Justin Patterson wrote:

> It looks like that is, indeed, the problem.  I just commented out that
> DDL-time check and now it's working fine.  I realize that I have
> nothing to cover me at run-time, but the situation is not going to
> come up in my app.  I'll just upgrade when the fix makes it in
> officially.
> Thanks!
> -Justin
> Satheesh Bandaram wrote:
>> Hi Justin,
>> You may be hitting DERBY-338
>> <http://issues.apache.org/jira/browse/DERBY-338>. I think the problem
>> there is that Derby tries to catch possible conflicts in cascade delete
>> actions at DDL time, when runtime checking is what is needed. In your
>> case, you said you don't expect a conflict at run-time.
>> There has been a partial patch pending for DERBY-338 for a long time
>> now. It is probably out of sync with the code. Do you think that could
>> be the problem you are seeing? Is there any way you can define your DDL
>> without the "conflicting actions" Derby seems to think it sees? Derby
>> also has triggers; perhaps you could use those for part of your schema.
>> Satheesh
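
[The trigger approach suggested above could be sketched roughly as follows. This is only an illustration, not code from the thread: it assumes the fk_mediafile_trackid foreign key is dropped entirely (even a plain NO ACTION foreign key would still block the delete) and that the SET NULL behavior is emulated by an AFTER DELETE trigger on track. The trigger name is made up for the example.]

```sql
-- Sketch: with no foreign key on mediafile.trackid, the DDL-time
-- delete-connected check no longer applies; a row trigger nulls out
-- trackid whenever the referenced track row is deleted.
CREATE TRIGGER trg_track_delete
AFTER DELETE ON track
REFERENCING OLD AS deleted
FOR EACH ROW
UPDATE mediafile SET trackid = NULL WHERE trackid = deleted.id;
```

[Note the trade-off: without the foreign key, nothing stops an INSERT of a mediafile row with a dangling trackid, so that check would have to move to the application.]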
>> Justin Patterson wrote:
>>> Hey Derby Users,
>>> I'm designing an application to keep up with my music collection. 
>>> It's currently running fine on Hibernate/One$DB until I execute 20k
>>> or so transactions.  Then, the JVM runs out of memory.  After
>>> profiling it, I determined that the leaks are related to One$DB and
>>> that's just the final straw for me.  I submitted some questions to
>>> the One$DB forums, but they're not very active and there doesn't
>>> seem to be a new version forthcoming.  So, I'm shopping around for
>>> a better embedded RDBMS and I've heard great things about Derby.
>>> After porting my schema over, I ran into something that's peculiar
>>> (to me, anyway) and I'd appreciate any advice that you can give.  My
>>> apologies if this is not Derby-specific, but I didn't have any
>>> problems doing this in my schema with either MySQL or One$DB.
>>> Here are the important snippets from my DDL script (I snipped the
>>> irrelevant columns and tables):
>>> CREATE TABLE playable (
>>>  id INTEGER,
>>>  CONSTRAINT pk_playable_id PRIMARY KEY (id)
>>> );
>>> CREATE TABLE track (
>>>  id INTEGER,
>>>  CONSTRAINT pk_track_id PRIMARY KEY (id),
>>>  CONSTRAINT fk_track_id FOREIGN KEY (id) REFERENCES playable(id)
>>> );
>>> CREATE TABLE mediafile (
>>>  id INTEGER,
>>>  trackid INTEGER,
>>>  CONSTRAINT pk_mediafile_id PRIMARY KEY (id),
>>>  CONSTRAINT fk_mediafile_id FOREIGN KEY (id) REFERENCES
>>> playable(id) ON DELETE CASCADE,
>>>  CONSTRAINT fk_mediafile_trackid FOREIGN KEY (trackid) REFERENCES
>>> track(id) ON DELETE SET NULL
>>> );
>>> When I run that, I get:
>>> ERROR 42915: Foreign  Key 'FK_MEDIAFILE_TRACKID' is invalid because
>>> 'the delete rule of foreign key must be CASCADE. (The relationship
>>> would cause the table to be delete-connected to the same table
>>> through multiple relationships and such relationships must have the
>>> same delete rule (NO ACTION, RESTRICT or CASCADE).) '
>>> So, I looked it up in the DB2 docs and I understand where the error
>>> is coming from, but I don't see how to design around it.  What I
>>> want is to have tracks and mediafiles with unique IDs (across both
>>> tables).  The playable table should achieve that.  If the parent
>>> playable is deleted, I want the track or mediafile subclass (in
>>> Hibernate/Java terms) to be deleted as well.  If the parent track of
>>> a mediafile is deleted, I want the mediafile's trackid to be set to
>>> NULL.
>>> I think that I understand why the delete-connected restriction is
>>> there, but since I have the foreign keys in two separate columns and
>>> the parent rows of any involved track and mediafile rows must be
>>> dependent on different rows in the playable table, I don't think
>>> that it should be an issue for me (logically, anyway).
>>> Can anyone think of a workaround (or a better way to design the
>>> schema)?  I thought about using a shared sequence, but that's not
>>> available either.
>>> Thanks for your help!
>>> -Justin
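
[For reference, one purely declarative way to satisfy the delete-connected restriction quoted in the error message is to give every relationship in the cycle the same delete rule. The sketch below, using the playable/track/mediafile names from the snippet above, makes all three foreign keys CASCADE. Note this changes the stated semantics: deleting a track would now delete its media files rather than null out trackid, so it only fits if that behavior is acceptable.]

```sql
CREATE TABLE playable (
 id INTEGER NOT NULL,
 CONSTRAINT pk_playable_id PRIMARY KEY (id)
);

CREATE TABLE track (
 id INTEGER NOT NULL,
 CONSTRAINT pk_track_id PRIMARY KEY (id),
 -- CASCADE so that deleting a playable deletes its track subclass row
 CONSTRAINT fk_track_id FOREIGN KEY (id) REFERENCES playable(id)
  ON DELETE CASCADE
);

CREATE TABLE mediafile (
 id INTEGER NOT NULL,
 trackid INTEGER,
 CONSTRAINT pk_mediafile_id PRIMARY KEY (id),
 CONSTRAINT fk_mediafile_id FOREIGN KEY (id) REFERENCES playable(id)
  ON DELETE CASCADE,
 -- CASCADE instead of SET NULL: every delete path now has the same rule
 CONSTRAINT fk_mediafile_trackid FOREIGN KEY (trackid) REFERENCES track(id)
  ON DELETE CASCADE
);
```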
