Thanks a lot for the suggestion. This
would be nice if it performs better.
Is the idea to split one request into
smaller parts or will "Select+Delete IDs" just perform better?
And regarding the latter option: is
this possible in one SQL request? Something like:
DELETE FROM mytable WHERE id IN
  (SELECT id FROM mytable WHERE created_at < some_fixed_millis
   OFFSET 0 ROWS FETCH NEXT 1000 ROWS ONLY)
And then loop through the results by
changing OFFSET and the row count? (Btw: the column created_at is indexed.)
Or would you recommend doing this as 2
separate statements in Java/JDBC? Or maybe even just issuing
the original DELETE request more frequently?
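In Java/JDBC, the two-statement variant I have in mind would look roughly like the following (just a sketch; the class and method names are made up, and I assume no OFFSET is needed because each deleted batch is gone before the next SELECT runs):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class BatchedDelete {

    // Builds "DELETE FROM mytable WHERE id IN (?,?,...)" with n placeholders.
    static String deleteSql(int n) {
        StringBuilder sb = new StringBuilder("DELETE FROM mytable WHERE id IN (");
        for (int i = 0; i < n; i++) {
            sb.append(i == 0 ? "?" : ",?");
        }
        return sb.append(")").toString();
    }

    // Repeatedly selects up to batchSize ids older than cutoffMillis and
    // deletes exactly those rows, committing after each batch so every
    // transaction stays short.
    static void purge(Connection conn, long cutoffMillis, int batchSize)
            throws SQLException {
        conn.setAutoCommit(false);
        while (true) {
            List<String> ids = new ArrayList<>();   // id is VARCHAR
            try (PreparedStatement sel = conn.prepareStatement(
                    "SELECT id FROM mytable WHERE created_at < ? "
                            + "FETCH FIRST " + batchSize + " ROWS ONLY")) {
                sel.setLong(1, cutoffMillis);       // created_at is BIGINT
                try (ResultSet rs = sel.executeQuery()) {
                    while (rs.next()) {
                        ids.add(rs.getString(1));
                    }
                }
            }
            if (ids.isEmpty()) {
                break;                              // nothing old left
            }
            try (PreparedStatement del =
                     conn.prepareStatement(deleteSql(ids.size()))) {
                for (int i = 0; i < ids.size(); i++) {
                    del.setString(i + 1, ids.get(i));
                }
                del.executeUpdate();
            }
            conn.commit();                          // short transaction per batch
        }
    }
}
```

(FETCH FIRST n ROWS ONLY is the Derby row-limiting syntax; whether deleting by key this way actually reduces the INSERT delays is exactly my question.)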
On 06.10.19 03:50, Peter Ondruška wrote:
Peter, try this if it makes a difference:
1. Select entries to be deleted, note their keys.
2. Issue deletes using the keys from step 1,
and use short transaction batches.
I have a table "mytable" with columns "id", "created_at" and a third
column (VARCHAR, BIGINT, LONG VARCHAR), where data is coming in at a
rate of many entries every hour, and I would like to keep only entries
of the last 1 or 2 hours. It is expected behaviour for the user if
data that is too old gets lost, as it is some kind of an LRU cache.
The current solution is to delete entries older than 4 hours:
DELETE FROM mytable WHERE created_at < ?
I'm using this in a prepared statement where ? is "4 hours ago" in
milliseconds (computed via new DateTime().getMillis()).
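In Java that is roughly the following (a simplified sketch; the class and method names are invented, and System.currentTimeMillis() stands in for Joda-Time's new DateTime().getMillis()):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.concurrent.TimeUnit;

public class PurgeOld {

    // "Now minus N hours" in epoch millis; the real code derives "now" from
    // Joda-Time's new DateTime().getMillis(), which is equivalent to
    // System.currentTimeMillis().
    static long cutoff(long nowMillis, long hours) {
        return nowMillis - TimeUnit.HOURS.toMillis(hours);
    }

    // Deletes every row older than 4 hours in a single statement.
    static int purge(Connection conn) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "DELETE FROM mytable WHERE created_at < ?")) {
            ps.setLong(1, cutoff(System.currentTimeMillis(), 4));
            return ps.executeUpdate();
        }
    }
}
```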
This works, but some (not all) INSERT statements get a delay of the
same order (2-5 seconds) as this DELETE takes, which is a problem.
These INSERT statements are executed independently (in separate
threads) of the DELETE.
Is there a better way? Can I somehow avoid the locking?
What helps a bit is making those deletes more frequent: the delays
get smaller, but then the number of delayed INSERTs grows.
What also helps a bit (currently I have not seen a negative side
effect) is increasing the page size for the Derby Network Server:
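Presumably this refers to a setting along the lines of the following (derby.storage.pageSize is the standard Derby page-size property; the value shown is only an example, as the original message's actual setting is cut off):

```properties
# derby.properties -- example value only; the page size applies to
# tables and indexes created after the property is set.
derby.storage.pageSize=32768
```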