Can you, or anyone else, explain how it works, or what the best practice is?
I think the statement stays open, and for each row the method public void handleRow(Object object) is invoked, or something like that.
I assume that we want to process large datasets and do some operations on them.
Wouldn't it be better in that case to use a stored procedure, e.g. with a cursor in the database to handle this, if we have that option?
It seems to be much faster than using JDBC, and we are working on large datasets.
I don't have time for an exercise now, but for 500,000 records to process, using only a stored procedure should be much faster.
What are your opinions?
Darek Dober
----- Original Message -----
From: Michal Malecki
Sent: Friday, July 08, 2005 4:22 PM
Subject: Re: Best mechanism to read large datasets

I think queryWithRowHandler should do the trick :)
Michał Małecki
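
For readers landing on this thread: Michal's suggestion refers to iBATIS 2.x's SqlMapClient.queryWithRowHandler(statementId, parameterObject, rowHandler), which invokes a RowHandler callback once per fetched row instead of accumulating everything in the List that queryForList() returns. Below is a minimal, self-contained sketch of that callback pattern; the RowHandler interface and queryWithRowHandler method here are simplified stand-ins for illustration, not the real iBATIS classes (the real interface is com.ibatis.sqlmap.client.event.RowHandler):

```java
import java.util.Arrays;
import java.util.List;

// Stand-in for iBATIS's RowHandler callback interface: one method,
// called once per row as the result set is traversed.
interface RowHandler {
    void handleRow(Object valueObject);
}

public class RowHandlerSketch {

    // Simulates what queryWithRowHandler does conceptually: each fetched row
    // is pushed to the handler and then becomes garbage-collectable, instead
    // of being held in one big in-memory List the way queryForList() does.
    static void queryWithRowHandler(List<Object> simulatedResultSet, RowHandler handler) {
        for (Object row : simulatedResultSet) {
            handler.handleRow(row); // process one row at a time
        }
    }

    public static void main(String[] args) {
        List<Object> rows = Arrays.asList((Object) "row-1", (Object) "row-2", (Object) "row-3");

        final int[] processed = {0};
        queryWithRowHandler(rows, new RowHandler() {
            public void handleRow(Object valueObject) {
                processed[0]++; // e.g. write to a file or aggregate, but do not keep the row
            }
        });
        System.out.println("processed " + processed[0] + " rows"); // prints "processed 3 rows"
    }
}
```

Note that actual memory behavior also depends on the JDBC driver's fetch size (Oracle's driver prefetches only a small batch of rows per round trip by default), so the callback approach keeps at most one batch in memory rather than the full result.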
----- Original Message -----
From: Rao, Satish
Sent: Friday, July 08, 2005 4:10 PM
Subject: Best mechanism to read large datasets

I have a situation where I might be retrieving thousands of rows from Oracle. I am using queryForList(), and I assume it stores all the rows in memory. As a result, I am quickly running out of memory.

Is there a way for iBATIS to read each row from the table and discard it as soon as it is processed, so I don't run into out-of-memory issues?