james-server-dev mailing list archives

From Oki DZ <ok...@pindad.com>
Subject Re: DB connection pooling in 1.3-dev
Date Fri, 20 Jul 2001 09:09:45 GMT
On Fri, 20 Jul 2001, Serge Knystautas wrote:
> You're right, the current code in CVS doesn't include connection pooling
> (the JDBCMailRepository is what I've been working on).  I have to finish the
> configuration stuff along with that, and was planning to integrate the
> excalibur connection pooling code in with that.  

In addition to connection pooling, I think we need a superclass for
JDBCMailRepository (and, as usual, the class should be a subclass of
AbstractLoggable). The purpose of that class would be to define the SQL
statements used throughout JDBCMailRepository; in the future it could be
configured to read those statements from the config file. That way,
JDBCMailRepository could easily be adapted to whatever db backend is
currently in use. Then JDBCMailRepository would live up to its name: it's
"JDBC", so the class shouldn't be tied to any particular database server.
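To make the idea concrete, here is a rough sketch of such a superclass. Everything in it (the AbstractSqlRepository name, getSqlStatement, the "spool" table in the default SQL) is hypothetical, not actual James code, and the AbstractLoggable parent is omitted so the sketch stays self-contained:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch only: the class and method names below are made up
// for illustration, not taken from the James source tree.
abstract class AbstractSqlRepository {
    // Statement templates keyed by a logical name.  The defaults live here,
    // but reading them from the config file at init time would work the same.
    private final Map<String, String> statements = new HashMap<String, String>();

    {
        statements.put("listMessages",
            "SELECT message_name FROM spool WHERE repository_name = ?");
    }

    // The config (or a backend-specific subclass) can swap in different SQL.
    public void setSqlStatement(String name, String sql) {
        statements.put(name, sql);
    }

    public String getSqlStatement(String name) {
        return statements.get(name);
    }
}
```

A JDBCMailRepository built on this would call getSqlStatement("listMessages") instead of hard-coding vendor-specific SQL, so switching backends only means swapping statement text.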

BTW, I have started looking at JDBCSpoolRepository. I think it would be
better if the list (the content of the result set) were stored in a
cache. As I understand the code, the accept() method waits for a
"connection" and then retrieves the message list from the database. If a
message in the list can be locked (meaning no thread is working on it),
that message name is returned and the method exits; the loop only keeps
going while every message in the list is already locked. The problem is
that this causes a lot of retrievals from the database: the more of the
spool other threads have already locked, the more (re-)retrievals there
are, so a lot of time is spent just re-querying.

    public synchronized String accept() {
        while (true) {
            try {
                Connection conn = getConnection();
                // SQL shown here is illustrative; the real statement lives
                // in the class (and could come from the proposed superclass).
                PreparedStatement listMessages = conn.prepareStatement(
                    "SELECT message_name FROM spool WHERE repository_name = ?");
                listMessages.setString(1, repositoryName);
                ResultSet rsListMessages = listMessages.executeQuery();

                while (rsListMessages.next()) {
                    String message = rsListMessages.getString(1);

                    // Return the first message no other thread has locked.
                    if (lock.lock(message)) {
                        return message;
                    }
                }
            } catch (SQLException sqle) {
                // log and fall through: the outer loop re-queries the spool
            }
        }
    }
If the list were put in a cache (before the loop, of course), then the
accept() method would first check whether the cache is empty; it is only
empty the first time the spool gets processed. If it is empty, retrieve
the message list from the database and put it in the cache. The next
time accept() is invoked, the message can be taken straight from the
cache, which would be much faster. The cache should be global (of
course), so that more than one thread can work on it. There is
TurbineGlobalCache for this; all you need is an instance of it plus a
wrapper for each element (the message_name) in the list, and the wrapper
should be a CachedObject.

There would be no problem if James got restarted; the content of the
cache would be gone, sure, but the messages would still be in the spool.

One question remains, though: how many times a minute should the spool
be looked up? I think it would be sufficient to put a certain number of
messages in the cache and only ask for more once the cache has been
emptied.
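To sketch how that batched refilling could look (again hypothetical: a plain list stands in for TurbineGlobalCache, and fetchMessageBatch(), tryLock(), and BATCH_SIZE are made-up stand-ins for the real query and lock manager):

```java
import java.util.Collections;
import java.util.LinkedList;
import java.util.List;

// Hypothetical sketch of the batched-cache idea; not actual James code.
class SpoolCacheSketch {
    private static final int BATCH_SIZE = 50; // how many names to fetch per query
    private final LinkedList<String> cache = new LinkedList<String>();

    // Stand-in for the database query; in James this would execute the
    // listMessages PreparedStatement and collect the message names.
    protected List<String> fetchMessageBatch(int max) {
        return Collections.emptyList();
    }

    // Stand-in for the lock manager; true means we now own the message.
    protected boolean tryLock(String message) {
        return true;
    }

    public synchronized String accept() {
        while (true) {
            if (cache.isEmpty()) {
                // First call, or the batch is used up: go back to the database.
                cache.addAll(fetchMessageBatch(BATCH_SIZE));
                if (cache.isEmpty()) {
                    return null; // spool is empty (real code would wait and retry)
                }
            }
            String message = cache.removeFirst();
            if (tryLock(message)) {
                return message; // common path: no database round trip at all
            }
        }
    }
}
```

The point is that only one query per BATCH_SIZE messages hits the database, instead of one (or more) per accept() call.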

> My JDBC driver has a
> connection pooling built into it, so at least for my current testing
> environment, I could get away without that (temporarily)..

This is a non-standard JDBC feature, right? I don't think you should
depend on it too much (besides, think of the users who would get _mad_
at you because their drivers don't support pooling :-)


To unsubscribe, e-mail: james-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: james-dev-help@jakarta.apache.org
