quetz-mod_python-dev mailing list archives

From Jim Gallacher <jg.li...@sympatico.ca>
Subject Re: FileSession - a couple of small fixes
Date Wed, 13 Apr 2005 15:32:54 GMT
Nicolas Lehuen wrote:
>>Hi Jim,
>>
>>OK, I've integrated your fixes.
>>
>>Regarding the do_cleanup locking issue, I had two ideas in mind :
>>
>>1) Hold lock n° 0 (AKA the "dbm lock") while doing the cleanup, to
>>prevent two concurrent cleanups in case we are very unlucky (two
>>cleanups triggered at the same time).

This could end up holding the global lock for a long time on a heavily 
loaded server. I'm not too concerned about the possibility of concurrent 
cleanups. If the first one deletes a particular session file before the 
other, the second will simply raise an error and continue processing - 
no big deal. If the trylock function described below cannot be made to 
work, blocking a second do_cleanup thread while the first completes 
would be a bad thing. And of course there is always a very small chance 
that a THIRD do_cleanup could start up and get blocked...
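A cleanup pass that tolerates losing the race can simply swallow the error 
when a file has already been removed. A minimal sketch of that idea - the 
directory layout and expiry rule here are my assumptions, not FileSession's 
actual code:

```python
import os
import time
import tempfile

def cleanup_expired(sessdir, timeout):
    """Delete session files idle for longer than `timeout` seconds.

    If a concurrent cleanup (or the session's owner) removes a file
    first, os.remove raises OSError; we ignore it and carry on, since
    losing the race is harmless - the file is gone either way.
    """
    now = time.time()
    for name in os.listdir(sessdir):
        path = os.path.join(sessdir, name)
        try:
            if now - os.path.getmtime(path) > timeout:
                os.remove(path)
        except OSError:
            pass  # already gone: another cleanup beat us to it
```

Two concurrent passes over the same directory then interleave safely 
without any locking at all; the only cost is a harmless caught exception.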

>> More than that, we should try to
>>hold the lock with an APR try_lock call (which I think is not
>>implemented yet in _apache.c), so that if the lock is currently held,
>>we know that a cleanup is already being performed, so we can safely
>>skip it.

If try_lock works then the global lock would be OK.

>>2) while the cleanup is being performed, hold the lock on each session
>>with a try_lock call, so that if a session is locked, it will never be
>>a candidate for a cleanup.

A lock/unlock for each session seems like there would be a whole lotta 
lockin' goin' on. I don't know how many sessions a site may end up with 
- 1000, 10000, 50000? - but acquiring and releasing this many locks in 
the duration of one request just can't be good for performance.
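For a rough lower bound on that cost, one can time acquire/release cycles 
on an in-process lock; an APR global mutex (typically backed by fcntl, 
flock or a SysV semaphore) is considerably more expensive per cycle, so 
real numbers would be worse. A throwaway benchmark, not mod_python code:

```python
import time
import threading

def time_lock_cycles(n):
    """Time n acquire/release pairs on an in-process lock.

    This is the cheapest kind of lock available, so the result only
    bounds the per-session locking cost from below; a cross-process
    mutex adds a system call or two per cycle on top of this.
    """
    lock = threading.Lock()
    t0 = time.time()
    for _ in range(n):
        lock.acquire()
        lock.release()
    return time.time() - t0
```

Even if one cycle is cheap, doing 10000-50000 of them inside a single 
request handler is pure overhead paid by whichever unlucky request 
triggers the cleanup.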

> Nicolas Lehuen wrote:
> I have checked in an implementation of _apache._global_trylock, which
> is a shameless copy/paste of _global_lock with some adaptation to use
> apr_global_mutex_trylock. Unfortunately my tests seem to show that the
> implementation is not good in that it erroneously tells me that the
> lock is acquired even when it isn't... Must work on it a little more.

_apachemodule.c would not compile for me without the following change 
(PyErr_SetString takes just an exception and a message string; a format 
string with arguments would need PyErr_Format):
@@ -487,7 +487,8 @@
          ap_log_error(APLOG_MARK, APLOG_WARNING, rv, s,
                       "Failed to acquire global mutex lock at index %d",
                       index);
          PyErr_SetString(PyExc_ValueError,
-                        "Failed to acquire global mutex lock %i",rv);
+                        "Failed to acquire global mutex lock");
+        //                "Failed to acquire global mutex lock %i",rv);
          return NULL;
      }
  }

Once compiled, I tested _apache._global_trylock(req.server, "test") on 
apache2-mpm-prefork-2.0.53 and got the following error in the apache log:

[Tue Apr 12 12:29:04 2005] [warn] (70023)This function has not been 
implemented on this platform: Failed to acquire global mutex lock at index 2

I'm not sure if this is the same problem you had, or a different one.

I've been walking through the do_cleanup code, trying to understand 
exactly where there may be conflicts between do_cleanup and any sessions 
being accessed by other processes/threads. Once I have it clear in my 
head, I'll post my ramblings. Also, I haven't posted any timings showing 
the do_cleanup performance with the optimizations on and off, but I will 
in the next day or so. You can get a 10x performance boost on a lightly 
loaded server with more than 10000 session files, and probably even 
better when the server is under heavy load.

Also, I just realized that do_cleanup, when it runs, runs at the end of 
BaseSession.__init__() rather than at the end of the request. The 
do_cleanup method should really just be a call to 
self._req.register_cleanup(do_filesession_cleanup, self), with our 
current do_cleanup code moved to do_filesession_cleanup. This is the 
same pattern used by DbmSession. I'll implement this and send you a patch.
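The register_cleanup pattern is easy to sketch. FakeRequest below is a 
stand-in for mod_python's request object (only register_cleanup matters 
here), and do_filesession_cleanup plays the role of the function proposed 
above - none of this is the actual FileSession code:

```python
class FakeRequest:
    """Minimal stand-in for mod_python's request object."""
    def __init__(self):
        self._cleanups = []

    def register_cleanup(self, func, data=None):
        # mod_python queues (func, data) to run when the request ends
        self._cleanups.append((func, data))

    def finish(self):
        # simulate end-of-request: run the registered cleanups
        for func, data in self._cleanups:
            func(data)

def do_filesession_cleanup(session):
    # the real version would sweep expired files from the session dir
    session.cleaned = True

class FileSessionSketch:
    def __init__(self, req):
        self.cleaned = False
        # schedule the sweep for end-of-request instead of
        # running it here inside __init__
        req.register_cleanup(do_filesession_cleanup, self)
```

The session constructor returns immediately and the sweep runs only 
after the response has gone out, so the requesting client never pays 
for the cleanup.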

Regards,
Jim


