quetz-mod_python-dev mailing list archives

From: Indrek Järve <ind...@inversion.ee>
Subject: Re: [mod_python] A few questions about requestobject.c
Date: Wed, 30 Jul 2003 14:26:52 GMT
Hey,

moving this over to the correct list...

On Wed, 2003-07-30 at 16:05, Gregory (Grisha) Trubetskoy wrote:
> > 1. The use of apr_palloc
> >
> > Multipart POST forms are parsed in util.py/FieldStorage using req's
> > req_readline function, which happily apr_palloc'ates enough memory to
> > read the remaining request data into memory, but never seems to free it
> > (nasty if the users have a habit of uploading 100+ MB files, especially
> > once multiplied by the number of apache processes). Since the pool
> > used is attached to the apache request, I'm not sure I should freely
> > apr_pool_clear() or apr_pool_destroy() it either.
> >
> > I managed to release memory by replacing apr_palloc with malloc and
> > adding a few free()s where required, but is this the right approach?
> 
> Yes, this would definitely be a problem because that memory isn't freed
> until the end of the request... We could either create a separate pool,
> which then can be apr_pool_cleared(), or just use malloc like you have...
> I'll see if I can take a look at it later today. If you have a patch you
> could send in, that'd be nice.
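
Just to make the separate-pool option concrete, I imagine it would look
roughly like this (an untested sketch, not from my patch; "readpool" is a
made-up name, and bufsize / self->request_rec just stand for whatever
req_readline() actually uses -- also, whether the memory really goes back
to the OS rather than to the allocator's free list presumably still
depends on the allocator):

    /* Allocate the read buffer from a private sub-pool of the request
     * pool and destroy that sub-pool as soon as the data has been copied
     * out, instead of letting the allocation live to the end of the
     * request. */
    apr_pool_t *readpool = NULL;
    char *rbuff;

    if (apr_pool_create(&readpool, self->request_rec->pool) != APR_SUCCESS)
        return PyErr_NoMemory();

    rbuff = apr_palloc(readpool, bufsize);

    /* ... read the client data into rbuff and build the result ... */

    apr_pool_destroy(readpool);   /* give the memory back right away */

For now I went the malloc route instead, since that hands the memory
straight back to libc.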

Patch attached. Btw, the main issue I had with this, and the reason I
noticed it at all, was that the pool memory isn't freed even after the end
of the request (seen on RH9's stock apache2 and on a vanilla apache
compiled locally on SuSE 8.0). The pool may well be getting
apr_pool_clear()'d, but as I understand it that doesn't actually free the
memory, it only marks it as available for reuse. That doesn't help me
much, since 100+ MB files times 10-20 apache processes would still leave
most average servers swapping or dying from lack of memory.
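
In case the attachment gets mangled somewhere, the change is basically of
this shape -- a simplified sketch rather than the literal diff, with the
surrounding req_readline() code paraphrased and bufsize/bytes_read standing
in for whatever is really computed there:

    char *rbuff;
    PyObject *result;

    /* before (allocated from the request pool, so it sticks around until
     * the request is over; clearing the pool only recycles the memory):
     *
     *     rbuff = apr_palloc(self->request_rec->pool, bufsize);
     */

    /* after: plain heap allocation we can give back immediately */
    rbuff = malloc(bufsize);
    if (rbuff == NULL)
        return PyErr_NoMemory();

    /* ... read the client data into rbuff exactly as before ... */

    result = PyString_FromStringAndSize(rbuff, bytes_read);
    free(rbuff);
    return result;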

> 
> >
> > 2. The use of PyString_FromStringAndSize()
> >
> > Again in the req_readline() function, the result variable is initialized
> > to the size of the full remaining request data.
> 
> This would be a problem with 100M files too... Another thing that occurred
> to me is that it could be better to rewrite the request read() functions
> to use the buckets interface; that way none of this would be an issue.

I will take a look at this myself in a few days too; using malloc
definitely helped, but parallel uploads of big files can still make my box
run out of memory. I'm also going to look at rbuff's allocation size in
req_readline() and try to reduce it if possible.
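
For the record, the bucket-based read I have in mind would pull the body
in through the input filter chain in bounded chunks, roughly like this
(just an untested sketch to show the shape; r stands for the request_rec
and 8192 is an arbitrary chunk size):

    apr_bucket_brigade *bb;
    apr_status_t rv;
    int seen_eos = 0;

    bb = apr_brigade_create(r->pool, r->connection->bucket_alloc);

    while (!seen_eos) {
        apr_bucket *b;

        rv = ap_get_brigade(r->input_filters, bb, AP_MODE_READBYTES,
                            APR_BLOCK_READ, 8192);
        if (rv != APR_SUCCESS)
            break;                /* real code would raise an error here */

        for (b = APR_BRIGADE_FIRST(bb);
             b != APR_BRIGADE_SENTINEL(bb);
             b = APR_BUCKET_NEXT(b)) {

            const char *data;
            apr_size_t len;

            if (APR_BUCKET_IS_EOS(b)) {
                seen_eos = 1;
                break;
            }
            if (apr_bucket_read(b, &data, &len, APR_BLOCK_READ)
                    == APR_SUCCESS) {
                /* append (data, len) to the result instead of holding
                 * the whole body in one buffer */
            }
        }
        apr_brigade_cleanup(bb);  /* drop this chunk's buckets */
    }
    apr_brigade_destroy(bb);

With something like that neither rbuff nor the result string would ever
have to be sized for the whole remaining request up front.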

Regards,
Indrek
