quetz-mod_python-dev mailing list archives

From Miles Egan <mi...@pixar.com>
Subject Re: possible bug in filter.write()
Date Fri, 16 Apr 2004 23:04:35 GMT
Well, I've narrowed the problem down a little further.  It's not in 
filter_write.  I think it's actually in _filter_read, although I 
haven't pinned it down yet.

To reproduce the problem, all you have to do is set up a Python output 
filter that reads a large file using filter.read() without passing a 
byte count.  In my tests, once a single read returns a block larger 
than ~17k, the output starts to get corrupted.
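
A stripped-down pass-through filter along these lines is enough to 
trigger it for me (the module name, filter name, and extension are just 
placeholders from my test setup, so adjust to taste):

    # bigfilter.py - pass-through output filter
    from mod_python import apache

    def outputfilter(filter):
        # no byte count: read whatever is available in one go
        data = filter.read()
        while data:
            filter.write(data)
            data = filter.read()
        # read() returns None once it hits the end of the stream
        if data is None:
            filter.close()

with something like this in httpd.conf:

    PythonOutputFilter bigfilter BIGFILTER
    AddOutputFilter BIGFILTER .html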

The workaround is to call filter.read() with a size argument and read 
in smaller chunks (4k at a time).  I'll see if I can figure out where 
the read call is going wrong.
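
In case it helps anyone else hitting this, the read loop above just 
becomes something like the following (4096 is arbitrary; anything well 
under the ~17k threshold seems to work here):

    data = filter.read(4096)
    while data:
        filter.write(data)
        data = filter.read(4096)
    if data is None:
        filter.close()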

BTW - is there anybody out there?

On Apr 15, 2004, at 5:41 PM, Miles Egan wrote:

> I think there may be a bug in the filter.write routine.  It seems that 
> if I write more than about 17k in one call I get a bunch of junk 
> output to the browser.  It looks like the output of previous requests 
> so I'm guessing that it's a buffer overflow of some kind.
>
> Looking at the source of filterobject.c I see that mod_python doesn't 
> verify that the apr_bucket_alloc call in filter_write actually 
> allocates the requested number of bytes.  Could this be the source of 
> the problem?
>
> --
> miles egan
> lord of the files
> miles@pixar.com
>
>
--
miles egan
lord of the files
miles@pixar.com

