httpd-users mailing list archives

From: André Warnier
Subject: Re: [users@httpd] Serving partial data of in-memory common data set
Date: Wed, 29 Jul 2009 21:31:30 GMT
S.A. wrote:
>> ...
> The reason why I am not suspecting mysql was that the mysql
> log does indicate that it is getting all the requests and
> it is servicing them. As I have stated before, some of the
> users though are not getting images.
Can you explain this a bit? When you say that some users are not 
getting images, what happens then? Isn't there some error message in an 
Apache logfile?
I also presume (maybe wrongly) that these are not real users with real 
browsers. What are you using as a client to test this, and does it leave 
a trace of why it is not getting an image?

Then some basic calculations:

70 users × 50 images in a page = 3,500 requests to Apache.
Also, as a minimum, 70 simultaneous TCP connections to Apache, assuming 
your Apache can handle that many.

70 users × 50 images × 2 KB/image = 7,000 KB ≈ 7,000,000 bytes ≈ 
56,000,000 bits (call it 70 Mbit once protocol overhead is added).
On a local network able to carry 100 Mbit/s, say at 50% efficiency, this 
would take about 1.5 seconds.
So this should not be a case where you overwhelm the network bandwidth, 
or are my calculations above off the mark for some reason?
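For whoever wants to check my arithmetic, it can be reproduced in a few lines. All figures are the assumptions from this thread (70 users, 50 images per page, 2 KB per image, a 100 Mbit/s LAN at an assumed 50% efficiency):

```python
# Back-of-the-envelope check of the figures above. All numbers are the
# assumptions from this thread: 70 users, 50 images/page, 2 KB/image,
# a 100 Mbit/s LAN running at an assumed 50% efficiency.
users = 70
images_per_page = 50
image_size_bytes = 2_000              # "2 KB" taken as 2,000 bytes

requests = users * images_per_page    # total requests hitting Apache
total_bytes = requests * image_size_bytes
total_bits = total_bytes * 8          # payload only, before protocol overhead

effective_bps = 100_000_000 * 0.5     # 100 Mbit/s at 50% efficiency
transfer_seconds = total_bits / effective_bps

print(requests)          # -> 3500
print(transfer_seconds)  # -> 1.12 (payload only; call it ~1.5 s with overhead)
```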

Some additional questions about your Apache server configuration (and 
sorry if I missed some in an earlier response):

- which MPM are you using, and can you copy here the settings for that 
MPM?
You can see which MPM is used by entering:
.../apache2ctl -l  (that is a lowercase L)
(It will list a "prefork.c" or a "worker.c" or something similar.)

The corresponding settings in your apache2.conf (or httpd.conf) are 
usually easy to find, under a comment like this one:
## Server-Pool Size Regulation (MPM specific)
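For instance, on a prefork Apache that section looks something like this (the values below are illustrative stock defaults, not a recommendation; MaxClients is the figure that matters here, since it caps the number of simultaneous connections Apache will service):

```apache
<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients          150
    MaxRequestsPerChild   0
</IfModule>
```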

- what are the values used for the following parameters:
	- KeepAlive
	- KeepAliveTimeout
	- MaxKeepAliveRequests
	- Timeout
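For reference, a stock Debian apache2.conf of that era ships values along these lines (illustrative only; check your own file):

```apache
Timeout              300
KeepAlive            On
MaxKeepAliveRequests 100
KeepAliveTimeout     15
```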

What I am trying to figure out is how many processes/threads on the 
Apache side are really available to process the client requests.

This is because of the following kind of hypothetical scenario:

- imagine your Apache is configured so that it can have at most 50 
simultaneous processes or threads handling requests.
- the first 50 clients connect, get their home page, which contains 
links to in-line images
- because they are using KeepAlive connections, these 50 clients do not 
release their TCP connection to the server, but use the same one to 
start sending their requests for images
- on the server side, the process which sent a given client its home 
page is also keeping the connection open, so it is "stuck" with this 
client, and cannot serve another client's requests.
- as long as this client keeps sending more requests for images, it will 
keep this server process locked up for itself. That is, as long as it 
never exceeds the KeepAliveTimeout or MaxKeepAliveRequests.
Since each client has 50-odd images to get, this can take a while, 
particularly since the browser also has to do some work to process and 
display these images.
- now comes client #51.  Because all server-side processes are tied up, 
its connection request is not answered right away.  Instead, it goes 
into the TCP wait queue for port 80.  That is in general not a problem, 
since the browser will wait several minutes before giving up.
- But this queue has a limited size.  If more than a certain number of 
connection requests pile up there without being acknowledged, at some 
point the next connection request will be refused.
The browser experiencing a "connection refused" for an inline image, 
will just display a broken image symbol instead, and try for the next one.
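For what it is worth, the length of that pending-connections queue is tunable in Apache with the ListenBacklog directive (default 511), although the operating system may silently clamp it to its own limit (net.core.somaxconn on Linux). A sketch, not a recommendation:

```apache
# Pending-connections queue for the listening socket (Apache default: 511).
# The kernel may clamp this to its own limit (net.core.somaxconn on Linux).
ListenBacklog 511
```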

Of course, things will not be as tidy as outlined above: there will be 
clients beyond the 50th getting their home page and some of their 
images, while some of the first 50 may be unable to make a new 
connection to obtain their remaining images, and so on.

The point is, the fewer server-side processes are actually available, 
and the higher the KeepAliveTimeout, the more likely you are to get 
into the above kind of scenario.
One reason is that, when a particular client is done with its requests, 
the connection nevertheless stays alive with its server-side 
counterpart for the number of seconds specified by KeepAliveTimeout, 
without achieving anything useful anymore.
I see for example that in the 2.2 documentation, this timeout is 
indicated as having a default of 5 seconds, which seems more or less 
reasonable for usual cases.  But in the standard configuration that the 
Debian Linux package installed on one of my servers, it is set at 15 
seconds, which in your case would really be detrimental.
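To put a rough number on that effect, here is a toy model built entirely on my own assumptions (50 workers, 70 clients, about 2 seconds for a client to fetch its images, then a full 15-second KeepAliveTimeout idled away before the worker is freed):

```python
# Toy model of worker occupancy. Assumptions (mine, not measurements):
# 50 worker processes, 70 clients, ~2 s to fetch a page's images, and a
# 15 s KeepAliveTimeout idled away before the worker is released.
workers = 50
clients = 70
fetch_seconds = 2
keepalive_timeout = 15

hold_seconds = fetch_seconds + keepalive_timeout  # a worker is tied up this long
batches = -(-clients // workers)                  # ceil(70/50) = 2 waves of clients
wait_for_worker = (batches - 1) * hold_seconds    # what client #51 may wait

print(wait_for_worker)  # -> 17 seconds before client #51 even gets a worker
```

In other words, with a long KeepAliveTimeout the workers spend most of their time idling on kept-alive connections rather than serving anyone new.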

The thing is, I still cannot imagine that Apache would be overwhelmed 
by 3,500 requests totalling 7 MB of content, so there must be something 
rather flagrant amiss.

The official User-To-User support forum of the Apache HTTP Server Project.