trafficserver-users mailing list archives

From: J David <j.david.li...@gmail.com>
Subject: Re: Migrating from squid
Date: Fri, 27 Feb 2015 17:01:01 GMT
On Fri, Feb 27, 2015 at 10:50 AM, Leif Hedstrom <zwoop@apache.org> wrote:
> Dealing with external squid helpers is a bit wonky, but you probably
> could implement something in a plugin that does it. The fact that you
> are doing so much weirdness (MySql, Memcached) makes it particularly
> tough, I’m not sure how Squid deals with that?

Squid prestarts a configurable (and large) number of external
rewriters, enough to cover the product of the request rate and the
per-request rewrite delay.
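
For reference, this is roughly what that looks like in squid.conf
(the helper path and the counts here are illustrative, not our real
ones):

    # Prestart enough helpers to absorb the steady-state load.
    url_rewrite_program /usr/local/bin/rewriter
    url_rewrite_children 64 startup=64 idle=8 concurrency=0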

On Fri, Feb 27, 2015 at 11:10 AM, Faysal Banna <degreane@gmail.com> wrote:
> I have been doing this using nothing but lua injecting/retrieving
> data from mysql, mongodb, sqlite ...

Can the Lua approach hold resources open between requests, preferably
in some kind of managed resource pool, like the pools APR offers in
Apache httpd? Forking a process from Lua for every incoming request
would incur a *tremendous* amount of context-switching overhead at
high request rates.  And that assumes we pare the exec'd process down
to some sort of thin client, since the current rewriter takes several
seconds to spin up and assimilate its initial data.  That kind of
client/server split is definitely reasonable and doable, though; a
sketch of what I'm picturing follows.
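
Concretely, something like this, assuming ts_lua keeps its Lua states
alive across requests and that LuaSocket can be loaded inside the
plugin (the endpoint and one-line protocol here are made up):

    -- One connection per Lua state, reused across requests.  Assumes
    -- the state is long-lived; blocking I/O here would stall an ATS
    -- event thread, so the timeout is kept very short.
    local socket = require("socket")   -- LuaSocket

    local conn   -- upvalue: survives between requests if the state does

    local function get_conn()
        if conn == nil then
            conn = assert(socket.connect("127.0.0.1", 9090))  -- made up
            conn:settimeout(0.05)      -- 50 ms, fail fast
        end
        return conn
    end

    function do_remap()
        local c = get_conn()
        c:send(ts.client_request.get_uri() .. "\n")
        local answer, err = c:receive("*l")   -- one answer per line
        if err then
            conn = nil   -- drop broken connection, reconnect next time
            return 0     -- pass the request through unrewritten
        end
        -- ... act on `answer` ...
        return 0
    end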

This *might* work if we can get all the info about the request at the
Lua level and dispatch every type of response (backend selection for
an existing URL, a 301 to a new URL, a 4XX/5XX error) from there.  But
to get really good performance, the best approach would probably be
to maintain a persistent pool of open sockets to the external logic.
Is that possible?
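
For concreteness, here is the dispatch side as I imagine it in
ts_lua, once the rewriter's answer has been parsed into an `action`
table (the query and parse helpers and the action fields are
hypothetical; the ts.* calls and constants are from my reading of the
ts_lua docs):

    -- Hypothetical dispatch of a parsed rewriter answer.
    function do_remap()
        local action = parse_answer(query_rewriter())  -- both made up

        if action.kind == "backend" then
            -- Existing URL: route to the chosen origin server.
            ts.client_request.set_url_host(action.host)
            ts.client_request.set_url_port(action.port)
            return TS_LUA_REMAP_DID_REMAP
        elseif action.kind == "redirect" then
            -- Moved: answer 301 with the new URL.
            ts.hook(TS_LUA_HOOK_SEND_RESPONSE_HDR, function()
                ts.client_response.header['Location'] = action.url
            end)
            ts.http.set_resp(301, "Moved\n")
            return 0
        else
            -- Unknown URL or rewriter failure: 4XX/5XX from Lua.
            ts.http.set_resp(action.status or 502, "rewrite failed\n")
            return 0
        end
    end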

Thanks!
