trafficserver-users mailing list archives

From deepak srinivasan <>
Subject Cache sharing on full clustering
Date Wed, 09 Jan 2013 10:00:43 GMT

I have a couple of questions

We are presently working on a project that requires cache sharing between
two machines running Apache Traffic Server in full clustering mode. Though
the machines form a cluster, they don't seem to share cache between
them. Browsing through the ATS mail archives, I found that you initially
had a similar problem (a long time back) which you subsequently rectified.
Could you please share what the cause of your problem was and how you
solved it, so it might help us? Thanks in advance for your reply.

We have two nodes in full clustering mode. Though the sharing of
configuration files seems to work fine, the cache doesn't seem to be
shared between the nodes: a request to the first machine gets forwarded
to the origin server even though the object exists in the second
machine's cache. Can anyone please give any pointer as to where the
problem may lie?

We are using ATS v3.2.0.

We need to map the requests coming in to these two nodes to an origin.
We have configured our remap.config as:

regex_map  http://xxx.yyy.zzz.[0-9]:8080/cns/RecordingInfo @pparam=locator @action=allow
Can someone please comment on the remap configuration?
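For what it's worth, a regex_map rule normally needs both a from-URL and a to-URL, and the line above has no target origin. The unescaped dots also match any character, and `[0-9]` matches only a single digit in the last octet. A minimal sketch of the shape such a rule usually takes, using `origin.example.com` as a purely hypothetical origin host:

regex_map http://xxx\.yyy\.zzz\.[0-9]+:8080/cns/RecordingInfo http://origin.example.com/cns/RecordingInfo

Also note that `@pparam=` is normally paired with a `@plugin=` argument, and `@action=allow` belongs to a named filter, so on their own they may not do what you expect.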


I am trying to create a continuation using TSContCreate inside a thread. I
then make a cache read call using TSCacheRead inside the same thread.
The data for the cache read is passed to the function via a void*.

If I create the thread using TSThreadCreate, the call to cache read
is successful. However, if I create the thread using pthread_create, the
call fails.
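In case it helps anyone looking at this later: a thread created with raw pthread_create has none of the per-thread event-system state that Traffic Server's cache processor relies on, so the supported pattern is to create a mutex-protected continuation and let the event system deliver the cache-read result, rather than calling the cache API from your own thread. A minimal sketch, assuming the ATS 3.x plugin SDK (`ts/ts.h`); the handler name, `start_cache_read`, and the URL-derived key are illustrative only:

```c
#include <ts/ts.h>
#include <string.h>

/* Handler invoked by the event system with the cache-read result. */
static int
cache_read_handler(TSCont contp, TSEvent event, void *edata)
{
  switch (event) {
  case TS_EVENT_CACHE_OPEN_READ:
    /* edata is a TSVConn for the cached object; read from it here. */
    TSVConnClose((TSVConn)edata);
    break;
  case TS_EVENT_CACHE_OPEN_READ_FAILED:
    /* Object not in cache (or cache lookup could not proceed). */
    break;
  default:
    break;
  }
  TSContDestroy(contp);
  return 0;
}

/* Kick off an asynchronous cache read; 'url' is whatever string
 * the object is keyed by (illustrative). */
static void
start_cache_read(const char *url)
{
  TSCacheKey key = TSCacheKeyCreate();
  TSCacheKeyDigestSet(key, url, strlen(url));

  /* The continuation needs a mutex because cache events may be
   * delivered from any event thread. */
  TSCont contp = TSContCreate(cache_read_handler, TSMutexCreate());
  TSCacheRead(contp, key);
  TSCacheKeyDestroy(key);
}
```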

I am trying the above scenario because I am getting the following error
when using TSThreadCreate:

FATAL: failed assert `pipe(evpipe) >= 0`
/usr/local/bin/traffic_server - STACK TRACE:

The call to TSThreadCreate doesn't fail on the first invocation, but it
starts failing after the plugin has run for some time (roughly 5000 or
more calls).

Any help is appreciated.

