jakarta-jcs-dev mailing list archives

From Travis Savo <ts...@IFILM.com>
Subject RE: remote
Date Thu, 15 Apr 2004 21:09:03 GMT

The changes for the threading are all in CacheEventQueue.

The original design spawned one thread for each region immediately, and had
the thread sleep until there were events in the queue to process.

My change was to have it spawn the thread the first time an item entered the
queue, and run until the queue was empty. When the queue was empty, the
thread would sleep for a specified period of time. If another event came
into the queue while the thread was sleeping, it would wake up and resume
processing. If the sleep period expired without another event coming into
the queue, the thread would die, leaving a new thread to be created when an
event came in.

Thus, an active queue always has a thread available and ready to process
events, an inactive queue's thread never gets spawned, and a semi-active
queue can be tuned for the best behavior via the timeout. The problem with
the original design was that, assuming you had 1,000 regions, it would
instantly spawn 1,000 threads, even if only 20 regions were being used. On
some operating systems this would make the box completely unusable. I
suspect this is no longer as much of a problem with newer kernels like 2.6
on Linux, but rest assured it's pretty broken on older machines.
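In sketch form, the demand-spawned worker pattern described above looks
roughly like this (a minimal illustration with hypothetical names, not the
actual CacheEventQueue code):

```java
import java.util.LinkedList;

// Sketch: the worker thread is created only when the first event arrives,
// drains the queue, waits up to idleTimeMs for more work, and dies if
// nothing new shows up. A later add() spawns a fresh thread.
public class LazyEventQueue {
    private final LinkedList<Runnable> queue = new LinkedList<>();
    private final long idleTimeMs;
    private boolean workerAlive = false;

    public LazyEventQueue(long idleTimeMs) {
        this.idleTimeMs = idleTimeMs;
    }

    public synchronized void add(Runnable event) {
        queue.addLast(event);
        if (!workerAlive) {
            // First event since the last worker died: spawn a new thread.
            workerAlive = true;
            new Thread(this::drain).start();
        } else {
            // A worker may be sleeping in wait(); wake it up.
            notifyAll();
        }
    }

    private void drain() {
        while (true) {
            Runnable event;
            synchronized (this) {
                if (queue.isEmpty()) {
                    try {
                        // Queue empty: sleep for the idle timeout.
                        wait(idleTimeMs);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    if (queue.isEmpty()) {
                        // Timeout expired with no new events: die, and let
                        // a future add() respawn the worker.
                        workerAlive = false;
                        return;
                    }
                }
                event = queue.removeFirst();
            }
            event.run(); // process outside the lock
        }
    }
}
```

With this shape, 1,000 idle regions cost zero threads; only the regions
actually receiving events ever pay for one.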

The other major important change was (and my memory is failing me as to
where it was exactly): when a client did a remove to a remote cache, the
remote cache would send a remove to all the other clients, who would in
turn send a remove back to the remote cache, which would send a remove to
all the other clients, ad nauseam, creating (X-1)^2 packets with every
iteration, where X is the number of clients talking to the remote cache. It
won't happen with only one client... but it will with 2+.

My fix was a change from a 'remove()' to a 'localRemove()' at a key point...
now if only I could remember where that point was!
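The loop and the fix can be shown with a stripped-down sketch (all class
names here are hypothetical stand-ins, since I can't point at the exact
spot in the real code either):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of the remove storm: a removal that originates locally is
// reported to the server, but a removal pushed down FROM the server must
// be applied locally only, or every client echoes it back forever.
class RemoteServer {
    final List<Client> clients = new ArrayList<>();
    int messagesSent = 0;

    // Server-side handling of a remove reported by one client:
    // propagate it to every other client.
    void remove(Object key, Client source) {
        for (Client c : clients) {
            if (c != source) {
                messagesSent++;
                c.onRemoteRemove(key);
            }
        }
    }
}

class Client {
    final RemoteServer server;
    final Set<Object> cache = new HashSet<>();

    Client(RemoteServer server) {
        this.server = server;
        server.clients.add(this);
    }

    // A remove initiated by the application: update locally, tell the server.
    void remove(Object key) {
        cache.remove(key);
        server.remove(key, this);
    }

    // A remove pushed down from the server. The bug was calling remove()
    // here, which reported the removal back to the server and restarted
    // the broadcast; the fix is to touch only the local cache.
    void onRemoteRemove(Object key) {
        localRemove(key);   // fixed: no echo back to the server
        // remove(key);     // buggy: endless remove storm with 2+ clients
    }

    void localRemove(Object key) {
        cache.remove(key);
    }
}
```

With the localRemove() version, one application-level remove costs exactly
X-1 messages and then stops.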

The final change, which is less important but necessary for long-term
stability, is to change the cache ID from a byte to an integer. Only
supporting 256 remote clients is all good and fine, but assuming there are
2 clients, and one of them disconnects and reconnects 255 times, it's going
to break in new and interesting ways when the ID wraps back around to 1.
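For concreteness, here's the wraparound in isolation (plain Java, nothing
JCS-specific; the reconnect-counting helpers are just for illustration):

```java
// A listener ID held in a Java byte silently goes negative and then
// repeats after 256 increments; an int keeps handing out unique IDs.
public class IdWrapDemo {
    // ID after `reconnects` increments, stored as a byte.
    static byte wrapByteId(int reconnects) {
        byte id = 0;
        for (int i = 0; i < reconnects; i++) id++;
        return id;
    }

    // Same counting, stored as an int.
    static int wrapIntId(int reconnects) {
        int id = 0;
        for (int i = 0; i < reconnects; i++) id++;
        return id;
    }

    public static void main(String[] args) {
        // After 257 connects the byte ID has wrapped back to 1 and now
        // collides with a client that is still alive.
        System.out.println(wrapByteId(257)); // prints 1
        System.out.println(wrapIntId(257));  // prints 257
    }
}
```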


The changes I made to the IndexedDisk were to move the logic of figuring
out where in the file to write elements into the IndexedDisk itself,
simplifying the interface from writeObject(Serializable, long) to
put(CacheElement), and from readObject(long) to get(key). The actual
management of the file is left to the IndexedDisk, leaving the
IndexedDiskCache free of the implementation details of how the objects are
actually stored. It just felt to me like they were too closely coupled. The
IndexedDiskCache acts as the access point to the disk and encapsulates the
read/write locking semantics, and simply delegates to the IndexedDisk for
persistence, rather than managing part of the process of reading/writing
from the disk itself and delegating part of it to another class. Now it
delegates -all- the 'disk work' to the IndexedDisk.
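Roughly, the refactored split looks like this (a sketch only: in-memory
maps stand in for the data file, and everything other than the
IndexedDisk/IndexedDiskCache names is hypothetical):

```java
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

// A cache element carries its own key, so callers never deal in offsets.
class CacheElement implements Serializable {
    final Serializable key;
    final Serializable value;
    CacheElement(Serializable key, Serializable value) {
        this.key = key;
        this.value = value;
    }
}

// After the refactor: put/get by key. The key -> file-position index and
// the choice of where to write live entirely inside IndexedDisk.
class IndexedDisk {
    private final Map<Serializable, Long> keyToPosition = new HashMap<>();
    private final Map<Long, CacheElement> file = new HashMap<>(); // stand-in for the data file
    private long nextFreePosition = 0;

    void put(CacheElement element) {
        // The disk itself decides where in the "file" to write.
        long pos = nextFreePosition++;
        keyToPosition.put(element.key, pos);
        file.put(pos, element);
    }

    CacheElement get(Serializable key) {
        Long pos = keyToPosition.get(key);
        return pos == null ? null : file.get(pos);
    }
}

// The cache keeps only access control (locking shown here as plain
// synchronization) and delegates all the disk work to IndexedDisk.
class IndexedDiskCache {
    private final IndexedDisk disk = new IndexedDisk();

    synchronized void update(CacheElement element) { disk.put(element); }
    synchronized CacheElement get(Serializable key) { return disk.get(key); }
}
```

The point of the split is visible in the signatures: the old
writeObject(Serializable, long) forced the cache to know file positions,
while put(CacheElement)/get(key) leave storage layout as a private detail
of the disk class.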

-Travis Savo

-----Original Message-----
From: Aaron Smuts [mailto:aasmuts@wisc.edu]
Sent: Thursday, April 15, 2004 12:43 PM
To: 'Turbine JCS Developers List'
Subject: remote

Hi Travis,

You said you worked on reducing the number of threads used in the remote
cache?  Can you give me some specifics on what you changed and what your
enhancements were.  It sounded like you had some good ideas here.

Also, could you describe in more detail what changes you made to the disk



To unsubscribe, e-mail: turbine-jcs-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: turbine-jcs-dev-help@jakarta.apache.org

