Hi!
I am running some performance tests with large files. As mentioned in
one of the earlier threads, I am using curl-loader for testing, with
randomization in the URL to stress the cache.
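(For anyone reproducing this without curl-loader: the randomized URLs
look roughly like the output of this little Python sketch. The host
and path here are made up; curl-loader's own config drives the real
test.)

    import random

    BASE = "http://origin.example.com/files/15mb.bin"  # hypothetical origin

    def random_url(max_rand):
        # Each distinct random number produces a distinct cache key.
        return "%s?r=%d" % (BASE, random.randint(0, max_rand))

    for _ in range(5):
        print(random_url(2000))  # "0-2k randomness" => up to 2001 unique URLs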
Version: 2.0.1
Config:
CONFIG proxy.config.cache.ram_cache.size LLONG 2097152000
CONFIG proxy.config.cache.ram_cache_cutoff LLONG 100048576
storage.config
/mnt/cache/trafficserver 60368709120
The file being fetched is 15MB in size.
Hardware:
2x Intel Xeon Quad Core 2.4GHz, 12GB RAM.
Here are some numbers.
test   sessions  iter  cache_size/ram  url randomness  hitrate  avg resp. time  throughput
Test1  500       50    56GB/2GB        1               99%      75sec           ~720Mbps
Test2  500       50    56GB/2GB        0-2k            45%      200+sec         ~200Mbps
Test3  500       50    4GB/2GB         0-20k           0.2%     75sec           ~800Mbps
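(A quick back-of-the-envelope on working-set sizes, assuming each
unique URL maps to its own 15MB object:

    MB = 1024 * 1024
    GB = 1024.0 * MB
    obj = 15 * MB  # object size used in the tests above

    for label, urls in [("1", 1), ("0-2k", 2001), ("0-20k", 20001)]:
        ws = urls * obj  # total bytes across all unique objects
        print("randomness %-6s -> %6d URLs, ~%.1f GB working set" % (label, urls, ws / GB))

So the 0-2k case is a ~29GB working set: too big for the 2GB RAM cache
but inside the 56GB disk cache, i.e. mostly disk hits. The 0-20k case
is ~293GB, which none of these caches can hold, so nearly everything
goes to the origin.)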
* The URL randomness is just a random number within the given range
embedded in the URL.
* There are 500 clients, each accessing the URL 50 times.
* So in the best-case scenario, with only a single URL, I can get 700+
Mbps, and I think I could get more with 2 client machines and more
network cards. Currently the testbed is limited to 1Gbps.
* As I increase the randomness so that there are 2000 unique URLs, the
performance drops significantly.
* The third test suggests that if the cache is small, the performance
is good. I even tried a 0-2k random value instead of 0-20k, and the
throughput doesn't drop.
* So it seems like a large cache just kills the performance.
* The other thing I noticed was that iowait on one CPU core was 100%
while the others were pretty idle (see the commands below). Shouldn't
the IO load be distributed evenly? Is using a file-based cache killing
it? Maybe it's using only one thread.
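(For anyone who wants to check the same thing on their box: per-core
iowait and per-device utilization can be watched with the sysstat
tools, e.g.

    mpstat -P ALL 5
    iostat -x 5

)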
Also, Leif mentioned the threads_per_disk setting in an earlier
thread, which I didn't know about; I will run some more tests with
that.
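If it is the variable I think it is, it would go in records.config as
something like:

    CONFIG proxy.config.cache.threads_per_disk INT 8

(name and value from memory of the docs; please correct me if 2.0.x
spells it differently).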
I think I am not using the optimal settings. In production I believe
people are using much larger caches, so if someone can share the
hardware configuration they use, I would appreciate it: number of
drives, RAID0 vs. RAID1, etc., and what kind of performance you are
seeing from the cache.
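For what it's worth, my understanding is that storage.config takes one
device or directory per line and ATS stripes the cache across all of
them, so a multi-spindle setup would look something like:

    /dev/sdb
    /dev/sdc
    /dev/sdd

i.e. raw devices instead of one big file on a filesystem, which should
also avoid filesystem overhead. I haven't tested that myself, so treat
it as a sketch. Does anyone run it that way?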
Thanks for your time.
-- pranav