hbase-user mailing list archives

From "Adam Silberstein" <silbe...@yahoo-inc.com>
Subject random read/write performance
Date Tue, 06 Oct 2009 15:59:30 GMT
Hi,

Just wanted to give a quick update on our HBase benchmarking efforts at
Yahoo.  The basic use case we're looking at is:

1 KB records

20GB of records per node (and 6GB of memory per node, so the data is not
memory-resident)

Workloads that do random reads/writes (e.g. 95% reads, 5% writes).

Multiple clients doing the reads/writes (50-200 concurrent clients)

Measure throughput vs. latency, and see how high we can push the
throughput.  

Note that although we want to see where throughput maxes out, the
workload is random, rather than scan-oriented.
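The workload above can be sketched as a simple driver loop: each client thread picks a uniformly random key, issues a read 95% of the time and a write 5% of the time, and records per-operation latency. This is only an illustrative sketch, not our actual benchmarking tool; the `Store` interface is a hypothetical stand-in for whatever client is used underneath (REST today, the Java API later).

```java
import java.util.Random;

// Sketch of the random read/write workload described above.
public class RandomWorkload {
    // Hypothetical stand-in for the underlying client (REST or Java API).
    interface Store {
        byte[] read(long key);
        void write(long key, byte[] value);
    }

    static final int RECORD_SIZE = 1024;      // ~1 KB records
    static final double READ_FRACTION = 0.95; // 95% reads, 5% writes

    // Runs `ops` operations against `store` over a key space of `keySpace`
    // records and returns the total time spent in operations, in nanoseconds.
    // The caller derives throughput (ops/sec) and mean latency from this.
    static long runOps(Store store, long keySpace, int ops, Random rnd) {
        byte[] value = new byte[RECORD_SIZE];
        long totalNanos = 0;
        for (int i = 0; i < ops; i++) {
            long key = (long) (rnd.nextDouble() * keySpace); // uniform random key
            long start = System.nanoTime();
            if (rnd.nextDouble() < READ_FRACTION) {
                store.read(key);               // 95% of ops are reads
            } else {
                store.write(key, value);       // 5% of ops are writes
            }
            totalNanos += System.nanoTime() - start;
        }
        return totalNanos;
    }
}
```

Running many such threads at once and sweeping the thread count is what produces the throughput-vs-latency curve mentioned above.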

 

I've been tuning our HBase installation based on advice I've read and
gotten from a few people.  Currently I'm running 0.20.0, with the heap
size set to 6GB per server and iCMS off.  I'm still using the REST
server rather than the Java client.  We're about to port our
benchmarking tool to Java, at which point we can use the Java API, and
I want to try turning off the WAL as well.  If anyone has more
suggestions for this workload (either things to try while still using
REST, or things to try once I have a Java client), please let me know.

 

Given all that, I'm currently seeing a maximum throughput of about 300
ops/sec/server.  Has anyone with a similarly disk-resident, random
workload seen drastically different numbers, or have guesses for what I
can expect with the Java client?

 

Thanks!

Adam

