hbase-user mailing list archives

From Anoop John <anoop.hb...@gmail.com>
Subject Re: Leveraging As Much Memory As Possible
Date Thu, 31 Mar 2016 05:14:11 GMT
Ya, having an HBase-side cache will be a better choice than the HDFS
cache, IMO.   And yes, you are correct: you might not want to give the
heap a very large size. You can make use of the off-heap BucketCache.
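For reference, the off-heap BucketCache is typically enabled with settings along these lines in hbase-site.xml (the 8192 MB size here is illustrative only, not a recommendation):

```xml
<!-- hbase-site.xml: enable the off-heap BucketCache -->
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>offheap</value>
</property>
<property>
  <!-- Values greater than 1.0 are interpreted as megabytes;
       values below 1.0 as a fraction of heap. -->
  <name>hbase.bucketcache.size</name>
  <value>8192</value>
</property>
```

In addition, the RegionServer JVM must be allowed to allocate that much direct memory, e.g. by setting `export HBASE_OFFHEAPSIZE=8G` in hbase-env.sh.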

-Anoop-

On Thu, Mar 31, 2016 at 4:35 AM, Ted Yu <yuzhihong@gmail.com> wrote:
> For #1, please see the top two blogs @ https://blogs.apache.org/hbase/
>
> FYI
>
> On Wed, Mar 30, 2016 at 7:59 AM, Amit Shah <amits.84@gmail.com> wrote:
>
>> Hi,
>>
>> I am trying to configure my HBase (version 1.0) / Phoenix (version 4.6)
>> cluster to utilize as much memory as possible on the server hardware. We
>> have an OLAP workload that allows users to perform interactive analysis
>> over huge sets of data. While reading about HBase configuration I came
>> across two configs:
>>
>> 1. HBase BucketCache
>> <http://blog.asquareb.com/blog/2014/11/24/how-to-leverage-large-physical-memory-to-improve-hbase-read-performance>
>> (off-heap), which looks like a good option to bypass garbage collection.
>> 2. Hadoop pinned HDFS blocks
>> <http://blog.cloudera.com/blog/2014/08/new-in-cdh-5-1-hdfs-read-caching/>
>> (max locked memory), a mechanism that loads HDFS blocks into memory. Given
>> that HBase is configured with short-circuit reads, I assume this config may
>> not be of much help; instead it would be better to increase the HBase
>> region server heap memory. Is my understanding right?
>>
>> We use HBase with Phoenix.
>> Kindly let me know your thoughts or suggestions on any other options I
>> should explore.
>>
>> Thanks,
>> Amit.
>>
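For context on option 2, the HDFS read caching described in the Cloudera post is driven by `hdfs cacheadmin` directives; a minimal sketch might look like the following (the pool name and path are hypothetical examples, and `dfs.datanode.max.locked.memory` must also be raised in hdfs-site.xml for the DataNodes to pin blocks):

```shell
# Create a cache pool and pin an HDFS directory into DataNode memory
hdfs cacheadmin -addPool hbase-pool
hdfs cacheadmin -addDirective -path /hbase/data -pool hbase-pool

# Inspect what is currently cached
hdfs cacheadmin -listDirectives
```

As the replies above note, for an HBase workload the off-heap BucketCache is usually the more direct lever, since it caches HBase blocks rather than HDFS blocks.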
