hbase-user mailing list archives

From Jean-Marc Spaggiari <jean-m...@spaggiari.org>
Subject Re: OutOfMemoryError in MapReduce Job
Date Fri, 01 Nov 2013 18:36:18 GMT
Hi John,

You might be better off asking this on the CDH mailing list, since it's more
related to Cloudera Manager than to HBase.

In the meantime, can you also try updating the "Map Task Maximum Heap Size"
parameter?
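
For reference, here is a minimal sketch of setting that heap from the job
driver instead of through Cloudera Manager. It assumes the stock MRv1
property names shipped with CDH4; whether the "Map Task Maximum Heap Size"
setting maps exactly to mapred.map.child.java.opts is an assumption on my
part:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.mapreduce.Job;

    // Sketch: raising the map-task child heap programmatically (MRv1-era
    // properties; the mapping to the Cloudera Manager setting above is an
    // assumption).
    Configuration conf = HBaseConfiguration.create();
    conf.set("mapred.map.child.java.opts", "-Xmx2048m"); // map tasks only
    // conf.set("mapred.child.java.opts", "-Xmx2048m");  // older property, applies to maps and reduces
    Job job = new Job(conf, "bloomfilter-builder");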


2013/11/1 John <johnnyenglish739@gmail.com>

> Hi,
> I have a problem with memory. My use case is the following: I've created
> a MapReduce job that iterates over every row. If a row has more than, for
> example, 10k columns, I create a bloom filter (a BitSet) for that row and
> store it in HBase. This worked fine so far.
> BUT, now I'm trying to store a BitSet with 1000000000 elements, ~120 MB in
> size. Two BitSets exist in every map() call. When I execute the MR job I
> get this error: http://pastebin.com/DxFYNuBG
> Obviously, the tasktracker does not have enough memory. I tried to adjust
> the memory configuration, but I'm not sure which parameter is the right one.
> I changed the "MapReduce Child Java Maximum Heap Size" value from 1 GB to
> 2 GB, but still got the same error.
> Which parameters do I have to adjust? BTW, I'm using CDH 4.4.0 with
> Cloudera Manager.
> kind regards
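
For reference, a rough sketch of the mapper pattern described above,
assuming a TableMapper over the HBase rows. The column threshold and filter
size come from the mail; the store step is a placeholder, not a real API:

    import java.io.IOException;
    import java.util.BitSet;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableMapper;
    import org.apache.hadoop.io.NullWritable;

    // Hypothetical mapper along the lines described in the question: one
    // BitSet-backed bloom filter per row that exceeds the column threshold.
    public class BloomFilterMapper extends TableMapper<NullWritable, NullWritable> {
      private static final int COLUMN_THRESHOLD = 10000;
      private static final int FILTER_BITS = 1000000000; // ~120 MB of heap per BitSet

      @Override
      protected void map(ImmutableBytesWritable rowKey, Result row, Context context)
          throws IOException, InterruptedException {
        if (row.size() > COLUMN_THRESHOLD) {
          // Two live BitSets of this size per map() call is ~240 MB before
          // any JVM or HBase-client overhead, which can exhaust a 1 GB heap.
          BitSet filter = new BitSet(FILTER_BITS);
          // ... populate filter from the row's column qualifiers ...
          // storeBloomFilter(rowKey, filter); // placeholder for the HBase write
        }
      }
    }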
