phoenix-dev mailing list archives

From "Praveen Murugesan (JIRA)" <>
Subject [jira] [Comment Edited] (PHOENIX-990) OOM caused by order by query returning all rows
Date Thu, 19 Jun 2014 20:02:24 GMT


Praveen Murugesan edited comment on PHOENIX-990 at 6/19/14 8:02 PM:

In this case, 128KB should *always* work. The way we were allocating the MappedByteBuffer was simply wrong. That said, any value you put here has a breaking point: if you try to query, say, a billion records, then even 128KB is not good enough, but at that point we would have worse problems anyway.

I've stress tested up to 20 million rows, so for the use case at hand I would argue 128KB
will *always* work. I understand your point, but I don't think this is an ideal candidate
to be configurable.
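The fix under discussion concerns how Phoenix's MappedByteBufferSortedQueue maps its spill file. Below is a minimal, hypothetical sketch (not Phoenix's actual code; the class and method names are invented for illustration) of the general technique: spilling records through fixed-size 128KB mapped windows of a temp file, remapping a fresh window when the current one fills, instead of growing a single ever-larger mapping until `FileChannel.map` fails with `OutOfMemoryError: Map failed`.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Illustrative only: write records to a temp file through fixed-size
// 128KB mapped chunks, never mapping more than one chunk's worth at a time.
public class ChunkedSpillWriter {
    static final int CHUNK_SIZE = 128 * 1024; // the 128KB value discussed above

    private final FileChannel channel;
    private MappedByteBuffer current;
    long chunkStart = 0; // file offset of the currently mapped chunk

    ChunkedSpillWriter(File file) throws IOException {
        this.channel = new RandomAccessFile(file, "rw").getChannel();
        this.current = channel.map(FileChannel.MapMode.READ_WRITE, 0, CHUNK_SIZE);
    }

    void write(byte[] record) throws IOException {
        if (record.length > CHUNK_SIZE) {
            throw new IOException("record larger than one chunk");
        }
        if (current.remaining() < record.length) {
            // Map the next fixed-size window rather than growing the mapping.
            chunkStart += CHUNK_SIZE;
            current = channel.map(FileChannel.MapMode.READ_WRITE, chunkStart, CHUNK_SIZE);
        }
        current.put(record);
    }

    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("spill", ".bin");
        tmp.deleteOnExit();
        ChunkedSpillWriter w = new ChunkedSpillWriter(tmp);
        byte[] rec = new byte[1000];
        for (int i = 0; i < 1000; i++) {
            w.write(rec); // ~1MB total, spread across eight 128KB windows
        }
        System.out.println("chunks mapped: " + (w.chunkStart / CHUNK_SIZE + 1));
    }
}
```

With a fixed window size, the address-space cost per open scanner is bounded regardless of result-set size, which is why a constant like 128KB can be argued to be safe without making it configurable.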

was (Author: lefthandmagic):
In this case, 128KB should *always* work.

> OOM caused by order by query returning all rows
> -----------------------------------------------
>                 Key: PHOENIX-990
>                 URL:
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 3.0.0, 4.0.0
>            Reporter: Mujtaba Chohan
>            Assignee: Praveen Murugesan
>         Attachments: PHOENIX-990-TEST.patch, PHOENIX-990.patch
> OOM error with the following stack trace when a query with ORDER BY returns a large
> number of rows without LIMIT or aggregation (e.g. select * from table order by col1).
> Created a local perf. test to verify this once it gets fixed. The script can also
> be used to generate a few million rows.
> Originally reported by @zenmehra.
> Stack:
> ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: Failed openScanner
> org.apache.hadoop.hbase.DoNotRetryIOException: PERFORMANCE_5000000,,1400524730456.c62cccdac8cffd098d236f5e282564bb.: Map failed
> 	at org.apache.phoenix.util.ServerUtil.throwIOException(
> 	at org.apache.phoenix.coprocessor.ScanRegionObserver.getTopNScanner(
> 	at org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(
> 	at org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(
> 	at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(
> 	at org.apache.hadoop.hbase.regionserver.HRegionServer.internalOpenScanner(
> 	at org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(
> 	at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> 	at java.lang.reflect.Method.invoke(
> 	at org.apache.hadoop.hbase.ipc.WritableRpcEngine$
> 	at org.apache.hadoop.hbase.ipc.HBaseServer$
> Caused by: java.lang.RuntimeException: Map failed
> 	at org.apache.phoenix.iterate.MappedByteBufferSortedQueue.offer(
> 	at org.apache.phoenix.iterate.MappedByteBufferSortedQueue.offer(
> 	at java.util.AbstractQueue.add(
> 	at org.apache.phoenix.iterate.OrderedResultIterator.getResultIterator(
> 	at
> 	at org.apache.phoenix.coprocessor.ScanRegionObserver.getTopNScanner(
> 	... 10 more
> Caused by: Map failed
> 	at
> 	at org.apache.phoenix.iterate.MappedByteBufferSortedQueue$MappedByteBufferPriorityQueue.writeResult(
> 	at org.apache.phoenix.iterate.MappedByteBufferSortedQueue.offer(
> 	... 15 more
> Caused by: java.lang.OutOfMemoryError: Map failed
> 	at Method)
> 	at

This message was sent by Atlassian JIRA
