phoenix-dev mailing list archives

From "Praveen Murugesan (JIRA)" <>
Subject [jira] [Commented] (PHOENIX-990) OOM caused by order by query returning all rows
Date Thu, 19 Jun 2014 18:50:26 GMT


Praveen Murugesan commented on PHOENIX-990:

[~prkommireddi] I think yes and no. It would be good to have fine-grained control over settings, but this can quickly become overwhelming (it makes me yelp every time I look at the list of settings Hadoop offers :p). This is something very internal to how Phoenix allocates buffers, so I'm not sure it's a good candidate for exposure: an exposed setting expects the user to understand the details of how Phoenix does things, which raises the barrier to entry for using Phoenix.

That said, one thing I forgot to add to this thread: I'm really not sure of the need for MappedByteBuffers in this case. From the code I looked at, it appeared that only one thread operates on the file at a particular time (I might be wrong). [~giacomotaylor], can you elaborate on the choice of a mapped byte buffer? I'm just curious and want to learn.
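For context, here is a minimal sketch of the memory-mapped spill pattern under discussion. This is not Phoenix's actual MappedByteBufferSortedQueue code, just an illustrative stand-in: `FileChannel.map()` reserves virtual address space outside the heap, and when a region server maps many or very large regions, the JVM can throw "java.lang.OutOfMemoryError: Map failed" even while heap memory is still available, which matches the stack trace below.

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedBufferSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical stand-in for spilling sorted results to disk
        // via a memory-mapped region.
        File spill = File.createTempFile("phoenix-spill", ".bin");
        spill.deleteOnExit();
        try (RandomAccessFile raf = new RandomAccessFile(spill, "rw")) {
            // Each map() call consumes virtual address space; repeated
            // maps without unmapping can exhaust it and produce
            // "OutOfMemoryError: Map failed" regardless of heap headroom.
            MappedByteBuffer buf = raf.getChannel()
                    .map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            buf.putInt(42);                   // write one result into the mapped region
            buf.flip();                       // prepare the buffer for reading back
            System.out.println(buf.getInt()); // prints 42
        }
    }
}
```

Note that mapped buffers are only unmapped when the buffer object is garbage collected, so the address space is not reclaimed promptly after the file is closed; a plain heap or direct ByteBuffer with explicit file I/O avoids that behavior if only one thread touches the file at a time.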

> OOM caused by order by query returning all rows
> -----------------------------------------------
>                 Key: PHOENIX-990
>                 URL:
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 3.0.0, 4.0.0
>            Reporter: Mujtaba Chohan
>            Assignee: Praveen Murugesan
>         Attachments: PHOENIX-990-TEST.patch, PHOENIX-990.patch
> OOM error with the following stack trace for a query over a large number of rows that uses order by and returns all rows without a limit or aggregation (e.g. select * from table order by col1). Created a local perf test to verify it when this gets fixed; the script can also be used to generate a few million rows.
> Originally reported by @zenmehra.
> Stack:
> ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: Failed openScanner
> org.apache.hadoop.hbase.DoNotRetryIOException: PERFORMANCE_5000000,,1400524730456.c62cccdac8cffd098d236f5e282564bb.: Map failed
> 	at org.apache.phoenix.util.ServerUtil.throwIOException(
> 	at org.apache.phoenix.coprocessor.ScanRegionObserver.getTopNScanner(
> 	at org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(
> 	at org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(
> 	at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(
> 	at org.apache.hadoop.hbase.regionserver.HRegionServer.internalOpenScanner(
> 	at org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(
> 	at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> 	at java.lang.reflect.Method.invoke(
> 	at org.apache.hadoop.hbase.ipc.WritableRpcEngine$
> 	at org.apache.hadoop.hbase.ipc.HBaseServer$
> Caused by: java.lang.RuntimeException: Map failed
> 	at org.apache.phoenix.iterate.MappedByteBufferSortedQueue.offer(
> 	at org.apache.phoenix.iterate.MappedByteBufferSortedQueue.offer(
> 	at java.util.AbstractQueue.add(
> 	at org.apache.phoenix.iterate.OrderedResultIterator.getResultIterator(
> 	at
> 	at org.apache.phoenix.coprocessor.ScanRegionObserver.getTopNScanner(
> 	... 10 more
> Caused by: Map failed
> 	at
> 	at org.apache.phoenix.iterate.MappedByteBufferSortedQueue$MappedByteBufferPriorityQueue.writeResult(
> 	at org.apache.phoenix.iterate.MappedByteBufferSortedQueue.offer(
> 	... 15 more
> Caused by: java.lang.OutOfMemoryError: Map failed
> 	at Method)
> 	at

This message was sent by Atlassian JIRA
