drill-user mailing list archives

From Andries Engelbrecht <aengelbre...@maprtech.com>
Subject Re: Queries failure for exists or IN
Date Tue, 19 May 2015 14:51:58 GMT
Hi George,

Have a look at the Drill documentation site; it has been significantly improved over the last few months.

http://drill.apache.org/docs/


You can use CTAS (Create Table AS) to achieve what you are trying to do.
http://drill.apache.org/docs/create-table-as-ctas-command/

I think most will suggest you write the data in parquet 
http://drill.apache.org/docs/parquet-format/
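
For example, a rough sketch of what that could look like (the dfs.tmp workspace, the user_phones table name and the hbase.`user` source are placeholders here, not from your setup -- adjust them to your storage plugin and schema):

ALTER SESSION SET `store.format` = 'parquet';

CREATE TABLE dfs.tmp.`user_phones` AS
SELECT CONVERT_FROM(T.row_key, 'UTF8') AS row_key,
       CONVERT_FROM(T.log.pn, 'UTF8')  AS pn
FROM   hbase.`user` T;

Afterwards you can query the new Parquet table directly, e.g. SELECT * FROM dfs.tmp.`user_phones`;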

—Andries


On May 19, 2015, at 7:04 AM, George Lu <luwenbin888@gmail.com> wrote:

> Hi Jacques,
> 
> I really appreciate your suggestion.
> I noticed Drill 1.0 was released. I really like this tool; it has been a
> lot of fun and has saved me so much time!
> 
> May I check with you: can I query some data from HBase and store that data
> in the file system, maybe as JSON or Parquet, so that I can later query
> against that data set?
> 
> Can you tell me whether that is possible, or point me to any links where I
> can learn how?
> 
> Thanks!
> 
> Regards,
> George Lu
> 
> On Mon, May 18, 2015 at 1:58 AM, Jacques Nadeau <jacques@apache.org> wrote:
> 
>> Hi George,
>> 
>> Please keep in mind that help here comes on a volunteer basis.  I know that
>> everyone in the community has been working very hard on the 1.0 release so
>> it may take a couple days for the community to help resolve your issue.
>> 
>> A couple of thoughts on first glance:
>> - HBase's data model also starts with a first level map (each column family
>> is considered a map in Drill's view of HBase).  I'm wondering if that has
>> something to do with the issue.
>> - A simple query against flat json data using the same pattern seems to
>> execute fine:
>> 
>> SELECT *
>> FROM   cp.`employee.json` T
>> WHERE  T.employee_id >= 1
>>       AND EXISTS (SELECT 1
>>                   FROM   cp.`employee.json` AS U
>>                   WHERE  U.employee_id = T.employee_id);
>> 
>> 
>> Can you try to reproduce the issue with a built-in dataset?  Drill comes
>> packaged with TPC-H (accessible as cp.`tpch/[tablename].parquet`, such as
>> cp.`tpch/nation.parquet`) as well as Foodmart (cp.`[tablename].json`, such
>> as cp.`employee.json`).  That will help us determine whether it is a core
>> planning/execution problem or something specifically related to the HBase
>> integration.
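>> 
>> For instance, an uncorrelated IN over the bundled sample data could look
>> like the sketch below (a rough, untested illustration; it assumes the
>> TPC-H region table ships in the classpath samples alongside nation):
>> 
>> SELECT n.n_name
>> FROM   cp.`tpch/nation.parquet` n
>> WHERE  n.n_regionkey IN (SELECT r.r_regionkey
>>                          FROM   cp.`tpch/region.parquet` r);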
>> 
>> thanks,
>> Jacques
>> 
>> 
>> 
>> On Sun, May 17, 2015 at 9:38 AM, George Lu <luwenbin888@gmail.com> wrote:
>> 
>>> OK, Aman and all,
>>> 
>>> My data is quite simple, just binary format in HBase.
>>> 
>>> I don't have any nested map data, the join column is just the phone
>>> numbers.
>>> 
>>> Now I have inserted the phone numbers into a single table and run a
>>> subquery against it.
>>> 
>>> 
>>> 
>>> **********************************************************************
>>> 
>>> SELECT * FROM activities T WHERE T.row_key >= '0000014d5d741e9c' AND
>>> CAST(T.log.c AS INT) = 35 AND CONVERT_FROM(T.log.pn,'UTF8') IN
>>> (SELECT CONVERT_FROM(T2.log.pn,'UTF8') FROM `user` AS T2);
>>> 
>>> 
>>> 
>>> **********************************************************************
>>> 
>>> Query failed: SYSTEM ERROR: You tried to do a batch data read operation
>>> when you were in a state of NONE.  You can only do this type of operation
>>> when you are in a state of OK or OK_NEW_SCHEMA.
>>> 
>>> What does the above error mean?
>>> 
>>> ********************** If I query the `user` table directly **********************
>>> 
>>> +------------+----------------------------+
>>> |  row_key   |            log             |
>>> +------------+----------------------------+
>>> | [B@584bfadb | {"pn":"MTU2MTg5MjAzNzk="} |
>>> +------------+----------------------------+
>>> 
>>> and if I use CONVERT_FROM with UTF8, the result is:
>>> +------------+
>>> |   EXPR$0   |
>>> +------------+
>>> | 15618920379 |
>>> +------------+
>>> 
>>> 
>> ******************************************************************************************************************
>>> 
>>> Am I missing something, or does Drill not support an uncorrelated
>>> subquery like the one above?
>>> 
>>> For the EXISTS query,
>>> 
>>> SELECT * FROM activities T WHERE T.row_key >= '0000014d5d741e9c' AND
>>> CAST(T.log.c AS INT) = 35 AND EXISTS (SELECT 1 FROM `user` AS U WHERE
>>> U.log.pn = T.log.pn);
>>> 
>>> Query failed: SYSTEM ERROR: Map, Array or repeated scalar type should not
>>> be used in group by, order by or in a comparison operator. Drill does not
>>> support compare between MAP:REQUIRED and MAP:REQUIRED.
>>> 
>>> 
>>> I need to emphasize that I don't have any map data in the compare or
>>> join columns; the two columns are plain phone numbers.
>>> 
>>> I have no idea whether Drill supports EXISTS or IN subqueries, or
>>> whether there is a bug in Drill 0.9.0.
>>> 
>>> If anyone has any ideas or relevant experience, please let me know; I
>>> would really appreciate it.
>>> 
>>> Thanks!
>>> 
>>> Regards,
>>> George Lu
>>> 
>>> 
>>> 
>>> 
>>> On Sun, May 17, 2015 at 2:10 PM, Aman Sinha <asinha@maprtech.com> wrote:
>>> 
>>>> For the EXISTS query, it looks like you are comparing two maps in the join
>>>> condition below:
>>>> "CONVERT_FROM(T.log.pn,'UTF8') = CONVERT_FROM(HistoryData.log.pn,'UTF8')"
>>>> 
>>>> The following site may be helpful:
>>>> http://drill.apache.org/docs/nested-data-functions/
>>>> You need to provide the key for the map and compare the 'value' portion
>>>> of the key-value map.
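>>>> As a rough sketch (using 'pn' only because that is the qualifier in your
>>>> examples), the difference is between something like:
>>>> 
>>>> SELECT 1 FROM `user` U, activities T
>>>> WHERE  U.log = T.log;   -- compares two whole column-family maps; fails
>>>> 
>>>> SELECT 1 FROM `user` U, activities T
>>>> WHERE  CONVERT_FROM(U.log.pn,'UTF8') = CONVERT_FROM(T.log.pn,'UTF8');
>>>>   -- compares the extracted value of one qualifier, which is supported
>>>> 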
>>>> If you are unable to get it working, please provide a small reproducible
>>>> test case including sample data.
>>>> 
>>>> Aman
>>>> 
>>>> 
>>>> On Sat, May 16, 2015 at 6:53 PM, George Lu <luwenbin888@gmail.com>
>>> wrote:
>>>> 
>>>>> Hi Aman and all,
>>>>> 
>>>>> Just now, I tried below queries:
>>>>> 
>>>>> Query 1
>>>>> 
>>>>> SELECT * FROM activities T WHERE T.row_key >= '0000014d584dc0e4' AND
>>>>> CAST(T.log.c AS INT) = 35 AND CONVERT_FROM(T.log.d,'UTF8') <> 'ScanHandler'
>>>>> AND CAST(T.log.`at` AS INT) = 2 AND CONVERT_FROM(T.log.pn,'UTF8') IN
>>>>> (SELECT CONVERT_FROM(T1.log.pn,'UTF8') FROM activities T1 WHERE
>>>>> T1.row_key >= '0000014d532765d8' AND T1.row_key < '0000014d584dc0e4' AND
>>>>> CAST(T1.log.c AS INT) = 35 AND CONVERT_FROM(T1.log.d,'UTF8') <> 'ScanHandler'
>>>>> AND CAST(T1.log.`at` AS INT) = 2);
>>>>> 
>>>>> Drill gave me the error below:
>>>>> Query failed: SYSTEM ERROR: You tried to do a batch data read operation
>>>>> when you were in a state of NONE.  You can only do this type of operation
>>>>> when you are in a state of OK or OK_NEW_SCHEMA.
>>>>> 
>>>>> What does that mean?
>>>>> 
>>>>> Query 2
>>>>> 
>>>>> SELECT * FROM activities T WHERE T.row_key >= '0000014d584dc0e4' AND
>>>>> CAST(T.log.c AS INT) = 35 AND CONVERT_FROM(T.log.d,'UTF8') <> 'ScanHandler'
>>>>> AND CAST(T.log.`at` AS INT) = 2 AND EXISTS (SELECT 1 FROM activities
>>>>> HistoryData WHERE HistoryData.row_key >= '0000014d532765d8' AND
>>>>> HistoryData.row_key < '0000014d584dc0e4' AND CONVERT_FROM(T.log.pn,'UTF8')
>>>>> = CONVERT_FROM(HistoryData.log.pn,'UTF8') AND CAST(HistoryData.log.c AS INT)
>>>>> = 35 AND CONVERT_FROM(HistoryData.log.d,'UTF8') <> 'ScanHandler' AND
>>>>> CAST(HistoryData.log.`at` AS INT) = 2);
>>>>> 
>>>>> Drill gave me the error below:
>>>>> Query failed: SYSTEM ERROR: Map, Array or repeated scalar type should not
>>>>> be used in group by, order by or in a comparison operator. Drill does not
>>>>> support compare between MAP:REQUIRED and MAP:REQUIRED.
>>>>> 
>>>>> 
>>>>> If I run just the subquery from the first query:
>>>>> SELECT CONVERT_FROM(T1.log.pn,'UTF8') FROM activities T1 WHERE
>>>>> T1.row_key >= '0000014d532765d8' AND T1.row_key < '0000014d584dc0e4' AND
>>>>> CAST(T1.log.c AS INT) = 35 AND CONVERT_FROM(T1.log.d,'UTF8') <> 'ScanHandler'
>>>>> AND CAST(T1.log.`at` AS INT) = 2
>>>>> it gives me the correct result:
>>>>> +------------+
>>>>> |   EXPR$0   |
>>>>> +------------+
>>>>> | *****166102 |
>>>>> | *****2166102 |
>>>>> | *****2166102 |
>>>>> | *****2166102 |
>>>>> | *****0519988 |
>>>>> 
>>>>> +------------+
>>>>> 
>>>>> The subquery just returns a list of phone numbers.
>>>>> 
>>>>> So, if I use hard-coded values in the subquery:
>>>>> SELECT * FROM activities T WHERE T.row_key >= '0000014d584dc0e4' AND
>>>>> CAST(T.log.c AS INT) = 35 AND CONVERT_FROM(T.log.d,'UTF8') <> 'ScanHandler'
>>>>> AND CAST(T.log.`at` AS INT) = 2 AND CONVERT_FROM(T.log.pn,'UTF8') NOT IN
>>>>> ('18132','181321','181321','181321','119988');
>>>>> 
>>>>> this query works fine.
>>>>> 
>>>>> So, can anyone kindly help me identify why the above two IN and
>>>>> EXISTS subqueries fail?
>>>>> 
>>>>> Thanks!
>>>>> 
>>>>> Regards,
>>>>> George Lu
>>>>> 
>>>>> 
>>>>> 
>>>>> On Sun, May 17, 2015 at 9:06 AM, Aman Sinha <asinha@maprtech.com>
>>> wrote:
>>>>> 
>>>>>> Hi George,
>>>>>> for the first query (NOT IN), did you try doing a
>>>>>> CONVERT_FROM(T.log.pn,'UTF8') instead of CAST?
>>>>>> 
>>>>>> Also, are you able to do an IN and EXISTS query?
>>>>>> 
>>>>>> Aman
>>>>>> 
>>>>>> On Sat, May 16, 2015 at 5:49 PM, George Lu <luwenbin888@gmail.com>
>>>>> wrote:
>>>>>> 
>>>>>>> Can anyone answer my question?
>>>>>>> 
>>>>>>> Thanks
>>>>>>> 
>>>>>>> George
>>>>>>> 
>>>>>>> On Sat, May 16, 2015 at 6:00 PM, George Lu <
>> luwenbin888@gmail.com>
>>>>>> wrote:
>>>>>>> 
>>>>>>>> Now, if I change it to use NOT EXISTS:
>>>>>>>> 
>>>>>>>> SELECT * FROM activities T WHERE T.row_key >= '0000014d584dc0e4' AND
>>>>>>>> CAST(T.log.c AS INT) = 35 AND CONVERT_FROM(T.log.d,'UTF8') <> 'ScanHandler'
>>>>>>>> AND CAST(T.log.`at` AS INT) = 2 AND NOT EXISTS (SELECT T1.log.pn FROM
>>>>>>>> activities T1 WHERE T1.row_key >= '0000014d532765d8' AND T1.row_key <
>>>>>>>> '0000014d584dc0e4' AND CONVERT_FROM(T1.log.pn,'UTF8') =
>>>>>>>> CONVERT_FROM(T.log.pn,'UTF8') AND CAST(T1.log.c AS INT) = 35 AND
>>>>>>>> CONVERT_FROM(T1.log.d,'UTF8') <> 'ScanHandler' AND
>>>>>>>> CAST(T1.log.`at` AS INT) = 2)
>>>>>>>> 
>>>>>>>> Drill gives me this error:
>>>>>>>> Query failed: SYSTEM ERROR: Map, Array or repeated scalar type should not
>>>>>>>> be used in group by, order by or in a comparison operator. Drill does not
>>>>>>>> support compare between MAP:REQUIRED and MAP:REQUIRED.
>>>>>>>> 
>>>>>>>> Fragment 6:0
>>>>>>>> 
>>>>>>>> The CONVERT_FROM condition there compares the outer table's phone number
>>>>>>>> with the inner table's phone number.
>>>>>>>> 
>>>>>>>> What does this error mean? I seriously have no idea about it.
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> On Sat, May 16, 2015 at 5:22 PM, George Lu <
>>> luwenbin888@gmail.com>
>>>>>>> wrote:
>>>>>>>> 
>>>>>>>>> Hi all,
>>>>>>>>> 
>>>>>>>>> I have a simple query using NOT IN. I have a data set stored in HBase,
>>>>>>>>> and I have to find customers' phone numbers that appear only today and
>>>>>>>>> not on the previous day.
>>>>>>>>> 
>>>>>>>>> Here is my query (I can execute the two queries separately):
>>>>>>>>> 
>>>>>>>>> SELECT * FROM activities T WHERE row_key >= '0000014d584dc0e4' AND
>>>>>>>>> CAST(T.log.c AS INT) = 35 AND CONVERT_FROM(T.log.d,'UTF8') <> 'ScanHandler'
>>>>>>>>> AND CAST(T.log.`at` AS INT) = 2 AND CAST(T.log.pn AS CHAR(11)) NOT IN
>>>>>>>>> (SELECT CAST(T.log.pn AS CHAR(11)) FROM activities T WHERE
>>>>>>>>> row_key >= '0000014d532765d8' AND row_key < '0000014d584dc0e4' AND
>>>>>>>>> CAST(T.log.c AS INT) = 35 AND CONVERT_FROM(T.log.d,'UTF8') <> 'ScanHandler'
>>>>>>>>> AND CAST(T.log.`at` AS INT) = 2)
>>>>>>>>> 
>>>>>>>>> After executing it, I got the error below.
>>>>>>>>> Can anyone explain to me how to make it correct?
>>>>>>>>> 
>>>>>>>>> Fragment 1:0
>>>>>>>>> 
>>>>>>>>> [120da024-767f-4526-a950-3dd6c95877e0 on prod9:31010]
>>>>>>>>> org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: Failure while reading vector.  Expected vector class of org.apache.drill.exec.vector.NullableIntVector but was holding vector class org.apache.drill.exec.vector.NullableVarBinaryVector.
>>>>>>>>> 
>>>>>>>>> Fragment 1:0
>>>>>>>>> 
>>>>>>>>> [120da024-767f-4526-a950-3dd6c95877e0 on prod9:31010]
>>>>>>>>> at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:465) ~[drill-common-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:262) [drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:232) [drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_25]
>>>>>>>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_25]
>>>>>>>>> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_25]
>>>>>>>>> Caused by: java.lang.IllegalStateException: Failure while reading vector.  Expected vector class of org.apache.drill.exec.vector.NullableIntVector but was holding vector class org.apache.drill.exec.vector.NullableVarBinaryVector.
>>>>>>>>> at org.apache.drill.exec.record.VectorContainer.getValueAccessorById(VectorContainer.java:241) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.record.AbstractRecordBatch.getValueAccessorById(AbstractRecordBatch.java:197) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.getValueAccessorById(IteratorValidatorBatchIterator.java:110) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.test.generated.NestedLoopJoinGen653.doSetup(NestedLoopJoinTemplate.java:89) ~[na:na]
>>>>>>>>> at org.apache.drill.exec.test.generated.NestedLoopJoinGen653.setupNestedLoopJoin(NestedLoopJoinTemplate.java:75) ~[na:na]
>>>>>>>>> at org.apache.drill.exec.physical.impl.join.NestedLoopJoinBatch.innerNext(NestedLoopJoinBatch.java:164) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:144) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:118) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:101) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:91) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:130) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:144) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:118) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:101) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.test.generated.HashJoinProbeGen657.executeProbePhase(HashJoinProbeTemplate.java:119) ~[na:na]
>>>>>>>>> at org.apache.drill.exec.test.generated.HashJoinProbeGen657.probeAndProject(HashJoinProbeTemplate.java:223) ~[na:na]
>>>>>>>>> at org.apache.drill.exec.physical.impl.join.HashJoinBatch.innerNext(HashJoinBatch.java:223) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:144) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:118) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:101) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:91) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:144) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:118) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:101) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:91) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:96) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:144) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:118) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:101) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:91) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:130) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:144) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next(IteratorValidatorBatchIterator.java:118) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:74) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext(SingleSenderCreator.java:91) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:64) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:199) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:193) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> at java.security.AccessController.doPrivileged(Native Method) ~[na:1.8.0_25]
>>>>>>>>> at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_25]
>>>>>>>>> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556) ~[hadoop-common-2.4.1.jar:na]
>>>>>>>>> at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:193) [drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
>>>>>>>>> ... 3 common frames omitted
>>>>>>>>> 2015-05-16 17:09:29,745 [BitServer-3] INFO o.a.d.e.w.fragment.FragmentExecutor - 2aa8f6cb-b8d7-692a-9c53-ab76dff01031:1:0: State change requested from FAILED --> CANCELLATION_REQUESTED for
>>>>>>>>> 2015-05-16 17:09:29,745 [BitServer-3] WARN o.a.d.e.w.fragment.FragmentExecutor - Ignoring unexpected state transition FAILED => CANCELLATION_REQUESTED.
>>>>>>>>> 2015-05-16 17:09:29,746 [BitServer-3] INFO o.a.d.e.w.fragment.FragmentExecutor - 2aa8f6cb-b8d7-692a-9c53-ab76dff01031:1:3: State change requested from RUNNING --> CANCELLATION_REQUESTED for
>>>>>>>>> 2015-05-16 17:09:29,746 [BitServer-3] INFO o.a.d.e.w.f.AbstractStatusReporter - State changed for 2aa8f6cb-b8d7-692a-9c53-ab76dff01031:1:3. New state: CANCELLATION_REQUESTED
>>>>>>>>> 2015-05-16 17:09:29,829 [2aa8f6cb-b8d7-692a-9c53-ab76dff01031:frag:1:3] INFO  o.a.d.e.w.fragment.FragmentExecutor - 2aa8f6cb-b8d7-692a-9c53-ab76dff01031:1:3: State change requested from CANCELLATION_REQUESTED --> FAILED for
>>>>>>>>> 2015-05-16 17:09:29,840 [2aa8f6cb-b8d7-692a-9c53-ab76dff01031:frag:1:3] ERROR o.a.d.c.e.DrillRuntimeException - SYSTEM ERROR: You tried to do a batch data read operation when you were in a state of NOT_YET.  You can only do this type of operation when you are in a state of OK or OK_NEW_SCHEMA.
>>>>>>>>> 
>>>>>>>>> Thanks!
>>>>>>>>> 
>>>>>>>>> Regards,
>>>>>>>>> George Lu
>>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>> 
>>>>>> 
>>>>> 
>>>> 
>>> 
>> 

