kylin-issues mailing list archives

From "Shaofeng SHI (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (KYLIN-3930) ArrayIndexOutOfBoundsException when building
Date Mon, 08 Apr 2019 12:17:00 GMT

    [ https://issues.apache.org/jira/browse/KYLIN-3930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16812384#comment-16812384 ]

Shaofeng SHI commented on KYLIN-3930:
-------------------------------------

Non-sharded storage is no longer supported after v1.5, I think, although it doesn't report an error.
Please stay on the old version, or switch to the sharded HBase storage (storage type = 2).
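To show why the storage type matters here, a minimal sketch of the row-key header layout difference (the constant values follow RowConstants in Kylin 2.x; `headerLength` is a hypothetical helper for illustration, not Kylin's API):

```java
// Sketch of the row-key header layout (constants assumed from Kylin 2.x RowConstants).
public class RowKeyHeader {
    static final int ROWKEY_SHARDID_LEN = 2;  // shard id stored as a short
    static final int ROWKEY_CUBOIDID_LEN = 8; // cuboid id stored as a long

    // The header length depends on whether the cube storage is sharded:
    // a sharded key carries a shard id before the cuboid id, a non-sharded key does not.
    static int headerLength(boolean sharded) {
        return sharded ? ROWKEY_SHARDID_LEN + ROWKEY_CUBOIDID_LEN
                       : ROWKEY_CUBOIDID_LEN;
    }

    public static void main(String[] args) {
        System.out.println(headerLength(true));  // sharded: 10 bytes
        System.out.println(headerLength(false)); // non-sharded: 8 bytes
    }
}
```

A builder that hardcodes the 10-byte sharded header will therefore write 2 bytes past the intended position on a non-sharded cube.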

> ArrayIndexOutOfBoundsException when building
> --------------------------------------------
>
>                 Key: KYLIN-3930
>                 URL: https://issues.apache.org/jira/browse/KYLIN-3930
>             Project: Kylin
>          Issue Type: Bug
>          Components: Job Engine
>    Affects Versions: all
>            Reporter: Jacky Woo
>            Priority: Major
>             Fix For: v2.6.2
>
>         Attachments: KYLIN-3930.master.01.patch
>
>
> h2. ArrayIndexOutOfBoundsException when building.
> I hit a cube build error with kylin-2.5.0:
> {code:java}
> 2019-03-31 02:45:18,460 ERROR [main] org.apache.kylin.engine.mr.KylinMapper:
> java.lang.ArrayIndexOutOfBoundsException
>         at java.lang.System.arraycopy(Native Method)
>         at org.apache.kylin.engine.mr.common.NDCuboidBuilder.buildKeyInternal(NDCuboidBuilder.java:106)
>         at org.apache.kylin.engine.mr.common.NDCuboidBuilder.buildKey(NDCuboidBuilder.java:71)
>         at org.apache.kylin.engine.mr.steps.NDCuboidMapper.doMap(NDCuboidMapper.java:112)
>         at org.apache.kylin.engine.mr.steps.NDCuboidMapper.doMap(NDCuboidMapper.java:47)
>         at org.apache.kylin.engine.mr.KylinMapper.map(KylinMapper.java:77)
>         at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
>         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:796)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
>         at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
>         at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> {code}
> I checked the code of the "NDCuboidBuilder.buildKeyInternal" method:
> {code:java}
> private void buildKeyInternal(Cuboid parentCuboid, Cuboid childCuboid, ByteArray[] splitBuffers, ByteArray newKeyBodyBuf) {
>         RowKeyEncoder rowkeyEncoder = rowKeyEncoderProvider.getRowkeyEncoder(childCuboid);
>         // rowkey columns
>         long mask = Long.highestOneBit(parentCuboid.getId());
>         long parentCuboidId = parentCuboid.getId();
>         long childCuboidId = childCuboid.getId();
>         long parentCuboidIdActualLength = (long) Long.SIZE - Long.numberOfLeadingZeros(parentCuboid.getId());
>         int index = rowKeySplitter.getBodySplitOffset(); // skip shard and cuboidId
>         int offset = RowConstants.ROWKEY_SHARDID_LEN + RowConstants.ROWKEY_CUBOIDID_LEN; // skip shard and cuboidId
>         for (int i = 0; i < parentCuboidIdActualLength; i++) {
>             if ((mask & parentCuboidId) > 0) { // if this bit position equals 1
>                 if ((mask & childCuboidId) > 0) { // if the child cuboid has this column
>                     System.arraycopy(splitBuffers[index].array(), splitBuffers[index].offset(), newKeyBodyBuf.array(), offset, splitBuffers[index].length());
>                     offset += splitBuffers[index].length();
>                 }
>                 index++;
>             }
>             mask = mask >> 1;
>         }
>         rowkeyEncoder.fillHeader(newKeyBodyBuf.array());
>     }
> {code}
> I found that "offset = SHARDID_LEN + CUBOIDID_LEN" is wrong when the cube is not sharded. In my case the cube's storage type is 0, which means it is not sharded.
> So I set the offset according to the cube's sharding, as below:
> {code:java}
> int offset = rowKeySplitter.getHeaderLength(); // skip shard and cuboidId
> {code}
> After this modification, the build succeeds in my environment.
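The failure mode described above can be reproduced in isolation. A minimal sketch (the buffer and column sizes are illustrative assumptions, not taken from a real cube): a non-sharded row key reserves only the 8-byte cuboid id as header, but the hardcoded offset assumes 2 extra shard-id bytes, so the copy runs past the end of the destination buffer.

```java
// Standalone reproduction of the offset bug (sizes are illustrative assumptions).
public class OffsetBugDemo {
    public static void main(String[] args) {
        byte[] column = new byte[4];            // one rowkey column value
        byte[] nonShardedKey = new byte[8 + 4]; // 8-byte cuboid id header + column, no shard id

        int wrongOffset = 2 + 8; // SHARDID_LEN + CUBOIDID_LEN, as hardcoded in buildKeyInternal
        try {
            // destPos 10 + length 4 exceeds the 12-byte destination: arraycopy throws
            System.arraycopy(column, 0, nonShardedKey, wrongOffset, column.length);
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("AIOOBE with hardcoded offset " + wrongOffset);
        }

        int correctOffset = 8; // actual header length of a non-sharded key
        System.arraycopy(column, 0, nonShardedKey, correctOffset, column.length);
        System.out.println("copy succeeds with offset " + correctOffset);
    }
}
```

Deriving the offset from the splitter's actual header length, as the patch does, makes the same code work for both storage layouts.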



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
