From: "Attila Sasvari (JIRA)"
To: crunch-dev@incubator.apache.org
Date: Thu, 2 Feb 2017 16:41:51 +0000 (UTC)
Subject: [jira] [Commented] (CRUNCH-619) Run on HBase 2

    [ https://issues.apache.org/jira/browse/CRUNCH-619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850149#comment-15850149 ]

Attila Sasvari commented on CRUNCH-619:
---------------------------------------

I applied the patch, and some Spark integration tests failed:

{noformat}
Tests in error:
  SparkHFileTargetIT.setUpClass:129 ? RetriesExhausted Failed after attempts=36,...
  SparkWordCountHBaseIT.setUp:110 ? RetriesExhausted Failed after attempts=36, e...
  SparkWordCountHBaseIT.setUp:110 ? RetriesExhausted Failed after attempts=36, e...
{noformat}

I checked the logs: {{org.apache.hadoop.hbase.ipc.CallTimeoutException}} was thrown during the execution of SparkHFileTargetIT:

{code}
org.apache.crunch.SparkHFileTargetIT  Time elapsed: 67.833 sec  <<< ERROR!
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Thu Feb 02 16:55:00 CET 2017, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60136: Call to /192.168.1.102:64404 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=0, waitTime=60002, rpcTimetout=59999 row '' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=192.168.1.102,64404,1486050837780, seqNum=0
	at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:255)
	at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:229)
	at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:59)
	at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:177)
	at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:314)
	at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:290)
	at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:169)
	at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:162)
	at org.apache.hadoop.hbase.client.ClientSimpleScanner.<init>(ClientSimpleScanner.java:39)
	at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:378)
	at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1105)
	at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1057)
	at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:929)
	at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:911)
	at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:898)
	at org.apache.crunch.SparkHFileTargetIT.setUpClass(SparkHFileTargetIT.java:129)
Caused by:
java.net.SocketTimeoutException: callTimeout=60000, callDuration=60136: Call to /192.168.1.102:64404 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=0, waitTime=60002, rpcTimetout=59999 row '' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=192.168.1.102,64404,1486050837780, seqNum=0
	at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:144)
	at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:80)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Call to /192.168.1.102:64404 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=0, waitTime=60002, rpcTimetout=59999
	at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:172)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:94)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:407)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:403)
	at org.apache.hadoop.hbase.ipc.Call.setTimeout(Call.java:96)
	at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:195)
	at io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:581)
	at io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:655)
	at io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:367)
	at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=0, waitTime=60002, rpcTimetout=59999
	at org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:196)
	at io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:581)
	at io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:655)
	at io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:367)
	at java.lang.Thread.run(Thread.java:745)
{code}

The HBase mini cluster cannot be contacted for some reason.

I also noticed the following:

{code}
44274 [VolumeScannerThread(/root/crunch/crunch-hbase/target/test-data/a3979225-61d0-46fb-9b7a-227cf12cb8c5/dfscluster_5bb4ef9f-6747-48d2-9f0a-389634b8446d/dfs/data/data2)] ERROR org.apache.hadoop.hdfs.server.datanode.VolumeScanner - VolumeScanner(/root/crunch/crunch-hbase/target/test-data/a3979225-61d0-46fb-9b7a-227cf12cb8c5/dfscluster_5bb4ef9f-6747-48d2-9f0a-389634b8446d/dfs/data/data2, DS-fd97dce5-3b9a-43e8-b02f-73d0789ccb54) exiting because of exception
java.lang.NoSuchMethodError: org.codehaus.jackson.map.ObjectMapper.writerWithDefaultPrettyPrinter()Lorg/codehaus/jackson/map/ObjectWriter;
	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl$BlockIteratorImpl.save(FsVolumeImpl.java:676)
	at org.apache.hadoop.hdfs.server.datanode.VolumeScanner.saveBlockIterator(VolumeScanner.java:314)
	at org.apache.hadoop.hdfs.server.datanode.VolumeScanner.runLoop(VolumeScanner.java:535)
	at org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:619)
{code}

This is related to the Hadoop version update in the root pom.xml (bumped to 2.7.1).
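As a side note, a quick way to chase a {{NoSuchMethodError}} like the one above is to check which jar a class actually resolves from at runtime. The following is just an illustrative sketch (the {{WhichJar}} class is my own, not Crunch code); it falls back to the jackson class from the trace above when no argument is given:

```java
// Hypothetical diagnostic (my own sketch, not part of the patch): print the
// classpath location a class was loaded from, to find which jar provides
// org.codehaus.jackson.map.ObjectMapper when chasing the NoSuchMethodError.
public class WhichJar {
    static String locate(String className) {
        try {
            Class<?> c = Class.forName(className);
            java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
            // JDK bootstrap classes (e.g. java.lang.String) have no CodeSource.
            return src == null ? "(bootstrap classpath)" : src.getLocation().toString();
        } catch (ClassNotFoundException e) {
            return "(not on classpath)";
        }
    }

    public static void main(String[] args) {
        // Defaults to the jackson class from the stack trace above; any
        // fully-qualified class name can be passed as the first argument.
        String name = args.length > 0 ? args[0] : "org.codehaus.jackson.map.ObjectMapper";
        System.out.println(name + " -> " + locate(name));
    }
}
```

Running this on the test classpath should show whether {{ObjectMapper}} comes from the expected jackson-mapper-asl 1.9.13 jar or from an older one pulled in transitively.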
To load the proper classes, I added the following dependencies to the crunch-spark pom.xml:

{code}
<dependency>
  <groupId>com.fasterxml.jackson.core</groupId>
  <artifactId>jackson-annotations</artifactId>
  <version>2.4.4</version>
  <type>jar</type>
</dependency>
<dependency>
  <groupId>org.codehaus.jackson</groupId>
  <artifactId>jackson-mapper-asl</artifactId>
  <version>1.9.13</version>
</dependency>
<dependency>
  <groupId>org.codehaus.jackson</groupId>
  <artifactId>jackson-core-lgpl</artifactId>
  <version>1.9.13</version>
</dependency>
{code}

> Run on HBase 2
> --------------
>
>                 Key: CRUNCH-619
>                 URL: https://issues.apache.org/jira/browse/CRUNCH-619
>             Project: Crunch
>          Issue Type: Improvement
>    Affects Versions: 0.14.0
>            Reporter: Tom White
>            Assignee: Tom White
>      Attachments: CRUNCH-619.patch
>

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)