From: Ryan Rawson <ryanobjc@gmail.com>
To: hbase-user@hadoop.apache.org
Date: Wed, 10 Jun 2009 16:13:03 -0700
Subject: Re: Help with Map/Reduce program

Hey,

A scanner's lease expires in 60 seconds. I'm not sure what version you are
using, but try:

table.setScannerCaching(1);

This way you won't retrieve a batch of 60 rows that each take 1-2 seconds
to process, which would blow past the 60-second lease before the next call
back to the region server. Caching of 1 is the new default value in 0.20,
but I don't know if it ended up in 0.19.x anywhere.
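In code, the whole scan would look roughly like this. This is only a sketch
against the 0.19/0.20-era Java client, reusing the tableA/colFam1 names
from your schema, and setScannerCaching may not exist in every 0.19.x
release:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Scanner;
import org.apache.hadoop.hbase.io.RowResult;
import org.apache.hadoop.hbase.util.Bytes;

public class SlowScan {
  public static void main(String[] args) throws Exception {
    HTable table = new HTable(new HBaseConfiguration(), "tableA");

    // Fetch one row per next() round trip so the 60-second scanner lease
    // is renewed on every row, even if each row takes 1-2 seconds to
    // process on the client side.
    table.setScannerCaching(1);

    Scanner scanner =
        table.getScanner(new byte[][] { Bytes.toBytes("colFam1:") });
    try {
      for (RowResult row : scanner) {
        // ... slow per-row processing goes here ...
      }
    } finally {
      scanner.close();  // always release the server-side scanner
    }
  }
}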
On Wed, Jun 10, 2009 at 2:14 PM, llpind wrote:
>
> Okay, I think I got it figured out.
>
> although when scanning large row keys I do get the following exception:
>
> NativeException: java.lang.RuntimeException:
>     org.apache.hadoop.hbase.UnknownScannerException:
>     org.apache.hadoop.hbase.UnknownScannerException: -4424757523660246367
>   at org.apache.hadoop.hbase.regionserver.HRegionServer.close(HRegionServer.java:1745)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:632)
>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:912)
>
>   from org/apache/hadoop/hbase/client/HTable.java:1741:in `hasNext'
>   from sun/reflect/NativeMethodAccessorImpl.java:-2:in `invoke0'
>   from sun/reflect/NativeMethodAccessorImpl.java:39:in `invoke'
>   from sun/reflect/DelegatingMethodAccessorImpl.java:25:in `invoke'
>   from java/lang/reflect/Method.java:597:in `invoke'
>   from org/jruby/javasupport/JavaMethod.java:298:in `invokeWithExceptionHandling'
>   from org/jruby/javasupport/JavaMethod.java:259:in `invoke'
>   from org/jruby/java/invokers/InstanceMethodInvoker.java:36:in `call'
>   from org/jruby/runtime/callsite/CachingCallSite.java:73:in `call'
>   from org/jruby/ast/CallNoArgNode.java:61:in `interpret'
>   from org/jruby/ast/WhileNode.java:124:in `interpret'
>   from org/jruby/ast/NewlineNode.java:101:in `interpret'
>   from org/jruby/ast/BlockNode.java:68:in `interpret'
>   from org/jruby/internal/runtime/methods/DefaultMethod.java:156:in `interpretedCall'
>   from org/jruby/internal/runtime/methods/DefaultMethod.java:133:in `call'
>   from org/jruby/internal/runtime/methods/DefaultMethod.java:246:in `call'
>   ... 108 levels...
>   from org/jruby/internal/runtime/methods/DynamicMethod.java:226:in `call'
>   from org/jruby/internal/runtime/methods/CompiledMethod.java:216:in `call'
>   from org/jruby/internal/runtime/methods/CompiledMethod.java:71:in `call'
>   from org/jruby/runtime/callsite/CachingCallSite.java:260:in `cacheAndCall'
>   from org/jruby/runtime/callsite/CachingCallSite.java:75:in `call'
>   from home/hadoop/hbase193/bin/$_dot_dot_/bin/hirb.rb:441:in `__file__'
>   from home/hadoop/hbase193/bin/$_dot_dot_/bin/hirb.rb:-1:in `__file__'
>   from home/hadoop/hbase193/bin/$_dot_dot_/bin/hirb.rb:-1:in `load'
>   from org/jruby/Ruby.java:564:in `runScript'
>   from org/jruby/Ruby.java:467:in `runNormally'
>   from org/jruby/Ruby.java:340:in `runFromMain'
>   from org/jruby/Main.java:214:in `run'
>   from org/jruby/Main.java:100:in `run'
>   from org/jruby/Main.java:84:in `main'
>   from /home/hadoop/hbase193/bin/../bin/hirb.rb:346:in `scan'
>
> ===================================================
>
> Is there an easy way around this problem?
>
>
> Billy Pearson-2 wrote:
> >
> > Yes, that's what scanners are good for: they will return all the
> > column:label combos for a row.
> > What do the MR job stats say for rows processed for the maps and
> > reduces?
> >
> > Billy Pearson
> >
> > "llpind" wrote in message news:23967196.post@talk.nabble.com...
> >>
> >> also,
> >>
> >> I think what we want is a way to wildcard everything after colFam1:
> >> (e.g. colFam1:*). Is there a way to do this in HBase?
> >>
> >> This is assuming we don't know the column name; we want them all.
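On the colFam1:* question quoted just above: you don't need a wildcard.
Passing the family name with a trailing colon and no label asks the
scanner for every column in that family, exactly as Billy describes. A
rough sketch against the 0.19-era Java client, reusing the tableA/colFam1
names from the schema quoted below:

import java.util.Map;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Scanner;
import org.apache.hadoop.hbase.io.Cell;
import org.apache.hadoop.hbase.io.RowResult;
import org.apache.hadoop.hbase.util.Bytes;

public class FamilyScan {
  public static void main(String[] args) throws Exception {
    HTable table = new HTable(new HBaseConfiguration(), "tableA");

    // "colFam1:" with nothing after the colon means "every column in the
    // family", which is effectively the colFam1:* wildcard.
    Scanner scanner =
        table.getScanner(new byte[][] { Bytes.toBytes("colFam1:") });
    try {
      for (RowResult row : scanner) {
        // RowResult maps full column names (family:label) to Cells, so
        // the labels come back even though none were named in the scan.
        for (Map.Entry<byte[], Cell> entry : row.entrySet()) {
          System.out.println(Bytes.toString(row.getRow()) + " "
              + Bytes.toString(entry.getKey()) + " = "
              + Bytes.toString(entry.getValue().getValue()));
        }
      }
    } finally {
      scanner.close();
    }
  }
}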
> >> llpind wrote:
> >>>
> >>> Thanks.
> >>>
> >>> Yea I've got that colFam for sure in the HBase table:
> >>>
> >>> {NAME => 'tableA', FAMILIES => [{NAME => 'colFam1', VERSIONS => '3',
> >>>   COMPRESSION => 'NONE', LENGTH => '2147483647', TTL => '-1',
> >>>   IN_MEMORY => 'false', BLOCKCACHE => 'false'}, {NAME => 'colFam2',
> >>>   VERSIONS => '3', COMPRESSION => 'NONE', LENGTH => '2147483647',
> >>>   TTL => '-1', IN_MEMORY => 'false', BLOCKCACHE => 'false'}]}
> >>>
> >>> I've been trying to play with rowcounter, and not having much luck
> >>> either.
> >>>
> >>> I run the command:
> >>>
> >>> hadoop19/bin/hadoop org.apache.hadoop.hbase.mapred.Driver rowcounter
> >>>   /home/hadoop/dev/rowcounter7 tableA colFam1:
> >>>
> >>> The map/reduce finishes just like it does with my own program, but
> >>> with all part files empty in /home/hadoop/dev/rowcounter7.
> >>>
> >>> Any Ideas?
> >>
> >> --
> >> View this message in context:
> >> http://www.nabble.com/Help-with-Map-Reduce-program-tp23952252p23967196.html
> >> Sent from the HBase User mailing list archive at Nabble.com.
>
> --
> View this message in context:
> http://www.nabble.com/Help-with-Map-Reduce-program-tp23952252p23971190.html
> Sent from the HBase User mailing list archive at Nabble.com.
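On the rowcounter run quoted above: if all you need is a count, one
alternative is to count with a job counter and skip file output entirely,
so the number shows up in the job stats Billy asks about rather than in
part files. The following is only a rough sketch against the 0.19-era
org.apache.hadoop.hbase.mapred API (it is not the stock RowCounter); treat
the class and method signatures as assumptions to verify against your
exact release:

import java.io.IOException;

import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.io.RowResult;
import org.apache.hadoop.hbase.mapred.TableMap;
import org.apache.hadoop.hbase.mapred.TableMapReduceUtil;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.lib.NullOutputFormat;

public class SimpleRowCount {

  public enum Counters { ROWS }

  // Counts rows with a job counter instead of writing files, so empty
  // part files are expected; read the count from the job's counters.
  public static class CountMapper extends MapReduceBase
      implements TableMap<ImmutableBytesWritable, RowResult> {
    public void map(ImmutableBytesWritable key, RowResult value,
        OutputCollector<ImmutableBytesWritable, RowResult> output,
        Reporter reporter) throws IOException {
      reporter.incrCounter(Counters.ROWS, 1);
    }
  }

  public static void main(String[] args) throws Exception {
    JobConf job = new JobConf(SimpleRowCount.class);
    job.setJobName("simple row count");

    // Scan every column in colFam1 of tableA (same family-only column
    // spec as the scanner sketches above).
    TableMapReduceUtil.initTableMapJob("tableA", "colFam1:",
        CountMapper.class, ImmutableBytesWritable.class, RowResult.class,
        job);

    // Map-only job, no output files: the result is the ROWS counter.
    job.setNumReduceTasks(0);
    job.setOutputFormat(NullOutputFormat.class);

    JobClient.runJob(job);
  }
}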