drill-user mailing list archives

From Igor Guzenko <ihor.huzenko....@gmail.com>
Subject Re: Issue while setting up hive storage plugin in kerberized cluster
Date Sat, 30 Nov 2019 20:55:44 GMT
Hello Rameshwar,

These are very good questions; I would also be glad if we had such a
compatibility matrix for users.
But as far as I know, all Drill developers are currently busy with fixes and
improvements for the upcoming 1.17.0 release.
I did some searching in Jira and found that the current versions are:

-  Hive 2.3.2 (https://issues.apache.org/jira/browse/DRILL-5978)
-  Kafka 2.3.1 (https://issues.apache.org/jira/browse/DRILL-6739) for the
future 1.17.0 release; it was 0.11.0.1 earlier.
-  Hadoop 3.x (https://issues.apache.org/jira/browse/DRILL-6540) is also
planned for a future release.

For now, you can determine any dependency version by searching through Jira
tickets, commits, or the pom.xml files of the Drill version that interests you.
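
For example, to check the Hive client version bundled with Drill 1.16.0 (a
quick sketch, assuming a git checkout of the Drill sources; the tag and
property names may differ between releases):

    git clone https://github.com/apache/drill.git && cd drill
    git checkout drill-1.16.0
    grep '<hive.version>' pom.xml

For 1.16.0 this should print something like <hive.version>2.3.2</hive.version>,
which matches DRILL-5978 above.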

Thanks,
Igor

On Fri, Nov 29, 2019 at 8:37 PM Rameshwar Mane <mr.manerm@gmail.com> wrote:

> Hi Igor,
>
> I tried setting "hive.server2.enable.doAs": "true" and I have also set
> "hive.metastore.kerberos.principal"; I found these properties while looking
> into the MapR Drill documentation. These settings helped me to create the
> Hive storage plugin successfully, and I was able to list the databases
> present in the Hive datastore.
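>
> For reference, the relevant part of my plugin's "configProps" now looks
> roughly like this (the principal below is only a placeholder for our real
> metastore principal):
>
>   "hive.metastore.sasl.enabled": "true",
>   "hive.metastore.kerberos.principal": "hive/_HOST@EXAMPLE.COM",
>   "hive.server2.enable.doAs": "true"
>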
> But while I was trying to list the tables present in Hive, I was not able
> to get any response from the metastore. When I looked into the logs, I
> found that Drill is trying to execute a Hive metastore method,
> "get_tables_by_type", that is not available in the Hive version I am using.
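> (If it helps, the server-side Hive version can be confirmed by running
> "hive --version" on the metastore host.)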
>
> The contents of drillbit.log are:
>
> 2019-11-29 18:27:08,900 [221e9d03-7c28-6462-bf0b-efe9eb6c4995:foreman] INFO o.a.drill.exec.work.foreman.Foreman - Query text for query with id 221e9d03-7c28-6462-bf0b-efe9eb6c4995 issued by drill: show tables in hive.geo
> 2019-11-29 18:27:08,939 [221e9d03-7c28-6462-bf0b-efe9eb6c4995:frag:0:0] WARN o.a.d.e.s.h.c.TableNameCacheLoader - Failure while attempting to get hive tables. Retries once.
> org.apache.hadoop.hive.metastore.api.MetaException: Got exception: org.apache.thrift.TApplicationException Invalid method name: 'get_tables_by_type'
> at org.apache.hadoop.hive.metastore.MetaStoreUtils.logAndThrowMetaException(MetaStoreUtils.java:1382) ~[drill-hive-exec-shaded-1.16.0.jar:1.16.0]
> at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTables(HiveMetaStoreClient.java:1405) ~[drill-hive-exec-shaded-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.store.hive.client.TableNameCacheLoader.load(TableNameCacheLoader.java:59) [drill-storage-hive-core-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.store.hive.client.TableNameCacheLoader.load(TableNameCacheLoader.java:41) [drill-storage-hive-core-1.16.0.jar:1.16.0]
> at org.apache.drill.shaded.guava.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3708) [drill-shaded-guava-23.0.jar:23.0]
> at org.apache.drill.shaded.guava.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2416) [drill-shaded-guava-23.0.jar:23.0]
> at org.apache.drill.shaded.guava.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2299) [drill-shaded-guava-23.0.jar:23.0]
> at org.apache.drill.shaded.guava.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2212) [drill-shaded-guava-23.0.jar:23.0]
> at org.apache.drill.shaded.guava.com.google.common.cache.LocalCache.get(LocalCache.java:4147) [drill-shaded-guava-23.0.jar:23.0]
> at org.apache.drill.shaded.guava.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:4151) [drill-shaded-guava-23.0.jar:23.0]
> at org.apache.drill.shaded.guava.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:5140) [drill-shaded-guava-23.0.jar:23.0]
> at org.apache.drill.exec.store.hive.client.HiveMetadataCache.getTableNamesAndTypes(HiveMetadataCache.java:114) [drill-storage-hive-core-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.store.hive.client.DrillHiveMetaStoreClient.getTableNamesAndTypes(DrillHiveMetaStoreClient.java:90) [drill-storage-hive-core-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.store.hive.client.DrillHiveMetaStoreClientWithAuthorization.getTableNamesAndTypes(DrillHiveMetaStoreClientWithAuthorization.java:95) [drill-storage-hive-core-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.store.hive.schema.HiveDatabaseSchema.ensureInitTables(HiveDatabaseSchema.java:76) [drill-storage-hive-core-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.store.hive.schema.HiveDatabaseSchema.getTableNamesAndTypes(HiveDatabaseSchema.java:63) [drill-storage-hive-core-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator$Tables.visitTables(InfoSchemaRecordGenerator.java:340) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.scanSchema(InfoSchemaRecordGenerator.java:254) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.scanSchema(InfoSchemaRecordGenerator.java:247) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.scanSchema(InfoSchemaRecordGenerator.java:247) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.scanSchema(InfoSchemaRecordGenerator.java:234) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.store.ischema.InfoSchemaTableType.getRecordReader(InfoSchemaTableType.java:58) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.store.ischema.InfoSchemaBatchCreator.getBatch(InfoSchemaBatchCreator.java:34) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.store.ischema.InfoSchemaBatchCreator.getBatch(InfoSchemaBatchCreator.java:30) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.physical.impl.ImplCreator$2.run(ImplCreator.java:146) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.physical.impl.ImplCreator$2.run(ImplCreator.java:142) [drill-java-exec-1.16.0.jar:1.16.0]
> at java.security.AccessController.doPrivileged(Native Method) [na:1.8.0_151]
> at javax.security.auth.Subject.doAs(Subject.java:422) [na:1.8.0_151]
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746) [hadoop-common-2.7.4.jar:na]
> at org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:142) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:182) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:137) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:182) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.physical.impl.ImplCreator.getRootExec(ImplCreator.java:110) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.physical.impl.ImplCreator.getExec(ImplCreator.java:87) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:263) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [drill-common-1.16.0.jar:1.16.0]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_151]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_151]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_151]
> 2019-11-29 18:27:09,098 [221e9d03-7c28-6462-bf0b-efe9eb6c4995:frag:0:0] WARN o.a.d.e.s.h.s.HiveDatabaseSchema - Exception was thrown while getting table names and type for db 'geo'.
> org.apache.thrift.TException: java.util.concurrent.ExecutionException: MetaException(message:Got exception: org.apache.thrift.TApplicationException Invalid method name: 'get_tables_by_type')
> at org.apache.drill.exec.store.hive.client.HiveMetadataCache.getTableNamesAndTypes(HiveMetadataCache.java:116) ~[drill-storage-hive-core-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.store.hive.client.DrillHiveMetaStoreClient.getTableNamesAndTypes(DrillHiveMetaStoreClient.java:90) ~[drill-storage-hive-core-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.store.hive.client.DrillHiveMetaStoreClientWithAuthorization.getTableNamesAndTypes(DrillHiveMetaStoreClientWithAuthorization.java:95) ~[drill-storage-hive-core-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.store.hive.schema.HiveDatabaseSchema.ensureInitTables(HiveDatabaseSchema.java:76) [drill-storage-hive-core-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.store.hive.schema.HiveDatabaseSchema.getTableNamesAndTypes(HiveDatabaseSchema.java:63) [drill-storage-hive-core-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator$Tables.visitTables(InfoSchemaRecordGenerator.java:340) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.scanSchema(InfoSchemaRecordGenerator.java:254) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.scanSchema(InfoSchemaRecordGenerator.java:247) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.scanSchema(InfoSchemaRecordGenerator.java:247) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.scanSchema(InfoSchemaRecordGenerator.java:234) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.store.ischema.InfoSchemaTableType.getRecordReader(InfoSchemaTableType.java:58) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.store.ischema.InfoSchemaBatchCreator.getBatch(InfoSchemaBatchCreator.java:34) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.store.ischema.InfoSchemaBatchCreator.getBatch(InfoSchemaBatchCreator.java:30) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.physical.impl.ImplCreator$2.run(ImplCreator.java:146) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.physical.impl.ImplCreator$2.run(ImplCreator.java:142) [drill-java-exec-1.16.0.jar:1.16.0]
> at java.security.AccessController.doPrivileged(Native Method) [na:1.8.0_151]
> at javax.security.auth.Subject.doAs(Subject.java:422) [na:1.8.0_151]
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746) [hadoop-common-2.7.4.jar:na]
> at org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:142) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:182) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:137) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:182) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.physical.impl.ImplCreator.getRootExec(ImplCreator.java:110) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.physical.impl.ImplCreator.getExec(ImplCreator.java:87) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:263) [drill-java-exec-1.16.0.jar:1.16.0]
> at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [drill-common-1.16.0.jar:1.16.0]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_151]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_151]
> at java.lang.Thread.run(Thread.java:748) [na:1.8.0_151]
> Caused by: java.util.concurrent.ExecutionException: MetaException(message:Got exception: org.apache.thrift.TApplicationException Invalid method name: 'get_tables_by_type')
> at org.apache.drill.shaded.guava.com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:502) ~[drill-shaded-guava-23.0.jar:23.0]
> at org.apache.drill.shaded.guava.com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:461) ~[drill-shaded-guava-23.0.jar:23.0]
> at org.apache.drill.shaded.guava.com.google.common.util.concurrent.AbstractFuture$TrustedFuture.get(AbstractFuture.java:83) ~[drill-shaded-guava-23.0.jar:23.0]
> at org.apache.drill.shaded.guava.com.google.common.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:142) ~[drill-shaded-guava-23.0.jar:23.0]
> at org.apache.drill.shaded.guava.com.google.common.cache.LocalCache$Segment.getAndRecordStats(LocalCache.java:2453) ~[drill-shaded-guava-23.0.jar:23.0]
> at org.apache.drill.shaded.guava.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2417) ~[drill-shaded-guava-23.0.jar:23.0]
> at org.apache.drill.shaded.guava.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2299) ~[drill-shaded-guava-23.0.jar:23.0]
> at org.apache.drill.shaded.guava.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2212) ~[drill-shaded-guava-23.0.jar:23.0]
> at org.apache.drill.shaded.guava.com.google.common.cache.LocalCache.get(LocalCache.java:4147) ~[drill-shaded-guava-23.0.jar:23.0]
> at org.apache.drill.shaded.guava.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:4151) ~[drill-shaded-guava-23.0.jar:23.0]
> at org.apache.drill.shaded.guava.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:5140) ~[drill-shaded-guava-23.0.jar:23.0]
> at org.apache.drill.exec.store.hive.client.HiveMetadataCache.getTableNamesAndTypes(HiveMetadataCache.java:114) ~[drill-storage-hive-core-1.16.0.jar:1.16.0]
> ... 28 common frames omitted
> Caused by: org.apache.hadoop.hive.metastore.api.MetaException: Got exception: org.apache.thrift.TApplicationException Invalid method name: 'get_tables_by_type'
> at org.apache.hadoop.hive.metastore.MetaStoreUtils.logAndThrowMetaException(MetaStoreUtils.java:1382) ~[drill-hive-exec-shaded-1.16.0.jar:1.16.0]
> at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTables(HiveMetaStoreClient.java:1405) ~[drill-hive-exec-shaded-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.store.hive.client.TableNameCacheLoader.load(TableNameCacheLoader.java:71) ~[drill-storage-hive-core-1.16.0.jar:1.16.0]
> at org.apache.drill.exec.store.hive.client.TableNameCacheLoader.load(TableNameCacheLoader.java:41) ~[drill-storage-hive-core-1.16.0.jar:1.16.0]
> at org.apache.drill.shaded.guava.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3708) ~[drill-shaded-guava-23.0.jar:23.0]
> at org.apache.drill.shaded.guava.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2416) ~[drill-shaded-guava-23.0.jar:23.0]
> ... 34 common frames omitted
> 2019-11-29 18:27:09,100 [221e9d03-7c28-6462-bf0b-efe9eb6c4995:frag:0:0] INFO o.a.d.e.w.fragment.FragmentExecutor - 221e9d03-7c28-6462-bf0b-efe9eb6c4995:0:0: State change requested AWAITING_ALLOCATION --> RUNNING
> 2019-11-29 18:27:09,100 [221e9d03-7c28-6462-bf0b-efe9eb6c4995:frag:0:0] INFO o.a.d.e.w.f.FragmentStatusReporter - 221e9d03-7c28-6462-bf0b-efe9eb6c4995:0:0: State to report: RUNNING
> 2019-11-29 18:27:09,103 [221e9d03-7c28-6462-bf0b-efe9eb6c4995:frag:0:0] INFO o.a.d.e.w.fragment.FragmentExecutor - 221e9d03-7c28-6462-bf0b-efe9eb6c4995:0:0: State change requested RUNNING --> FINISHED
> 2019-11-29 18:27:09,104 [221e9d03-7c28-6462-bf0b-efe9eb6c4995:frag:0:0] INFO o.a.d.e.w.f.FragmentStatusReporter - 221e9d03-7c28-6462-bf0b-efe9eb6c4995:0:0: State to report: FINISHED
>
> What I want to know is: which versions of Hive are compatible with the
> different versions of Drill?
> If possible, please share the compatibility of all the components, like
> Hive, Kafka, and HBase, with the different versions of Drill.
>
> Thanks and Regards
> Rameshwar Mane
> Big Data Engineer
>
> On Nov 29 2019, at 6:30 pm, Igor Guzenko <ihor.huzenko.igs@gmail.com>
> wrote:
> > Hi Rameshwar,
> >
> > I found a difference in your plugin configuration compared to
> > https://mapr.com/docs/51/SecurityGuide/Modify-Hive-Storage-Plugin-In-Drill.html:
> > "hive.server2.enable.doAs": "false".
> > Also, hive.metastore.kerberos.principal is absent from your config. In my
> > opinion it's a minor configuration issue, and to resolve it you just need
> > to check every step patiently. Some MapR docs may also be helpful for you:
> > https://mapr.com/docs/51/SecurityGuide/Configure-Kerberos-Authentication.html
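> >
> > As a quick sanity check, you can also verify that the Drill service
> > principal can obtain a ticket from the keytab (a sketch; the keytab path
> > and principal below are placeholders for yours):
> >
> >   kinit -kt /etc/drill/drill.keytab drill/<fqdn>@EXAMPLE.COM
> >   klist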
> >
> > Thanks,
> > Igor
> >
> > On Fri, Nov 29, 2019 at 6:24 AM Rameshwar Mane <mr.manerm@gmail.com>
> > wrote:
> > > The Hive metastore is up and running, and it is accessible from the
> > > machine where Drill is installed. I have followed all the steps present
> > > in the documentation.
> > >
> > > On Fri, 29 Nov 2019, 05:17 Igor Guzenko, <ihor.huzenko.igs@gmail.com>
> > > wrote:
> > >
> > > > Hello Rameshwar,
> > > > I think the issue is a little bit more complicated; at the least, you
> > > > need to be sure that the Hive metastore is up and running and is
> > > > accessible from the given machine. Could you please create a Jira
> > > > ticket containing as much configuration info as possible
> > > > (hive-site.xml, storage plugin config, etc.)? Then the development
> > > > team could try to reproduce and debug the problem.
> > > >
> > > > Thank you in advance,
> > > > Igor
> > > >
> > > > On Thu, Nov 28, 2019 at 7:58 PM Rameshwar Mane <mr.manerm@gmail.com>
> > > > wrote:
> > > >
> > > > > Can anyone provide me any input regarding this issue?
> > > > >
> > > > > On Thu, 28 Nov 2019, 17:35 Rameshwar Mane, <mr.manerm@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi all,
> > > > > > I am trying to use Drill to query a kerberized cluster. I have
> > > > > > set up Drill with the steps provided in the documentation for a
> > > > > > kerberized cluster.
> > > > > >
> > > > > > The storage plugin configuration I am trying to create for Hive is:
> > > > > >
> > > > > > {
> > > > > >   "type": "hive",
> > > > > >   "configProps": {
> > > > > >     "hive.metastore.uris": "thrift://xxxx-xxx-xxxx:9083,thrift://xxxx-xxx-xxxx:9083",
> > > > > >     "hive.metastore.warehouse.dir": "/warehouse/tablespace/managed/hive",
> > > > > >     "fs.default.name": "hdfs://xxxx-xxx-xxxx:8020",
> > > > > >     "hive.security.authorization.enabled": "true",
> > > > > >     "hive.security.authenticator.manager": "org.apache.hadoop.hive.ql.security.SessionStateUserAuthenticator",
> > > > > >     "hive.security.authorization.manager": "org.apcahe.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactory",
> > > > > >     "hive.metastore.sasl.enabled": "true",
> > > > > >     "hive.server2.enable.doAs": "true"
> > > > > >   },
> > > > > >   "enabled": true
> > > > > > }
> > > > > >
> > > > > >
> > > > > > I am facing the following issue while trying to create the Hive
> > > > > > storage plugin:
> > > > > >
> > > > > > Please retry: Error while creating / updating storage : Could not connect to meta store using any of the URIs provided.
> > > > > > Most recent failure: org.apache.thrift.transport.TTransportException: GSS initiate failed
> > > > > > at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
> > > > > > at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
> > > > > > at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
> > > > > > at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
> > > > > > at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
> > > > > > at java.security.AccessController.doPrivileged(Native Method)
> > > > > > at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
> > > > > > at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
> > > > > > at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:480)
> > > > > > at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:247)
> > > > > > at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:129)
> > > > > > at org.apache.drill.exec.store.hive.client.DrillHiveMetaStoreClient.<init>(DrillHiveMetaStoreClient.java:54)
> > > > > > at org.apache.drill.exec.store.hive.client.DrillHiveMetaStoreClientFactory.createCloseableClientWithCaching(DrillHiveMetaStoreClientFactory.java:101)
> > > > > > at org.apache.drill.exec.store.hive.schema.HiveSchemaFactory.<init>(HiveSchemaFactory.java:77)
> > > > > > at org.apache.drill.exec.store.hive.HiveStoragePlugin.<init>(HiveStoragePlugin.java:77)
> > > > > > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> > > > > > at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> > > > > > at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> > > > > > at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> > > > > > at org.apache.drill.exec.store.StoragePluginRegistryImpl.create(StoragePluginRegistryImpl.java:466)
> > > > > > at org.apache.drill.exec.store.StoragePluginRegistryImpl.createOrUpdate(StoragePluginRegistryImpl.java:131)
> > > > > > at org.apache.drill.exec.server.rest.PluginConfigWrapper.createOrUpdateInStorage(PluginConfigWrapper.java:56)
> > > > > > at org.apache.drill.exec.server.rest.StorageResources.createOrUpdatePluginJSON(StorageResources.java:193)
> > > > > > at org.apache.drill.exec.server.rest.StorageResources.createOrUpdatePlugin(StorageResources.java:210)
> > > > > > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > > > > at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> > > > > > at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > > > at java.lang.reflect.Method.invoke(Method.java:498)
> > > > > > at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81)
> > > > > > at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:144)
> > > > > > at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:161)
> > > > > > at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:205)
> > > > > > at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:99)
> > > > > > at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:389)
> > > > > > at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:347)
> > > > > > at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:102)
> > > > > > at org.glassfish.jersey.server.ServerRuntime$2.run(ServerRuntime.java:326)
> > > > > > at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
> > > > > > at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
> > > > > > at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
> > > > > > at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
> > > > > > at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
> > > > > > at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:317)
> > > > > > at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:305)
> > > > > > at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1154)
> > > > > > at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:473)
> > > > > > at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:427)
> > > > > > at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
> > > > > > at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)
> > > > > > at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)
> > > > > > at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
> > > > > > at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> > > > > > at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> > > > > > at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:524)
> > > > > > at org.apache.drill.exec.server.rest.auth.DrillHttpSecurityHandlerProvider.handle(DrillHttpSecurityHandlerProvider.java:151)
> > > > > > at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> > > > > > at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
> > > > > > at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:513)
> > > > > > at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> > > > > > at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
> > > > > > at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> > > > > > at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> > > > > > at org.eclipse.jetty.server.Server.handle(Server.java:539)
> > > > > > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
> > > > > > at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
> > > > > > at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
> > > > > > at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
> > > > > > at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> > > > > > at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
> > > > > > at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
> > > > > > at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
> > > > > > at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
> > > > > > at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
> > > > > > at java.lang.Thread.run(Thread.java:748)
> > > > > >
> > > > > > Am I providing any wrong configuration while creating the storage
> > > > > > plugin? I have created the Drill keytab using the steps provided in
> > > > > > the documentation and have a MIT Kerberos authorization service.
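> > > > > >
> > > > > > For completeness, the Kerberos section of my drill-override.conf
> > > > > > looks roughly like the documented example below (the principal and
> > > > > > keytab path are placeholders, not our real values):
> > > > > >
> > > > > >   drill.exec.security: {
> > > > > >     auth.mechanisms: ["KERBEROS"],
> > > > > >     auth.principal: "drill/_HOST@EXAMPLE.COM",
> > > > > >     auth.keytab: "/etc/drill/drill.keytab"
> > > > > >   }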
> > > > > >
> > > > > > Please let me know of any solution that can be used to resolve this.
> > > > > > Thanks and Regards
> > > > > > *Rameshwar Mane*
> > > > > > Big Data Engineer
