Hi all, 

I have completed a design doc about how to use and configure this new thrift server, along with design details about the changes and impersonation. 

I'm looking forward to your suggestions and ideas.

SPIP DOC : https://docs.google.com/document/d/1ug4K5e2okF5Q2Pzi3qJiUILwwqkn0fVQaQ-Q95HEcJQ/edit#heading=h.x97c6tj78zo0
Design DOC : https://docs.google.com/document/d/1UKE9QTtHqSZBq0V_vEn54PlWaWPiRAKf_JvcT0skaSo/edit#heading=h.q1ed5q1ldh14
Thrift server configurations DOC : https://docs.google.com/document/d/1uI35qJmQO4FKE6pr0h3zetZqww-uI8QsQjxaYY_qb1s/edit?usp=drive_web&ouid=110963191229426834922

Best Regards

--------- Forwarded Message ---------

From: angers.zhu <angers.zhu@gmail.com>
Date: 12/18/2019 22:29
To: dev-owner@spark.apache.org <dev-owner@spark.apache.org>
Subject: Re: [VOTE][SPARK-29018][SPIP]:Build spark thrift server based on protocol v11

Added access privileges for the spark-dev group to the Google docs.

On 12/18/2019 22:02, Sandeep Katta <sandeep0102.opensource@gmail.com> wrote:
I couldn't access the doc, please give permission to the spark-dev group

On Wed, 18 Dec 2019 at 18:05, angers.zhu <angers.zhu@gmail.com> wrote:

With the development of Spark and Hive, in the current sql/hive-thriftserver module we need to do a lot of work to resolve code conflicts for different built-in Hive versions. Under the current approach this is annoying, never-ending work, and these issues have limited our ability and convenience to develop new features for Spark's thrift server.

    We propose to implement a new thrift server and JDBC driver based on Hive's latest v11 TCLIService.thrift protocol. The new thrift server will have the following features:

  1. Build a new module, spark-service, as Spark's thrift server
  2. Don't need as much reflection and inherited code as the `hive-thriftserver` module
  3. Support all functions that the current `sql/hive-thriftserver` supports
  4. Use code maintained entirely by Spark itself, without depending on Hive
  5. Support existing functions in Spark's own way, not limited by Hive's code
  6. Support running with or without a Hive metastore
  7. Support user impersonation in multi-tenant deployments by splitting Hive authentication and DFS authentication
  8. Support session hooks with Spark's own code
  9. Add a new JDBC driver, spark-jdbc, with Spark's own connection URL: "jdbc:spark:<host>:<port>/<db>"
  10. Support both hive-jdbc and spark-jdbc clients, so we can support most clients and BI platforms
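For illustration, the connection URL proposed in item 9 could be assembled like this. This is a minimal sketch: the spark-jdbc driver does not exist yet, so the class and helper method below are hypothetical, and only the URL format comes from the proposal.

```java
public class SparkJdbcUrl {
    // Hypothetical helper: assembles the proposed spark-jdbc connection URL
    // of the form "jdbc:spark:<host>:<port>/<db>" from its parts.
    public static String buildUrl(String host, int port, String db) {
        return String.format("jdbc:spark:%s:%d/%s", host, port, db);
    }

    public static void main(String[] args) {
        // Example: a thrift server on localhost:10000, "default" database.
        System.out.println(buildUrl("localhost", 10000, "default"));
    }
}
```

Once such a driver existed, a client would register it with `java.sql.DriverManager` and open a connection with this URL, just as hive-jdbc clients do today.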


[ ] +1: Accept the proposal as an official SPIP
[ ] +0
[ ] -1: I don't think this is a good idea because ...

I'll start with my +1