I'm +1 for this SPIP for these two reasons:

1. The current thriftserver has some issues that are not easy to solve, such as SPARK-28636.
2. The gap between the ORC version we are using and the one the built-in Hive uses keeps growing, and we can't guarantee there will be no compatibility issues in the future. If the thriftserver does not depend on Hive, upgrading the built-in Hive will be much easier.

On Sat, Dec 21, 2019 at 9:28 PM angers.zhu <angers.zhu@gmail.com> wrote:
Hi all, 

I have completed a design doc about how to use and configure this new thrift server, with some design details about the changes and impersonation.

I look forward to your suggestions and ideas.

Best Regards

--------- Forwarded Message ---------

Date: 12/18/2019 22:29
Subject: Re: [VOTE][SPARK-29018][SPIP]:Build spark thrift server based on protocol v11

Added access privileges for the spark-dev group to the Google doc.

On 12/18/2019 22:02, Sandeep Katta <sandeep0102.opensource@gmail.com> wrote:
I couldn't access the doc; please grant access to the spark-dev group.

On Wed, 18 Dec 2019 at 18:05, angers.zhu <angers.zhu@gmail.com> wrote:

With the development of Spark and Hive, in the current sql/hive-thriftserver module we need to do a lot of work to resolve code conflicts across the different built-in Hive versions. Under the current approach this is annoying, unending work, and these issues have limited our ability to conveniently develop new features for Spark's thrift server.

    We propose to implement a new thrift server and JDBC driver based on Hive's latest TCLIService.thrift protocol (v11). The new thrift server will have the following features:

  1. Build a new module, spark-service, as Spark's thrift server
  2. Avoid the reflection and inherited code the `hive-thriftserver` module requires
  3. Support all functions the current `sql/hive-thriftserver` supports
  4. Use code maintained entirely by Spark itself, without depending on Hive
  5. Support the original functions in Spark's own way, not limited by Hive's code
  6. Support running with or without a Hive metastore
  7. Support user impersonation for multi-tenancy by splitting Hive authentication and DFS authentication
  8. Support session hooks with Spark's own code
  9. Add a new JDBC driver, spark-jdbc, with Spark's own connection URL: `jdbc:spark:<host>:<port>/<db>`
  10. Support both hive-jdbc and spark-jdbc clients, so we can cover most clients and BI platforms
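To make point 9 concrete, here is a minimal sketch of how a client might build and use the proposed `jdbc:spark:<host>:<port>/<db>` URL through the standard `java.sql` API. The URL layout is taken from the SPIP feature list above; the helper method name, the host/port/db values, and the availability of a registered spark-jdbc driver are all assumptions for illustration, not a released API.

```java
// Hypothetical sketch of the proposed spark-jdbc connection URL
// ("jdbc:spark:<host>:<port>/<db>", per the SPIP feature list).
// Only the URL construction runs here; the actual connection is
// commented out because the proposed driver is not a released artifact.
public class SparkJdbcUrlExample {

    // Assemble the proposed URL from its parts (helper name is an assumption).
    static String sparkJdbcUrl(String host, int port, String db) {
        return String.format("jdbc:spark:%s:%d/%s", host, port, db);
    }

    public static void main(String[] args) {
        String url = sparkJdbcUrl("localhost", 10000, "default");
        System.out.println(url);

        // With the proposed spark-jdbc driver on the classpath, a client
        // would connect via the standard java.sql API, e.g.:
        // java.sql.Connection conn =
        //     java.sql.DriverManager.getConnection(url, "user", "");
    }
}
```

Because the URL follows the familiar `jdbc:<subprotocol>:` convention, existing JDBC-based tools and BI platforms would only need the new driver jar and URL prefix, which is what feature 10 relies on.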


[ ] +1: Accept the proposal as an official SPIP
[ ] +0
[ ] -1: I don't think this is a good idea because ...

I'll start with my +1