Thanks, but `hive.stats.autogather` does not work for Spark SQL.
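
For reference, the explicit per-partition workaround mentioned in the original question below can at least be scripted. A minimal sketch, assuming a Hive table my_db.my_table (a hypothetical name) with a single string partition column dt:

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("refresh_partition_stats") \
    .enableHiveSupport() \
    .getOrCreate()

# Spark SQL has no hive.stats.autogather-style automatic gathering, so
# enumerate the partitions and run ANALYZE on each one explicitly.
for row in spark.sql("SHOW PARTITIONS my_db.my_table").collect():
    # Rows look like 'dt=2020-12-18'; quote the value to form a valid
    # partition spec (assumes a single string partition column).
    col, val = row.partition.split("=", 1)
    spark.sql(
        "ANALYZE TABLE my_db.my_table "
        f"PARTITION({col}='{val}') COMPUTE STATISTICS"
    )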
----- Original Message -----
From: Mich Talebzadeh <mich.talebzadeh@gmail.com>
To: kongtrio@sina.com
Cc: user <user@spark.apache.org>
Subject: Re: Is Spark SQL able to auto update partition stats like hive by setting hive.stats.autogather=true
Date: 2020-12-19 06:45

Hi,

A fellow forum member kindly spotted a careless error of mine: a comma was missing at the end of the line above the highlighted one, i.e. after the `("hive.metastore.authorization.storage.checks", "/apps/hive/warehouse")` entry, just before `("hive.stats.autogather", "true")`.

This version appears to be accepted:

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("app1") \
    .enableHiveSupport() \
    .getOrCreate()

# Hive settings
settings = [
    ("hive.exec.dynamic.partition", "true"),
    ("hive.exec.dynamic.partition.mode", "nonstrict"),
    ("spark.sql.orc.filterPushdown", "true"),
    ("hive.msck.path.validation", "ignore"),
    ("spark.sql.caseSensitive", "true"),
    ("spark.speculation", "false"),
    ("hive.metastore.authorization.storage.checks", "false"),
    ("hive.metastore.client.connect.retry.delay", "5s"),
    ("hive.metastore.client.socket.timeout", "1800s"),
    ("hive.metastore.connect.retries", "12"),
    ("hive.metastore.execute.setugi", "false"),
    ("hive.metastore.failure.retries", "12"),
    ("hive.metastore.schema.verification", "false"),
    ("hive.metastore.schema.verification.record.version", "false"),
    ("hive.metastore.server.max.threads", "100000"),
    # Note: this key already appears above with the value "false"; the path
    # value here looks like a copy/paste slip (perhaps meant for a
    # warehouse-directory setting).
    ("hive.metastore.authorization.storage.checks", "/apps/hive/warehouse"),
    ("hive.stats.autogather", "true")
]

spark.sparkContext._conf.setAll(settings)
However, I have not tested it myself.
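
One caveat, though: `_conf` is a private attribute, and SparkConf values are generally read when the SparkContext starts, so calling `setAll()` on an already-created session may silently have no effect. A safer sketch is to feed the same pairs to the builder before the session exists:

from pyspark.sql import SparkSession

# Apply each (key, value) pair at build time, before the session (and the
# underlying Hive client) is initialised; 'settings' is the list above.
builder = SparkSession.builder \
    .appName("app1") \
    .enableHiveSupport()
for key, value in settings:
    builder = builder.config(key, value)
spark = builder.getOrCreate()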


HTH

Mich


LinkedIn  https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

 



Disclaimer: Use it at your own risk. Any and all responsibility for any loss, damage or destruction of data or any other property which may arise from relying on this email's technical content is explicitly disclaimed. The author will in no case be liable for any monetary damages arising from such loss, damage or destruction.

 



On Fri, 18 Dec 2020 at 18:53, Mich Talebzadeh <mich.talebzadeh@gmail.com> wrote:
I am afraid it is not supported for Spark SQL.


I tried it as below:

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("app1") \
    .enableHiveSupport() \
    .getOrCreate()

# Hive settings
settings = [
    ("hive.exec.dynamic.partition", "true"),
    ("hive.exec.dynamic.partition.mode", "nonstrict"),
    ("spark.sql.orc.filterPushdown", "true"),
    ("hive.msck.path.validation", "ignore"),
    ("spark.sql.caseSensitive", "true"),
    ("spark.speculation", "false"),
    ("hive.metastore.authorization.storage.checks", "false"),
    ("hive.metastore.client.connect.retry.delay", "5s"),
    ("hive.metastore.client.socket.timeout", "1800s"),
    ("hive.metastore.connect.retries", "12"),
    ("hive.metastore.execute.setugi", "false"),
    ("hive.metastore.failure.retries", "12"),
    ("hive.metastore.schema.verification", "false"),
    ("hive.metastore.schema.verification.record.version", "false"),
    ("hive.metastore.server.max.threads", "100000"),
    ("hive.metastore.authorization.storage.checks", "/apps/hive/warehouse")  # <- missing comma here
    ("hive.stats.autogather", "true")
]
spark.sparkContext._conf.setAll(settings)

and got this error:

    ("hive.stats.autogather", "true")
TypeError: 'tuple' object is not callable
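
The cause is the missing comma after the "/apps/hive/warehouse" entry: Python parses two adjacent parenthesised expressions as a call, so it tries to call the first tuple with the second as its arguments. A minimal reproduction:

# Without a separating comma, Python reads ( ... )( ... ) as a function call:
pair = ("hive.metastore.authorization.storage.checks", "/apps/hive/warehouse")
pair("hive.stats.autogather", "true")  # TypeError: 'tuple' object is not callable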

HTH





On Fri, 18 Dec 2020 at 06:00, 疯狂的哈丘 <kongtrio@sina.com> wrote:
`spark.sql.statistics.size.autoUpdate.enabled` only works for table-level stats updates. For partition stats, I can only update them with `ANALYZE TABLE tablename PARTITION(part) COMPUTE STATISTICS`. So is Spark SQL able to auto-update partition stats the way Hive does when hive.stats.autogather=true is set?
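
For readers who find this thread later, a minimal sketch of what the setting the question refers to does and does not cover, assuming an existing `spark` session:

# Runtime SQL conf: keeps the table-level sizeInBytes statistic current
# after writes to a table, but does not update per-partition stats.
spark.conf.set("spark.sql.statistics.size.autoUpdate.enabled", "true")

# Per-partition stats still require an explicit command such as
# (tablename and part are the placeholders from the question above):
# spark.sql("ANALYZE TABLE tablename PARTITION(part) COMPUTE STATISTICS")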