hive-issues mailing list archives

From "Peter Vary (JIRA)" <>
Subject [jira] [Commented] (HIVE-6980) Drop table by using direct sql
Date Fri, 13 Apr 2018 15:50:00 GMT


Peter Vary commented on HIVE-6980:

[~selinazh]: do you plan to work on this? We have customers hitting problems with long drop
times. If you do not have time, I would be happy to work on this. Any pointers on how you
were able to convince DataNucleus to leave the deletes to the DB FK constraints would be
very helpful. I tried setting {{datanucleus.deletionPolicy}} to {{DataNucleus}}, but the
log still shows that a separate DN query is issued to drop the child data, even though the
FK is present in the database and working.
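For reference, the deletion-policy experiment described above would be configured roughly like this in hive-site.xml ({{datanucleus.deletionPolicy}} is a real DataNucleus property; whether Hive's metastore actually honors it for drops is exactly what is in question here):

```xml
<!-- hive-site.xml: ask DataNucleus to rely on the database's FK
     constraints for cascading deletes, instead of issuing its own
     per-child delete queries -->
<property>
  <name>datanucleus.deletionPolicy</name>
  <!-- "JDO2" (the default) deletes dependent objects one by one;
       "DataNucleus" leaves cascading to the schema's FK constraints -->
  <value>DataNucleus</value>
</property>
```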

If we are not able to move forward with the cascade solution, then I could provide a direct
SQL solution instead, which is more straightforward.
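A direct SQL drop along the lines suggested here would delete the dependent metastore rows explicitly before removing the partition rows themselves. A rough sketch (table and column names follow the standard metastore schema, but the exact set of child tables shown is an assumption; real code would also have to clean up the storage-descriptor side, e.g. SDS and SERDES):

```sql
-- Hypothetical direct-SQL partition drop for one table (TBL_ID = ?):
-- delete child rows first, then the PARTITIONS rows themselves.
DELETE FROM PARTITION_PARAMS
 WHERE PART_ID IN (SELECT PART_ID FROM PARTITIONS WHERE TBL_ID = ?);
DELETE FROM PARTITION_KEY_VALS
 WHERE PART_ID IN (SELECT PART_ID FROM PARTITIONS WHERE TBL_ID = ?);
DELETE FROM PARTITIONS WHERE TBL_ID = ?;
```

The per-child-table statements are exactly the maintenance burden the description below notes: every new dependent table added in a future release needs another DELETE here.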



> Drop table by using direct sql
> ------------------------------
>                 Key: HIVE-6980
>                 URL:
>             Project: Hive
>          Issue Type: Improvement
>          Components: Metastore
>    Affects Versions: 0.12.0
>            Reporter: Selina Zhang
>            Assignee: Selina Zhang
>            Priority: Major
> Dropping a table which has lots of partitions is slow. Even after applying the patch for
HIVE-6265, the drop table still takes hours (100K+ partitions).
> The fixes come with two parts:
> 1. use directSQL to query the partitions' protect mode;
> the current implementation needs to transfer the Partition objects to the client and check
the protect mode for each partition. I'd like to move this part of the logic to the metastore. The
check will be done by direct sql (if direct sql is disabled, the same logic is executed via the
ORM path).
> 2. use directSQL to drop partitions for the table;
> there may be two solutions here:
> 1. add "ON DELETE CASCADE" to the schema. In this way we only need to delete entries from the
partitions table using direct sql. May need to change datanucleus.deletionPolicy = DataNucleus.

> 2. clean up the dependent tables by issuing DELETE statements. This also requires direct sql
to be turned on.
> Both of the above solutions should be able to fix the problem. The DELETE CASCADE approach has to
change schemas and prepare upgrade scripts. The second solution adds maintenance cost if
new tables are added in future releases.
> Please advise.
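The first option would amount to redeclaring the child-table foreign keys with a cascade action. A sketch against the standard metastore layout (the constraint name and the single child table shown are illustrative; every table referencing PARTITIONS would need the same treatment):

```sql
-- Illustrative only: redeclare a metastore child table's FK so the
-- database cascades the delete when the parent PARTITIONS row goes away.
ALTER TABLE PARTITION_PARAMS
  DROP CONSTRAINT PARTITION_PARAMS_FK1;
ALTER TABLE PARTITION_PARAMS
  ADD CONSTRAINT PARTITION_PARAMS_FK1
  FOREIGN KEY (PART_ID) REFERENCES PARTITIONS (PART_ID)
  ON DELETE CASCADE;

-- With such constraints in place, dropping all partitions of a table
-- shrinks to a single statement:
DELETE FROM PARTITIONS WHERE TBL_ID = ?;
```

This is what makes the schema-change route attractive despite the upgrade-script cost noted above.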

This message was sent by Atlassian JIRA
