spark-issues mailing list archives

From "Cheng Lian (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-15269) Creating external table leaves empty directory under warehouse directory
Date Tue, 24 May 2016 00:23:13 GMT

    [ https://issues.apache.org/jira/browse/SPARK-15269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297408 ]

Cheng Lian commented on SPARK-15269:
------------------------------------

Two facts make this issue pretty hard to fix cleanly:

# When persisting an external Spark SQL data source table to the Hive metastore, we can't
store the data location URI of the external table in the standard Hive
{{o.a.h.hive.ql.metadata.Table.dataLocation}} field, because Hive only accepts directory
paths as the location URI, while Spark SQL also allows reading from a single file. For this
reason, we have to store the actual data location as a SerDe property and ignore the
standard {{dataLocation}} field (see the sketch after this list).
# When creating a table, {{Hive.createTable}} always creates an empty table directory under
the default warehouse directory when {{o.a.h.hive.ql.metadata.Table.dataLocation}} is null.
However, for external tables, this directory is not deleted when the table is dropped.
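
To make the first fact concrete, here's a minimal, self-contained sketch. The {{HiveTable}}
type below is hypothetical (the real code goes through {{o.a.h.hive.ql.metadata.Table}} and
the metastore client), but the shape of the trick is the same: the real location lives in a
SerDe property while the standard field stays empty.

{code}
// Hypothetical types for illustration only; not Spark's or Hive's actual classes.
case class HiveTable(
    name: String,
    dataLocation: Option[String],          // standard Hive field: directories only
    serdeProperties: Map[String, String])  // free-form key/value metadata

def externalTable(name: String, path: String): HiveTable =
  HiveTable(
    name,
    dataLocation = None,                    // left unset: `path` may be a single file
    serdeProperties = Map("path" -> path))  // the actual data location lives here
{code}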

This leads to the following contradiction:

- We can't set {{Table.dataLocation}}, because it has to be a directory path, while we must
also allow file paths as data locations.
- We have to set {{Table.dataLocation}}, because otherwise Hive creates an unexpected empty
directory but doesn't remove it when the external table is dropped, causing the bug
described in this ticket.

Here are two options:

# Work around the contradiction by setting {{Table.dataLocation}} to a random location and
then deleting that directory manually after creating the external table
#- Pros: Fixes the bug and keeps backward compatibility
#- Cons: A pretty ad hoc, dirty fix
# As in Hive, only allow directory paths as data locations when creating Spark SQL external
data source tables in Spark 2.0
#- Pros: A cleaner fix
#- Cons: Breaks backward compatibility

I'm working on a fix using the first approach.
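
For reference, here's a rough sketch of that first approach, reusing the hypothetical
{{HiveTable}} type from the earlier sketch; {{createInMetastore}} stands in for the actual
metastore call, so all names here are illustrative rather than the real fix.

{code}
import java.nio.file.Files

def createExternalTableWorkaround(name: String, path: String)
    (createInMetastore: HiveTable => Unit): Unit = {
  // Point Hive at a throwaway scratch directory so it doesn't materialize
  // <warehouse>/<name> on its own.
  val scratch = Files.createTempDirectory(s"$name-").toFile
  val table = HiveTable(
    name,
    dataLocation = Some(scratch.getCanonicalPath),
    serdeProperties = Map("path" -> path))  // real data location, as before
  try createInMetastore(table)
  finally scratch.delete()  // remove the empty placeholder right after creation
}
{code}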


> Creating external table leaves empty directory under warehouse directory
> ------------------------------------------------------------------------
>
>                 Key: SPARK-15269
>                 URL: https://issues.apache.org/jira/browse/SPARK-15269
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL, Tests
>    Affects Versions: 2.0.0
>            Reporter: Cheng Lian
>            Assignee: Xin Wu
>
> Adding the following test case in {{HiveDDLSuite}} may reproduce this issue:
> {code}
>   test("foo") {
>     withTempPath { dir =>
>       val path = dir.getCanonicalPath
>       spark.range(1).write.json(path)
>       withTable("ddl_test1") {
>         sql(s"CREATE TABLE ddl_test1 USING json OPTIONS (PATH '$path')")
>         sql("DROP TABLE ddl_test1")
>         sql(s"CREATE TABLE ddl_test1 USING json AS SELECT 1 AS a")
>       }
>     }
>   }
> {code}
> Note that the first {{CREATE TABLE}} command creates an external table, since data source
> tables are always external when the {{PATH}} option is specified.
> When executing the second {{CREATE TABLE}} command, which creates a managed table with
> the same name, it fails because an unexpected directory with the same name as the table
> already exists in the warehouse directory:
> {noformat}
> [info] - foo *** FAILED *** (7 seconds, 649 milliseconds)
> [info]   org.apache.spark.sql.AnalysisException: path file:/Users/lian/local/src/spark/workspace-b/target/tmp/warehouse-205e25e7-8918-4615-acf1-10e06af7c35c/ddl_test1 already exists.;
> [info]   at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation.run(InsertIntoHadoopFsRelation.scala:88)
> [info]   at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:57)
> [info]   at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:55)
> [info]   at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:69)
> [info]   at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
> [info]   at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
> [info]   at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
> [info]   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
> [info]   at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
> [info]   at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
> [info]   at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:85)
> [info]   at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:85)
> [info]   at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:417)
> [info]   at org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand.run(createDataSourceTables.scala:231)
> [info]   at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:57)
> [info]   at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:55)
> [info]   at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:69)
> [info]   at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
> [info]   at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
> [info]   at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
> [info]   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
> [info]   at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
> [info]   at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
> [info]   at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:85)
> [info]   at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:85)
> [info]   at org.apache.spark.sql.Dataset.<init>(Dataset.scala:186)
> [info]   at org.apache.spark.sql.Dataset.<init>(Dataset.scala:167)
> [info]   at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:62)
> [info]   at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:541)
> [info]   at org.apache.spark.sql.test.SQLTestUtils$$anonfun$sql$1.apply(SQLTestUtils.scala:59)
> [info]   at org.apache.spark.sql.test.SQLTestUtils$$anonfun$sql$1.apply(SQLTestUtils.scala:59)
> [info]   at org.apache.spark.sql.hive.execution.HiveDDLSuite$$anonfun$23$$anonfun$apply$mcV$sp$34$$anonfun$apply$6.apply$mcV$sp(HiveDDLSuite.scala:597)
> [info]   at org.apache.spark.sql.test.SQLTestUtils$class.withTable(SQLTestUtils.scala:166)
> [info]   at org.apache.spark.sql.hive.execution.HiveDDLSuite.withTable(HiveDDLSuite.scala:32)
> [info]   at org.apache.spark.sql.hive.execution.HiveDDLSuite$$anonfun$23$$anonfun$apply$mcV$sp$34.apply(HiveDDLSuite.scala:594)
> [info]   at org.apache.spark.sql.hive.execution.HiveDDLSuite$$anonfun$23$$anonfun$apply$mcV$sp$34.apply(HiveDDLSuite.scala:590)
> [info]   at org.apache.spark.sql.test.SQLTestUtils$class.withTempPath(SQLTestUtils.scala:114)
> [info]   at org.apache.spark.sql.hive.execution.HiveDDLSuite.withTempPath(HiveDDLSuite.scala:32)
> [info]   at org.apache.spark.sql.hive.execution.HiveDDLSuite$$anonfun$23.apply$mcV$sp(HiveDDLSuite.scala:590)
> [info]   at org.apache.spark.sql.hive.execution.HiveDDLSuite$$anonfun$23.apply(HiveDDLSuite.scala:590)
> [info]   at org.apache.spark.sql.hive.execution.HiveDDLSuite$$anonfun$23.apply(HiveDDLSuite.scala:590)
> [info]   at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
> [info]   at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
> [info]   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
> [info]   at org.scalatest.Transformer.apply(Transformer.scala:22)
> [info]   at org.scalatest.Transformer.apply(Transformer.scala:20)
> [info]   at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
> [info]   at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:57)
> [info]   at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
> [info]   at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
> [info]   at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
> [info]   at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
> [info]   at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
> [info]   at org.apache.spark.sql.hive.execution.HiveDDLSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(HiveDDLSuite.scala:32)
> [info]   at org.scalatest.BeforeAndAfterEach$class.runTest(BeforeAndAfterEach.scala:255)
> [info]   at org.apache.spark.sql.hive.execution.HiveDDLSuite.runTest(HiveDDLSuite.scala:32)
> [info]   at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
> [info]   at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
> [info]   at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
> [info]   at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
> [info]   at scala.collection.immutable.List.foreach(List.scala:381)
> [info]   at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
> [info]   at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
> [info]   at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
> [info]   at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
> [info]   at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
> [info]   at org.scalatest.Suite$class.run(Suite.scala:1424)
> [info]   at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
> [info]   at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
> [info]   at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
> [info]   at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
> [info]   at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
> [info]   at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:29)
> [info]   at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
> [info]   at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
> [info]   at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:29)
> [info]   at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:357)
> [info]   at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:502)
> [info]   at sbt.ForkMain$Run$2.call(ForkMain.java:296)
> [info]   at sbt.ForkMain$Run$2.call(ForkMain.java:286)
> [info]   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> [info]   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> [info]   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> [info]   at java.lang.Thread.run(Thread.java:745)
> {noformat}


