spark-issues mailing list archives

From "Debdut Mukherjee (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (SPARK-28364) Unable to read complete data from an external hive table stored as ORC that points to a managed table's data files which is getting stored in sub-directories.
Date Fri, 12 Jul 2019 08:23:00 GMT

     [ https://issues.apache.org/jira/browse/SPARK-28364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Debdut Mukherjee updated SPARK-28364:
-------------------------------------
    Description: 
Unable to read complete data from an external Hive table stored as ORC that points to a managed
table's data files (ORC), which are stored in sub-directories.

The count also does not match unless the path is suffixed with a wildcard.

*Example: this works*

"adl://<adls_name>.azuredatalakestore.net/clusters/<cluster path>/hive/warehouse/db2.db/tbl1/*"  

However, the above creates a blank directory named ' * ' in ADLS (Azure Data Lake Store).

 

The table definition below does not work: a SELECT COUNT(*) executed against this external
table returns only a partial count.

CREATE EXTERNAL TABLE IF NOT EXISTS db1.tbl1 (
  Col_1 string,
  Col_2 string
)
STORED AS ORC
LOCATION "adl://<adls_name>.azuredatalakestore.net/clusters/<cluster path>/hive/warehouse/db2.db/tbl1/"
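The partial count is consistent with the reader listing only the top level of the table
directory, while the managed table's writers place the ORC files one level down. A toy
sketch of that difference (plain Python, with made-up file and directory names; no Spark
involved):

```python
# Illustrative only: why a non-recursive listing undercounts when data files
# live in sub-directories. File/directory names here are invented for the demo.
import os
import tempfile

with tempfile.TemporaryDirectory() as root:
    # Simulate a warehouse layout where the ORC files sit one level down.
    sub = os.path.join(root, "subdir_0")
    os.makedirs(sub)
    open(os.path.join(sub, "part-00000.orc"), "w").close()
    open(os.path.join(sub, "part-00001.orc"), "w").close()

    # Shallow listing: what a reader that does not recurse effectively sees.
    shallow = [f for f in os.listdir(root)
               if os.path.isfile(os.path.join(root, f))]

    # Recursive listing: what the recursive-input settings are meant to enable.
    deep = [f for _, _, files in os.walk(root) for f in files]

    print(len(shallow), len(deep))  # prints: 0 2
```

The `/*` suffix on the LOCATION works for the same reason: it makes the glob expand one
level into the sub-directories, at the cost of the stray ' * ' directory noted above.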

 

I searched for a resolution on Google, and even adding the lines below in the Databricks
notebook did not solve the problem.

sqlContext.setConf("mapred.input.dir.recursive", "true")
sqlContext.setConf("mapreduce.input.fileinputformat.input.dir.recursive", "true")
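A commonly suggested workaround for Spark 2.4 (a hedged sketch, untested here; it assumes
an existing PySpark `SparkSession` named `spark`, as in a Databricks notebook) is to
disable Spark's native ORC reader for metastore tables, so the scan goes through the Hive
SerDe path, which honors the recursive-directory settings:

```python
# Hedged sketch, not a confirmed fix: fall back to the Hive SerDe read path,
# since Spark's native ORC reader may not recurse into sub-directories.
# `spark` is assumed to be an existing SparkSession (e.g. in Databricks).
spark.conf.set("spark.sql.hive.convertMetastoreOrc", "false")
spark.sql("SET hive.mapred.supports.subdirectories=true")
spark.sql("SET mapred.input.dir.recursive=true")
spark.sql("SELECT COUNT(*) FROM db1.tbl1").show()
```

If the recursive-input settings were being ignored before, that is consistent with the
native reader bypassing them; forcing the Hive path is what makes them take effect.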

 

> Unable to read complete data from an external hive table stored as ORC that points to
a managed table's data files which is getting stored in sub-directories.
> --------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-28364
>                 URL: https://issues.apache.org/jira/browse/SPARK-28364
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.4.0
>            Reporter: Debdut Mukherjee
>            Priority: Major
>         Attachments: pic.PNG
>
>



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

