spark-user mailing list archives

From Shmuel Blitz <>
Subject Spark 2.1 table loaded from Hive Metastore has null values
Date Mon, 07 Aug 2017 10:05:16 GMT
(Also asked on SO.)
I am trying to migrate table definitions from one Hive metastore to another.

The source cluster has:

   - Spark 1.6.0
   - Hive 1.1.0 (cdh)
   - HDFS

The destination cluster is an EMR cluster with:

   - Spark 2.1.1
   - Hive 2.1.1
   - S3

To migrate the tables I did the following:
  1. Copy the data from HDFS to S3.
  2. Run SHOW CREATE TABLE my_table; on the source cluster.
  3. Modify the returned create query, changing LOCATION from the HDFS path
to the S3 path.
  4. Run the modified query on the destination cluster's Hive.
  5. Run SELECT * FROM my_table;. This returns 0 rows (expected, since the
partitions are not yet registered).
  6. Run MSCK REPAIR TABLE my_table;. This passes as expected and registers
the partitions in the metastore.
  7. Run SELECT * FROM my_table LIMIT 10;. 10 rows are returned with
correct values.
  8. On the destination cluster, from a Spark session configured to use the
Hive metastore, run the following code: spark.sql("SELECT * FROM
my_table LIMIT 10").show(). This returns null values!

The result returned from the Spark SQL query has all the correct columns
and the correct number of rows, but all the values are null.

To get Spark to correctly load the values, I can add the following
properties to the TBLPROPERTIES part of the create query:

'spark.sql.sources.schema.part.0'='<json-schema as seen by spark>',
'spark.sql.sources.schema.partCol.0'='<partition name 1>',
'spark.sql.sources.schema.partCol.1'='<partition name 2>'
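Rather than editing the create query, these properties can also be attached to the already-created table with ALTER TABLE ... SET TBLPROPERTIES. A sketch that builds such a statement, assuming the schema JSON is taken from Spark on the source cluster (e.g. spark.table("my_table").schema.json()); the table name, column, and partition names below are illustrative:

```python
import json

def build_set_tblproperties(table: str, schema_json: str,
                            partition_cols: list) -> str:
    """Build an ALTER TABLE statement that sets the Spark schema
    properties under which Spark 2.x resolves the table's columns."""
    props = [
        f"'spark.sql.sources.schema.numParts'='1'",
        f"'spark.sql.sources.schema.part.0'='{schema_json}'",
        f"'spark.sql.sources.schema.numPartCols'='{len(partition_cols)}'",
    ]
    for i, col in enumerate(partition_cols):
        props.append(f"'spark.sql.sources.schema.partCol.{i}'='{col}'")
    return f"ALTER TABLE {table} SET TBLPROPERTIES ({', '.join(props)})"

# Illustrative Spark-style schema JSON for a single BIGINT column.
schema = json.dumps({"type": "struct", "fields": [
    {"name": "id", "type": "long", "nullable": True, "metadata": {}}]})

print(build_set_tblproperties("my_table", schema, ["dt", "hour"]))
```

The generated statement would be run in Hive (or via spark.sql) on the destination cluster before querying the table from Spark.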

The other side of this problem is that on the source cluster, Spark reads
the table values without any problem and without the extra TBLPROPERTIES.

Why is this happening? How can it be fixed?
Shmuel Blitz
*Big Data Developer*
