spark-issues mailing list archives

From "Andrew Ash (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (SPARK-1757) Support saving null primitives with .saveAsParquetFile()
Date Tue, 13 May 2014 03:50:14 GMT

     [ https://issues.apache.org/jira/browse/SPARK-1757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Ash resolved SPARK-1757.
-------------------------------

       Resolution: Fixed
    Fix Version/s: 1.0.0

https://github.com/apache/spark/pull/690

> Support saving null primitives with .saveAsParquetFile()
> --------------------------------------------------------
>
>                 Key: SPARK-1757
>                 URL: https://issues.apache.org/jira/browse/SPARK-1757
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.0.0
>            Reporter: Andrew Ash
>             Fix For: 1.0.0
>
>
> See stack trace below:
> {noformat}
> 14/05/07 21:45:51 INFO analysis.Analyzer: Max iterations (2) reached for batch MultiInstanceRelations
> 14/05/07 21:45:51 INFO analysis.Analyzer: Max iterations (2) reached for batch CaseInsensitiveAttributeReferences
> 14/05/07 21:45:51 INFO optimizer.Optimizer$: Max iterations (2) reached for batch ConstantFolding
> 14/05/07 21:45:51 INFO optimizer.Optimizer$: Max iterations (2) reached for batch Filter Pushdown
> java.lang.RuntimeException: Unsupported datatype StructType(List())
>         at scala.sys.package$.error(package.scala:27)
>         at org.apache.spark.sql.parquet.ParquetTypesConverter$.fromDataType(ParquetRelation.scala:201)
>         at org.apache.spark.sql.parquet.ParquetTypesConverter$$anonfun$1.apply(ParquetRelation.scala:235)
>         at org.apache.spark.sql.parquet.ParquetTypesConverter$$anonfun$1.apply(ParquetRelation.scala:235)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>         at scala.collection.immutable.List.foreach(List.scala:318)
>         at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>         at scala.collection.AbstractTraversable.map(Traversable.scala:105)
>         at org.apache.spark.sql.parquet.ParquetTypesConverter$.convertFromAttributes(ParquetRelation.scala:234)
>         at org.apache.spark.sql.parquet.ParquetTypesConverter$.writeMetaData(ParquetRelation.scala:267)
>         at org.apache.spark.sql.parquet.ParquetRelation$.createEmpty(ParquetRelation.scala:143)
>         at org.apache.spark.sql.parquet.ParquetRelation$.create(ParquetRelation.scala:122)
>         at org.apache.spark.sql.execution.SparkStrategies$ParquetOperations$.apply(SparkStrategies.scala:139)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
>         at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
>         at org.apache.spark.sql.catalyst.planning.QueryPlanner.apply(QueryPlanner.scala:59)
>         at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan$lzycompute(SQLContext.scala:264)
>         at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan(SQLContext.scala:264)
>         at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan$lzycompute(SQLContext.scala:265)
>         at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan(SQLContext.scala:265)
>         at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:268)
>         at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:268)
>         at org.apache.spark.sql.SchemaRDDLike$class.saveAsParquetFile(SchemaRDDLike.scala:66)
>         at org.apache.spark.sql.SchemaRDD.saveAsParquetFile(SchemaRDD.scala:96)
> {noformat}
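
For context, a minimal reproduction along these lines triggers the trace above. This is a hypothetical sketch, not code from the report: the record type, field name, and output path are assumed, and it uses the Spark 1.0-era SchemaRDD API against a local master.

```scala
// Hypothetical reproduction sketch for SPARK-1757 (names and data assumed).
// A row whose only field is null gives the Parquet schema converter a
// datatype it cannot handle, producing the RuntimeException in the trace.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object NullPrimitiveRepro {
  // Boxed java.lang.Integer rather than Int so the field can hold null.
  case class Record(value: java.lang.Integer)

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("SPARK-1757 repro").setMaster("local"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.createSchemaRDD // implicit RDD -> SchemaRDD conversion

    val rdd = sc.parallelize(Seq(Record(null)))
    // Before the fix in PR #690 this threw:
    //   java.lang.RuntimeException: Unsupported datatype ...
    rdd.saveAsParquetFile("/tmp/spark-1757-repro.parquet")
    sc.stop()
  }
}
```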



--
This message was sent by Atlassian JIRA
(v6.2#6252)
