spark-user mailing list archives

From Jerry Lam <>
Subject Re: Spark SQL: Storing AVRO Schema in Parquet
Date Fri, 09 Jan 2015 17:02:35 GMT
Hi Raghavendra,

This makes a lot of sense. Thank you.
The problem is that I'm using Spark SQL right now to generate the parquet files.

What I think I need to do is to use Spark directly: transform all rows
from the SchemaRDD into avro objects and supply them to saveAsNewAPIHadoopFile
(from PairRDD). From there, I can supply the avro schema to parquet via
the output format.
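The route described above might be sketched roughly as follows, assuming the parquet-mr avro bindings are on the classpath; the output path and the `toAvro` conversion function are illustrative, not real APIs:

```scala
import org.apache.avro.generic.GenericRecord
import org.apache.hadoop.mapreduce.Job
import parquet.avro.AvroParquetOutputFormat

// Assumes `sc` is an active SparkContext, `schemaRDD` is the Spark SQL result,
// `avroSchema` is the avro Schema to embed, and `toAvro` converts a Row into
// a GenericRecord -- all placeholder names for this sketch.
val job = new Job(sc.hadoopConfiguration)
AvroParquetOutputFormat.setSchema(job, avroSchema)  // schema goes into the job config

schemaRDD
  .map(row => (null: Void, toAvro(row)))            // PairRDD of (Void, GenericRecord)
  .saveAsNewAPIHadoopFile(
    "hdfs:///path/to/output",
    classOf[Void],
    classOf[GenericRecord],
    classOf[AvroParquetOutputFormat],
    job.getConfiguration)
```

The row-to-avro map is the extra step being discussed; the output format's write support is what carries the avro schema into the parquet footer.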

It is not difficult, just not as simple as I would like: SchemaRDD can
already write to a Parquet file using its own schema, so if I could supply
the avro schema to parquet as well, it would save me the transformation step
to avro objects.

I'm thinking of overriding the saveAsParquetFile method to allow me to
persist the avro schema inside the parquet metadata. Is this possible at all?
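To confirm the schema actually landed in the file, one could read the footer metadata back with parquet-mr. This is a sketch: the file path is illustrative, and the exact metadata key (e.g. "parquet.avro.schema") may vary across parquet-mr versions:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import parquet.hadoop.ParquetFileReader

// Read the footer of one part file written by the avro write support.
// The path below is a placeholder for an actual output file on HDFS.
val footer = ParquetFileReader.readFooter(
  new Configuration(),
  new Path("hdfs:///path/to/output/part-r-00000.parquet"))

// The avro write support stores the schema as a key/value pair in the footer;
// "parquet.avro.schema" is the key used by recent parquet-mr releases.
val avroSchemaJson = footer.getFileMetaData
  .getKeyValueMetaData
  .get("parquet.avro.schema")
```

If that key comes back non-null, any avro-aware parquet reader can reconstruct the records without a separately distributed schema.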

Best Regards,

Jerry

On Fri, Jan 9, 2015 at 2:05 AM, Raghavendra Pandey <> wrote:

> I came across this; you can take
> a look.
> On Fri Jan 09 2015 at 12:08:49 PM Raghavendra Pandey <> wrote:
>> I have a similar kind of requirement, where I want to push avro data
>> into parquet, but it seems you have to do it on your own. There is the
>> parquet-mr project that uses hadoop to do so. I am trying to write a
>> spark job to do a similar kind of thing.
>> On Fri, Jan 9, 2015 at 3:20 AM, Jerry Lam <> wrote:
>>> Hi spark users,
>>> I'm using Spark SQL to create parquet files on HDFS. I would like to
>>> store the avro schema in the parquet metadata so that non-Spark-SQL
>>> applications can unmarshal the data with the avro parquet reader, without
>>> needing the avro schema separately. Currently, schemaRDD.saveAsParquetFile
>>> does not allow this. Is there another API that lets me do it?
>>> Best Regards,
>>> Jerry
