spark-user mailing list archives

From maheshtwc <>
Subject Re: Spark streaming RDDs to Parquet records
Date Fri, 20 Jun 2014 03:33:20 GMT
Unfortunately, I couldn’t figure it out without involving Avro.

Here is something that may be useful since it uses Avro generic records (so no case classes
needed) and transforms to Parquet.
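The reference itself appears to have been stripped by the archive. As an illustration of that pattern (the schema, field names, and output path below are made up, not from the original post), writing Avro generic records to Parquet with parquet-avro's `AvroParquetWriter` might look roughly like:

```scala
// Sketch: write Avro GenericRecords (no case classes) straight to Parquet.
// Schema, field names, and path are illustrative placeholders.
import org.apache.avro.Schema
import org.apache.avro.generic.{GenericData, GenericRecord}
import org.apache.hadoop.fs.Path
import parquet.avro.AvroParquetWriter

val schemaJson = """
  {"type": "record", "name": "Event",
   "fields": [
     {"name": "id",   "type": "long"},
     {"name": "body", "type": "string"}
   ]}
"""
val schema = new Schema.Parser().parse(schemaJson)

// AvroParquetWriter derives the Parquet schema from the Avro schema.
val writer = new AvroParquetWriter[GenericRecord](
  new Path("hdfs:///tmp/events.parquet"), schema)

val record = new GenericData.Record(schema)
record.put("id", 1L)
record.put("body", "hello")
writer.write(record)
writer.close()
```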


From: "Anita Tailor [via Apache Spark User List]"
Date: Thursday, June 19, 2014 at 12:53 PM
To: Mahesh Padmanabhan
Subject: Re: Spark streaming RDDs to Parquet records

I have a similar case where I have an RDD[(List[Any], List[Long])] and want to save it as Parquet.
My understanding is that only RDDs of case classes can be converted to SchemaRDDs. So is there
any way I can save this RDD as a Parquet file without using Avro?

Thanks in advance

On 18 June 2014 05:03, Michael Armbrust <[hidden email]> wrote:
If you convert the data to a SchemaRDD you can save it as Parquet:
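With the Spark 1.0 API, that could look something like the sketch below (the `Event` case class and the paths are placeholders, not from the original message):

```scala
// Sketch: RDD of a case class -> SchemaRDD -> Parquet (Spark 1.0 API).
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext

// Placeholder record type; the SchemaRDD conversion needs a case class.
case class Event(id: Long, body: String)

val sc = new SparkContext("local", "parquet-demo")
val sqlContext = new SQLContext(sc)
import sqlContext.createSchemaRDD  // implicit: RDD[Event] => SchemaRDD

val events = sc.parallelize(Seq(Event(1L, "a"), Event(2L, "b")))
events.saveAsParquetFile("hdfs:///tmp/events.parquet")  // writes Parquet files
```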

On Tue, Jun 17, 2014 at 11:47 PM, Padmanabhan, Mahesh (contractor) <[hidden email]> wrote:
Thanks Krishna. Seems like you have to use Avro and then convert that to Parquet. I was hoping
to directly convert RDDs to Parquet files. I’ll look into this some more.


From: Krishna Sankar <[hidden email]>
Reply-To: [hidden email]
Date: Tuesday, June 17, 2014 at 2:41 PM
To: [hidden email]
Subject: Re: Spark streaming RDDs to Parquet records


 *   One direction could be: create a Parquet schema, then convert and save the records to HDFS.
 *   This might help
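That direction, declaring a Parquet schema directly with no Avro involved, might be sketched with parquet-mr's example Group API as follows (schema, path, and writer settings are illustrative assumptions):

```scala
// Sketch: define a Parquet MessageType directly and write Groups to it.
// Everything concrete here (schema, path) is a placeholder.
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import parquet.example.data.Group
import parquet.example.data.simple.SimpleGroupFactory
import parquet.hadoop.ParquetWriter
import parquet.hadoop.example.GroupWriteSupport
import parquet.hadoop.metadata.CompressionCodecName
import parquet.schema.MessageTypeParser

val schema = MessageTypeParser.parseMessageType(
  "message event { required int64 id; required binary body (UTF8); }")

// GroupWriteSupport picks the schema up from the Configuration,
// so the Configuration must be passed through to the writer.
val conf = new Configuration()
GroupWriteSupport.setSchema(schema, conf)

val writer = new ParquetWriter[Group](
  new Path("hdfs:///tmp/events.parquet"), new GroupWriteSupport(),
  CompressionCodecName.SNAPPY,
  ParquetWriter.DEFAULT_BLOCK_SIZE, ParquetWriter.DEFAULT_PAGE_SIZE,
  ParquetWriter.DEFAULT_PAGE_SIZE, true, false, conf)

val factory = new SimpleGroupFactory(schema)
writer.write(factory.newGroup().append("id", 1L).append("body", "hello"))
writer.close()
```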


On Tue, Jun 17, 2014 at 12:52 PM, maheshtwc <[hidden email]> wrote:

Is there an easy way to convert RDDs within a DStream into Parquet records?
Here is some incomplete pseudo code:

// Create streaming context
val ssc = new StreamingContext(...)

// Obtain a DStream of events
val ds = KafkaUtils.createStream(...)

// Get Spark context to get to the SQL context
val sc = ds.context.sparkContext

val sqlContext = new org.apache.spark.sql.SQLContext(sc)

// For each RDD
ds.foreachRDD((rdd: RDD[Array[Byte]]) => {

    // What do I do next?
})

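Following the SchemaRDD suggestion from Michael's reply earlier in the thread, one possible sketch of the missing body (where `Event` and `parse` are placeholders for whatever the Kafka byte payload actually contains):

```scala
// Sketch: parse each byte array into a case class, let the implicit
// turn the RDD into a SchemaRDD, and write one Parquet directory per batch.
case class Event(id: Long, body: String)  // placeholder record type
def parse(bytes: Array[Byte]): Event = Event(0L, new String(bytes, "UTF-8"))

import sqlContext.createSchemaRDD  // implicit: RDD[Event] => SchemaRDD

ds.foreachRDD((rdd: RDD[Array[Byte]]) => {
  val events = rdd.map(parse)
  // One output directory per batch; a real job would pick stable paths.
  events.saveAsParquetFile(s"hdfs:///tmp/events-${System.currentTimeMillis}")
})
```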


