spark-user mailing list archives

From Matei Zaharia <>
Subject Re: help on SparkContext.sequenceFile()
Date Fri, 18 Oct 2013 16:37:19 GMT
Don't worry about the implicit params; those are filled in by the compiler. All you need to
do is provide a key type, a value type, and a path, and look at how sequenceFile gets used.

In particular, K and V in Spark can be any Writable class, *or* primitive types like Int,
Double, etc., or String. For the latter, Spark automatically uses the corresponding Hadoop
Writable (e.g. IntWritable, DoubleWritable, Text).
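For instance, a minimal sketch of reading a SequenceFile this way (the master, app name, and path are placeholders for illustration; it assumes the file holds IntWritable/Text records, e.g. written by a Hadoop job):

```scala
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._  // brings the implicit WritableConverters into scope

// "local" master and the app name are hypothetical values for this sketch.
val sc = new SparkContext("local", "SequenceFileExample")

// Hypothetical path; because the file stores (IntWritable, Text) records,
// Spark lets us read it directly as RDD[(Int, String)].
val pairs = sc.sequenceFile[Int, String]("hdfs://namenode/data/events.seq")

// The ClassManifest and WritableConverter implicits from the signature
// are supplied by the compiler; we never pass them explicitly.
pairs.take(5).foreach(println)
```

Note that only the key type, value type, and path are written out; the second (implicit) parameter list in the signature is resolved automatically once `SparkContext._` is imported.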


On Oct 17, 2013, at 5:35 PM, Shay Seng <> wrote:

> Hey gurus,
> I'm having a little trouble deciphering the docs for
> sequenceFile[K, V](path: String, minSplits: Int = defaultMinSplits)(implicit km: ClassManifest[K],
> vm: ClassManifest[V], kcf: () ⇒ WritableConverter[K], vcf: () ⇒ WritableConverter[V]): RDD[(K, V)]
> Does anyone have a short example snippet?
> tks
> shay
