Don't worry about the implicit params; those are filled in by the compiler. All you need to do is provide a key type, a value type, and a path. Look at how sequenceFile gets used in this test: core/src/test/scala/spark/FileSuite.scala

In particular, the K and V in Spark's sequenceFile can be any Writable class, *or* primitive types like Int and Double, or String. For the latter, Spark automatically uses the corresponding Hadoop Writable (e.g. IntWritable, DoubleWritable, Text).
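Here's a minimal sketch of what that looks like end to end, writing and then reading a sequence file of (Int, String) pairs. It assumes a pre-0.8 `spark` package layout (matching the FileSuite test above; in Spark 0.8+ the package is `org.apache.spark`), a local master, and a placeholder output path:

```scala
import spark.SparkContext
import spark.SparkContext._  // brings in the implicit conversions and converters

object SequenceFileExample {
  def main(args: Array[String]) {
    val sc = new SparkContext("local", "SequenceFileExample")

    // Write a sequence file of (Int, String) pairs; Spark picks
    // IntWritable/Text for these types automatically.
    val data = sc.parallelize(1 to 100).map(i => (i, "value " + i))
    data.saveAsSequenceFile("/tmp/seq-output")  // placeholder path

    // Read it back: just supply the key and value types and the path.
    // The implicit ClassManifests and WritableConverters are filled
    // in by the compiler.
    val loaded = sc.sequenceFile[Int, String]("/tmp/seq-output")
    println(loaded.count())

    sc.stop()
  }
}
```

The same call works with Writable subclasses directly, e.g. `sc.sequenceFile[IntWritable, Text](path)`, if you want the raw Hadoop types instead.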


On Oct 17, 2013, at 5:35 PM, Shay Seng <> wrote:

Hey gurus,

I'm having a little trouble deciphering the docs for 

sequenceFile[K, V](path: String, minSplits: Int = defaultMinSplits)(implicit km: ClassManifest[K], vm: ClassManifest[V], kcf: () ⇒ WritableConverter[K], vcf: () ⇒ WritableConverter[V]): RDD[(K, V)]

Does anyone have a short example snippet?