Hi,

How would I run a given function in Spark over a single input object?
Would I first add the input to the file system and then somehow invoke the Spark function on just that input? Or should I instead bend the Spark Streaming API to this purpose?

Suppose I'd like to run a computation that normally operates over a large dataset on just a single newly added datum. I'm hesitant to adapt my code to Spark without knowing the limits of this scenario.
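
For concreteness, here is a rough sketch of the kind of thing I mean (in Scala; processDataset is just a placeholder for my actual computation, and I'm not sure wrapping one element with sc.parallelize is the intended approach):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

object SingleInputExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("single-input").setMaster("local[*]"))

    // Placeholder for the computation that normally runs over a large dataset.
    def processDataset(data: RDD[String]): Array[String] =
      data.map(_.toUpperCase).collect()

    // Wrap the single new datum in a one-element RDD and reuse the same function.
    val single = sc.parallelize(Seq("the one new datum"))
    processDataset(single).foreach(println)

    sc.stop()
  }
}

Is that roughly the right pattern, or is there a better-suited API for single-item invocations?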

Many thanks!
Matan