On Wed, Oct 30, 2013 at 6:02 AM, Ashish Rangole <arangole@gmail.com> wrote:

I am assuming you are using Spark 0.8.0 and not 0.6.0. Have you looked at the worker logs to see what is happening there?


Yes, you are exactly right. I have been using the latest version of Spark (0.8.0), not 0.6.0.

On that note, I don't think I ever configured a worker. I have a single stand-alone machine with two cores - the workers are supposed to be separate slave machines, right? (as described here: http://spark.incubator.apache.org/docs/latest/spark-standalone.html#cluster-launch-scripts ) But I only have the one machine (my Ubuntu box). Is it possible to run a worker on the same machine as the master?
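To make my question concrete, here is a minimal sketch of the kind of driver program I would like to run once a worker is up on the same box. I am assuming the master would be at the default URL spark://localhost:7077, and the object and app names are just placeholders:

    import org.apache.spark.SparkContext

    object LocalStandaloneTest {
      def main(args: Array[String]) {
        // Assumes a standalone master is already running on this machine
        // at the default spark://localhost:7077, with one worker registered.
        val sc = new SparkContext("spark://localhost:7077", "LocalStandaloneTest")

        // Trivial job just to confirm the worker actually executes tasks.
        val evens = sc.parallelize(1 to 1000).filter(_ % 2 == 0).count()
        println("Even numbers counted: " + evens)

        sc.stop()
      }
    }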

Regards,

Ramkumar Chokkalingam,
University of Washington.
LinkedIn