spark-user mailing list archives

From "Venkat, Ankam" <>
Subject Python Logistic Regression error
Date Sun, 23 Nov 2014 19:38:04 GMT
Can you please suggest sample data for running the Python logistic regression example?

I am trying to use a sample data file at

I am running this on CDH5.2 Quickstart VM.

[cloudera@quickstart mllib]$ spark-submit lr.txt 3

But I am getting the error below.

14/11/23 11:23:55 ERROR TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job
14/11/23 11:23:55 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
14/11/23 11:23:55 INFO TaskSchedulerImpl: Cancelling stage 0
14/11/23 11:23:55 INFO DAGScheduler: Failed to run runJob at PythonRDD.scala:296
Traceback (most recent call last):
  File "/usr/lib/spark/examples/lib/mllib/", line 50, in <module>
    model = LogisticRegressionWithSGD.train(points, iterations)
  File "/usr/lib/spark/python/pyspark/mllib/", line 110, in train
  File "/usr/lib/spark/python/pyspark/mllib/", line 430, in _regression_train_wrapper
    initial_weights = _get_initial_weights(initial_weights, data)
  File "/usr/lib/spark/python/pyspark/mllib/", line 415, in _get_initial_weights
    initial_weights = _convert_vector(data.first().features)
  File "/usr/lib/spark/python/pyspark/", line 1127, in first
    rs = self.take(1)
  File "/usr/lib/spark/python/pyspark/", line 1109, in take
    res = self.context.runJob(self, takeUpToNumLeft, p, True)
  File "/usr/lib/spark/python/pyspark/", line 770, in runJob
    it = self._jvm.PythonRDD.runJob(, mappedRDD._jrdd, javaPartitions, allowLocal)
  File "/usr/lib/spark/python/lib/", line 538, in
  File "/usr/lib/spark/python/lib/", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed
4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, org.apache.spark.api.python.PythonException:
Traceback (most recent call last):
  File "/usr/lib/spark/python/pyspark/", line 79, in main
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/usr/lib/spark/python/pyspark/", line 196, in dump_stream
    self.serializer.dump_stream(self._batched(iterator), stream)
  File "/usr/lib/spark/python/pyspark/", line 127, in dump_stream
    for obj in iterator:
  File "/usr/lib/spark/python/pyspark/", line 185, in _batched
    for item in iterator:
  File "/usr/lib/spark/python/pyspark/", line 1105, in takeUpToNumLeft
    yield next(iterator)
  File "/usr/lib/spark/examples/lib/mllib/", line 37, in parsePoint
    values = [float(s) for s in line.split(' ')]
ValueError: invalid literal for float(): 1:0.4551273600657362
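For reference, the ValueError shows that the tokens in the data file look like `index:value` (LIBSVM sparse format), while the example's parsePoint splits each line on spaces and calls float() on every token directly, so `1:0.4551273600657362` cannot be converted. A minimal sketch of a parser that handles such lines, assuming the format is `label index:value index:value ...` (this is a hypothetical helper for illustration, not part of Spark's API):

```python
def parse_libsvm_line(line):
    """Parse one LIBSVM-style line: 'label index:value index:value ...'.

    Returns (label, features) where features maps feature index -> value.
    Hypothetical helper for illustration only.
    """
    parts = line.strip().split(' ')
    label = float(parts[0])          # first token is the label
    features = {}
    for token in parts[1:]:          # remaining tokens are index:value pairs
        index, value = token.split(':')
        features[int(index)] = float(value)
    return label, features
```

With a parser like this, a token such as `1:0.4551273600657362` is split at the colon instead of being fed to float() whole, which is what triggers the traceback above.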

