spark-issues mailing list archives

From "Jeremy Freeman (JIRA)" <>
Subject [jira] [Created] (SPARK-5089) Vector conversion broken for non-float64 arrays
Date Mon, 05 Jan 2015 17:01:34 GMT
Jeremy Freeman created SPARK-5089:

             Summary: Vector conversion broken for non-float64 arrays
                 Key: SPARK-5089
             Project: Spark
          Issue Type: Bug
          Components: MLlib, PySpark
    Affects Versions: 1.2.0
            Reporter: Jeremy Freeman

Prior to performing many MLlib operations in PySpark (e.g. KMeans), data are automatically
converted to `DenseVector`s. If the data are numpy arrays with dtype `float64`, this works.
If the data are numpy arrays with lower precision (e.g. `float16` or `float32`), they should be
upcast to `float64`, but due to a small bug on this line the upcast never actually happens
(`astype` returns a copy rather than casting in place, so the result must be assigned back):

if ar.dtype != np.float64:
Non-float64 values are in turn mangled during SerDe. This can have significant consequences.
For example, the following yields confusing and erroneous results:

from numpy import random
from pyspark.mllib.clustering import KMeans
data = sc.parallelize(random.randn(100,10).astype('float32'))
model = KMeans.train(data, k=3)
>> 5 # should be 10!

But this works fine:

data = sc.parallelize(random.randn(100,10).astype('float64'))
model = KMeans.train(data, k=3)
>> 10 # this is correct
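One plausible way to see where the 5 comes from (an assumption about the mechanism, inferred from the symptom rather than stated in the issue): each row of the float32 data is 10 values at 4 bytes each, i.e. 40 bytes, and if those raw bytes are deserialized as float64 (8 bytes per value), only 5 mangled values come out:

```python
import numpy as np

# A single row of the float32 data above: 10 values, 4 bytes each = 40 bytes.
row = np.random.randn(10).astype(np.float32)

# Reinterpreting those 40 raw bytes as float64 (8 bytes per value)
# yields only 5 garbage values -- consistent with "5, should be 10" above.
mangled = np.frombuffer(row.tobytes(), dtype=np.float64)
print(mangled.size)  # 5
```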

The fix is trivial, I'll submit a PR shortly.
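For reference, the failure mode reduces to the fact that `numpy.ndarray.astype` returns a new array rather than casting in place, so the result has to be rebound. A minimal sketch of the behavior (an illustration, not the actual patch):

```python
import numpy as np

ar = np.arange(3, dtype=np.float32)

# Calling astype() without using its return value does nothing to `ar`;
# this is why the upcast guarded by the line quoted above never takes effect.
ar.astype(np.float64)
assert ar.dtype == np.float32  # still float32 -- the bug

# Assigning the returned copy back performs the intended upcast.
if ar.dtype != np.float64:
    ar = ar.astype(np.float64)
assert ar.dtype == np.float64
```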

This message was sent by Atlassian JIRA
