spark-issues mailing list archives

From "Sean Malory (Jira)" <>
Subject [jira] [Created] (SPARK-32306) `approx_percentile` in Spark SQL gives incorrect results
Date Tue, 14 Jul 2020 12:36:00 GMT
Sean Malory created SPARK-32306:

             Summary: `approx_percentile` in Spark SQL gives incorrect results
                 Key: SPARK-32306
             Project: Spark
          Issue Type: Bug
          Components: PySpark, SQL
    Affects Versions: 2.4.4
            Reporter: Sean Malory

The `approx_percentile` function in Spark SQL does not give the correct result. I'm not sure
how incorrect it is; it may just be a boundary issue. From the docs:
{quote}The accuracy parameter (default: 10000) is a positive numeric literal which controls
approximation accuracy at the cost of memory. Higher value of accuracy yields better accuracy,
1.0/accuracy is the relative error of the approximation.
{quote}
This is not true: with accuracy set to 2147483647, the relative error should be 1.0/2147483647,
i.e. effectively zero. Here is a minimal example in `pyspark` where, essentially, the median of
5 and 8 is being calculated as 5:
import pyspark.sql.functions as psf
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [('bar', 5), ('bar', 8)], ['name', 'val']
)
median = psf.expr('percentile_approx(val, 0.5, 2147483647)')

df.groupBy('name').agg(median.alias('median')).show()    # gives the median as 5
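For comparison, here is the same aggregation with Spark's exact `percentile` function (which
interpolates between values); a minimal sketch assuming the same `spark` session and `df` as
above, showing roughly what I would expect at such a high accuracy setting:

exact_median = psf.expr('percentile(val, 0.5)')

df.groupBy('name').agg(exact_median.alias('median')).show()    # gives the median as 6.5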
I've tested this with Spark v2.4.4 and pyspark v2.4.5, although I suspect this is an issue with
the underlying algorithm.
