spark-issues mailing list archives

From "Nicholas Chammas (JIRA)" <j...@apache.org>
Subject [jira] [Created] (SPARK-19553) Add GroupedData.countApprox()
Date Fri, 10 Feb 2017 19:37:41 GMT
Nicholas Chammas created SPARK-19553:
----------------------------------------

             Summary: Add GroupedData.countApprox()
                 Key: SPARK-19553
                 URL: https://issues.apache.org/jira/browse/SPARK-19553
             Project: Spark
          Issue Type: Improvement
          Components: SQL
    Affects Versions: 2.1.0
            Reporter: Nicholas Chammas
            Priority: Minor


We already have a [{{pyspark.sql.functions.approx_count_distinct()}}|http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.approx_count_distinct] that can be applied to grouped data, but it seems odd that you can't just get a regular approximate count for grouped data.
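
For example, this already works today ({{df}}, {{col1}}, and {{col2}} are placeholders):

{code}
from pyspark.sql import functions as F

# Approximate *distinct* count per group is already supported.
(df
    .groupBy('col1')
    .agg(F.approx_count_distinct('col2'))
    .show())
{code}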

I imagine the API would mirror that of [{{RDD.countApprox()}}|http://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.RDD.countApprox], but I'm not sure:

{code}
(df
    .groupBy('col1')
    .countApprox(timeout=300, confidence=0.95)
    .show())
{code}
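
Presumably {{timeout}} here would be in milliseconds and {{confidence}} a probability, matching {{RDD.countApprox()}}, which returns a potentially incomplete result if the timeout expires before all tasks finish.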

Or, if we want to mirror the {{approx_count_distinct()}} function, we could do that instead (see the sketch below). I'd want to understand why that function doesn't take a timeout or confidence parameter, though. Also, what does {{rsd}} mean? It's not documented.
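
For concreteness, the mirrored version might look something like this ({{count_approx()}} is a made-up name for illustration; no such function exists today):

{code}
from pyspark.sql import functions as F

# Hypothetical count_approx() aggregate, named by analogy with
# approx_count_distinct(); it does not exist in pyspark.sql.functions today.
(df
    .groupBy('col1')
    .agg(F.count_approx('col2'))
    .show())
{code}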



