spark-issues mailing list archives

From "Nicholas Chammas (JIRA)" <>
Subject [jira] [Commented] (SPARK-19553) Add GroupedData.countApprox()
Date Mon, 13 Feb 2017 17:21:41 GMT


Nicholas Chammas commented on SPARK-19553:

Quick API question for you [~marmbrus]: Is this feature request appropriate? If yes, would
it be better expressed as a SQL function or as a method on {{GroupedData}}?

> Add GroupedData.countApprox()
> -----------------------------
>                 Key: SPARK-19553
>                 URL:
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.1.0
>            Reporter: Nicholas Chammas
>            Priority: Minor
> We already have {{pyspark.sql.functions.approx_count_distinct()}}, which
can be applied to grouped data, but it seems odd that you can't just get a regular
approximate count for grouped data.
> I imagine the API would mirror that of {{RDD.countApprox()}},
but I'm not sure:
> {code}
> (df
>     .groupBy('col1')
>     .countApprox(timeout=300, confidence=0.95)
>     .show())
> {code}
> Or, if we want to mirror the {{approx_count_distinct()}} function, we can do that too.
I'd want to understand why that function doesn't take a timeout or confidence parameter, though.
Also, what does {{rsd}} mean? It's not documented.

This message was sent by Atlassian JIRA

