spark-issues mailing list archives

From "Xiangrui Meng (JIRA)" <>
Subject [jira] [Resolved] (SPARK-4547) OOM when making bins in BinaryClassificationMetrics
Date Wed, 31 Dec 2014 21:38:13 GMT


Xiangrui Meng resolved SPARK-4547.
       Resolution: Fixed
    Fix Version/s: 1.3.0

Issue resolved by pull request 3702

> OOM when making bins in BinaryClassificationMetrics
> ---------------------------------------------------
>                 Key: SPARK-4547
>                 URL:
>             Project: Spark
>          Issue Type: Bug
>          Components: MLlib
>    Affects Versions: 1.1.0
>            Reporter: Sean Owen
>            Assignee: Sean Owen
>            Priority: Minor
>             Fix For: 1.3.0
> Also following up on
> -- this one I intend to make a PR for a bit later. The conversation was basically:
> {quote}
> Recently I was using BinaryClassificationMetrics to build an AUC curve for a classifier
> over a reasonably large number of points (~12M). The scores were all probabilities, so
> tended to be almost entirely unique.
> The computation does some operations by key, and this ran out of memory. It's something
> you can solve with more than the default amount of memory, but in this case it seemed
> unhelpful to create an AUC curve with such fine-grained resolution.
> I ended up just binning the scores so there were ~1000 unique values
> and then it was fine.
> {quote}
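The simple work-around described in the quote above can be sketched in plain Python (this is an illustrative sketch of the binning idea, not the MLlib code itself): quantize each probability score onto a fixed grid so that at most ~1000 distinct values remain before any group-by-score computation.

```python
def bin_scores(scores, num_bins=1000):
    """Map each score in [0, 1] onto one of `num_bins` evenly spaced values."""
    return [round(s * num_bins) / num_bins for s in scores]

scores = [0.12345, 0.12349, 0.98765, 0.98761]
binned = bin_scores(scores)
# Nearby scores collapse onto the same grid value, bounding the number of
# distinct keys (and hence the memory) in any subsequent per-threshold count.
assert len(set(binned)) == 2
```

With ~12M nearly unique scores, this caps the key space at `num_bins + 1` values regardless of input size.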
> and:
> {quote}
> Yes, if there are many distinct values, we need binning to compute the AUC curve. Usually
> the scores are not evenly distributed, so we cannot simply truncate the digits. Estimating
> the quantiles for binning is necessary, similar to RangePartitioner:
> Limiting the number of bins is definitely useful.
> {quote}
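The quantile-based approach suggested above (analogous to how RangePartitioner samples the data to pick range boundaries) can be sketched as follows. This is a hedged illustration in plain Python; the function names and the sampling scheme are assumptions for the sketch, not the MLlib implementation.

```python
import bisect
import random

def quantile_bin_edges(scores, num_bins=1000, sample_size=10000, seed=42):
    """Estimate bin boundaries from a random sample so that each bin holds
    a roughly equal share of the (possibly skewed) score distribution."""
    rng = random.Random(seed)
    sample = sorted(rng.sample(scores, min(sample_size, len(scores))))
    step = len(sample) / num_bins
    # num_bins - 1 interior boundaries split the sample into num_bins bins.
    return [sample[int(i * step)] for i in range(1, num_bins)]

def assign_bin(score, edges):
    """Binary-search the sorted boundary list for the bin index of `score`."""
    return bisect.bisect_right(edges, score)
```

Unlike digit truncation, this keeps the bins balanced even when the scores cluster near 0 or 1, which is the usual shape of classifier probabilities.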

This message was sent by Atlassian JIRA

