flink-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-1745) Add exact k-nearest-neighbours algorithm to machine learning library
Date Wed, 07 Oct 2015 12:02:26 GMT

    [ https://issues.apache.org/jira/browse/FLINK-1745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14946736#comment-14946736 ]

ASF GitHub Bot commented on FLINK-1745:

Github user danielblazevski commented on the pull request:

    @chiwanpark, in lines 203-207

        val useQuadTree = resultParameters.get(useQuadTreeParam).getOrElse(
          (training.values.head.size + math.log(math.log(training.values.length) /
            math.log(4.0)) < math.log(training.values.length) / math.log(4.0)) &&
            (metric.isInstanceOf[EuclideanDistanceMetric] ||
              metric.isInstanceOf[SquaredEuclideanDistanceMetric]))
    the code decides whether or not to use the quadtree when no value is specified.  It decides
based on the number of training and test points and the dimension, and the estimate is conservative
so that whenever the quadtree is chosen, it will improve performance over the brute-force
method -- basically, the quadtree scales poorly with dimension but very well with the number
of points.
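    The heuristic can be sketched as a standalone function (this is a simplified illustration of the condition quoted above, not the actual Flink code; `dim`, `numTrainingPoints`, and `isEuclidean` are stand-ins for the values read from the training set and the metric):

```scala
// Sketch of the quadtree-selection heuristic: use the quadtree only when
// the dimension is small relative to log_4 of the number of training
// points, and only for (squared) Euclidean distance, where the quadtree's
// box pruning applies.
object QuadTreeHeuristic {
  def shouldUseQuadTree(dim: Int, numTrainingPoints: Long, isEuclidean: Boolean): Boolean = {
    // log base 4 of the number of training points
    val log4n = math.log(numTrainingPoints.toDouble) / math.log(4.0)
    // conservative cutoff: dimension plus a small log-log correction
    // must stay below log_4(n) for the quadtree to pay off
    dim + math.log(log4n) < log4n && isEuclidean
  }
}
```

    For example, with one million training points the quadtree is chosen in low dimensions (say 2 or 3) but rejected in high dimensions (say 20), matching the scaling described above.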
    As for using a `Vector` for `minVec` and `maxVec`: I plug `minVec` and `maxVec` into
the constructor of the root Node, and I found it best to use a ListBuffer in the constructor of
the Node class when partitioning the boxes into sub-boxes.
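    A hypothetical sketch of what such a Node might look like (this is not the actual Flink quadtree code; the class shape and `partition` method are illustrative assumptions): the bounding box is given by min/max corner vectors, and a mutable ListBuffer is convenient because the 2^dim sub-boxes are generated incrementally when a box is partitioned.

```scala
import scala.collection.mutable.ListBuffer

// Hypothetical quadtree node: a box given by its min and max corners.
class Node(val minVec: ListBuffer[Double], val maxVec: ListBuffer[Double]) {
  val children: ListBuffer[Node] = ListBuffer.empty

  // Split this box into 2^dim equal sub-boxes, one per corner combination.
  def partition(): Unit = {
    val dim = minVec.length
    val center = minVec.zip(maxVec).map { case (lo, hi) => (lo + hi) / 2.0 }
    for (corner <- 0 until (1 << dim)) {
      val childMin = ListBuffer.empty[Double]
      val childMax = ListBuffer.empty[Double]
      for (d <- 0 until dim) {
        // bit d of `corner` selects the lower or upper half along axis d
        if (((corner >> d) & 1) == 0) { childMin += minVec(d); childMax += center(d) }
        else { childMin += center(d); childMax += maxVec(d) }
      }
      children += new Node(childMin, childMax)
    }
  }
}
```

    In 2D, partitioning the unit box yields four quadrants; the buffer-based constructor makes it easy to build each child's corner vectors coordinate by coordinate.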

> Add exact k-nearest-neighbours algorithm to machine learning library
> --------------------------------------------------------------------
>                 Key: FLINK-1745
>                 URL: https://issues.apache.org/jira/browse/FLINK-1745
>             Project: Flink
>          Issue Type: New Feature
>          Components: Machine Learning Library
>            Reporter: Till Rohrmann
>            Assignee: Daniel Blazevski
>              Labels: ML, Starter
> Even though the k-nearest-neighbours (kNN) [1,2] algorithm is quite trivial, it is still
> used as a means to classify data and to do regression. This issue focuses on the implementation
> of an exact kNN (H-BNLJ, H-BRJ) algorithm as proposed in [2].
> Could be a starter task.
> Resources:
> [1] [http://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm]
> [2] [https://www.cs.utah.edu/~lifeifei/papers/mrknnj.pdf]

This message was sent by Atlassian JIRA
