flink-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-1745) Add exact k-nearest-neighbours algorithm to machine learning library
Date Wed, 07 Oct 2015 13:55:26 GMT

    [ https://issues.apache.org/jira/browse/FLINK-1745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14946878#comment-14946878 ]

ASF GitHub Bot commented on FLINK-1745:
---------------------------------------

Github user chiwanpark commented on the pull request:

    https://github.com/apache/flink/pull/1220#issuecomment-146202529
  
    It sounds weird to me. If the user sets `useQuadTree` to false, the algorithm should
    not use the quadtree. If the user sets `useQuadTree` to true, the algorithm should check
    whether the quadtree can actually be used.
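
    A minimal sketch of that behaviour (the helper name, the `Option[Boolean]` setting, and the
    feasibility flag are only illustrative, not taken from this PR):

    ```scala
    // Hypothetical helper that only illustrates the intended semantics of `useQuadTree`.
    def resolveUseQuadTree(useQuadTree: Option[Boolean], quadTreeUsable: Boolean): Boolean =
      useQuadTree match {
        case Some(false) => false            // explicitly disabled: never use the quadtree
        case Some(true)  => quadTreeUsable   // explicitly enabled: still verify it can be used
        case None        => quadTreeUsable   // unset: decide automatically
      }
    ```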
    
    I don't think that `ListBuffer` is better than `Vector`. For example, we can implement
    `partitionBox` like the following:
    
    ```scala
    def partitionBox(cPart: Seq[Vector], L: Vector): Seq[Vector] = {
      var next = cPart
      // For every dimension i, replace each box centre with two children
      // shifted by -L(i)/4 and +L(i)/4 along that dimension.
      (0 until L.size).foreach { i =>
        next = next.flatMap { v =>
          // `up` is a fresh copy; `down` reuses (and mutates) the original vector.
          val (up, down) = (v.copy, v)
          up.update(i, up(i) - L(i) / 4)
          down.update(i, down(i) + L(i) / 4)

          Seq(up, down)
        }
      }

      next
    }
    ```
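
    As a hypothetical usage of the snippet above (assuming the vectors are Flink ML
    `DenseVector`s; note that the input vectors are mutated in place because `down` reuses them):

    ```scala
    import org.apache.flink.ml.math.{DenseVector, Vector}

    // Start from the centre of a 2-D box with side lengths (2.0, 2.0).
    val centre: Vector = DenseVector(0.0, 0.0)
    val sides: Vector = DenseVector(2.0, 2.0)

    val children = partitionBox(Seq(centre), sides)
    // children now holds the four sub-box centres:
    // (-0.5, -0.5), (-0.5, 0.5), (0.5, -0.5), (0.5, 0.5)
    ```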
    
    There are still some style issues in this PR. I recommend reformatting all code in this
    PR using an IDE such as IntelliJ IDEA.


> Add exact k-nearest-neighbours algorithm to machine learning library
> --------------------------------------------------------------------
>
>                 Key: FLINK-1745
>                 URL: https://issues.apache.org/jira/browse/FLINK-1745
>             Project: Flink
>          Issue Type: New Feature
>          Components: Machine Learning Library
>            Reporter: Till Rohrmann
>            Assignee: Daniel Blazevski
>              Labels: ML, Starter
>
> Even though the k-nearest-neighbours (kNN) [1,2] algorithm is quite trivial, it is still
used as a means to classify data and to do regression. This issue focuses on the implementation
of an exact kNN (H-BNLJ, H-BRJ) algorithm as proposed in [2].
> Could be a starter task.
> Resources:
> [1] [http://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm]
> [2] [https://www.cs.utah.edu/~lifeifei/papers/mrknnj.pdf]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
