flink-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-2157) Create evaluation framework for ML library
Date Wed, 01 Jul 2015 14:39:05 GMT

    [ https://issues.apache.org/jira/browse/FLINK-2157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14610390#comment-14610390 ]

ASF GitHub Bot commented on FLINK-2157:
---------------------------------------

Github user thvasilo commented on the pull request:

    https://github.com/apache/flink/pull/871#issuecomment-117700685
  
    Copying from the JIRA:
    
    It turns out it is more complicated to have a score function that is also available for chained Predictors.
    
    If score is defined as a function of a Predictor subclass, such as Classifier, then it will not be available to a chained Classifier, since the chaining produces a ChainedPredictor.
    
    If we define score in Predictor instead, we will need to provide an implementation for ChainedPredictor as well, since that extends Predictor.
    
    The only way forward, then, if we want a score function, is to follow the Operation paradigm: have implicit score operations that get attached to concrete predictors, and define a default one for ChainedPredictor as well.
    
    I would suggest that for this PR we skip the score function, keep the Scorer object, and work with that for the time being. We can revisit the score function at a later point.
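
For illustration, here is a minimal Scala sketch of the Operation paradigm described in the comment above. All names (ScoreOperation, the score signature, the Seq-based test set) are assumptions made for the sketch, not the actual FlinkML API:

    // Sketch of the Operation paradigm for scoring. The score method is
    // defined once on Predictor, but its behaviour comes from an implicit
    // operation that is only attached to concrete predictors, plus a
    // default one for ChainedPredictor.
    trait ScoreOperation[Instance, Testing] {
      def score(instance: Instance, test: Testing): Double
    }

    trait Predictor[Self] { self: Self =>
      def score[Testing](test: Testing)(implicit op: ScoreOperation[Self, Testing]): Double =
        op.score(self, test)
    }

    class Classifier extends Predictor[Classifier]

    object Classifier {
      // Concrete score operation: accuracy over (trueLabel, predictedLabel) pairs
      implicit val classifierScore: ScoreOperation[Classifier, Seq[(Double, Double)]] =
        new ScoreOperation[Classifier, Seq[(Double, Double)]] {
          def score(c: Classifier, test: Seq[(Double, Double)]): Double =
            test.count { case (t, p) => t == p }.toDouble / test.size
        }
    }

    class ChainedPredictor extends Predictor[ChainedPredictor]

    object ChainedPredictor {
      // Default operation so chained pipelines can be scored too
      implicit val chainedScore: ScoreOperation[ChainedPredictor, Seq[(Double, Double)]] =
        new ScoreOperation[ChainedPredictor, Seq[(Double, Double)]] {
          def score(c: ChainedPredictor, test: Seq[(Double, Double)]): Double =
            test.count { case (t, p) => t == p }.toDouble / test.size
        }
    }

A call like new Classifier().score(testPairs) then compiles only when a matching implicit operation is in scope, which mirrors how FlinkML wires its fit and predict operations to concrete pipeline stages.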


> Create evaluation framework for ML library
> ------------------------------------------
>
>                 Key: FLINK-2157
>                 URL: https://issues.apache.org/jira/browse/FLINK-2157
>             Project: Flink
>          Issue Type: New Feature
>          Components: Machine Learning Library
>            Reporter: Till Rohrmann
>            Assignee: Theodore Vasiloudis
>              Labels: ML
>             Fix For: 0.10
>
>
> Currently, FlinkML lacks means to evaluate the performance of trained models. It would be great to add some {{Evaluators}} which can calculate a score based on the information about true and predicted labels. This could also be used for cross validation to choose the right hyperparameters.
> Possible scores could be the F score [1], the zero-one loss, etc.
> Resources
> [1] [http://en.wikipedia.org/wiki/F1_score]
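
As a rough illustration of the kind of {{Evaluators}} requested above, here is a hedged sketch of a zero-one-loss score over a DataSet of (trueLabel, predictedLabel) pairs using Flink's Scala DataSet API. The ZeroOneLoss object and its evaluate signature are invented for the example and are not part of FlinkML:

    import org.apache.flink.api.scala._

    // Hypothetical evaluator: the zero-one loss is the fraction of examples
    // whose predicted label differs from the true label.
    object ZeroOneLoss {
      def evaluate(pairs: DataSet[(Double, Double)]): DataSet[Double] =
        pairs
          .map { case (truth, prediction) =>
            (if (truth == prediction) 0.0 else 1.0, 1L) // (error indicator, count)
          }
          .reduce((a, b) => (a._1 + b._1, a._2 + b._2))   // sum errors and counts
          .map { case (errors, count) => errors / count } // mean error rate
    }

    object ZeroOneLossExample {
      def main(args: Array[String]): Unit = {
        val env = ExecutionEnvironment.getExecutionEnvironment
        val pairs = env.fromElements((1.0, 1.0), (0.0, 1.0), (1.0, 1.0), (0.0, 0.0))
        ZeroOneLoss.evaluate(pairs).print() // 0.25: one of four predictions is wrong
      }
    }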



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
