spark-issues mailing list archives

From "Venkata Vineel (JIRA)" <>
Subject [jira] [Commented] (SPARK-5133) Feature Importance for Decision Tree (Ensembles)
Date Mon, 06 Jul 2015 09:03:04 GMT


Venkata Vineel commented on SPARK-5133:

[~peter.prettenhofer]  Please assign it to yourself, so that others don't get confused and
take it up. Can you also help choose similar bugs/features that others haven't already
submitted PRs for, and start working on them?

> Feature Importance for Decision Tree (Ensembles)
> ------------------------------------------------
>                 Key: SPARK-5133
>                 URL:
>             Project: Spark
>          Issue Type: New Feature
>          Components: ML, MLlib
>            Reporter: Peter Prettenhofer
>   Original Estimate: 168h
>  Remaining Estimate: 168h
> Add feature importance to decision tree models and tree ensemble models.
> If people are interested in this feature, I could implement it given a mentor (API decisions,
> etc.). Please find a description of the feature below:
> Decision trees intrinsically perform feature selection by choosing appropriate split
> points. This information can be used to assess the relative importance of a feature.
> Relative feature importance gives valuable insight into a decision tree or tree ensemble
> and can even be used for feature selection.
> More information on feature importance (via decrease in impurity) can be found in ESLII
> (10.13.1) or here [1].
> R's randomForest package uses a different technique for assessing variable importance
> that is based on permutation tests.
> All necessary information to create relative importance scores should be available in
> the tree representation (class Node; split, impurity gain, (weighted) number of samples?).
> [1]
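The impurity-decrease approach described above can be sketched in a few lines. This is a hypothetical illustration, not Spark's API: the `Node` class, its fields, and `feature_importances` are assumptions standing in for the tree representation the issue mentions (split feature, impurity gain, weighted number of samples).

```python
# Hypothetical sketch of impurity-based feature importance ("mean decrease
# in impurity"). Assumes each internal node records its split feature index,
# the impurity gain of its split, and the weighted number of samples that
# reach it -- the fields the issue suggests the tree's Node class exposes.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Node:
    feature: int                   # split feature index; -1 for a leaf
    gain: float                    # impurity decrease achieved by the split
    weighted_samples: float        # weighted count of samples reaching the node
    left: Optional["Node"] = None
    right: Optional["Node"] = None


def feature_importances(root: Node, num_features: int) -> list[float]:
    """Sum weighted_samples * gain per split feature over all internal
    nodes, then normalize so the scores sum to 1."""
    scores = [0.0] * num_features
    stack = [root]
    while stack:
        node = stack.pop()
        if node.left or node.right:  # internal node: credit its split feature
            scores[node.feature] += node.weighted_samples * node.gain
            stack.extend(child for child in (node.left, node.right) if child)
    total = sum(scores)
    return [s / total for s in scores] if total > 0 else scores
```

For an ensemble, one would average the per-tree importance vectors. The permutation-test technique used by R's randomForest package is a different method (shuffle one feature's values and measure the drop in accuracy) and is not shown here.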

This message was sent by Atlassian JIRA

