From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-1979) Implement Loss Functions
Date Wed, 01 Jun 2016 01:46:12 GMT
```

ASF GitHub Bot commented on FLINK-1979:
---------------------------------------

Github user chiwanpark commented on a diff in the pull request:

---
@@ -47,21 +47,106 @@ object SquaredLoss extends PartialLossFunction {

/** Calculates the loss depending on the label and the prediction
*
-    * @param prediction
-    * @param label
-    * @return
+    * @param prediction The predicted value
+    * @param label The true value
+    * @return The loss
*/
override def loss(prediction: Double, label: Double): Double = {
0.5 * (prediction - label) * (prediction - label)
}

/** Calculates the derivative of the [[PartialLossFunction]]
*
-    * @param prediction
-    * @param label
-    * @return
+    * @param prediction The predicted value
+    * @param label The true value
+    * @return The derivative of the loss function
*/
override def derivative(prediction: Double, label: Double): Double = {
(prediction - label)
}
}
+
+/** Logistic loss function which can be used with the [[GenericLossFunction]]
+  *
+  * The [[LogisticLoss]] function implements `log(1 + exp(-prediction * label))`
+  * for binary classification with labels in {-1, 1}.
+  */
+object LogisticLoss extends PartialLossFunction {
+
+  /** Calculates the loss depending on the label and the prediction
+    *
+    * @param prediction The predicted value
+    * @param label The true value
+    * @return The loss
+    */
+  override def loss(prediction: Double, label: Double): Double = {
+    val z = prediction * label
+
+    // based on implementation in scikit-learn
+    // approximately equal and saves the computation of the log
+    if (z > 18) {
+      return math.exp(-z)
+    }
+    else if (z < -18) {
+      return -z
+    }
+
+    math.log(1 + math.exp(-z))
+  }
+
+  /** Calculates the derivative of the loss function with respect to the prediction
+    *
+    * @param prediction The predicted value
+    * @param label The true value
+    * @return The derivative of the loss function
+    */
+  override def derivative(prediction: Double, label: Double): Double = {
+    val z = prediction * label
+
+    // based on implementation in scikit-learn
+    // approximately equal and saves the computation of the log
+    if (z > 18) {
+      return -label * math.exp(-z)
+    }
+    else if (z < -18) {
+      return -label
+    }
+
+    -label/(math.exp(z) + 1)
+  }
--- End diff --

As I said above, following is better:

```scala
if (z > 18) {
  -label * math.exp(-z)
} else if (z < -18) {
  -label
} else {
  -label / (math.exp(z) + 1)
}
```
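The review above concerns the numerically stable evaluation of `log(1 + exp(-z))`. A minimal, self-contained sketch (not Flink code; the object name is hypothetical) of the same cutoff scheme, which can be checked against the naive formula inside the cutoffs:

```scala
// Hypothetical standalone sketch of the stable logistic loss discussed
// in the diff; not part of Flink's API.
object LogisticLossSketch {

  // Stable evaluation of log(1 + exp(-z)), z = prediction * label,
  // using the same +/-18 cutoffs as in the pull request.
  def loss(prediction: Double, label: Double): Double = {
    val z = prediction * label
    if (z > 18) {
      math.exp(-z)               // log(1 + exp(-z)) ~= exp(-z) for large z
    } else if (z < -18) {
      -z                         // log(1 + exp(-z)) ~= -z for very negative z
    } else {
      math.log(1 + math.exp(-z))
    }
  }

  def main(args: Array[String]): Unit = {
    // Inside the cutoffs, stable and naive forms coincide.
    val naive = math.log(1 + math.exp(-(2.0 * 1.0)))
    assert(math.abs(loss(2.0, 1.0) - naive) < 1e-12)
    // Beyond the cutoff the approximation stays finite and small.
    assert(loss(20.0, 1.0) > 0 && loss(20.0, 1.0) < 1e-8)
    println("ok")
  }
}
```

The cutoff at 18 works because `exp(-18)` is already below 1.6e-8, so `log(1 + exp(-z))` and its first-order approximation agree to within double precision for the purposes of optimization.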

> Implement Loss Functions
> ------------------------
>
>          Issue Type: Improvement
>          Components: Machine Learning Library
>            Reporter: Johannes Günther
>            Assignee: Johannes Günther
>            Priority: Minor
>              Labels: ML
>
> For convex optimization problems, optimizer methods like SGD rely on a pluggable
> implementation of a loss function and its first derivative.
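
The issue description above is the motivation for the loss/derivative pair: SGD only needs those two methods. A hedged sketch of how such a pluggable pair drives one gradient step for a linear model; the trait and method names here are illustrative, not Flink's actual API:

```scala
// Illustrative sketch, not Flink code: a pluggable loss exposes the loss
// value and its derivative with respect to the prediction.
trait PartialLoss {
  def loss(prediction: Double, label: Double): Double
  def derivative(prediction: Double, label: Double): Double
}

// Squared loss as in the diff: L = 0.5 * (p - y)^2, dL/dp = p - y.
object SquaredLossSketch extends PartialLoss {
  def loss(p: Double, y: Double): Double = 0.5 * (p - y) * (p - y)
  def derivative(p: Double, y: Double): Double = p - y
}

object SgdStep {
  // One SGD update for a linear model p = w . x:
  //   w <- w - eta * dL/dp * x   (chain rule)
  def step(w: Array[Double], x: Array[Double], y: Double,
           lossFn: PartialLoss, eta: Double): Array[Double] = {
    val p = w.zip(x).map { case (wi, xi) => wi * xi }.sum
    val g = lossFn.derivative(p, y)
    w.zip(x).map { case (wi, xi) => wi - eta * g * xi }
  }

  def main(args: Array[String]): Unit = {
    // One step on a single example should reduce the loss.
    val w0 = Array(0.0, 0.0)
    val x = Array(1.0, 2.0)
    val w1 = step(w0, x, 1.0, SquaredLossSketch, 0.1)
    val p0 = w0.zip(x).map { case (a, b) => a * b }.sum
    val p1 = w1.zip(x).map { case (a, b) => a * b }.sum
    assert(SquaredLossSketch.loss(p1, 1.0) < SquaredLossSketch.loss(p0, 1.0))
    println("ok")
  }
}
```

Because the optimizer only touches `loss` and `derivative`, swapping in `LogisticLoss` or any other `PartialLoss` requires no change to the update rule.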

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

```