From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Date Wed, 29 Apr 2015 09:53:07 GMT


ASF GitHub Bot commented on FLINK-1807:
---------------------------------------

Github user tillrohrmann commented on a diff in the pull request:

--- Diff: docs/libs/ml/optimization.md ---
@@ -0,0 +1,218 @@
+---
+mathjax: include
+title: "ML - Optimization"
+displayTitle: <a href="index.md">ML</a> - Optimization
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+* This will be replaced by the TOC
+{:toc}
+
+
+## Mathematical Formulation
+
+The optimization framework in Flink is a developer-oriented package that can be used to solve
+[optimization](https://en.wikipedia.org/wiki/Mathematical_optimization)
+problems common in Machine Learning (ML) tasks. In the supervised learning context, this usually
+involves finding a model, as defined by a set of parameters $\wv$, that minimizes a function
+$f(\wv)$ given a set of $(\x, y)$ examples, where $\x$ is a feature vector and $y$ is a real
+number, which can represent either a real value in the regression case, or a class label in the
+classification case. In supervised learning, the function to be minimized is usually of the form:
+
+$$\begin{equation}
+    f(\wv) :=
+    \frac1n \sum_{i=1}^n L(\wv;\x_i,y_i) +
+    \lambda\, R(\wv)
+    \label{eq:objectiveFunc}
+    \ .
+\end{equation}$$
+
+where $L$ is the loss function and $R(\wv)$ the regularization penalty. We use $L$ to measure how
+well the model fits the observed data, and we use $R$ in order to impose a complexity cost on the
+model, with $\lambda > 0$ being the regularization parameter.
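+
+A minimal Scala sketch of this objective follows; the names (`Example`, `Loss`, `objective`)
+are illustrative assumptions, not part of the Flink ML API:
+
+```scala
+// Regularized empirical risk: f(w) = (1/n) * sum_i L(w; x_i, y_i) + lambda * R(w).
+// All names are illustrative; the actual loss/regularization types may differ.
+case class Example(x: Vector[Double], y: Double)
+
+// A loss maps the weight vector and one example to a scalar penalty.
+type Loss = (Vector[Double], Example) => Double
+
+def objective(
+    loss: Loss,
+    regPenalty: Vector[Double] => Double,
+    lambda: Double)(
+    w: Vector[Double],
+    data: Seq[Example]): Double =
+  data.map(loss(w, _)).sum / data.size + lambda * regPenalty(w)
+```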
+
+### Loss Functions
+
+In supervised learning, we use loss functions in order to measure the model fit, by penalizing
+errors in the predictions $p$ made by the model compared to the true $y$ for each example.
+Different loss functions can be used for regression (e.g. Squared Loss) and classification
+(e.g. Hinge Loss); both are sketched in Scala after the list below.
+
+Some common loss functions are:
+
+* Squared Loss: $\frac{1}{2} (\wv^T \x - y)^2, \quad y \in \R$
+* Hinge Loss: $\max (0, 1-y \wv^T \x), \quad y \in \{-1, +1\}$
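+
+Written out in Scala, the two losses above might look like this (a sketch; `dot` is a plain
+inner-product helper, not a library call):
+
+```scala
+// Inner product w^T x.
+def dot(w: Vector[Double], x: Vector[Double]): Double =
+  w.zip(x).map { case (wi, xi) => wi * xi }.sum
+
+// Squared loss for regression: 1/2 * (w^T x - y)^2, with y a real number.
+def squaredLoss(w: Vector[Double], x: Vector[Double], y: Double): Double = {
+  val error = dot(w, x) - y
+  0.5 * error * error
+}
+
+// Hinge loss for classification: max(0, 1 - y * w^T x), with y in {-1, +1}.
+def hingeLoss(w: Vector[Double], x: Vector[Double], y: Double): Double =
+  math.max(0.0, 1.0 - y * dot(w, x))
+```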
--- End diff ---

maybe we can add a small spacing between y and \wv^T\x

> Stochastic gradient descent optimizer for ML library
> ----------------------------------------------------
>
>          Issue Type: Improvement
>          Components: Machine Learning Library
>            Reporter: Till Rohrmann
>            Assignee: Theodore Vasiloudis
>              Labels: ML
>
> Stochastic gradient descent (SGD) is a widely used optimization technique in different
> ML algorithms. Thus, it would be helpful to provide a generalized SGD implementation which
> can be instantiated with the respective gradient computation. Such a building block would
> make the development of future algorithms easier.
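
For context, a minimal Scala sketch of such a pluggable-gradient SGD step (the names
`Gradient` and `sgdStep` are illustrative assumptions, not the eventual Flink ML API):

```scala
// One SGD step: w <- w - stepSize * grad(w; x, y). The gradient computation is
// plugged in, so the same step works for any loss, as the issue suggests.
type Gradient = (Vector[Double], Vector[Double], Double) => Vector[Double]

def sgdStep(
    w: Vector[Double],
    x: Vector[Double],
    y: Double,
    stepSize: Double,
    gradient: Gradient): Vector[Double] =
  w.zip(gradient(w, x, y)).map { case (wi, gi) => wi - stepSize * gi }

// Example instantiation: gradient of the squared loss, (w^T x - y) * x.
val squaredLossGradient: Gradient = (w, x, y) => {
  val error = w.zip(x).map { case (wi, xi) => wi * xi }.sum - y
  x.map(xi => xi * error)
}
```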

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

