[ https://issues.apache.org/jira/browse/FLINK-1992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553811#comment-14553811 ]
ASF GitHub Bot commented on FLINK-1992:
---------------------------------------
Github user tillrohrmann commented on a diff in the pull request:
https://github.com/apache/flink/pull/692#discussion_r30779835
--- Diff: flink-staging/flink-ml/src/main/scala/org/apache/flink/ml/optimization/GradientDescent.scala ---
@@ -36,19 +36,20 @@ import org.apache.flink.ml.optimization.Solver._
* At the moment, the whole partition is used for SGD, making it effectively a batch gradient
* descent. Once a sampling operator has been introduced, the algorithm can be optimized.
*
- * @param runParameters The parameters to tune the algorithm. Currently these include:
- * [[Solver.LossFunction]] for the loss function to be used,
- * [[Solver.RegularizationType]] for the type of regularization,
- * [[Solver.RegularizationParameter]] for the regularization parameter,
+ * The parameters to tune the algorithm are:
+ * [[Solver.LossFunctionParameter]] for the loss function to be used,
+ * [[Solver.RegularizationTypeParameter]] for the type of regularization,
+ * [[Solver.RegularizationValueParameter]] for the regularization parameter,
* [[IterativeSolver.Iterations]] for the maximum number of iterations,
* [[IterativeSolver.Stepsize]] for the learning rate used.
+ * [[IterativeSolver.ConvergenceThreshold]] when provided, the algorithm will
+ * stop the iterations if the change in the value of the objective
+ * function between successive iterations is smaller than this value.
*/
-class GradientDescent(runParameters: ParameterMap) extends IterativeSolver {
+class GradientDescent() extends IterativeSolver() {
--- End diff --
Do we need the parentheses?
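For context on the review question: Scala lets you omit an empty parameter list on both the class definition and the superclass constructor call. A minimal sketch with hypothetical stand-in names (not the actual Flink ML classes):

```scala
// Idiomatic Scala: empty parentheses can be dropped on both the class
// definition and the superclass constructor call. The names below are
// illustrative stand-ins, not the real Flink ML types.
class IterativeSolverSketch {
  val stepsize: Double = 0.1 // placeholder member
}

// Equivalent to `class GradientDescentSketch() extends IterativeSolverSketch()`,
// just without the redundant parentheses.
class GradientDescentSketch extends IterativeSolverSketch
```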
> Add convergence criterion to SGD optimizer
> ------------------------------------------
>
> Key: FLINK-1992
> URL: https://issues.apache.org/jira/browse/FLINK-1992
> Project: Flink
> Issue Type: Improvement
> Components: Machine Learning Library
> Reporter: Till Rohrmann
> Assignee: Theodore Vasiloudis
> Priority: Minor
> Labels: ML
> Fix For: 0.9
>
>
> Currently, Flink's SGD optimizer runs for a fixed number of iterations. It would be good to support a dynamic convergence criterion, too.
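The criterion described above (stop when the change in the objective value between successive iterations falls below a threshold) can be sketched as follows. This is a simplified, self-contained illustration, not the Flink ML implementation: `optimize`, `step`, and `objective` are hypothetical names, and the weights are a single `Double` rather than a distributed vector.

```scala
// Hypothetical sketch of a dynamic convergence criterion for an iterative
// optimizer. When `threshold` is provided, the loop stops early once the
// absolute change in the objective between successive iterations is
// smaller than the threshold; otherwise it runs for `maxIterations`.
object ConvergenceSketch {
  def optimize(
      initial: Double,
      step: Double => Double,          // one gradient-descent update
      objective: Double => Double,     // loss to monitor for convergence
      maxIterations: Int,
      threshold: Option[Double]): (Double, Int) = {
    var weights = initial
    var previousLoss = objective(weights)
    var iteration = 0
    var converged = false
    while (iteration < maxIterations && !converged) {
      weights = step(weights)
      val currentLoss = objective(weights)
      // Dynamic convergence check: only active if a threshold was given.
      converged = threshold.exists(t => math.abs(previousLoss - currentLoss) < t)
      previousLoss = currentLoss
      iteration += 1
    }
    (weights, iteration)
  }
}
```

For example, minimizing f(x) = (x - 3)^2 with a fixed stepsize stops after far fewer than the maximum number of iterations once successive losses differ by less than the threshold.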
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)