mahout-user mailing list archives

From Ted Dunning <ted.dunn...@gmail.com>
Subject Re: SGD AdaptiveLogisticRegression vs OnlineLogisticRegression
Date Mon, 24 Sep 2012 03:02:39 GMT
I think there is actually an issue of excessive stability.

What seems to happen is that the adaptive part locks down the learning rate
too quickly.
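
Until that is fixed, one workaround is to skip the adaptive wrapper and
pin the annealing schedule on a plain OnlineLogisticRegression yourself.
A minimal sketch against the org.apache.mahout.classifier.sgd API (the
constants are illustrative, not tuned values):

  import org.apache.mahout.classifier.sgd.L1;
  import org.apache.mahout.classifier.sgd.OnlineLogisticRegression;

  // 2 categories; numFeatures is whatever your encoder produces.
  int numFeatures = 1000;
  OnlineLogisticRegression learner =
      new OnlineLogisticRegression(2, numFeatures, new L1())
          .learningRate(1.0)   // initial rate
          .stepOffset(1000)    // holds the rate up for the first ~1k examples
          .decayExponent(0.5)  // rate then decays like (stepOffset + n)^-0.5
          .alpha(1.0)          // no extra per-step geometric decay
          .lambda(1.0e-5);     // prior strength for the L1 regularizer
  // learner.train(actual, v) as usual; the rate follows this fixed schedule.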

This is related to several other issues:

- the cross-fold learning paradigm is somewhat dangerous, since it depends
on the user having no duplicate records and getting the sequencing of
examples right

- the cross-fold learning setup could be simplified and sped up by using a
single tranche of held-out data instead of doing full-scale cross
validation (see the first sketch after this list).

- it looks like modified second-order methods are better for accelerating
final convergence in any case.  Vowpal Wabbit uses L-BFGS, and Facebook
has reported good results with averaging the updates (second sketch
below).  Both approaches trade *worse* convergence early in the learning
for much *better* convergence later.

- Vowpal Wabbit uses a trick to mitigate the effect of an overly large
learning rate, which makes the annealing schedule less of an issue
(third sketch below).
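
On the single-tranche point: roughly this, as a sketch.  Example here is
a hypothetical holder for a label and a feature vector, not Mahout API:

  import java.util.List;
  import org.apache.mahout.classifier.sgd.L1;
  import org.apache.mahout.classifier.sgd.OnlineLogisticRegression;
  import org.apache.mahout.math.Vector;

  // Hypothetical record type, just for this sketch.
  final class Example {
    final int label;        // 0 or 1
    final Vector features;
    Example(int label, Vector features) {
      this.label = label;
      this.features = features;
    }
  }

  // Train on the first 90% of the stream, score the last 10% once.
  static double heldOutLogLikelihood(List<Example> data, int numFeatures) {
    int cut = (int) (data.size() * 0.9);
    OnlineLogisticRegression learner =
        new OnlineLogisticRegression(2, numFeatures, new L1());
    for (Example ex : data.subList(0, cut)) {
      learner.train(ex.label, ex.features);
    }
    double ll = 0;
    for (Example ex : data.subList(cut, data.size())) {
      double p = learner.classifyScalar(ex.features);  // P(label == 1)
      ll += Math.log(Math.max(1.0e-12, ex.label == 1 ? p : 1 - p));
    }
    return ll / (data.size() - cut);  // average held-out log-likelihood
  }

That is one pass, one held-out tranche, and no folds to keep in sync.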
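
On the averaging point: one reading of it is Polyak-Ruppert iterate
averaging.  Plain-Java sketch, nothing Mahout-specific:

  // Keep a running mean of the weight vector next to the raw SGD iterate
  // and predict with the mean; the average smooths out the step noise.
  final class AveragedLogisticSgd {
    private final double[] w;     // raw SGD iterate
    private final double[] wBar;  // running average, used for prediction
    private long n;

    AveragedLogisticSgd(int numFeatures) {
      w = new double[numFeatures];
      wBar = new double[numFeatures];
    }

    void train(double[] x, int y, double eta) {
      double p = logistic(dot(w, x));
      for (int i = 0; i < w.length; i++) {
        w[i] += eta * (y - p) * x[i];     // noisy step on the raw iterate
      }
      n++;
      for (int i = 0; i < w.length; i++) {
        wBar[i] += (w[i] - wBar[i]) / n;  // wBar <- mean of all iterates
      }
    }

    double predict(double[] x) {
      return logistic(dot(wBar, x));      // averaged prediction
    }

    private static double dot(double[] a, double[] b) {
      double s = 0;
      for (int i = 0; i < a.length; i++) {
        s += a[i] * b[i];
      }
      return s;
    }

    private static double logistic(double s) {
      return 1.0 / (1.0 + Math.exp(-s));
    }
  }

Early on the average lags the raw iterate (worse), but once the iterate
starts orbiting the optimum the average settles much faster (better),
which is exactly the early/late trade-off above.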
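
And the learning-rate trick: the essence is that a single update should
never push the prediction past the label.  This is my paraphrase of the
idea, not their exact importance-aware update; for squared loss the cap
has a closed form:

  // Naive SGD moves the prediction by eta * (x . x) toward the label.
  // If that would cross the label, shrink the step to land exactly on it,
  // so even an absurdly large eta cannot overshoot.
  static void trainSquaredLoss(double[] w, double[] x, double y, double eta) {
    double p = 0;   // current prediction w . x
    double xx = 0;  // squared norm of x
    for (int i = 0; i < x.length; i++) {
      p += w[i] * x[i];
      xx += x[i] * x[i];
    }
    double step = eta;
    if (xx > 0 && eta * xx > 1) {
      step = 1 / xx;  // largest step that does not cross the label
    }
    for (int i = 0; i < x.length; i++) {
      w[i] += step * (y - p) * x[i];
    }
  }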

I don't have time to chase these issues down just now and barely have time
to bring in some of the new clustering stuff.  Any commentary/contributions
would be appreciated.

On Sun, Sep 23, 2012 at 7:19 PM, Josh Patterson <josh@cloudera.com> wrote:

> I've seen some chatter in the group about issues with
> AdaptiveLogisticRegression:
>
> - is it simply a matter of "when to use it vs OLR"?
>
> - are there some stability issues with AdaptiveLogisticRegression?
>
> JP
>
> --
> Twitter: @jpatanooga
> Principal Solution Architect @ Cloudera
> hadoop: http://www.cloudera.com
>
