mahout-user mailing list archives

From Jake Mannix <>
Subject Re: is it possible to compute the SVD for a large scale matrix
Date Wed, 06 Apr 2011 18:58:41 GMT
Of course, for a data set of only 1GB in size, you don't need to map-reduce
it.  You can use the regular sparse LanczosSolver in memory, and then you
don't have to worry about these 10's of seconds of startup time.
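For reference, the in-memory route Jake describes can be sketched like this. This is a toy, pure-Python Lanczos on A^T A, not Mahout's Java LanczosSolver API: the function names are illustrative, it skips reorthogonalization, and it estimates only the top singular value. What it does show is the cost structure from the thread: each Lanczos step is exactly one pass over the sparse rows.

```python
import math
import random

def matvec_ata(rows, v):
    """One full pass over the sparse data: returns (A^T A) v.
    rows is a list of sparse rows, each a list of (col, value) pairs."""
    w = [0.0] * len(v)
    for row in rows:
        dot = sum(val * v[col] for col, val in row)   # row . v
        for col, val in row:
            w[col] += val * dot                        # accumulate (row . v) * row
    return w

def tri_matvec(alphas, betas, x):
    """Multiply by the small tridiagonal matrix T that Lanczos builds."""
    n = len(alphas)
    y = [alphas[i] * x[i] for i in range(n)]
    for i in range(n - 1):
        y[i] += betas[i] * x[i + 1]
        y[i + 1] += betas[i] * x[i]
    return y

def lanczos_top_singular_value(rows, ncols, k=20, seed=0):
    """Estimate the largest singular value of sparse A with k Lanczos steps.
    Each step costs one pass over the data -- the point made in this thread:
    k singular values require (at least) k passes."""
    rng = random.Random(seed)
    v = [rng.random() for _ in range(ncols)]
    norm = math.sqrt(sum(x * x for x in v))
    v = [x / norm for x in v]
    v_prev = [0.0] * ncols
    alphas, betas, beta = [], [], 0.0
    for _ in range(k):
        w = matvec_ata(rows, v)                        # the single data pass
        alpha = sum(wi * vi for wi, vi in zip(w, v))
        w = [wi - alpha * vi - beta * pi
             for wi, vi, pi in zip(w, v, v_prev)]
        beta = math.sqrt(sum(x * x for x in w))
        alphas.append(alpha)
        betas.append(beta)
        if beta < 1e-12:                               # Krylov space exhausted
            break
        v_prev, v = v, [x / beta for x in w]
    # Top eigenvalue of T by power iteration (T is tiny, so this is cheap).
    m = len(alphas)
    x = [1.0] * m
    for _ in range(200):
        y = tri_matvec(alphas, betas, x)
        n = math.sqrt(sum(t * t for t in y))
        if n == 0.0:
            break
        x = [t / n for t in y]
    lam = sum(xi * yi for xi, yi in zip(x, tri_matvec(alphas, betas, x)))
    return math.sqrt(max(lam, 0.0))
```

In-memory, one "pass" is just a loop over the rows; on a map-reduce cluster that same pass becomes a full job launch, which is where the startup overhead discussed below comes in.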

On Wed, Apr 6, 2011 at 11:25 AM, Ted Dunning <> wrote:

> The key is the k passes.  This bounds the time from below for large values
> of k since it typically takes 10's of seconds to light up a map-reduce job.
>  Larger clusters can actually be worse for this computation because of that.
> On Wed, Apr 6, 2011 at 11:16 AM, Jake Mannix <> wrote:
>> ...  Lanczos-based SVD, for k singular
>> values, requires k passes over the data, and each row which has d non-zero
>> entries will do d^2 computations in each pass.  ...
>> I guess "how long" depends on how big the cluster is!
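Ted's lower bound is easy to make concrete. The 30-second figure below is a hypothetical round number in the "10's of seconds" range he mentions, not a measured value:

```python
# Hypothetical figures: ~30 s to launch one map-reduce pass, k = 100 singular values.
startup_seconds_per_pass = 30
k = 100

# Lanczos makes one pass per singular value, so job startup alone bounds the
# total wall-clock time from below -- regardless of cluster size.
floor_minutes = k * startup_seconds_per_pass / 60
print(floor_minutes)  # 50.0 minutes spent just launching jobs
```

This is why a bigger cluster doesn't help: the per-pass startup cost is fixed overhead, and the passes are inherently sequential.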
