In my particular case, I just need "Scale" instead of "PlusWithScale",
and that can take advantage of sparseness.
My (er, Ted's) current approach is to sum SparseVectors. This takes
advantage of sparseness already.
Am I missing why a Scale/PlusWithScale implementation, when using
sparseness, would be notably faster?
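For concreteness, here is a toy sketch of why Scale only ever needs to touch the stored nonzeros. This is not the actual Mahout API; `ToySparseVector` and `assignNonZero` are made-up names standing in for SparseVector and a nonzero-aware assign:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.DoubleUnaryOperator;

// Toy sketch only -- not the real Mahout SparseVector API. The point:
// for a function f with f.apply(0) == 0 (like plain scaling), assign()
// only needs to visit the stored nonzero cells, never the full size().
class ToySparseVector {
    final int size;
    final Map<Integer, Double> entries = new LinkedHashMap<>();

    ToySparseVector(int size) { this.size = size; }

    void set(int i, double v) {
        if (v == 0.0) entries.remove(i); else entries.put(i, v);
    }

    double get(int i) { return entries.getOrDefault(i, 0.0); }

    // Scale is the f(0) == 0 case: work is O(nonzeros), not O(size).
    void assignNonZero(DoubleUnaryOperator f) {
        entries.replaceAll((i, v) -> f.applyAsDouble(v));
        entries.values().removeIf(v -> v == 0.0);  // keep it sparse
    }
}
```

So scaling a mostly-empty million-element vector costs only as much as its handful of nonzeros, which is the same win the SparseVector summation already gets.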
On Sun, Dec 13, 2009 at 11:11 AM, Jake Mannix <jake.mannix@gmail.com> wrote:
> On Sat, Dec 12, 2009 at 5:34 PM, Ted Dunning <ted.dunning@gmail.com> wrote:
>
>> This is a key problem.
>>
>> Looks like we really need to think about versions of assign that only scan
>> nonzero elements. Something like assignFromNonZero.
>>
>
> This is easier than the rest of the stuff we are talking about here.
> Whenever you have a UnaryFunction f such that f.apply(0) == 0, or you
> have a BinaryFunction b such that b.apply(x, 0) == x or b.apply(0, x) == x,
> then assign should take advantage of this and use iterateNonZero() on the
> vector whose zero entries can be skipped. In Colt, they actually
> special-case functions like Plus or Minus to take care of this.
>
> In our case in particular, PlusWithScale is a very common operation
> for Vectors, and could be special-cased itself, in AbstractVector.
>
> jake
>
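For reference, the PlusWithScale special case Jake describes would look roughly like the following. This is an illustrative sketch (toy maps standing in for SparseVector; `plusWithScale` is a made-up name, not the real Mahout API): since b(x, 0) == x, entries where y is zero leave x unchanged, so only y's nonzeros need to be walked:

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch of an axpy-style PlusWithScale: x := x + alpha * y.
// Because adding alpha * 0 changes nothing, we iterate only the
// nonzeros of y. A real implementation would also drop any x entry
// that cancels to exactly zero, to preserve sparseness.
public class PlusWithScaleSketch {
    static void plusWithScale(Map<Integer, Double> x,
                              Map<Integer, Double> y,
                              double alpha) {
        for (Map.Entry<Integer, Double> e : y.entrySet()) {
            x.merge(e.getKey(), alpha * e.getValue(), Double::sum);
        }
    }
}
```

The cost is O(nonzeros of y) rather than O(cardinality), which is the same advantage the SparseVector summation approach already enjoys.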
