spark-user mailing list archives

From Xiangrui Meng <men...@gmail.com>
Subject Re: MLlib Collaborative Filtering failed to run with rank 1000
Date Fri, 03 Oct 2014 18:44:41 GMT
The current implementation of ALS constructs the least-squares
subproblems in memory. So for rank 100, the total memory it requires is
about 480,189 * 100^2 / 2 * 8 bytes ~ 20GB, divided by the number of
blocks. For rank 1000, that number unfortunately grows to about 2TB.
There is a JIRA for optimizing ALS:
https://issues.apache.org/jira/browse/SPARK-3541 where I put a (messy)
implementation. You can try that one and see whether it helps.
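
The back-of-envelope estimate above can be reproduced with a small sketch (an illustration only; the count 480,189 is the user count mentioned in this thread, and the formula is the one stated in the message, before dividing by the number of blocks):

```python
# Rough memory estimate for ALS's in-memory least-squares subproblems:
# n * rank^2 / 2 * 8 bytes (n stacked rank-by-rank symmetric systems
# stored as upper triangles of 8-byte doubles).

def subproblem_bytes(n: int, rank: int) -> int:
    """Approximate total bytes for n rank-by-rank subproblems."""
    return n * rank ** 2 // 2 * 8

n = 480_189  # user count from the thread
print(subproblem_bytes(n, 100))   # 19_207_560_000 bytes, roughly 20GB
print(subproblem_bytes(n, 1000))  # 1_920_756_000_000 bytes, roughly 2TB
```

Note the cost grows quadratically in the rank, which is why a 10x rank increase drives the requirement up by 100x.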

Btw, did you check the test error when you increased the rank? The
model may overfit at rank 1000.
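
One way to check is to compare RMSE on a held-out split as the rank grows. A minimal sketch (the RMSE helper is pure Python; the ALS calls in the comments, such as `predictAll` on the trained model, are the usual MLlib workflow and are not run here):

```python
import math

def rmse(pairs):
    """Root-mean-square error over (predicted, actual) rating pairs."""
    se = sum((pred - actual) ** 2 for pred, actual in pairs)
    return math.sqrt(se / len(pairs))

# Sketch of how the pairs would be produced with MLlib (assumptions,
# not executed here): randomly split the ratings into train/test,
# train ALS on the train split, call
#   model.predictAll(test.map(lambda r: (r.user, r.product)))
# and join the predictions with the actual held-out ratings.
print(rmse([(3.0, 3.0), (4.0, 2.0)]))  # -> 1.4142...
```

If test RMSE rises while training RMSE keeps falling as the rank goes from ~100 to 1000, the larger model is overfitting.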

Best,
Xiangrui

On Fri, Oct 3, 2014 at 10:17 AM, jw.cmu <jinliangwei1@gmail.com> wrote:
> I was able to run collaborative filtering with low ranks, like 20~160,
> on the Netflix dataset, but it fails with the following error when I set
> the rank to 1000:
>
> 14/10/03 03:27:36 WARN TaskSetManager: Loss was due to
> java.lang.IllegalArgumentException
> java.lang.IllegalArgumentException: Size exceeds Integer.MAX_VALUE
>         at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:745)
>         at org.apache.spark.storage.DiskStore.getBytes(DiskStore.scala:108)
>         at org.apache.spark.storage.DiskStore.getValues(DiskStore.scala:124)
>         at
> org.apache.spark.storage.BlockManager.getLocalFromDisk(BlockManager.scala:332)
>         at
> org.apache.spark.storage.BlockFetcherIterator$BasicBlockFetcherIterator$$anonfun$getLocalBlocks$1.apply(BlockFetcherIterator.scala:204)
>         at
> org.apache.spark.storage.BlockFetcherIterator$BasicBlockFetcherIterator$$anonfun$getLocalBlocks$1.apply(BlockFetcherIterator.scala:203)
>         at
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>         at
> scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>         at
> org.apache.spark.storage.BlockFetcherIterator$BasicBlockFetcherIterator.getLocalBlocks(BlockFetcherIterator.scala:203)
>         at
> org.apache.spark.storage.BlockFetcherIterator$BasicBlockFetcherIterator.initialize(BlockFetcherIterator.scala:234)
>         at
> org.apache.spark.storage.BlockManager.getMultiple(BlockManager.scala:537)
>         at
> org.apache.spark.BlockStoreShuffleFetcher.fetch(BlockStoreShuffleFetcher.scala:76)
>         at
> org.apache.spark.rdd.CoGroupedRDD$$anonfun$compute$2.apply(CoGroupedRDD.scala:133)
>         at
> org.apache.spark.rdd.CoGroupedRDD$$anonfun$compute$2.apply(CoGroupedRDD.scala:123)
>         at
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
>         at
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>         at
> scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
>         at
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
>         at org.apache.spark.rdd.CoGroupedRDD.compute(CoGroupedRDD.scala:123)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>         at
> org.apache.spark.rdd.MappedValuesRDD.compute(MappedValuesRDD.scala:31)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>
> Spark version: 1.0.2
> Number of workers: 9
> core per worker: 16
> memory per worker: 120GB
>
>
>
> --
> View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/MLlib-Collaborative-Filtering-failed-to-run-with-rank-1000-tp15692.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
> For additional commands, e-mail: user-help@spark.apache.org
>


