mahout-user mailing list archives

From Hoa Nguyen <>
Subject Running Mahout on a Spark cluster
Date Fri, 22 Sep 2017 02:37:52 GMT
I apologize in advance if this is too much of a newbie question, but I'm
having a hard time running any Mahout example code on a distributed Spark
cluster. The code runs as advertised when Spark is running locally on one
machine, but the minute I point Spark at a cluster master URL, I can't
get it to work, and I hit the error: "WARN scheduler.TaskSchedulerImpl:
Initial job has not accepted any resources; check your cluster UI to ensure
that workers are registered and have sufficient memory"
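For reference, my setup is roughly the following sketch; the host name, install paths, and resource numbers here are placeholders, not my actual values:

```shell
# Sketch of pointing the Mahout Spark shell at a standalone cluster.
# Host names, install paths, and resource numbers are placeholders.
export SPARK_HOME=/opt/spark          # assumed Spark install path
export MAHOUT_HOME=/opt/mahout        # assumed Mahout install path
export MASTER=spark://master:7077     # standalone master URL (placeholder host)

# The warning above often means the job asked for more memory or cores
# than any registered worker can offer; capping the request in
# spark-defaults.conf is one way to stay within the cluster's limits.
echo "spark.executor.memory 2g" >> "$SPARK_HOME/conf/spark-defaults.conf"
echo "spark.cores.max 2"        >> "$SPARK_HOME/conf/spark-defaults.conf"

"$MAHOUT_HOME/bin/mahout" spark-shell
```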

I know my Spark cluster is configured and working correctly, because
non-Mahout code runs fine on the distributed cluster. What am I doing
wrong? The only thing I can think of is that my Spark version (2.1.1) is
too recent for the Mahout version I'm using (0.13.0). Is that it, or am I
doing something else wrong?

Thanks for any advice,
