Hi All,

I'm getting an error when trying to save an ALS MatrixFactorizationModel. I'm using the following method to save the model:

model.save(sc, outPath)

The save call fails with the exception below; I have attached the full stack trace. Any help resolving this issue would be appreciated.

org.apache.spark.SparkException: Job aborted.
        at org.apache.spark.sql.sources.InsertIntoHadoopFsRelation.insert(commands.scala:166)
        at org.apache.spark.sql.sources.InsertIntoHadoopFsRelation.run(commands.scala:139)
        at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
        at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)
        at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:68)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87)
        at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:950)
        at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:950)
        at org.apache.spark.sql.sources.ResolvedDataSource$.apply(ddl.scala:336)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:144)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:135)
        at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:281)
        at org.apache.spark.mllib.recommendation.MatrixFactorizationModel$SaveLoadV1_0$.save(MatrixFactorizationModel.scala:284)
        at org.apache.spark.mllib.recommendation.MatrixFactorizationModel.save(MatrixFactorizationModel.scala:141)


Thanks,
Madawa