systemml-dev mailing list archives

From Sourav Mazumder <sourav.mazumde...@gmail.com>
Subject Re: DML script parsing error in Spark
Date Tue, 08 Dec 2015 19:12:23 GMT
Hi Shirish,

Passing 'B' as " " (as below) through cmdLineParams did not help; I get the
same error.

val cmdLineParams = Map("X" -> " ", "Y" -> " ", "B" -> " ")
val outputs =
ml.execute("/home/system-ml-0.9.0-SNAPSHOT/algorithms/GLM2.dml",
cmdLineParams)

Any idea what else I need to pass? I'm trying to run the DML scripts only on
Spark (no Hadoop) and to build out a use case.

Regards,
Sourav

On Tue, Dec 8, 2015 at 9:27 AM, Shirish Tatikonda <shirish.tatikonda@gmail.com> wrote:

> Sourav,
>
> In cmdLineParams, you also need to pass B (output location) --  it does not
> have a default value.
>
> In general, a DML script may have two types of "inputs":
>
>    1. $ parameters in the script (e.g., $X, $Y, $dfam, etc. in
>    GLM.dml), which get populated from command-line arguments. Default
>    values are specified via *ifdef()* in the script.
>    2. data that you pass in via *read()*.
>
> cmdLineParams in your example refers to (1).
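>
> As an illustrative sketch (the paths below are placeholders, not your
> actual locations), each entry in that map is bound to the corresponding $
> parameter of the script; any parameter without an ifdef() default, like B
> in GLM, must be supplied:
>
>     // hypothetical locations -- substitute your own paths
>     val cmdLineParams = Map(
>       "X" -> "/user/sourav/X.mtx",   // bound to $X
>       "Y" -> "/user/sourav/Y.mtx",   // bound to $Y
>       "B" -> "/user/sourav/B.mtx")   // bound to $B (output); no default, so it is required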
>
> For read(), the way the data is passed depends on how you invoke the
> script. In a typical scenario, the data comes from HDFS files. In the
> context of MLContext, it may also come from RDDs/DataFrames via
> *registerInput()*. In the context of JMLC, the data can come in as an
> in-memory data structure (e.g., a double[][] array).
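>
> Putting (1) and (2) together for your MLContext case, something along
> these lines should work (an untested sketch reusing the calls from your
> snippet; "beta_out" has to match the variable the script writes, and the
> B location is just a placeholder):
>
>     val ml = new MLContext(sc)
>     ml.registerInput("X", Xfc, 3569, 4)   // data for the script's read of X
>     ml.registerInput("Y", yDc, 1, 4)      // data for the script's read of Y
>     ml.registerOutput("beta_out")         // variable to fetch from the outputs
>     // X and Y are registered in-memory, so their paths can stay blank; B still needs a value
>     val cmdLineParams = Map("X" -> " ", "Y" -> " ", "B" -> "/tmp/B.mtx")
>     val outputs = ml.execute("/home/system-ml-0.9.0-SNAPSHOT/algorithms/GLM2.dml", cmdLineParams)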
>
> Hope that clarifies the difference.
>
> Shirish
>
>
> On Tue, Dec 8, 2015 at 8:25 AM, Sourav Mazumder <sourav.mazumder00@gmail.com> wrote:
>
> > Hi,
> >
> > I'm facing an issue while parsing any DML script.
> >
> > What I'm trying is something like this -
> >
> > val ml = new MLContext(sc)
> > ml.reset()
> > ml.registerInput("X", Xfc, 3569, 4)
> > ml.registerInput("Y", yDc, 1, 4)
> > ml.registerOutput("beta_out")
> >
> > val cmdLineParams = Map("X" -> " ", "Y" -> " ")
> >
> > val outputs =
> > ml.execute("/home/system-ml-0.9.0-SNAPSHOT/algorithms/GLM2.dml",
> > cmdLineParams)
> >
> > I'm getting the following error -
> >
> > com.ibm.bi.dml.parser.ParseException: ERROR: Cannot translate the parse tree into DMLProgram:null
> >   at com.ibm.bi.dml.parser.antlr4.DMLParserWrapper.doParse(DMLParserWrapper.java:250)
> >   at com.ibm.bi.dml.parser.antlr4.DMLParserWrapper.parse(DMLParserWrapper.java:143)
> >   at com.ibm.bi.dml.api.MLContext.executeUsingSimplifiedCompilationChain(MLContext.java:1285)
> >   at com.ibm.bi.dml.api.MLContext.compileAndExecuteScript(MLContext.java:1204)
> >   at com.ibm.bi.dml.api.MLContext.compileAndExecuteScript(MLContext.java:1150)
> >   at com.ibm.bi.dml.api.MLContext.execute(MLContext.java:632)
> >   at com.ibm.bi.dml.api.MLContext.execute(MLContext.java:667)
> >   at com.ibm.bi.dml.api.MLContext.execute(MLContext.java:680)
> >   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:33)
> >   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:38)
> >   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:40)
> >   at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:42)
> >   at $iwC$$iwC$$iwC$$iwC.<init>(<console>:44)
> >   at $iwC$$iwC$$iwC.<init>(<console>:46)
> >   at $iwC$$iwC.<init>(<console>:48)
> >   at $iwC.<init>(<console>:50)
> >   at <init>(<console>:52)
> >   at .<init>(<console>:56)
> >   at .<clinit>(<console>)
> >   at .<init>(<console>:7)
> >   at .<clinit>(<console>)
> >   at $print(<console>)
> >   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >   at java.lang.reflect.Method.invoke(Method.java:497)
> >   at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
> >   at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1340)
> >   at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
> >   at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
> >   at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
> >
> >
> > Regards,
> > Sourav
> >
>
