The following will convert the whole row to JSON.

import org.apache.spark.sql.functions._
df.select(to_json(struct(col("*"))))
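
If you are on the Java API, a roughly equivalent sketch would be (assuming a streaming Dataset<Row> named df and a console sink, both purely for illustration):

import static org.apache.spark.sql.functions.*;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.streaming.StreamingQuery;

// Pack all columns into a struct and serialize it to a single JSON string column.
// This stays inside the Dataset/DataFrame API, so Structured Streaming can plan it.
Dataset<Row> json = df.select(to_json(struct(col("*"))).alias("value"));

StreamingQuery query = json.writeStream().format("console").start();
query.awaitTermination();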

On Sat, Sep 9, 2017 at 6:27 PM, kant kodali <kanth909@gmail.com> wrote:
Thanks Ryan! In this case I will have a Dataset<Row>, so is there a way to convert a Row to a JSON string?

Thanks

On Sat, Sep 9, 2017 at 5:14 PM, Shixiong(Ryan) Zhu <shixiong@databricks.com> wrote:
It's because "toJSON" doesn't support Structured Streaming. The current implementation will convert the Dataset to an RDD, which is not supported by streaming queries.
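
In other words, as soon as a streaming Dataset has to be materialized through the RDD API, the query fails before writeStream.start() is ever reached. A minimal sketch of the failure mode (Java, assuming an existing SparkSession named spark and using the built-in "rate" source purely for illustration):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

// A streaming Dataset backed by the built-in rate source (illustration only).
Dataset<Row> streamingDf = spark.readStream().format("rate").load();

// Fails with "Queries with streaming sources must be executed with
// writeStream.start()" because toJSON() goes through the RDD conversion
// described above; the to_json(struct(col("*"))) approach avoids it.
Dataset<String> json = streamingDf.toJSON();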

On Sat, Sep 9, 2017 at 4:40 PM, kant kodali <kanth909@gmail.com> wrote:
Yes, it is a streaming Dataset. So what is the problem with the following code?

Dataset<String> ds = dataset.toJSON().map((MapFunction<String, String>) s -> /* some function that returns a string */, Encoders.STRING());
StreamingQuery query = ds.writeStream().start();
query.awaitTermination();

On Sat, Sep 9, 2017 at 4:20 PM, Felix Cheung <felixcheung_m@hotmail.com> wrote:
What is newDS?
If it is a streaming Dataset/DataFrame (since you have writeStream there), then there seems to be an issue preventing toJSON from working.


From: kant kodali <kanth909@gmail.com>
Sent: Saturday, September 9, 2017 4:04:33 PM
To: user@spark
Subject: Queries with streaming sources must be executed with writeStream.start()
 
Hi All,

I have the following code and I am not sure what's wrong with it. I cannot call dataset.toJSON() (which returns a Dataset)? I am using Spark 2.2.0, so I am wondering if there is any workaround?

Dataset<String> ds = newDS.toJSON().map((MapFunction<String, String>) s -> /* some function that returns a string */, Encoders.STRING());
StreamingQuery query = ds.writeStream().start();
query.awaitTermination();