spark-dev mailing list archives

From "assaf.mendelson" <>
Subject RE: [SS] Why does ConsoleSink's addBatch convert input DataFrame to show it?
Date Fri, 07 Jul 2017 10:41:56 GMT
I actually asked the same thing a couple of weeks ago.
Apparently, when you create a structured streaming plan, it is different from a batch plan:
it is fixed so that aggregations can be computed incrementally. If you perform most operations
directly on the input dataframe, Spark recomputes the plan as a batch plan, which does not
work properly against a streaming source. Therefore, you must either collect, or convert to
an RDD and then create a new dataframe from that RDD.
It would be very useful, IMO, if we could "freeze" the plan for the input portion and work
with it as if it were a new dataframe (similar to converting to an RDD and then creating a
new dataframe from the RDD, but without the overhead of the round trip); however, this is
not currently possible.
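To illustrate, the workaround described above can be sketched as a custom sink (this is a hypothetical sink written for illustration, not actual Spark code; the output path and format are arbitrary assumptions):

```scala
import org.apache.spark.sql.{DataFrame, SQLContext}
import org.apache.spark.sql.execution.streaming.Sink

// Hypothetical sink showing the collect-and-recreate workaround:
// the DataFrame handed to addBatch carries a streaming plan, so we
// materialize the micro-batch and rebuild a DataFrame whose logical
// plan is an ordinary batch plan before running batch operations on it.
class FreezePlanSink(sqlContext: SQLContext) extends Sink {
  override def addBatch(batchId: Long, data: DataFrame): Unit = {
    val spark = data.sparkSession
    // collect() materializes the batch; parallelize + createDataFrame
    // produce a plain batch DataFrame with the same schema.
    val batchDf = spark.createDataFrame(
      spark.sparkContext.parallelize(data.collect()),
      data.schema)
    // Any batch-style operation is now safe; the path is illustrative.
    batchDf.write.mode("append").parquet("/tmp/my-sink-output")
  }
}
```

The round trip through `collect()`/`parallelize` is exactly the overhead the message laments: the data is pulled to the driver and redistributed just to escape the streaming plan.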


From: Jacek Laskowski [via Apache Spark Developers List] []
Sent: Friday, July 07, 2017 11:44 AM
To: Mendelson, Assaf
Subject: [SS] Why does ConsoleSink's addBatch convert input DataFrame to show it?


Just noticed that the input DataFrame is collect'ed and then
parallelize'd simply to show it to the console [1]. Why so many fairly
expensive operations for show?

I'd appreciate some help understanding this code. Thanks.
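For reference, the pattern the question refers to looks roughly like this (a simplified sketch of the Spark 2.x `ConsoleSink.addBatch` code path; `numRowsToShow` and `isTruncated` stand in for the sink's options and the surrounding class is omitted):

```scala
import org.apache.spark.sql.DataFrame

// Sketch of the ConsoleSink.addBatch pattern under discussion:
// collect the streaming micro-batch to the driver, re-parallelize it,
// and show the resulting batch DataFrame on the console.
def addBatch(batchId: Long, data: DataFrame): Unit = {
  val numRowsToShow = 20    // stand-in for the sink's numRows option
  val isTruncated   = true  // stand-in for the sink's truncate option
  println(s"Batch: $batchId")
  data.sparkSession
    .createDataFrame(
      data.sparkSession.sparkContext.parallelize(data.collect()),
      data.schema)
    .show(numRowsToShow, isTruncated)
}
```

As the reply above explains, `show` cannot be called directly on the streaming DataFrame because that would re-plan it as a batch query, hence the collect-then-parallelize detour.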


Jacek Laskowski
Mastering Apache Spark 2
Follow me at
