Because `coalesce` gets propagated further up in the DAG, your last stage only has one task. You need to break your DAG so that your expensive operations land in a stage before the one with `.coalesce(1)`.

On Fri, Mar 9, 2018 at 5:23 AM, Md. Rezaul Karim <email@example.com> wrote:

Dear All,

I have a tiny CSV file, which is around 250MB. There are only 30 columns in the DataFrame. Now I'm trying to save the pre-processed DataFrame as another CSV file on disk for later usage.

Here's the sample code that I tried:

# Using coalesce()
myDF.coalesce(1).write.format("com.databricks.spark.csv").save("data/file.csv")

# Using repartition()
myDF.repartition(1).write.format("com.databricks.spark.csv").save("data/file.csv")

However, I'm getting frustrated because writing the resultant DataFrame is taking too long, about 4 to 5 hours. Moreover, the size of the file written on the disk is about 58GB!

--
Sent from my iPhone