I am using PySpark. To transform my sample data and create a model, I use StringIndexer and OneHotEncoder.

However, when I try to write the data as CSV using the command below,


I get an UnsupportedOperationException:

java.lang.UnsupportedOperationException: CSV data source does not support struct<type:tinyint,size:int,indices:array<int>,values:array<double>> data type.

Therefore, to save the data and avoid the error, I use


The above command saves the data, but in Parquet format.
How can I read the Parquet file back and convert it to CSV so I can inspect the data?

When I use

df = spark.read.parquet("1.parquet")

it throws:

ERROR RetryingBlockFetcher: Exception while beginning fetch of 1 outstanding blocks 

Your input is appreciated.

Best regards,