Hello!

I'm running a local Spark cluster with 64 cores in total and performing a data migration from protobuf to Parquet. After consolidating a number of protobuf files into one big Parquet file, I save it to HDFS; this takes a long time and uses only 1 core.
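For context, each migration boils down to roughly the following (a simplified sketch; parseProtobufDir, the paths, and the use of coalesce(1) to get a single output file are placeholders/assumptions, not my exact code):

    import org.apache.spark.sql.{DataFrame, SparkSession}

    val spark = SparkSession.builder().appName("proto-to-parquet").getOrCreate()

    // Placeholder for whatever decodes a directory of protobuf files into a DataFrame.
    def parseProtobufDir(spark: SparkSession, path: String): DataFrame = ???

    val df: DataFrame = parseProtobufDir(spark, "hdfs:///input/protobuf/batch-001")

    // Producing one output file means one partition, so the Parquet write
    // runs as a single task on a single core.
    df.coalesce(1)
      .write
      .mode("overwrite")
      .parquet("hdfs:///output/parquet/batch-001")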

To speed up the migration I start many migration tasks in parallel. After a while all 8 active stages are saving files and only 8 cores are in use (see screenshot). Is there any way to increase the maximum number of active stages?
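The parallel part is roughly shaped like this (again a simplified sketch; migrateBatch, the batch names, and submitting the jobs as futures from one driver with the FAIR scheduler are illustrative assumptions):

    import org.apache.spark.sql.SparkSession
    import scala.concurrent.{Await, Future}
    import scala.concurrent.duration.Duration
    import scala.concurrent.ExecutionContext.Implicits.global

    val spark = SparkSession.builder()
      .appName("parallel-migration")
      .config("spark.scheduler.mode", "FAIR")  // assumption: fair scheduling between the parallel jobs
      .getOrCreate()

    // Placeholder for one end-to-end migration (decode protobuf, write one Parquet file).
    def migrateBatch(spark: SparkSession, batch: String): Unit = ???

    val batches = Seq("batch-001", "batch-002", "batch-003")

    // Each Future submits an independent Spark job from its own thread, so several
    // stages can be active at once, but each single-file write is still one task on one core.
    val jobs = batches.map(b => Future { migrateBatch(spark, b) })
    Await.result(Future.sequence(jobs), Duration.Inf)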

Thanks