Yes, this is a good suggestion; also check the 25th percentile, median, and 75th percentile of records per partition to see how skewed the input data is.
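A minimal sketch of how you might measure that, assuming `rdd` stands in for your actual RDD: count the records in each partition, then read off the percentiles from the sorted counts.

```scala
import org.apache.spark.rdd.RDD

// Count records per partition and print 25th/50th/75th percentiles
// plus the max, to gauge how skewed the partitioning is.
def printPartitionSkew[T](rdd: RDD[T]): Unit = {
  val counts = rdd
    .mapPartitions(iter => Iterator(iter.size.toLong)) // one count per partition
    .collect()
    .sorted
  def pct(p: Double): Long = counts(((counts.length - 1) * p).toInt)
  println(s"partitions=${counts.length} " +
    s"p25=${pct(0.25)} median=${pct(0.50)} p75=${pct(0.75)} max=${counts.max}")
}
```

If the max is far above the 75th percentile, one or a few partitions are carrying most of the data.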
If you find that the RDD's partitions are skewed, you can fix it either by changing the partitioner when you read the files, as already suggested, or by calling repartition(<int>) on the RDD before the bottleneck to redistribute the data across partitions via a shuffle.
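For example, a sketch of the repartition approach, where `expensiveStep` is a hypothetical stand-in for whatever transformation is bottlenecking:

```scala
import org.apache.spark.rdd.RDD

// repartition(n) triggers a full shuffle that spreads records
// roughly evenly across n partitions before the heavy work runs.
def rebalanceThenProcess(rdd: RDD[String]): RDD[String] = {
  val expensiveStep = (line: String) => line.toUpperCase // placeholder work
  rdd
    .repartition(200) // rule of thumb: ~2-4x the total executor cores
    .map(expensiveStep)
}
```

The shuffle has a cost, so this only pays off when the skewed stage downstream is expensive enough to dominate it.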
Can you check whether the RDD is partitioned with the correct number of partitions (if you are setting the partition count manually)? Try using a HashPartitioner while reading the files.
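Roughly like this; note that `partitionBy` only works on pair RDDs, so this sketch assumes you can key each record (here, hypothetically, by the first CSV field):

```scala
import org.apache.spark.{HashPartitioner, SparkContext}
import org.apache.spark.rdd.RDD

// Read with an explicit minimum partition count, then hash each
// key into one of numPartitions partitions.
def readHashPartitioned(sc: SparkContext, path: String,
                        numPartitions: Int): RDD[(String, String)] = {
  sc.textFile(path, numPartitions)           // hint for input splits
    .map(line => (line.split(",")(0), line)) // key by first field (assumption)
    .partitionBy(new HashPartitioner(numPartitions))
}
```

Hash partitioning distributes keys evenly only when no single key dominates; if one key holds most of the records, you'll need key salting or a custom partitioner instead.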
One way to debug is to compare the number of records each executor processed against the others in the Stages tab of the Spark UI.