How many partitions are in your input data set? One possibility is that your input consists of 10 unsplittable files, so you end up with 10 partitions. You can increase the partition count (and hence parallelism) with RDD#repartition().
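To make the idea concrete, here is a minimal pure-Python sketch of what repartitioning does conceptually — redistributing the same elements across a new number of partitions. This is an illustrative toy model, not Spark's actual shuffle-based implementation, and the function name `repartition` here is just for illustration:

```python
def repartition(partitions, n):
    """Toy model: redistribute elements round-robin across n new partitions.

    Spark's real repartition() performs a full shuffle across the cluster;
    this only illustrates the effect on partition counts and contents.
    """
    flat = [x for part in partitions for x in part]
    out = [[] for _ in range(n)]
    for i, x in enumerate(flat):
        out[i % n].append(x)
    return out

# 3 uneven "partitions" (e.g., from 3 unsplittable input files)
parts = [[1, 2, 3], [4], [5, 6]]
print(repartition(parts, 2))  # same 6 elements, now spread over 2 partitions
```

The key point is that repartitioning changes only how the data is distributed, not the data itself, so downstream per-partition work can be balanced across more (or fewer) tasks.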
Note that mapPartitionsWithIndex is sort of the "main processing loop" for many Spark operations: it iterates through all the elements of each partition and runs some computation (probably your user code) on them.
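A minimal pure-Python sketch of those semantics may help — the user function receives the partition's index plus an iterator over its elements, and returns a new iterator. This is a toy model of the behavior, not Spark's implementation; the name `map_partitions_with_index` is illustrative:

```python
def map_partitions_with_index(partitions, f):
    """Toy model of RDD.mapPartitionsWithIndex.

    partitions: list of lists (each inner list stands in for one partition)
    f: function (partition_index, iterator) -> iterator
    Returns the transformed partitions, one output list per input partition.
    """
    return [list(f(i, iter(part))) for i, part in enumerate(partitions)]

# Two "partitions"; tag each element with its partition index and scale it.
data = [[1, 2], [3, 4, 5]]
result = map_partitions_with_index(
    data, lambda i, it: ((i, x * 10) for x in it)
)
print(result)  # → [[(0, 10), (0, 20)], [(1, 30), (1, 40), (1, 50)]]
```

Because `f` is called once per partition (rather than once per element, as with map), it is a natural place for per-partition setup such as opening a connection or, in Spark's case, driving the main per-task loop.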
You can see the number of partitions in your RDD via the Spark web interface. To access it, visit port 8080 on the host running your Standalone Master (assuming you're running in standalone mode); that page links to each application's web interface. The Tachyon master also has a useful web interface, available at port 19999.