I was under the impression that the Spark creators are trying to persuade the user community not to use the RDD API directly. The Spark Summit I attended was full of this advice. So I am a bit surprised to hear "use the RDD API" as advice from you. But if this is the way, then I have a second question. For conversion from a Dataset to an RDD I would use the `Dataset.rdd` lazy val. Since it is a lazy val, it suggests there is some computation going on to create the RDD as a copy. The question is: how computationally expensive is this conversion? If there is significant overhead, then it is clear why one would want a `top` method directly on the Dataset class.
Ordering the whole dataset only to take the first 10 or so top records is not really an acceptable option for us. The comparison function can be expensive and the size of the dataset is (unsurprisingly) big.
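For reference, roughly what I have in mind with the RDD route is the following sketch (the `Record` case class and the ordering are made up for illustration); `RDD.top` keeps only a bounded priority queue of size n per partition and merges those at the driver, so it avoids a full sort:

```scala
import org.apache.spark.sql.{Dataset, SparkSession}

// Hypothetical record type, just for illustration.
case class Record(id: Long, score: Double)

val spark = SparkSession.builder().appName("topN").getOrCreate()
import spark.implicits._

// Stand-in for our real (much bigger) dataset.
val ds: Dataset[Record] =
  (1L to 100000L).map(i => Record(i, (i % 97).toDouble)).toDS()

// Top 10 by score via the RDD API: no global sort, but it does go
// through the Dataset -> RDD conversion whose cost I am asking about.
val top10: Array[Record] = ds.rdd.top(10)(Ordering.by(_.score))
```

So the question is really whether the `ds.rdd` step in the last line adds enough overhead to defeat the point of using `top` instead of a full `orderBy` + `limit`.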
To be honest, I do not really understand what you mean by b). Since DataFrame is now only an alias for Dataset[Row], what do you mean by "DataFrame-like counterpart"?