Is it possible to write a custom RDD in Python using PySpark? We have an HTTP API for reading time-series data that supports range scans (so it should be easy to partition the data), and we're considering using Spark to analyze it. If we can't write an RDD in Python, is it possible to write one in Scala and then use it from Python?
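For context, the fallback we're aware of is to skip a custom RDD entirely: split the time range into chunks on the driver, `parallelize` the chunks, and fetch each one inside `mapPartitions`. A minimal sketch of that idea (the endpoint URL, query parameters, and JSON response format are all hypothetical, just to illustrate the shape of it):

```python
# Sketch: partition a time range on the driver, fetch per-chunk on executors.
# The HTTP endpoint and its response format below are made up for illustration.
from datetime import datetime

def split_range(start, end, num_partitions):
    """Split [start, end) into num_partitions contiguous (start, end) chunks."""
    step = (end - start) / num_partitions
    return [(start + i * step, start + (i + 1) * step)
            for i in range(num_partitions)]

def fetch_chunk(chunks):
    """Runs on the executors: range-scan each chunk from the (hypothetical) API."""
    import json
    import urllib.request
    for lo, hi in chunks:
        url = (f"http://example.com/timeseries"
               f"?from={lo.isoformat()}&to={hi.isoformat()}")
        with urllib.request.urlopen(url) as resp:
            yield from json.load(resp)

# In the Spark job (one partition per chunk):
#   chunks = split_range(datetime(2014, 1, 1), datetime(2014, 1, 2), 24)
#   rdd = sc.parallelize(chunks, len(chunks)).mapPartitions(fetch_chunk)
```

This works, but it feels like we're reimplementing what a proper RDD with `getPartitions`/`compute` would give us, which is why we're asking whether a real custom RDD is possible from Python.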