spark-user mailing list archives

From Diwakar Dhanuskodi <diwakar.dhanusk...@gmail.com>
Subject Spark streaming takes longer time to read json into dataframes
Date Sat, 16 Jul 2016 03:43:54 GMT
Hello,

I have 400K JSON messages pulled from Kafka into Spark Streaming using the
DirectStream approach. The 400K messages are around 5 GB in total, and the Kafka
topic has a single partition. I am using rdd.read.json(_._2) inside foreachRDD to
convert the RDD into a DataFrame, and the conversion alone takes almost 2.3
minutes (sketch below).
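
Roughly, the relevant code is the following minimal sketch; the broker, topic,
batch interval, and the final action are placeholders/simplifications, and I am
assuming the Kafka 0.8 direct stream with String keys/values and a SQLContext
obtained from the RDD's SparkContext:

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.sql.SQLContext
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

val conf = new SparkConf().setAppName("KafkaJsonToDataFrame")
val ssc = new StreamingContext(conf, Seconds(60))          // batch interval is a placeholder

val kafkaParams = Map("metadata.broker.list" -> "broker1:9092")   // placeholder broker
val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
  ssc, kafkaParams, Set("json-topic"))                     // placeholder topic (single partition)

stream.foreachRDD { rdd =>
  if (!rdd.isEmpty()) {
    val sqlContext = SQLContext.getOrCreate(rdd.sparkContext)
    // reading the message values as JSON is the step that takes ~2.3 minutes
    val df = sqlContext.read.json(rdd.map(_._2))
    df.count()                                             // stand-in for the real processing
  }
}

ssc.start()
ssc.awaitTermination()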

I am running in YARN client mode with 15 GB of executor memory and 2 executor
cores.

Caching the RDD before converting it to a DataFrame doesn't change the processing
time. Would introducing hash partitioning (repartitioning) inside foreachRDD help
(see the second sketch below)? Or would partitioning the topic and having more
than one DirectStream help? How can I approach this situation to reduce the time
spent converting to a DataFrame?
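
The repartitioning idea I have in mind is something like the sketch below; the
partition count of 8 is just an example, not a tuned value:

stream.foreachRDD { rdd =>
  if (!rdd.isEmpty()) {
    val sqlContext = SQLContext.getOrCreate(rdd.sparkContext)
    // spread the single Kafka partition across more Spark partitions
    // so JSON parsing and schema inference can use all executor cores
    val df = sqlContext.read.json(rdd.map(_._2).repartition(8))   // 8 is an example count
    df.count()
  }
}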

Regards,
Diwakar.
