spark-user mailing list archives

From pmatpadi <>
Subject How to preserve event order per key in Structured Streaming Repartitioning By Key?
Date Mon, 03 Dec 2018 22:22:30 GMT
I want to write a structured spark streaming Kafka consumer which reads data
from a one partition Kafka topic, repartitions the incoming data by "key" to
3 spark partitions while keeping the messages ordered per key, and writes
them to another Kafka topic with 3 partitions.

I used Dataframe.repartition(3, $"key"), which I believe hash-partitions the rows by the key expression, so that all messages with a given key land in the same Spark partition.

When I executed the query with a fixed-interval micro-batch trigger, I visually
verified that the output messages were in the expected order. Nevertheless, my
assumption is that ordering is not guaranteed on the resulting partitions. I am
looking for confirmation or refutation of this assumption, ideally with pointers
to the Spark code repository or documentation.
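For reference, this is a simplified model of hash-based key-to-partition assignment. It is not Spark's actual implementation (Spark's HashPartitioning hashes the expression with Murmur3); the point is only that the mapping is deterministic, so every message with the same key lands in the same partition:

```scala
// Simplified model only -- NOT Spark's actual hash function.
// Illustrates that a deterministic key -> partition mapping sends
// all messages with the same key to the same partition; per-key
// order is then a question of whether anything reorders rows
// *within* that partition.
def partitionFor(key: String, numPartitions: Int): Int = {
  val h = key.hashCode % numPartitions
  if (h < 0) h + numPartitions else h
}

// Same key always maps to the same partition:
assert(partitionFor("a", 3) == partitionFor("a", 3))
```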

I also tried Dataframe.sortWithinPartitions; however, this does not appear to be
supported on a streaming DataFrame without aggregation.
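One possible workaround (assuming Spark 2.4+, where foreachBatch is available) is to sort inside the foreachBatch callback: there, each micro-batch is an ordinary static Dataset, so sortWithinPartitions is allowed. A sketch, with the streaming Dataset, broker, and topic names assumed for illustration:

```scala
import org.apache.spark.sql.Dataset
import org.apache.spark.sql.functions.col

// Sketch: sort each micro-batch within its partitions before writing.
// Assumes Spark 2.4+ and a streaming Dataset `streamingDf` that already
// has string "key" and "value" columns plus a "ts" column.
val query = streamingDf.writeStream
  .foreachBatch { (batchDf: Dataset[org.apache.spark.sql.Row], batchId: Long) =>
    batchDf
      .sortWithinPartitions(col("ts"))   // legal here: batchDf is static
      .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
      .write
      .format("kafka")
      .option("kafka.bootstrap.servers", kafkaBrokers.get)
      .option("topic", kafkaOutputTopic.get)
      .save()
  }
  .option("checkpointLocation", checkpointLocation.get)
  .start()
```

Note that within each batch, each task writes its partition sequentially, so the sorted within-partition order is preserved per micro-batch.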

One option I tried was to convert the DataFrame to an RDD and apply
repartitionAndSortWithinPartitions, which repartitions the RDD according to
the given partitioner and, within each resulting partition, sorts records by
their keys. In that case, however, I cannot use the resulting RDD in the
writeStream operation to write the result to the output Kafka topic.
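For completeness, this is roughly how repartitionAndSortWithinPartitions behaves on a plain (non-streaming) pair RDD; the sample records are made up for illustration:

```scala
import org.apache.spark.HashPartitioner

// Illustration only: repartitionAndSortWithinPartitions on a pair RDD.
// Keys are hashed to 3 partitions, and records are sorted by key
// within each resulting partition as part of the shuffle.
val pairs = spark.sparkContext.parallelize(Seq(
  ("b", "2"), ("a", "1"), ("c", "3"), ("a", "0")
))
val sorted = pairs.repartitionAndSortWithinPartitions(new HashPartitioner(3))
// Each partition of `sorted` is ordered by key -- but as noted above,
// the result is an RDD, which writeStream cannot consume.
```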

1. Is there a data frame repartitioning API that helps sort the
repartitioned data in the streaming context?
2. Are there any other alternatives?
3. Does the default trigger type or fixed-interval trigger type for
micro-batch execution provide any sort of message ordering guarantees?
4. Is there any ordering possible in the Continuous trigger type?

Incoming data:


case class KVOutput(key: String, ts: Long, value: String, spark_partition: Int)

val df = spark.readStream.format("kafka")
  .option("kafka.bootstrap.servers", kafkaBrokers.get)
  .option("subscribe", Array(kafkaInputTopic.get).mkString(","))
  .load()

val inputDf = df.selectExpr("CAST(key AS STRING)","CAST(value AS STRING)")
val resDf = inputDf.repartition(3, $"key")
  .select(from_json($"value", schema).as("kv"))
  .selectExpr("kv.key", "kv.ts", "kv.value")
  .withColumn("spark_partition", spark_partition_id())
  .select($"key", $"ts", $"value", $"spark_partition").as[KVOutput]
  .sortWithinPartitions($"ts", $"value")

val query = resDf.writeStream
  .format("kafka")
  .option("kafka.bootstrap.servers", kafkaBrokers.get)
  .option("topic", kafkaOutputTopic.get)
  .option("checkpointLocation", checkpointLocation.get)
  .start()
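One thing worth checking here: the Kafka sink expects a string or binary "value" column (and optionally "key"), while resDf carries key, ts, value, and spark_partition. If the full row should survive the write, it can be serialized back to JSON first; a sketch, with column names assumed from the code above:

```scala
import org.apache.spark.sql.functions.{col, struct, to_json}

// Sketch: pack the whole row into the Kafka "value" column as JSON,
// since the Kafka sink consumes only the "key"/"value" (and optional
// "topic") columns of the DataFrame being written.
val kafkaReady = resDf.select(
  col("key").cast("string").as("key"),
  to_json(struct(col("key"), col("ts"), col("value"))).as("value")
)
```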


When I submit this application, it fails with
